Jan 31 03:46:45 crc systemd[1]: Starting Kubernetes Kubelet...
Jan 31 03:46:46 crc restorecon[4737]: Relabeled /var/lib/kubelet/config.json from system_u:object_r:unlabeled_t:s0 to system_u:object_r:container_var_lib_t:s0
Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/device-plugins not reset as customized by admin to system_u:object_r:container_file_t:s0
Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/device-plugins/kubelet.sock not reset as customized by admin to system_u:object_r:container_file_t:s0
Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/volumes/kubernetes.io~configmap/nginx-conf/..2025_02_23_05_40_35.4114275528/nginx.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/22e96971 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/21c98286 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/0f1869e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/46889d52 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/5b6a5969 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c963
Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/6c7921f5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/4804f443 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/2a46b283 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/a6b5573e not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/4f88ee5b not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/5a4eee4b not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c963
Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/cd87c521 not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_33_42.2574241751 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_33_42.2574241751/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/38602af4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/1483b002 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/0346718b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/d3ed4ada not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/3bb473a5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/8cd075a9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/00ab4760 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/54a21c09 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c589,c726
Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/70478888 not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/43802770 not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/955a0edc not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/bca2d009 not reset as customized by admin to system_u:object_r:container_file_t:s0:c140,c1009
Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/b295f9bd not reset as customized by admin to system_u:object_r:container_file_t:s0:c589,c726
Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..2025_02_23_05_21_22.3617465230 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..2025_02_23_05_21_22.3617465230/cnibincopy.sh not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/cnibincopy.sh not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..2025_02_23_05_21_22.2050650026 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..2025_02_23_05_21_22.2050650026/allowlist.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/allowlist.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/bc46ea27 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/5731fc1b not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/5e1b2a3c not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/943f0936 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/3f764ee4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/8695e3f9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/aed7aa86 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/c64d7448 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/0ba16bd2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/207a939f not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/54aa8cdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/1f5fa595 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/bf9c8153 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/47fba4ea not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/7ae55ce9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/7906a268 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/ce43fa69 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/7fc7ea3a not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/d8c38b7d not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/9ef015fb not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/b9db6a41 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/b1733d79 not reset as customized by admin to system_u:object_r:container_file_t:s0:c476,c820
Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/afccd338 not reset as customized by admin to system_u:object_r:container_file_t:s0:c272,c818
Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/9df0a185 not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/18938cf8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c476,c820
Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/7ab4eb23 not reset as customized by admin to system_u:object_r:container_file_t:s0:c272,c818
Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/56930be6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides/..2025_02_23_05_21_35.630010865 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..2025_02_23_05_21_35.1088506337 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..2025_02_23_05_21_35.1088506337/ovnkube.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/ovnkube.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/0d8e3722 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/d22b2e76 not reset as customized by admin to system_u:object_r:container_file_t:s0:c382,c850
Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/e036759f not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/2734c483 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/57878fe7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/3f3c2e58 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/375bec3e not reset as customized by admin to system_u:object_r:container_file_t:s0:c382,c850
Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/7bc41e08 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/48c7a72d not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/4b66701f not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/a5a1c202 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666/additional-cert-acceptance-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666/additional-pod-admission-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/additional-cert-acceptance-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/additional-pod-admission-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides/..2025_02_23_05_21_40.1388695756 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/26f3df5b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/6d8fb21d not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/50e94777 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/208473b3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/ec9e08ba not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/3b787c39 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/208eaed5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/93aa3a2b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/3c697968 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/ba950ec9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/cb5cdb37 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/f2df9827 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..2025_02_23_05_22_30.473230615 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..2025_02_23_05_22_30.473230615/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_24_06_22_02.1904938450 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_24_06_22_02.1904938450/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/fedaa673 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/9ca2df95 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/b2d7460e not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/2207853c not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/241c1c29 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/2d910eaf not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..2025_02_23_05_23_49.3726007728 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..2025_02_23_05_23_49.3726007728/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..2025_02_23_05_23_49.841175008 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..2025_02_23_05_23_49.841175008/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.843437178 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.843437178/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/c6c0f2e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/399edc97 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/8049f7cc not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/0cec5484 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/312446d0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c406,c828
Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/8e56a35d not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.133159589 not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.133159589/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Jan 31 03:46:46 crc restorecon[4737]:
/var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/2d30ddb9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c380,c909 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/eca8053d not reset as customized by admin to system_u:object_r:container_file_t:s0:c380,c909 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/c3a25c9a not reset as customized by admin to system_u:object_r:container_file_t:s0:c168,c522 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/b9609c22 not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/e8b0eca9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c106,c418 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/b36a9c3f not reset as customized by admin to system_u:object_r:container_file_t:s0:c529,c711 Jan 31 03:46:46 crc restorecon[4737]: 
/var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/38af7b07 not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/ae821620 not reset as customized by admin to system_u:object_r:container_file_t:s0:c106,c418 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/baa23338 not reset as customized by admin to system_u:object_r:container_file_t:s0:c529,c711 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/2c534809 not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3532625537 not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3532625537/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c661,c999 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/59b29eae not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c381 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/c91a8e4f not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c381 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/4d87494a not reset as customized by admin to system_u:object_r:container_file_t:s0:c442,c857 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/1e33ca63 not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/8dea7be2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/d0b04a99 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/d84f01e7 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c12,c18 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/4109059b not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/a7258a3e not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/05bdf2b6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/f3261b51 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/315d045e not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/5fdcf278 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/d053f757 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/c2850dc7 not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..2025_02_23_05_22_30.2390596521 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..2025_02_23_05_22_30.2390596521/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/fcfb0b2b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/c7ac9b7d not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:46:46 crc 
restorecon[4737]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/fa0c0d52 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/c609b6ba not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/2be6c296 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/89a32653 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/4eb9afeb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/13af6efa not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/b03f9724 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/e3d105cc not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 31 03:46:46 crc restorecon[4737]: 
/var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/3aed4d83 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1906041176 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1906041176/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/0765fa6e not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/2cefc627 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c2,c18 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/3dcc6345 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/365af391 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-Default.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-TechPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-DevPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-TechPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 31 03:46:46 crc restorecon[4737]: 
/var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-DevPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-Default.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/b1130c0f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/236a5913 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/b9432e26 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/5ddb0e3f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/986dc4fd not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/8a23ff9a not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c9,c12 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/9728ae68 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/665f31d0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1255385357 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1255385357/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 31 03:46:46 crc restorecon[4737]: 
/var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_23_57.573792656 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_23_57.573792656/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_22_30.3254245399 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_22_30.3254245399/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 31 03:46:46 crc 
restorecon[4737]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/136c9b42 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/98a1575b not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/cac69136 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/5deb77a7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/2ae53400 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3608339744 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Jan 31 
03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3608339744/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/e46f2326 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/dc688d3c not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/3497c3cd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/177eb008 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c2,c13 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3819292994 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3819292994/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/af5a2afa not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/d780cb1f not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/49b0f374 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Jan 31 03:46:46 crc restorecon[4737]: 
/var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/26fbb125 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.3244779536 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.3244779536/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/cf14125a not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/b7f86972 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c5,c11 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/e51d739c not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/88ba6a69 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/669a9acf not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/5cd51231 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/75349ec7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/15c26839 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/45023dcd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/2bb66a50 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/64d03bdd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 31 03:46:46 crc 
restorecon[4737]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/ab8e7ca0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/bb9be25f not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.2034221258 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.2034221258/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/9a0b61d3 not reset as customized by admin 
to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/d471b9d2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/8cb76b8e not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/11a00840 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/ec355a92 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/992f735e not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1782968797 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1782968797/config.yaml not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/d59cdbbc not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/72133ff0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/c56c834c not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/d13724c7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/0a498258 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 31 03:46:46 crc restorecon[4737]: 
/var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fa471982 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fc900d92 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fa7d68da not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/4bacf9b4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/424021b1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/fc2e31a3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/f51eefac not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/c8997f2f not reset as customized 
by admin to system_u:object_r:container_file_t:s0:c9,c22 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/7481f599 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..2025_02_23_05_22_49.2255460704 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..2025_02_23_05_22_49.2255460704/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/fdafea19 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 31 03:46:46 crc restorecon[4737]: 
/var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/d0e1c571 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/ee398915 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/682bb6b8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/a3e67855 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/a989f289 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/915431bd not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/7796fdab not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/dcdb5f19 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/a3aaa88c not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/5508e3e6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/160585de not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/e99f8da3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/8bc85570 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/a5861c91 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/84db1135 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/9e1a6043 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/c1aba1c2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/d55ccd6d not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 31 03:46:46 crc restorecon[4737]: 
/var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/971cc9f6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/8f2e3dcf not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/ceb35e9c not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/1c192745 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/5209e501 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/f83de4df not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/e7b978ac not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/c64304a1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/5384386b not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/etc-hosts not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c268,c620 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/multus-admission-controller/cce3e3ff not reset as customized by admin to system_u:object_r:container_file_t:s0:c435,c756 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/multus-admission-controller/8fb75465 not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/kube-rbac-proxy/740f573e not reset as customized by admin to system_u:object_r:container_file_t:s0:c435,c756 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/kube-rbac-proxy/32fd1134 not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/0a861bd3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/80363026 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/bfa952a8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c129,c158 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_23_05_33_31.2122464563 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_23_05_33_31.2122464563/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config/..2025_02_23_05_33_31.333075221 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 31 03:46:46 crc 
restorecon[4737]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/793bf43d not reset as customized by admin to system_u:object_r:container_file_t:s0:c381,c387 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/7db1bb6e not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/4f6a0368 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/c12c7d86 not reset as customized by admin to system_u:object_r:container_file_t:s0:c381,c387 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/36c4a773 not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/4c1e98ae not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/a4c8115c not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/setup/7db1802e not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Jan 31 03:46:46 crc restorecon[4737]: 
/var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver/a008a7ab not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-cert-syncer/2c836bac not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-cert-regeneration-controller/0ce62299 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-insecure-readyz/945d2457 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-check-endpoints/7d5c1dd8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/advanced-cluster-management not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/advanced-cluster-management/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-broker-rhel8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-broker-rhel8/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-online not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:46:46 crc restorecon[4737]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-online/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams-console not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams-console/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq7-interconnect-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq7-interconnect-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-automation-platform-operator not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-automation-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-cloud-addons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-cloud-addons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry-3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry-3/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-load-balancer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-load-balancer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-businessautomation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-businessautomation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator/index.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/businessautomation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/businessautomation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cephcsi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cephcsi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cincinnati-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cincinnati-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-kube-descheduler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-kube-descheduler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-logging not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-logging/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/compliance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/compliance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/container-security-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/container-security-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/costmanagement-metrics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/costmanagement-metrics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cryostat-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cryostat-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datagrid not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datagrid/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devspaces not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devspaces/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devworkspace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devworkspace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dpu-network-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dpu-network-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eap not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eap/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-dns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-dns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/file-integrity-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/file-integrity-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-apicurito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-apicurito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-console not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-console/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-online not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-online/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gatekeeper-operator-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gatekeeper-operator-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jws-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jws-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management-hub not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management-hub/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kiali-ossm not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kiali-ossm/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubevirt-hyperconverged not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubevirt-hyperconverged/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logic-operator-rhel8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logic-operator-rhel8/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lvms-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lvms-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mcg-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mcg-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mta-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mta-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtr-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtr-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-client-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-client-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-csi-addons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-csi-addons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-multicluster-orchestrator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-multicluster-orchestrator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-prometheus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-prometheus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-hub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-hub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/bundle-v1.15.0.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/channel.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/package.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-custom-metrics-autoscaler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-custom-metrics-autoscaler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-gitops-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-gitops-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-pipelines-operator-rh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-pipelines-operator-rh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-secondary-scheduler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-secondary-scheduler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-bridge-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-bridge-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/recipe not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/recipe/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-camel-k not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-camel-k/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-hawtio-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-hawtio-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redhat-oadp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redhat-oadp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rh-service-binding-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rh-service-binding-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhacs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhacs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhbk-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhbk-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhdh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhdh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 03:46:46
crc restorecon[4737]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-prometheus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-prometheus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhpam-kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhpam-kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhsso-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhsso-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rook-ceph-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rook-ceph-operator/catalog.json not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/run-once-duration-override-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/run-once-duration-override-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sandboxed-containers-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sandboxed-containers-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/security-profiles-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/security-profiles-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:46:46 crc restorecon[4737]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/serverless-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/serverless-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-registry-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-registry-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator3 not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator3/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/submariner not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/submariner/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tang-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tang-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:46:46 crc restorecon[4737]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustee-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustee-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volsync-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volsync-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/web-terminal not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/web-terminal/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/bc8d0691 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/6b76097a not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/34d1af30 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/312ba61c not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/645d5dd1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/16e825f0 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/4cf51fc9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/2a23d348 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/075dbd49 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c842,c986 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/dd585ddd not reset as customized by admin to system_u:object_r:container_file_t:s0:c377,c642 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/17ebd0ab not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c343 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/005579f4 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c842,c986 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_23_05_23_11.449897510 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_23_05_23_11.449897510/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_23_11.1287037894 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c764,c897 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..2025_02_23_05_23_11.1301053334 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..2025_02_23_05_23_11.1301053334/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/bf5f3b9c not reset as customized by admin to system_u:object_r:container_file_t:s0:c49,c263 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/af276eb7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c701 Jan 31 03:46:46 crc restorecon[4737]: 
/var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/ea28e322 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/692e6683 not reset as customized by admin to system_u:object_r:container_file_t:s0:c49,c263 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/871746a7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c701 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/4eb2e958 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..2025_02_24_06_09_06.2875086261 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..2025_02_24_06_09_06.2875086261/console-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/console-config.yaml not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_09_06.286118152 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_09_06.286118152/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..2025_02_24_06_09_06.3865795478 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 31 03:46:46 crc restorecon[4737]: 
/var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..2025_02_24_06_09_06.3865795478/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..2025_02_24_06_09_06.584414814 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..2025_02_24_06_09_06.584414814/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/containers/console/ca9b62da not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/containers/console/0edd6fce not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.openshift-global-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.openshift-global-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.1071801880 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.1071801880/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..2025_02_24_06_20_07.2494444877 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..2025_02_24_06_20_07.2494444877/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/containers/controller-manager/89b4555f not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..2025_02_23_05_23_22.4071100442 not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..2025_02_23_05_23_22.4071100442/Corefile not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/Corefile not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/655fcd71 not reset as customized by admin to system_u:object_r:container_file_t:s0:c457,c841
Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/0d43c002 not reset as customized by admin to system_u:object_r:container_file_t:s0:c55,c1022
Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/e68efd17 not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/9acf9b65 not reset as customized by admin to system_u:object_r:container_file_t:s0:c457,c841
Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/5ae3ff11 not reset as customized by admin to system_u:object_r:container_file_t:s0:c55,c1022
Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/1e59206a not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/27af16d1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c304,c1017
Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/7918e729 not reset as customized by admin to system_u:object_r:container_file_t:s0:c853,c893
Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/5d976d0e not reset as customized by admin to system_u:object_r:container_file_t:s0:c585,c981
Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..2025_02_23_05_38_56.1112187283 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..2025_02_23_05_38_56.1112187283/controller-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/controller-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_38_56.2839772658 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_38_56.2839772658/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/d7f55cbb not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/f0812073 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/1a56cbeb not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/7fdd437e not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/cdfb5652 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_24_06_17_29.3844392896 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_24_06_17_29.3844392896/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..2025_02_24_06_17_29.848549803 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..2025_02_24_06_17_29.848549803/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..2025_02_24_06_17_29.780046231 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..2025_02_24_06_17_29.780046231/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_17_29.2729721485 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_17_29.2729721485/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/fix-audit-permissions/fb93119e not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/openshift-apiserver/f1e8fc0e not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/openshift-apiserver-check-endpoints/218511f3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs/k8s-webhook-server not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs/k8s-webhook-server/serving-certs not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/ca8af7b3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/72cc8a75 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/6e8a3760 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..2025_02_23_05_27_30.557428972 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..2025_02_23_05_27_30.557428972/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/4c3455c0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/2278acb0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/4b453e4f not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/3ec09bda not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132/anchors not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132/anchors/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/anchors not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/edk2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/edk2/cacerts.bin not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/java not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/java/cacerts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/openssl not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/openssl/ca-bundle.trust.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/email-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/objsign-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2ae6433e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fde84897.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/75680d2e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/openshift-service-serving-signer_1740288168.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/facfc4fa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8f5a969c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CFCA_EV_ROOT.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9ef4a08a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ingress-operator_1740288202.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2f332aed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/248c8271.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d10a21f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ACCVRAIZ1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a94d09e5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c9a4d3b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/40193066.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AC_RAIZ_FNMT-RCM.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cd8c0d63.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b936d1c6.0 not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CA_Disig_Root_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4fd49c6c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AC_RAIZ_FNMT-RCM_SERVIDORES_SEGUROS.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b81b93f0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f9a69fa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certigna.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b30d5fda.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 03:46:46 crc restorecon[4737]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ANF_Secure_Server_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b433981b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/93851c9e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9282e51c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e7dd1bc4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Actalis_Authentication_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/930ac5d2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 03:46:46 crc restorecon[4737]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f47b495.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e113c810.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5931b5bc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Commercial.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2b349938.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e48193cf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/302904dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a716d4ed.0 not reset as customized 
by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Networking.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/93bc0acc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/86212b19.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certigna_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Premium.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b727005e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dbc54cab.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 03:46:46 crc restorecon[4737]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f51bb24c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c28a8a30.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Premium_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9c8dfbd4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ccc52f49.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cb1c3204.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ce5e74ef.0 not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fd08c599.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6d41d539.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fb5fa911.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e35234b1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 03:46:46 crc restorecon[4737]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8cb5ee0f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a7c655d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f8fc53da.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/de6d66f3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d41b5e2a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/41a3f684.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1df5a75f.0 not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_2011.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e36a6752.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b872f2b4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9576d26b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/228f89db.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_Root_CA_ECC_TLS_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fb717492.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 03:46:46 crc restorecon[4737]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2d21b73c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0b1b94ef.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/595e996b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_Root_CA_RSA_TLS_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9b46e03d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/128f4b91.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Buypass_Class_3_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 03:46:46 crc restorecon[4737]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/81f2d2b1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Autoridad_de_Certificacion_Firmaprofesional_CIF_A62634068.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3bde41ac.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d16a5865.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_EC-384_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/BJCA_Global_Root_CA1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0179095f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 03:46:46 crc restorecon[4737]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ffa7f1eb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9482e63a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d4dae3dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/BJCA_Global_Root_CA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3e359ba6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7e067d03.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/95aff9e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d7746a63.0 not reset as customized 
by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Baltimore_CyberTrust_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/653b494a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3ad48a91.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Network_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Buypass_Class_2_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/54657681.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/82223c44.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 03:46:46 crc restorecon[4737]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e8de2f56.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2d9dafe4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d96b65e2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee64a828.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/40547a79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5a3f0ff8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a780d93.0 not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/34d996fb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_ECC_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/eed8c118.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/89c02a45.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certainly_Root_R1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b1159c4c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_RSA_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 03:46:46 crc restorecon[4737]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d6325660.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d4c339cb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8312c4c1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certainly_Root_E1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8508e720.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5fdd185d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/48bec511.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/69105f4f.0 not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0b9bc432.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Network_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/32888f65.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_ECC_Root-01.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b03dec0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 03:46:46 crc restorecon[4737]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/219d9499.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_ECC_Root-02.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5acf816d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cbf06781.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_RSA_Root-01.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dc99f41e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 03:46:46 crc restorecon[4737]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_RSA_Root-02.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AAA_Certificate_Services.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/985c1f52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8794b4e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_BR_Root_CA_1_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e7c037b4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 03:46:46 crc restorecon[4737]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ef954a4e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_EV_Root_CA_1_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2add47b6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/90c5a3c8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_Root_Class_3_CA_2_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0f3e76e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/53a1b57a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 03:46:46 crc restorecon[4737]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_Root_Class_3_CA_2_EV_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5ad8a5d6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/68dd7389.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9d04f354.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 03:46:46 crc restorecon[4737]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d6437c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/062cdee6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bd43e1dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7f3d5d1d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c491639e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_E46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 03:46:46 crc restorecon[4737]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3513523f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/399e7759.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/feffd413.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d18e9066.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/607986c7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c90bc37d.0 not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1b0f7e5c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e08bfd1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dd8e9d41.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ed39abd0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a3418fda.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bc3f2570.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 03:46:46 crc restorecon[4737]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_High_Assurance_EV_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/244b5494.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/81b9768f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4be590e0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_TLS_ECC_P384_Root_G5.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9846683b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 03:46:46 crc restorecon[4737]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/252252d2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e8e7201.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ISRG_Root_X1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_TLS_RSA4096_Root_G5.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d52c538d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c44cc0c0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_R46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 03:46:46 crc restorecon[4737]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Trusted_Root_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/75d1b2ed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a2c66da8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ecccd8db.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust.net_Certification_Authority__2048_.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/aee5f10d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 03:46:46 crc restorecon[4737]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3e7271e8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0e59380.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4c3982f2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b99d060.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bf64f35b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0a775a30.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/002c0b4f.0 not reset 
as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cc450945.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_EC1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/106f3e4d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b3fb433b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4042bcee.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 03:46:46 crc 
restorecon[4737]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/02265526.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/455f1b52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0d69c7e1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9f727ac7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5e98733a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f0cd152c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 03:46:46 crc restorecon[4737]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dc4d6a89.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6187b673.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/FIRMAPROFESIONAL_CA_ROOT-A_WEB.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ba8887ce.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/068570d1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f081611a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/48a195d8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GDCA_TrustAUTH_R5_ROOT.pem 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0f6fa695.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ab59055e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b92fd57f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GLOBALTRUST_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fa5da96b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1ec40989.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7719f463.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 03:46:46 crc restorecon[4737]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1001acf7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f013ecaf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/626dceaf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c559d742.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1d3472b9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9479c8c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a81e292b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4bfab552.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Go_Daddy_Class_2_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Sectigo_Public_Server_Authentication_Root_E46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Go_Daddy_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e071171e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/57bcb2da.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HARICA_TLS_ECC_Root_CA_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ab5346f4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5046c355.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HARICA_TLS_RSA_Root_CA_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/865fbdf9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/da0cfd1d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/85cde254.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hellenic_Academic_and_Research_Institutions_ECC_RootCA_2015.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cbb3f32b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SecureSign_RootCA11.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hellenic_Academic_and_Research_Institutions_RootCA_2015.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5860aaa6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/31188b5e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HiPKI_Root_CA_-_G1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c7f1359b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f15c80c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hongkong_Post_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/09789157.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ISRG_Root_X2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/18856ac4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e09d511.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/IdenTrust_Commercial_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cf701eeb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d06393bb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/IdenTrust_Public_Sector_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/10531352.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Izenpe.com.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SecureTrust_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0ed035a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsec_e-Szigno_Root_CA_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8160b96c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e8651083.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2c63f966.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_RootCA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsoft_ECC_Root_Certificate_Authority_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d89cda1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/01419da9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_TLS_RSA_Root_CA_2022.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b7a5b843.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsoft_RSA_Root_Certificate_Authority_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bf53fb88.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9591a472.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3afde786.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SwissSign_Gold_CA_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/NAVER_Global_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3fb36b73.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d39b0a2c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a89d74c2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cd58d51e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b7db1890.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/NetLock_Arany__Class_Gold__F__tan__s__tv__ny.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/988a38cb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/60afe812.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f39fc864.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5443e9e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/OISTE_WISeKey_Global_Root_GB_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e73d606e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dfc0fe80.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b66938e9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e1eab7c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/OISTE_WISeKey_Global_Root_GC_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/773e07ad.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c899c73.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d59297b8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ddcda989.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_1_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/749e9e03.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/52b525c7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_RootCA3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d7e8dc79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a819ef2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/08063a00.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b483515.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_2_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/064e0aa9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1f58a078.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6f7454b3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7fa05551.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/76faf6c0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9339512a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f387163d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee37c333.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_3_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e18bfb83.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e442e424.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fe8a2cd8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/23f4c490.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5cd81ad7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_EV_Root_Certification_Authority_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f0c70a8d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7892ad52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SZAFIR_ROOT_CA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4f316efb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_EV_Root_Certification_Authority_RSA_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/06dc52d5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/583d0756.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Sectigo_Public_Server_Authentication_Root_R46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_Root_Certification_Authority_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0bf05006.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/88950faa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9046744a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c860d51.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_Root_Certification_Authority_RSA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6fa5da56.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/33ee480d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Secure_Global_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/63a2c897.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_TLS_ECC_Root_CA_2022.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bdacca6f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ff34af3f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dbff3a01.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_ECC_RootCA1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_Root_CA_-_C1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Class_2_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/406c9bb1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_ECC_Root_CA_-_C3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Services_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SwissSign_Silver_CA_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/99e1b953.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/T-TeleSec_GlobalRoot_Class_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/vTrus_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/T-TeleSec_GlobalRoot_Class_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/14bc7599.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TUBITAK_Kamu_SM_SSL_Kok_Sertifikasi_-_Surum_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TWCA_Global_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a3adc42.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TWCA_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f459871d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telekom_Security_TLS_ECC_Root_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_Root_CA_-_G1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telekom_Security_TLS_RSA_Root_2023.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TeliaSonera_Root_CA_v1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telia_Root_CA_v2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 31 03:46:46 crc restorecon[4737]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8f103249.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f058632f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca-certificates.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TrustAsia_Global_Root_CA_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9bf03295.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/98aaf404.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 03:46:46 crc restorecon[4737]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TrustAsia_Global_Root_CA_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1cef98f5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/073bfcc5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2923b3f9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f249de83.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/edcbddb5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 03:46:46 crc restorecon[4737]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_ECC_Root_CA_-_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_ECC_P256_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9b5697b0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1ae85e5e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b74d2bd5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_ECC_P384_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d887a5bb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 03:46:46 crc restorecon[4737]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9aef356c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TunTrust_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fd64f3fc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e13665f9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/UCA_Extended_Validation_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0f5dc4f3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/da7377f6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 03:46:46 crc restorecon[4737]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/UCA_Global_G2_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c01eb047.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/304d27c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ed858448.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/USERTrust_ECC_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f30dd6ad.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/04f60c28.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 03:46:46 crc restorecon[4737]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/vTrus_ECC_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/USERTrust_RSA_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fc5a8f99.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/35105088.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee532fd5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/XRamp_Global_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/706f604c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 03:46:46 crc restorecon[4737]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/76579174.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/certSIGN_ROOT_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d86cdd1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/882de061.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/certSIGN_ROOT_CA_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f618aec.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a9d40e02.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e-Szigno_Root_CA_2017.pem 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e868b802.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/83e9984f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ePKI_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca6e4ad9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9d6523ce.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4b718d9b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/869fbf79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 03:46:46 crc restorecon[4737]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/containers/registry/f8d22bdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/6e8bbfac not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/54dd7996 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/a4f1bb05 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/207129da not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/c1df39e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/15b8f1cd not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Jan 31 03:46:46 crc restorecon[4737]: 
/var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3523263858 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3523263858/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..2025_02_23_05_27_49.3256605594 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..2025_02_23_05_27_49.3256605594/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 31 03:46:46 crc restorecon[4737]: 
/var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/77bd6913 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/2382c1b1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/704ce128 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/70d16fe0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/bfb95535 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/57a8e8e2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 31 03:46:46 crc restorecon[4737]: 
/var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3413793711 not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3413793711/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/1b9d3e5e not reset as customized by admin to system_u:object_r:container_file_t:s0:c107,c917 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/fddb173c not reset as customized by admin to system_u:object_r:container_file_t:s0:c202,c983 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/95d3c6c4 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c219,c404 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/bfb5fff5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/2aef40aa not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/c0391cad not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/1119e69d not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/660608b4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/8220bd53 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/cluster-policy-controller/85f99d5c not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Jan 31 03:46:46 crc restorecon[4737]: 
/var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/cluster-policy-controller/4b0225f6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928
Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-cert-syncer/9c2a3394 not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007
Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-cert-syncer/e820b243 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928
Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-recovery-controller/1ca52ea0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007
Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-recovery-controller/e6988e45 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928
Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..2025_02_24_06_09_21.2517297950 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..2025_02_24_06_09_21.2517297950/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/6655f00b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/98bc3986 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/08e3458a not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/2a191cb0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/6c4eeefb not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/f61a549c not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553
Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/hostpath-provisioner/24891863 not reset as customized by admin to system_u:object_r:container_file_t:s0:c37,c572
Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/hostpath-provisioner/fbdfd89c not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553
Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/liveness-probe/9b63b3bc not reset as customized by admin to system_u:object_r:container_file_t:s0:c37,c572
Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/liveness-probe/8acde6d6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553
Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/node-driver-registrar/59ecbba3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553
Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/csi-provisioner/685d4be3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553
Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/openshift-route-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/openshift-route-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/openshift-route-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/openshift-route-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.2950937851 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.2950937851/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/containers/route-controller-manager/feaea55e not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abinitio-runtime-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abinitio-runtime-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/accuknox-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/accuknox-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aci-containers-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aci-containers-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airlock-microgateway not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airlock-microgateway/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ako-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ako-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloy not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloy/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anchore-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anchore-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-cloud-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-cloud-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-dcap-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-dcap-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cfm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cfm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium-enterprise not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium-enterprise/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloud-native-postgresql not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloud-native-postgresql/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudera-streams-messaging-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudera-streams-messaging-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudnative-pg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudnative-pg/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cnfv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cnfv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/conjur-follower-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/conjur-follower-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/coroot-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/coroot-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cte-k8s-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cte-k8s-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-deploy-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-deploy-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-release-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-release-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edb-hcp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edb-hcp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-eck-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-eck-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/federatorai-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/federatorai-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fujitsu-enterprise-postgres-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fujitsu-enterprise-postgres-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/function-mesh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/function-mesh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/harness-gitops-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/harness-gitops-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hcp-terraform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hcp-terraform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hpe-ezmeral-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hpe-ezmeral-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-application-gateway-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-application-gateway-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-directory-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-directory-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-dr-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-dr-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-licensing-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-licensing-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-sds-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-sds-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infrastructure-asset-orchestrator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infrastructure-asset-orchestrator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-device-plugins-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-device-plugins-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-kubernetes-power-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-kubernetes-power-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator
not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-openshift-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-openshift-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8s-triliovault not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8s-triliovault/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:46:46 crc restorecon[4737]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-ati-updates not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-ati-updates/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-framework not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-framework/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-ingress not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-ingress/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-licensing not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-licensing/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-sso not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-sso/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-load-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-load-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-loadcore-agents not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:46:46 crc restorecon[4737]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-loadcore-agents/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nats-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nats-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nimbusmosaic-dusim not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nimbusmosaic-dusim/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-rest-api-browser-v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-rest-api-browser-v1/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:46:46 crc restorecon[4737]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-appsec not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-appsec/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-db/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-diagnostics not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-diagnostics/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-logging not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-logging/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-migration not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-migration/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-msg-broker not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-msg-broker/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-notifications not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:46:46 crc restorecon[4737]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-notifications/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-stats-dashboards not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-stats-dashboards/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-storage not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-storage/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-test-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-test-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-ui not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-ui/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-websocket-service not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-websocket-service/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kong-gateway-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kong-gateway-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubearmor-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubearmor-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:46:46 crc restorecon[4737]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lenovo-locd-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lenovo-locd-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memcached-operator-ogaye not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memcached-operator-ogaye/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memory-machine-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memory-machine-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:46:46 crc restorecon[4737]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-enterprise not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-enterprise/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netapp-spark-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netapp-spark-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:46:46 crc restorecon[4737]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-adm-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-adm-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-repository-ha-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:46:46 crc restorecon[4737]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-repository-ha-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nginx-ingress-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nginx-ingress-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nim-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nim-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxiq-operator-certified not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxiq-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxrm-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxrm-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odigos-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odigos-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/open-liberty-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/open-liberty-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:46:46 crc restorecon[4737]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftartifactoryha-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftartifactoryha-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftxray-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftxray-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/operator-certification-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/operator-certification-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:46:46 crc restorecon[4737]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pmem-csi-operator-os not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pmem-csi-operator-os/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo-certified not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-component-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-component-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:46:46 crc restorecon[4737]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-fabric-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-fabric-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sanstoragecsi-operator-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sanstoragecsi-operator-bundle/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/smilecdr-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/smilecdr-operator/catalog.json not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sriov-fec not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sriov-fec/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-commons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-commons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-zookeeper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-zookeeper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:46:46 crc restorecon[4737]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-tsc-client-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-tsc-client-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tawon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tawon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tigera-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tigera-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-secrets-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-secrets-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vcp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vcp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/webotx-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/webotx-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:46:46 crc restorecon[4737]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:46:46 
crc restorecon[4737]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:46:46 crc 
restorecon[4737]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/63709497 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/d966b7fd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/f5773757 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/81c9edb9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/57bf57ee not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/86f5e6aa not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/0aabe31d not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/d2af85c2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:46:46 crc restorecon[4737]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/09d157d9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:46:46 crc restorecon[4737]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acm-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 
Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acm-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acmpca-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acmpca-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigateway-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigateway-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigatewayv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigatewayv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:46:46 crc restorecon[4737]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-applicationautoscaling-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-applicationautoscaling-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-athena-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-athena-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudfront-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudfront-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudtrail-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:46:46 crc restorecon[4737]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudtrail-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatch-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatch-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatchlogs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatchlogs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-documentdb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-documentdb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:46:46 crc restorecon[4737]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-dynamodb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-dynamodb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ec2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ec2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecr-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecr-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecs-controller/catalog.json not reset as customized by admin 
to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-efs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-efs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eks-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eks-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elasticache-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elasticache-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elbv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:46:46 crc restorecon[4737]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elbv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-emrcontainers-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-emrcontainers-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eventbridge-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eventbridge-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-iam-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-iam-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kafka-controller 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kafka-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-keyspaces-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-keyspaces-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kinesis-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kinesis-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kms-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kms-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:46:46 crc restorecon[4737]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-lambda-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-lambda-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-memorydb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-memorydb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-mq-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-mq-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-networkfirewall-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-networkfirewall-controller/catalog.json not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-opensearchservice-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-opensearchservice-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-organizations-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-organizations-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-pipes-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-pipes-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-prometheusservice-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:46:46 crc restorecon[4737]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-prometheusservice-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-rds-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-rds-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-recyclebin-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-recyclebin-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:46:46 crc restorecon[4737]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53resolver-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53resolver-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-s3-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-s3-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sagemaker-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sagemaker-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-secretsmanager-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:46:46 crc restorecon[4737]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-secretsmanager-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ses-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ses-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sfn-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sfn-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sns-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sns-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sqs-controller not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sqs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ssm-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ssm-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-wafv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-wafv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:46:46 crc restorecon[4737]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airflow-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airflow-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloydb-omni-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloydb-omni-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alvearie-imaging-ingestion not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alvearie-imaging-ingestion/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amd-gpu-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amd-gpu-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/analytics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/analytics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/annotationlab not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/annotationlab/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-api-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:46:46 crc restorecon[4737]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-api-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apimatic-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apimatic-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/application-services-metering-operator not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/application-services-metering-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/argocd-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/argocd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/assisted-service-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/assisted-service-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:46:46 crc restorecon[4737]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/automotive-infra not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/automotive-infra/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-efs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-efs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/awss3-operator-registry not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/awss3-operator-registry/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/azure-service-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/azure-service-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/beegfs-csi-driver-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/beegfs-csi-driver-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-k not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:46:46 crc restorecon[4737]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-k/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-karavan-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-karavan-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator-community not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator-community/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-utils-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-utils-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-aas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-aas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-impairment-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-impairment-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:46:46 crc restorecon[4737]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/codeflare-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/codeflare-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-kubevirt-hyperconverged not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-kubevirt-hyperconverged/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-trivy-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-trivy-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-windows-machine-config-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-windows-machine-config-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/customized-user-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/customized-user-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cxl-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cxl-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dapr-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dapr-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datatrucker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datatrucker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dbaas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dbaas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/debezium-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/debezium-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/deployment-validation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/deployment-validation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devopsinabox not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devopsinabox/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-amlen-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-amlen-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-che not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-che/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ecr-secret-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ecr-secret-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edp-keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edp-keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/egressip-ipam-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/egressip-ipam-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ember-csi-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ember-csi-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/etcd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/etcd/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eventing-kogito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eventing-kogito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-secrets-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-secrets-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flink-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flink-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8gb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8gb/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fossul-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fossul-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/github-arc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/github-arc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitops-primer not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitops-primer/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitwebhook-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitwebhook-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/global-load-balancer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/global-load-balancer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/grafana-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/grafana-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/group-sync-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/group-sync-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hawtio-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hawtio-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hedvig-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hedvig-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hive-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hive-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/horreum-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/horreum-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hyperfoil-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hyperfoil-bundle/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator-community not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator-community/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-spectrum-scale-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-spectrum-scale-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibmcloud-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibmcloud-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infinispan not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infinispan/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/integrity-shield-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/integrity-shield-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ipfs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ipfs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/istio-workspace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/istio-workspace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kaoto-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kaoto-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keda not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keda/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keepalived-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keepalived-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-permissions-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-permissions-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/klusterlet not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/klusterlet/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/koku-metrics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/koku-metrics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/konveyor-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/konveyor-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/korrel8r not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/korrel8r/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kuadrant-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kuadrant-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kube-green not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kube-green/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubernetes-imagepuller-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubernetes-imagepuller-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/l5-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/l5-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/layer7-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/layer7-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lbconfig-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lbconfig-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lib-bucket-provisioner not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lib-bucket-provisioner/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/limitador-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/limitador-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logging-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logging-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-helm-operator/catalog.json
not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mariadb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mariadb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marin3r not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:46:46 crc restorecon[4737]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marin3r/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mercury-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mercury-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/microcks not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/microcks/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/move2kube-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/move2kube-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multi-nic-cni-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multi-nic-cni-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-global-hub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-global-hub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:46:46 crc restorecon[4737]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-operators-subscription not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-operators-subscription/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/must-gather-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/must-gather-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/namespace-configuration-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/namespace-configuration-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ncn-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:46:46 crc restorecon[4737]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ncn-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ndmspc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ndmspc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator-m88i not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator-m88i/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nfs-provisioner-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nfs-provisioner-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nlp-server not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nlp-server/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:46:46 crc restorecon[4737]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-discovery-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-discovery-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nsm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nsm-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oadp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oadp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oci-ccm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oci-ccm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:46:46 crc restorecon[4737]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odoo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odoo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opendatahub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opendatahub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openebs not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openebs/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-nfd-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-nfd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-node-upgrade-mutex-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-node-upgrade-mutex-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-qiskit-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-qiskit-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:46:46 crc restorecon[4737]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patch-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patch-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patterns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patterns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pelorus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pelorus-operator/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/percona-xtradb-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/percona-xtradb-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-essentials not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-essentials/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/postgresql not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/postgresql/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/proactive-node-scaling-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:46:46 crc restorecon[4737]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/proactive-node-scaling-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/project-quay not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/project-quay/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus-exporter-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus-exporter-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pulp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pulp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:46:46 crc restorecon[4737]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-messaging-topology-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-messaging-topology-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/reportportal-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/reportportal-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/resource-locker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:46:46 crc restorecon[4737]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/resource-locker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhoas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhoas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ripsaw not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ripsaw/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sailoperator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sailoperator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-commerce-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-commerce-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-data-intelligence-observer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-data-intelligence-observer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-hana-express-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-hana-express-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:46:46 crc restorecon[4737]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-binding-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-binding-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/shipwright-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/shipwright-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sigstore-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sigstore-helm-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snapscheduler not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snapscheduler/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snyk-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:46:46 crc restorecon[4737]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snyk-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/socmmd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/socmmd/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonar-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonar-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosivio not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosivio/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonataflow-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:46:46 crc 
restorecon[4737]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonataflow-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosreport-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosreport-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/spark-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/spark-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/special-resource-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/special-resource-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/strimzi-kafka-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/strimzi-kafka-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/syndesis not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/syndesis/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:46:46 crc restorecon[4737]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tagger not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tagger/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tf-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tf-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:46:46 crc 
restorecon[4737]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tidb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tidb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trident-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trident-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustify-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustify-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ucs-ci-solutions-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ucs-ci-solutions-operator/catalog.json not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/universal-crossplane not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/universal-crossplane/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/varnish-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/varnish-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-config-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-config-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/verticadb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:46:46 crc restorecon[4737]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/verticadb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volume-expander-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volume-expander-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/wandb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:46:46 crc restorecon[4737]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/wandb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:46:47 crc restorecon[4737]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/windup-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:46:47 crc restorecon[4737]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/windup-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:46:47 crc restorecon[4737]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yaks not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:46:47 crc restorecon[4737]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yaks/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:46:47 crc restorecon[4737]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:46:47 crc restorecon[4737]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:46:47 crc restorecon[4737]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:46:47 crc restorecon[4737]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/c0fe7256 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:46:47 crc restorecon[4737]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/c30319e4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:46:47 crc restorecon[4737]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/e6b1dd45 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:46:47 crc restorecon[4737]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/2bb643f0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:46:47 crc restorecon[4737]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/920de426 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:46:47 crc restorecon[4737]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/70fa1e87 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:46:47 crc restorecon[4737]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/a1c12a2f not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:46:47 crc restorecon[4737]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/9442e6c7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:46:47 crc restorecon[4737]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/5b45ec72 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:46:47 crc restorecon[4737]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:46:47 crc restorecon[4737]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:46:47 crc restorecon[4737]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abot-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:46:47 crc restorecon[4737]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abot-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:46:47 crc restorecon[4737]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:46:47 crc restorecon[4737]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:46:47 crc restorecon[4737]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:46:47 crc restorecon[4737]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:46:47 crc restorecon[4737]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:46:47 crc restorecon[4737]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:46:47 crc restorecon[4737]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:46:47 crc restorecon[4737]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:46:47 crc restorecon[4737]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:46:47 crc restorecon[4737]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:46:47 crc restorecon[4737]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:46:47 crc restorecon[4737]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:46:47 crc restorecon[4737]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:46:47 crc restorecon[4737]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:46:47 crc restorecon[4737]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:46:47 crc restorecon[4737]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:46:47 crc restorecon[4737]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:46:47 crc restorecon[4737]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:46:47 crc restorecon[4737]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:46:47 crc restorecon[4737]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:46:47 crc restorecon[4737]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/entando-k8s-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:46:47 crc restorecon[4737]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/entando-k8s-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:46:47 crc restorecon[4737]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:46:47 crc restorecon[4737]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:46:47 crc restorecon[4737]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:46:47 crc restorecon[4737]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:46:47 crc restorecon[4737]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:46:47 crc restorecon[4737]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:46:47 crc restorecon[4737]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator-rhmp not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:46:47 crc restorecon[4737]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:46:47 crc restorecon[4737]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:46:47 crc restorecon[4737]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:46:47 crc restorecon[4737]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-paygo-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:46:47 crc restorecon[4737]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-paygo-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:46:47 crc restorecon[4737]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:46:47 crc restorecon[4737]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:46:47 crc restorecon[4737]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-term-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:46:47 crc restorecon[4737]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-term-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:46:47 crc restorecon[4737]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:46:47 crc restorecon[4737]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:46:47 crc restorecon[4737]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:46:47 crc restorecon[4737]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:46:47 crc restorecon[4737]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/linstor-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:46:47 crc restorecon[4737]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/linstor-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:46:47 crc restorecon[4737]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:46:47 crc restorecon[4737]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:46:47 crc restorecon[4737]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:46:47 crc restorecon[4737]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:46:47 crc restorecon[4737]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:46:47 crc restorecon[4737]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:46:47 crc restorecon[4737]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:46:47 crc restorecon[4737]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:46:47 crc restorecon[4737]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:46:47 crc restorecon[4737]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:46:47 crc restorecon[4737]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:46:47 crc restorecon[4737]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:46:47 crc restorecon[4737]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-deploy-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:46:47 crc restorecon[4737]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-deploy-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:46:47 crc restorecon[4737]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-paygo-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:46:47 crc restorecon[4737]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-paygo-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:46:47 crc restorecon[4737]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:46:47 crc restorecon[4737]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:46:47 crc restorecon[4737]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:46:47 crc restorecon[4737]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:46:47 crc restorecon[4737]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:46:47 crc restorecon[4737]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:46:47 crc restorecon[4737]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vfunction-server-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:46:47 crc restorecon[4737]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vfunction-server-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:46:47 crc restorecon[4737]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:46:47 crc restorecon[4737]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:46:47 crc restorecon[4737]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yugabyte-platform-operator-bundle-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:46:47 crc restorecon[4737]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yugabyte-platform-operator-bundle-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:46:47 crc restorecon[4737]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:46:47 crc restorecon[4737]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:46:47 crc restorecon[4737]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:46:47 crc restorecon[4737]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:46:47 crc restorecon[4737]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:46:47 crc restorecon[4737]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:46:47 crc restorecon[4737]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:46:47 crc restorecon[4737]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:46:47 crc restorecon[4737]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:46:47 crc restorecon[4737]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:46:47 crc restorecon[4737]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:46:47 crc restorecon[4737]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:46:47 crc restorecon[4737]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:46:47 crc restorecon[4737]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:46:47 crc restorecon[4737]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:46:47 crc restorecon[4737]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/3c9f3a59 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:46:47 crc restorecon[4737]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/1091c11b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:46:47 crc restorecon[4737]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/9a6821c6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:46:47 crc restorecon[4737]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/ec0c35e2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:46:47 crc restorecon[4737]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/517f37e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:46:47 crc restorecon[4737]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/6214fe78 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:46:47 crc restorecon[4737]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/ba189c8b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:46:47 crc restorecon[4737]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/351e4f31 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:46:47 crc restorecon[4737]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/c0f219ff not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:46:47 crc restorecon[4737]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/etc-hosts 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Jan 31 03:46:47 crc restorecon[4737]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/8069f607 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Jan 31 03:46:47 crc restorecon[4737]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/559c3d82 not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Jan 31 03:46:47 crc restorecon[4737]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/605ad488 not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Jan 31 03:46:47 crc restorecon[4737]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/148df488 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Jan 31 03:46:47 crc restorecon[4737]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/3bf6dcb4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Jan 31 03:46:47 crc restorecon[4737]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/022a2feb not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Jan 31 03:46:47 crc restorecon[4737]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/938c3924 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Jan 31 03:46:47 crc restorecon[4737]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/729fe23e not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Jan 31 03:46:47 crc restorecon[4737]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/1fd5cbd4 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c247,c522 Jan 31 03:46:47 crc restorecon[4737]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/a96697e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Jan 31 03:46:47 crc restorecon[4737]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/e155ddca not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Jan 31 03:46:47 crc restorecon[4737]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/10dd0e0f not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Jan 31 03:46:47 crc restorecon[4737]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 31 03:46:47 crc restorecon[4737]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..2025_02_24_06_09_35.3018472960 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 31 03:46:47 crc restorecon[4737]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..2025_02_24_06_09_35.3018472960/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 31 03:46:47 crc restorecon[4737]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 31 03:46:47 crc restorecon[4737]: 
/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 31 03:46:47 crc restorecon[4737]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 31 03:46:47 crc restorecon[4737]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..2025_02_24_06_09_35.4262376737 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 31 03:46:47 crc restorecon[4737]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..2025_02_24_06_09_35.4262376737/audit.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 31 03:46:47 crc restorecon[4737]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 31 03:46:47 crc restorecon[4737]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/audit.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 31 03:46:47 crc restorecon[4737]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 31 03:46:47 crc restorecon[4737]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..2025_02_24_06_09_35.2630275752 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 31 03:46:47 crc 
restorecon[4737]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..2025_02_24_06_09_35.2630275752/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 31 03:46:47 crc restorecon[4737]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 31 03:46:47 crc restorecon[4737]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 31 03:46:47 crc restorecon[4737]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 31 03:46:47 crc restorecon[4737]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..2025_02_24_06_09_35.2376963788 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 31 03:46:47 crc restorecon[4737]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..2025_02_24_06_09_35.2376963788/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 31 03:46:47 crc restorecon[4737]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 31 03:46:47 crc restorecon[4737]: 
/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 31 03:46:47 crc restorecon[4737]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 31 03:46:47 crc restorecon[4737]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/containers/oauth-openshift/6f2c8392 not reset as customized by admin to system_u:object_r:container_file_t:s0:c267,c588 Jan 31 03:46:47 crc restorecon[4737]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/containers/oauth-openshift/bd241ad9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 31 03:46:47 crc restorecon[4737]: /var/lib/kubelet/plugins not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 31 03:46:47 crc restorecon[4737]: /var/lib/kubelet/plugins/csi-hostpath not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 31 03:46:47 crc restorecon[4737]: /var/lib/kubelet/plugins/csi-hostpath/csi.sock not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 31 03:46:47 crc restorecon[4737]: /var/lib/kubelet/plugins/kubernetes.io not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 31 03:46:47 crc restorecon[4737]: /var/lib/kubelet/plugins/kubernetes.io/csi not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 31 03:46:47 crc restorecon[4737]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 31 03:46:47 crc restorecon[4737]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983 not reset as customized by admin to 
system_u:object_r:container_file_t:s0 Jan 31 03:46:47 crc restorecon[4737]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 31 03:46:47 crc restorecon[4737]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/vol_data.json not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 31 03:46:47 crc restorecon[4737]: /var/lib/kubelet/plugins_registry not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 31 03:46:47 crc restorecon[4737]: Relabeled /var/usrlocal/bin/kubenswrapper from system_u:object_r:bin_t:s0 to system_u:object_r:kubelet_exec_t:s0 Jan 31 03:46:47 crc kubenswrapper[4827]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 31 03:46:47 crc kubenswrapper[4827]: Flag --minimum-container-ttl-duration has been deprecated, Use --eviction-hard or --eviction-soft instead. Will be removed in a future version. Jan 31 03:46:47 crc kubenswrapper[4827]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 31 03:46:47 crc kubenswrapper[4827]: Flag --register-with-taints has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
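[Editor's note] The deprecation warnings above all point at the same migration: each of these kubelet command-line flags has an equivalent field in the KubeletConfiguration file passed via --config. A minimal sketch of that mapping, with illustrative values only (the endpoint, plugin dir, taint, and reservations below are assumptions, not values taken from this log):

```yaml
# KubeletConfiguration sketch covering the deprecated flags warned about above.
# Field names are from the kubelet config v1beta1 API; values are examples.
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
# replaces --container-runtime-endpoint
containerRuntimeEndpoint: "unix:///var/run/crio/crio.sock"
# replaces --volume-plugin-dir
volumePluginDir: "/etc/kubernetes/kubelet-plugins/volume/exec"
# replaces --register-with-taints
registerWithTaints:
  - key: "node-role.kubernetes.io/master"
    effect: "NoSchedule"
# replaces --system-reserved
systemReserved:
  cpu: "500m"
  memory: "1Gi"
# --minimum-container-ttl-duration is deprecated in favor of eviction settings
evictionHard:
  memory.available: "100Mi"
```

The kubelet still accepts the flags, but logs these warnings at startup; flags set on the command line override the corresponding config-file fields until they are removed.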
Jan 31 03:46:47 crc kubenswrapper[4827]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Jan 31 03:46:47 crc kubenswrapper[4827]: Flag --system-reserved has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 31 03:46:47 crc kubenswrapper[4827]: I0131 03:46:47.829977 4827 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 31 03:46:47 crc kubenswrapper[4827]: W0131 03:46:47.840333 4827 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Jan 31 03:46:47 crc kubenswrapper[4827]: W0131 03:46:47.840379 4827 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Jan 31 03:46:47 crc kubenswrapper[4827]: W0131 03:46:47.840392 4827 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Jan 31 03:46:47 crc kubenswrapper[4827]: W0131 03:46:47.840402 4827 feature_gate.go:330] unrecognized feature gate: GatewayAPI Jan 31 03:46:47 crc kubenswrapper[4827]: W0131 03:46:47.840412 4827 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Jan 31 03:46:47 crc kubenswrapper[4827]: W0131 03:46:47.840425 4827 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Jan 31 03:46:47 crc kubenswrapper[4827]: W0131 03:46:47.840438 4827 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Jan 31 03:46:47 crc kubenswrapper[4827]: W0131 03:46:47.840449 4827 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Jan 31 03:46:47 crc kubenswrapper[4827]: W0131 03:46:47.840460 4827 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Jan 31 03:46:47 crc kubenswrapper[4827]: 
W0131 03:46:47.840470 4827 feature_gate.go:330] unrecognized feature gate: PlatformOperators
Jan 31 03:46:47 crc kubenswrapper[4827]: W0131 03:46:47.840480 4827 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags
Jan 31 03:46:47 crc kubenswrapper[4827]: W0131 03:46:47.840490 4827 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS
Jan 31 03:46:47 crc kubenswrapper[4827]: W0131 03:46:47.840500 4827 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation
Jan 31 03:46:47 crc kubenswrapper[4827]: W0131 03:46:47.840510 4827 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode
Jan 31 03:46:47 crc kubenswrapper[4827]: W0131 03:46:47.840520 4827 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion
Jan 31 03:46:47 crc kubenswrapper[4827]: W0131 03:46:47.840530 4827 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI
Jan 31 03:46:47 crc kubenswrapper[4827]: W0131 03:46:47.840539 4827 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup
Jan 31 03:46:47 crc kubenswrapper[4827]: W0131 03:46:47.840549 4827 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy
Jan 31 03:46:47 crc kubenswrapper[4827]: W0131 03:46:47.840558 4827 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot
Jan 31 03:46:47 crc kubenswrapper[4827]: W0131 03:46:47.840568 4827 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor
Jan 31 03:46:47 crc kubenswrapper[4827]: W0131 03:46:47.840578 4827 feature_gate.go:330] unrecognized feature gate: OnClusterBuild
Jan 31 03:46:47 crc kubenswrapper[4827]: W0131 03:46:47.840589 4827 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission
Jan 31 03:46:47 crc kubenswrapper[4827]: W0131 03:46:47.840599 4827 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity
Jan 31 03:46:47 crc kubenswrapper[4827]: W0131 03:46:47.840609 4827 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration
Jan 31 03:46:47 crc kubenswrapper[4827]: W0131 03:46:47.840651 4827 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig
Jan 31 03:46:47 crc kubenswrapper[4827]: W0131 03:46:47.840662 4827 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS
Jan 31 03:46:47 crc kubenswrapper[4827]: W0131 03:46:47.840672 4827 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather
Jan 31 03:46:47 crc kubenswrapper[4827]: W0131 03:46:47.840682 4827 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities
Jan 31 03:46:47 crc kubenswrapper[4827]: W0131 03:46:47.840691 4827 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization
Jan 31 03:46:47 crc kubenswrapper[4827]: W0131 03:46:47.840701 4827 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB
Jan 31 03:46:47 crc kubenswrapper[4827]: W0131 03:46:47.840710 4827 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer
Jan 31 03:46:47 crc kubenswrapper[4827]: W0131 03:46:47.840723 4827 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release.
Jan 31 03:46:47 crc kubenswrapper[4827]: W0131 03:46:47.840736 4827 feature_gate.go:330] unrecognized feature gate: PinnedImages
Jan 31 03:46:47 crc kubenswrapper[4827]: W0131 03:46:47.840746 4827 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure
Jan 31 03:46:47 crc kubenswrapper[4827]: W0131 03:46:47.840757 4827 feature_gate.go:330] unrecognized feature gate: NewOLM
Jan 31 03:46:47 crc kubenswrapper[4827]: W0131 03:46:47.840768 4827 feature_gate.go:330] unrecognized feature gate: InsightsConfig
Jan 31 03:46:47 crc kubenswrapper[4827]: W0131 03:46:47.840779 4827 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration
Jan 31 03:46:47 crc kubenswrapper[4827]: W0131 03:46:47.840791 4827 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics
Jan 31 03:46:47 crc kubenswrapper[4827]: W0131 03:46:47.840801 4827 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement
Jan 31 03:46:47 crc kubenswrapper[4827]: W0131 03:46:47.840812 4827 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota
Jan 31 03:46:47 crc kubenswrapper[4827]: W0131 03:46:47.840824 4827 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig
Jan 31 03:46:47 crc kubenswrapper[4827]: W0131 03:46:47.840835 4827 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud
Jan 31 03:46:47 crc kubenswrapper[4827]: W0131 03:46:47.840850 4827 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release.
Jan 31 03:46:47 crc kubenswrapper[4827]: W0131 03:46:47.840864 4827 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release.
Jan 31 03:46:47 crc kubenswrapper[4827]: W0131 03:46:47.840875 4827 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource
Jan 31 03:46:47 crc kubenswrapper[4827]: W0131 03:46:47.840927 4827 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall
Jan 31 03:46:47 crc kubenswrapper[4827]: W0131 03:46:47.840940 4827 feature_gate.go:330] unrecognized feature gate: UpgradeStatus
Jan 31 03:46:47 crc kubenswrapper[4827]: W0131 03:46:47.840954 4827 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation
Jan 31 03:46:47 crc kubenswrapper[4827]: W0131 03:46:47.840967 4827 feature_gate.go:330] unrecognized feature gate: OVNObservability
Jan 31 03:46:47 crc kubenswrapper[4827]: W0131 03:46:47.840977 4827 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy
Jan 31 03:46:47 crc kubenswrapper[4827]: W0131 03:46:47.840989 4827 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack
Jan 31 03:46:47 crc kubenswrapper[4827]: W0131 03:46:47.841000 4827 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks
Jan 31 03:46:47 crc kubenswrapper[4827]: W0131 03:46:47.841011 4827 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration
Jan 31 03:46:47 crc kubenswrapper[4827]: W0131 03:46:47.841023 4827 feature_gate.go:330] unrecognized feature gate: Example
Jan 31 03:46:47 crc kubenswrapper[4827]: W0131 03:46:47.841033 4827 feature_gate.go:330] unrecognized feature gate: SignatureStores
Jan 31 03:46:47 crc kubenswrapper[4827]: W0131 03:46:47.841045 4827 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController
Jan 31 03:46:47 crc kubenswrapper[4827]: W0131 03:46:47.841057 4827 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS
Jan 31 03:46:47 crc kubenswrapper[4827]: W0131 03:46:47.841067 4827 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS
Jan 31 03:46:47 crc kubenswrapper[4827]: W0131 03:46:47.841077 4827 feature_gate.go:330] unrecognized feature gate: ManagedBootImages
Jan 31 03:46:47 crc kubenswrapper[4827]: W0131 03:46:47.841087 4827 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs
Jan 31 03:46:47 crc kubenswrapper[4827]: W0131 03:46:47.841097 4827 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements
Jan 31 03:46:47 crc kubenswrapper[4827]: W0131 03:46:47.841107 4827 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS
Jan 31 03:46:47 crc kubenswrapper[4827]: W0131 03:46:47.841117 4827 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters
Jan 31 03:46:47 crc kubenswrapper[4827]: W0131 03:46:47.841134 4827 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release.
Jan 31 03:46:47 crc kubenswrapper[4827]: W0131 03:46:47.841147 4827 feature_gate.go:330] unrecognized feature gate: ExternalOIDC
Jan 31 03:46:47 crc kubenswrapper[4827]: W0131 03:46:47.841157 4827 feature_gate.go:330] unrecognized feature gate: HardwareSpeed
Jan 31 03:46:47 crc kubenswrapper[4827]: W0131 03:46:47.841169 4827 feature_gate.go:330] unrecognized feature gate: DNSNameResolver
Jan 31 03:46:47 crc kubenswrapper[4827]: W0131 03:46:47.841181 4827 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes
Jan 31 03:46:47 crc kubenswrapper[4827]: W0131 03:46:47.841193 4827 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification
Jan 31 03:46:47 crc kubenswrapper[4827]: W0131 03:46:47.841205 4827 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets
Jan 31 03:46:47 crc kubenswrapper[4827]: W0131 03:46:47.841215 4827 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform
Jan 31 03:46:47 crc kubenswrapper[4827]: I0131 03:46:47.841422 4827 flags.go:64] FLAG: --address="0.0.0.0"
Jan 31 03:46:47 crc kubenswrapper[4827]: I0131 03:46:47.841445 4827 flags.go:64] FLAG: --allowed-unsafe-sysctls="[]"
Jan 31 03:46:47 crc kubenswrapper[4827]: I0131 03:46:47.841467 4827 flags.go:64] FLAG: --anonymous-auth="true"
Jan 31 03:46:47 crc kubenswrapper[4827]: I0131 03:46:47.841483 4827 flags.go:64] FLAG: --application-metrics-count-limit="100"
Jan 31 03:46:47 crc kubenswrapper[4827]: I0131 03:46:47.841497 4827 flags.go:64] FLAG: --authentication-token-webhook="false"
Jan 31 03:46:47 crc kubenswrapper[4827]: I0131 03:46:47.841508 4827 flags.go:64] FLAG: --authentication-token-webhook-cache-ttl="2m0s"
Jan 31 03:46:47 crc kubenswrapper[4827]: I0131 03:46:47.841523 4827 flags.go:64] FLAG: --authorization-mode="AlwaysAllow"
Jan 31 03:46:47 crc kubenswrapper[4827]: I0131 03:46:47.841536 4827 flags.go:64] FLAG: --authorization-webhook-cache-authorized-ttl="5m0s"
Jan 31 03:46:47 crc kubenswrapper[4827]: I0131 03:46:47.841549 4827 flags.go:64] FLAG: --authorization-webhook-cache-unauthorized-ttl="30s"
Jan 31 03:46:47 crc kubenswrapper[4827]: I0131 03:46:47.841561 4827 flags.go:64] FLAG: --boot-id-file="/proc/sys/kernel/random/boot_id"
Jan 31 03:46:47 crc kubenswrapper[4827]: I0131 03:46:47.841573 4827 flags.go:64] FLAG: --bootstrap-kubeconfig="/etc/kubernetes/kubeconfig"
Jan 31 03:46:47 crc kubenswrapper[4827]: I0131 03:46:47.841588 4827 flags.go:64] FLAG: --cert-dir="/var/lib/kubelet/pki"
Jan 31 03:46:47 crc kubenswrapper[4827]: I0131 03:46:47.841599 4827 flags.go:64] FLAG: --cgroup-driver="cgroupfs"
Jan 31 03:46:47 crc kubenswrapper[4827]: I0131 03:46:47.841611 4827 flags.go:64] FLAG: --cgroup-root=""
Jan 31 03:46:47 crc kubenswrapper[4827]: I0131 03:46:47.841622 4827 flags.go:64] FLAG: --cgroups-per-qos="true"
Jan 31 03:46:47 crc kubenswrapper[4827]: I0131 03:46:47.841634 4827 flags.go:64] FLAG: --client-ca-file=""
Jan 31 03:46:47 crc kubenswrapper[4827]: I0131 03:46:47.841645 4827 flags.go:64] FLAG: --cloud-config=""
Jan 31 03:46:47 crc kubenswrapper[4827]: I0131 03:46:47.841656 4827 flags.go:64] FLAG: --cloud-provider=""
Jan 31 03:46:47 crc kubenswrapper[4827]: I0131 03:46:47.841667 4827 flags.go:64] FLAG: --cluster-dns="[]"
Jan 31 03:46:47 crc kubenswrapper[4827]: I0131 03:46:47.841680 4827 flags.go:64] FLAG: --cluster-domain=""
Jan 31 03:46:47 crc kubenswrapper[4827]: I0131 03:46:47.841693 4827 flags.go:64] FLAG: --config="/etc/kubernetes/kubelet.conf"
Jan 31 03:46:47 crc kubenswrapper[4827]: I0131 03:46:47.841705 4827 flags.go:64] FLAG: --config-dir=""
Jan 31 03:46:47 crc kubenswrapper[4827]: I0131 03:46:47.841716 4827 flags.go:64] FLAG: --container-hints="/etc/cadvisor/container_hints.json"
Jan 31 03:46:47 crc kubenswrapper[4827]: I0131 03:46:47.841728 4827 flags.go:64] FLAG: --container-log-max-files="5"
Jan 31 03:46:47 crc kubenswrapper[4827]: I0131 03:46:47.841742 4827 flags.go:64] FLAG: --container-log-max-size="10Mi"
Jan 31 03:46:47 crc kubenswrapper[4827]: I0131 03:46:47.841753 4827 flags.go:64] FLAG: --container-runtime-endpoint="/var/run/crio/crio.sock"
Jan 31 03:46:47 crc kubenswrapper[4827]: I0131 03:46:47.841765 4827 flags.go:64] FLAG: --containerd="/run/containerd/containerd.sock"
Jan 31 03:46:47 crc kubenswrapper[4827]: I0131 03:46:47.841777 4827 flags.go:64] FLAG: --containerd-namespace="k8s.io"
Jan 31 03:46:47 crc kubenswrapper[4827]: I0131 03:46:47.841789 4827 flags.go:64] FLAG: --contention-profiling="false"
Jan 31 03:46:47 crc kubenswrapper[4827]: I0131 03:46:47.841801 4827 flags.go:64] FLAG: --cpu-cfs-quota="true"
Jan 31 03:46:47 crc kubenswrapper[4827]: I0131 03:46:47.841812 4827 flags.go:64] FLAG: --cpu-cfs-quota-period="100ms"
Jan 31 03:46:47 crc kubenswrapper[4827]: I0131 03:46:47.841824 4827 flags.go:64] FLAG: --cpu-manager-policy="none"
Jan 31 03:46:47 crc kubenswrapper[4827]: I0131 03:46:47.841836 4827 flags.go:64] FLAG: --cpu-manager-policy-options=""
Jan 31 03:46:47 crc kubenswrapper[4827]: I0131 03:46:47.841850 4827 flags.go:64] FLAG: --cpu-manager-reconcile-period="10s"
Jan 31 03:46:47 crc kubenswrapper[4827]: I0131 03:46:47.841861 4827 flags.go:64] FLAG: --enable-controller-attach-detach="true"
Jan 31 03:46:47 crc kubenswrapper[4827]: I0131 03:46:47.841872 4827 flags.go:64] FLAG: --enable-debugging-handlers="true"
Jan 31 03:46:47 crc kubenswrapper[4827]: I0131 03:46:47.841919 4827 flags.go:64] FLAG: --enable-load-reader="false"
Jan 31 03:46:47 crc kubenswrapper[4827]: I0131 03:46:47.841931 4827 flags.go:64] FLAG: --enable-server="true"
Jan 31 03:46:47 crc kubenswrapper[4827]: I0131 03:46:47.841942 4827 flags.go:64] FLAG: --enforce-node-allocatable="[pods]"
Jan 31 03:46:47 crc kubenswrapper[4827]: I0131 03:46:47.841959 4827 flags.go:64] FLAG: --event-burst="100"
Jan 31 03:46:47 crc kubenswrapper[4827]: I0131 03:46:47.841971 4827 flags.go:64] FLAG: --event-qps="50"
Jan 31 03:46:47 crc kubenswrapper[4827]: I0131 03:46:47.841982 4827 flags.go:64] FLAG: --event-storage-age-limit="default=0"
Jan 31 03:46:47 crc kubenswrapper[4827]: I0131 03:46:47.841993 4827 flags.go:64] FLAG: --event-storage-event-limit="default=0"
Jan 31 03:46:47 crc kubenswrapper[4827]: I0131 03:46:47.842005 4827 flags.go:64] FLAG: --eviction-hard=""
Jan 31 03:46:47 crc kubenswrapper[4827]: I0131 03:46:47.842018 4827 flags.go:64] FLAG: --eviction-max-pod-grace-period="0"
Jan 31 03:46:47 crc kubenswrapper[4827]: I0131 03:46:47.842031 4827 flags.go:64] FLAG: --eviction-minimum-reclaim=""
Jan 31 03:46:47 crc kubenswrapper[4827]: I0131 03:46:47.842042 4827 flags.go:64] FLAG: --eviction-pressure-transition-period="5m0s"
Jan 31 03:46:47 crc kubenswrapper[4827]: I0131 03:46:47.842057 4827 flags.go:64] FLAG: --eviction-soft=""
Jan 31 03:46:47 crc kubenswrapper[4827]: I0131 03:46:47.842069 4827 flags.go:64] FLAG: --eviction-soft-grace-period=""
Jan 31 03:46:47 crc kubenswrapper[4827]: I0131 03:46:47.842080 4827 flags.go:64] FLAG: --exit-on-lock-contention="false"
Jan 31 03:46:47 crc kubenswrapper[4827]: I0131 03:46:47.842092 4827 flags.go:64] FLAG: --experimental-allocatable-ignore-eviction="false"
Jan 31 03:46:47 crc kubenswrapper[4827]: I0131 03:46:47.842104 4827 flags.go:64] FLAG: --experimental-mounter-path=""
Jan 31 03:46:47 crc kubenswrapper[4827]: I0131 03:46:47.842116 4827 flags.go:64] FLAG: --fail-cgroupv1="false"
Jan 31 03:46:47 crc kubenswrapper[4827]: I0131 03:46:47.842127 4827 flags.go:64] FLAG: --fail-swap-on="true"
Jan 31 03:46:47 crc kubenswrapper[4827]: I0131 03:46:47.842138 4827 flags.go:64] FLAG: --feature-gates=""
Jan 31 03:46:47 crc kubenswrapper[4827]: I0131 03:46:47.842152 4827 flags.go:64] FLAG: --file-check-frequency="20s"
Jan 31 03:46:47 crc kubenswrapper[4827]: I0131 03:46:47.842164 4827 flags.go:64] FLAG: --global-housekeeping-interval="1m0s"
Jan 31 03:46:47 crc kubenswrapper[4827]: I0131 03:46:47.842176 4827 flags.go:64] FLAG: --hairpin-mode="promiscuous-bridge"
Jan 31 03:46:47 crc kubenswrapper[4827]: I0131 03:46:47.842188 4827 flags.go:64] FLAG: --healthz-bind-address="127.0.0.1"
Jan 31 03:46:47 crc kubenswrapper[4827]: I0131 03:46:47.842200 4827 flags.go:64] FLAG: --healthz-port="10248"
Jan 31 03:46:47 crc kubenswrapper[4827]: I0131 03:46:47.842215 4827 flags.go:64] FLAG: --help="false"
Jan 31 03:46:47 crc kubenswrapper[4827]: I0131 03:46:47.842226 4827 flags.go:64] FLAG: --hostname-override=""
Jan 31 03:46:47 crc kubenswrapper[4827]: I0131 03:46:47.842237 4827 flags.go:64] FLAG: --housekeeping-interval="10s"
Jan 31 03:46:47 crc kubenswrapper[4827]: I0131 03:46:47.842249 4827 flags.go:64] FLAG: --http-check-frequency="20s"
Jan 31 03:46:47 crc kubenswrapper[4827]: I0131 03:46:47.842260 4827 flags.go:64] FLAG: --image-credential-provider-bin-dir=""
Jan 31 03:46:47 crc kubenswrapper[4827]: I0131 03:46:47.842272 4827 flags.go:64] FLAG: --image-credential-provider-config=""
Jan 31 03:46:47 crc kubenswrapper[4827]: I0131 03:46:47.842283 4827 flags.go:64] FLAG: --image-gc-high-threshold="85"
Jan 31 03:46:47 crc kubenswrapper[4827]: I0131 03:46:47.842294 4827 flags.go:64] FLAG: --image-gc-low-threshold="80"
Jan 31 03:46:47 crc kubenswrapper[4827]: I0131 03:46:47.842305 4827 flags.go:64] FLAG: --image-service-endpoint=""
Jan 31 03:46:47 crc kubenswrapper[4827]: I0131 03:46:47.842318 4827 flags.go:64] FLAG: --kernel-memcg-notification="false"
Jan 31 03:46:47 crc kubenswrapper[4827]: I0131 03:46:47.842329 4827 flags.go:64] FLAG: --kube-api-burst="100"
Jan 31 03:46:47 crc kubenswrapper[4827]: I0131 03:46:47.842341 4827 flags.go:64] FLAG: --kube-api-content-type="application/vnd.kubernetes.protobuf"
Jan 31 03:46:47 crc kubenswrapper[4827]: I0131 03:46:47.842353 4827 flags.go:64] FLAG: --kube-api-qps="50"
Jan 31 03:46:47 crc kubenswrapper[4827]: I0131 03:46:47.842365 4827 flags.go:64] FLAG: --kube-reserved=""
Jan 31 03:46:47 crc kubenswrapper[4827]: I0131 03:46:47.842376 4827 flags.go:64] FLAG: --kube-reserved-cgroup=""
Jan 31 03:46:47 crc kubenswrapper[4827]: I0131 03:46:47.842389 4827 flags.go:64] FLAG: --kubeconfig="/var/lib/kubelet/kubeconfig"
Jan 31 03:46:47 crc kubenswrapper[4827]: I0131 03:46:47.842402 4827 flags.go:64] FLAG: --kubelet-cgroups=""
Jan 31 03:46:47 crc kubenswrapper[4827]: I0131 03:46:47.842413 4827 flags.go:64] FLAG: --local-storage-capacity-isolation="true"
Jan 31 03:46:47 crc kubenswrapper[4827]: I0131 03:46:47.842425 4827 flags.go:64] FLAG: --lock-file=""
Jan 31 03:46:47 crc kubenswrapper[4827]: I0131 03:46:47.842436 4827 flags.go:64] FLAG: --log-cadvisor-usage="false"
Jan 31 03:46:47 crc kubenswrapper[4827]: I0131 03:46:47.842447 4827 flags.go:64] FLAG: --log-flush-frequency="5s"
Jan 31 03:46:47 crc kubenswrapper[4827]: I0131 03:46:47.842459 4827 flags.go:64] FLAG: --log-json-info-buffer-size="0"
Jan 31 03:46:47 crc kubenswrapper[4827]: I0131 03:46:47.842491 4827 flags.go:64] FLAG: --log-json-split-stream="false"
Jan 31 03:46:47 crc kubenswrapper[4827]: I0131 03:46:47.842505 4827 flags.go:64] FLAG: --log-text-info-buffer-size="0"
Jan 31 03:46:47 crc kubenswrapper[4827]: I0131 03:46:47.842517 4827 flags.go:64] FLAG: --log-text-split-stream="false"
Jan 31 03:46:47 crc kubenswrapper[4827]: I0131 03:46:47.842529 4827 flags.go:64] FLAG: --logging-format="text"
Jan 31 03:46:47 crc kubenswrapper[4827]: I0131 03:46:47.842541 4827 flags.go:64] FLAG: --machine-id-file="/etc/machine-id,/var/lib/dbus/machine-id"
Jan 31 03:46:47 crc kubenswrapper[4827]: I0131 03:46:47.842553 4827 flags.go:64] FLAG: --make-iptables-util-chains="true"
Jan 31 03:46:47 crc kubenswrapper[4827]: I0131 03:46:47.842564 4827 flags.go:64] FLAG: --manifest-url=""
Jan 31 03:46:47 crc kubenswrapper[4827]: I0131 03:46:47.842576 4827 flags.go:64] FLAG: --manifest-url-header=""
Jan 31 03:46:47 crc kubenswrapper[4827]: I0131 03:46:47.842591 4827 flags.go:64] FLAG: --max-housekeeping-interval="15s"
Jan 31 03:46:47 crc kubenswrapper[4827]: I0131 03:46:47.842603 4827 flags.go:64] FLAG: --max-open-files="1000000"
Jan 31 03:46:47 crc kubenswrapper[4827]: I0131 03:46:47.842618 4827 flags.go:64] FLAG: --max-pods="110"
Jan 31 03:46:47 crc kubenswrapper[4827]: I0131 03:46:47.842630 4827 flags.go:64] FLAG: --maximum-dead-containers="-1"
Jan 31 03:46:47 crc kubenswrapper[4827]: I0131 03:46:47.842642 4827 flags.go:64] FLAG: --maximum-dead-containers-per-container="1"
Jan 31 03:46:47 crc kubenswrapper[4827]: I0131 03:46:47.842654 4827 flags.go:64] FLAG: --memory-manager-policy="None"
Jan 31 03:46:47 crc kubenswrapper[4827]: I0131 03:46:47.842665 4827 flags.go:64] FLAG: --minimum-container-ttl-duration="6m0s"
Jan 31 03:46:47 crc kubenswrapper[4827]: I0131 03:46:47.842678 4827 flags.go:64] FLAG: --minimum-image-ttl-duration="2m0s"
Jan 31 03:46:47 crc kubenswrapper[4827]: I0131 03:46:47.842689 4827 flags.go:64] FLAG: --node-ip="192.168.126.11"
Jan 31 03:46:47 crc kubenswrapper[4827]: I0131 03:46:47.842701 4827 flags.go:64] FLAG: --node-labels="node-role.kubernetes.io/control-plane=,node-role.kubernetes.io/master=,node.openshift.io/os_id=rhcos"
Jan 31 03:46:47 crc kubenswrapper[4827]: I0131 03:46:47.842727 4827 flags.go:64] FLAG: --node-status-max-images="50"
Jan 31 03:46:47 crc kubenswrapper[4827]: I0131 03:46:47.842740 4827 flags.go:64] FLAG: --node-status-update-frequency="10s"
Jan 31 03:46:47 crc kubenswrapper[4827]: I0131 03:46:47.842752 4827 flags.go:64] FLAG: --oom-score-adj="-999"
Jan 31 03:46:47 crc kubenswrapper[4827]: I0131 03:46:47.842764 4827 flags.go:64] FLAG: --pod-cidr=""
Jan 31 03:46:47 crc kubenswrapper[4827]: I0131 03:46:47.842775 4827 flags.go:64] FLAG: --pod-infra-container-image="quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:33549946e22a9ffa738fd94b1345f90921bc8f92fa6137784cb33c77ad806f9d"
Jan 31 03:46:47 crc kubenswrapper[4827]: I0131 03:46:47.842792 4827 flags.go:64] FLAG: --pod-manifest-path=""
Jan 31 03:46:47 crc kubenswrapper[4827]: I0131 03:46:47.842804 4827 flags.go:64] FLAG: --pod-max-pids="-1"
Jan 31 03:46:47 crc kubenswrapper[4827]: I0131 03:46:47.842817 4827 flags.go:64] FLAG: --pods-per-core="0"
Jan 31 03:46:47 crc kubenswrapper[4827]: I0131 03:46:47.842828 4827 flags.go:64] FLAG: --port="10250"
Jan 31 03:46:47 crc kubenswrapper[4827]: I0131 03:46:47.842841 4827 flags.go:64] FLAG: --protect-kernel-defaults="false"
Jan 31 03:46:47 crc kubenswrapper[4827]: I0131 03:46:47.842852 4827 flags.go:64] FLAG: --provider-id=""
Jan 31 03:46:47 crc kubenswrapper[4827]: I0131 03:46:47.842863 4827 flags.go:64] FLAG: --qos-reserved=""
Jan 31 03:46:47 crc kubenswrapper[4827]: I0131 03:46:47.842875 4827 flags.go:64] FLAG: --read-only-port="10255"
Jan 31 03:46:47 crc kubenswrapper[4827]: I0131 03:46:47.842927 4827 flags.go:64] FLAG: --register-node="true"
Jan 31 03:46:47 crc kubenswrapper[4827]: I0131 03:46:47.842940 4827 flags.go:64] FLAG: --register-schedulable="true"
Jan 31 03:46:47 crc kubenswrapper[4827]: I0131 03:46:47.842952 4827 flags.go:64] FLAG: --register-with-taints="node-role.kubernetes.io/master=:NoSchedule"
Jan 31 03:46:47 crc kubenswrapper[4827]: I0131 03:46:47.842973 4827 flags.go:64] FLAG: --registry-burst="10"
Jan 31 03:46:47 crc kubenswrapper[4827]: I0131 03:46:47.842986 4827 flags.go:64] FLAG: --registry-qps="5"
Jan 31 03:46:47 crc kubenswrapper[4827]: I0131 03:46:47.842997 4827 flags.go:64] FLAG: --reserved-cpus=""
Jan 31 03:46:47 crc kubenswrapper[4827]: I0131 03:46:47.843013 4827 flags.go:64] FLAG: --reserved-memory=""
Jan 31 03:46:47 crc kubenswrapper[4827]: I0131 03:46:47.843027 4827 flags.go:64] FLAG: --resolv-conf="/etc/resolv.conf"
Jan 31 03:46:47 crc kubenswrapper[4827]: I0131 03:46:47.843040 4827 flags.go:64] FLAG: --root-dir="/var/lib/kubelet"
Jan 31 03:46:47 crc kubenswrapper[4827]: I0131 03:46:47.843053 4827 flags.go:64] FLAG: --rotate-certificates="false"
Jan 31 03:46:47 crc kubenswrapper[4827]: I0131 03:46:47.843065 4827 flags.go:64] FLAG: --rotate-server-certificates="false"
Jan 31 03:46:47 crc kubenswrapper[4827]: I0131 03:46:47.843077 4827 flags.go:64] FLAG: --runonce="false"
Jan 31 03:46:47 crc kubenswrapper[4827]: I0131 03:46:47.843088 4827 flags.go:64] FLAG: --runtime-cgroups="/system.slice/crio.service"
Jan 31 03:46:47 crc kubenswrapper[4827]: I0131 03:46:47.843100 4827 flags.go:64] FLAG: --runtime-request-timeout="2m0s"
Jan 31 03:46:47 crc kubenswrapper[4827]: I0131 03:46:47.843134 4827 flags.go:64] FLAG: --seccomp-default="false"
Jan 31 03:46:47 crc kubenswrapper[4827]: I0131 03:46:47.843147 4827 flags.go:64] FLAG: --serialize-image-pulls="true"
Jan 31 03:46:47 crc kubenswrapper[4827]: I0131 03:46:47.843159 4827 flags.go:64] FLAG: --storage-driver-buffer-duration="1m0s"
Jan 31 03:46:47 crc kubenswrapper[4827]: I0131 03:46:47.843171 4827 flags.go:64] FLAG: --storage-driver-db="cadvisor"
Jan 31 03:46:47 crc kubenswrapper[4827]: I0131 03:46:47.843183 4827 flags.go:64] FLAG: --storage-driver-host="localhost:8086"
Jan 31 03:46:47 crc kubenswrapper[4827]: I0131 03:46:47.843195 4827 flags.go:64] FLAG: --storage-driver-password="root"
Jan 31 03:46:47 crc kubenswrapper[4827]: I0131 03:46:47.843207 4827 flags.go:64] FLAG: --storage-driver-secure="false"
Jan 31 03:46:47 crc kubenswrapper[4827]: I0131 03:46:47.843232 4827 flags.go:64] FLAG: --storage-driver-table="stats"
Jan 31 03:46:47 crc kubenswrapper[4827]: I0131 03:46:47.843258 4827 flags.go:64] FLAG: --storage-driver-user="root"
Jan 31 03:46:47 crc kubenswrapper[4827]: I0131 03:46:47.843271 4827 flags.go:64] FLAG: --streaming-connection-idle-timeout="4h0m0s"
Jan 31 03:46:47 crc kubenswrapper[4827]: I0131 03:46:47.843284 4827 flags.go:64] FLAG: --sync-frequency="1m0s"
Jan 31 03:46:47 crc kubenswrapper[4827]: I0131 03:46:47.843296 4827 flags.go:64] FLAG: --system-cgroups=""
Jan 31 03:46:47 crc kubenswrapper[4827]: I0131 03:46:47.843308 4827 flags.go:64] FLAG: --system-reserved="cpu=200m,ephemeral-storage=350Mi,memory=350Mi"
Jan 31 03:46:47 crc kubenswrapper[4827]: I0131 03:46:47.843328 4827 flags.go:64] FLAG: --system-reserved-cgroup=""
Jan 31 03:46:47 crc kubenswrapper[4827]: I0131 03:46:47.843339 4827 flags.go:64] FLAG: --tls-cert-file=""
Jan 31 03:46:47 crc kubenswrapper[4827]: I0131 03:46:47.843350 4827 flags.go:64] FLAG: --tls-cipher-suites="[]"
Jan 31 03:46:47 crc kubenswrapper[4827]: I0131 03:46:47.843364 4827 flags.go:64] FLAG: --tls-min-version=""
Jan 31 03:46:47 crc kubenswrapper[4827]: I0131 03:46:47.843376 4827 flags.go:64] FLAG: --tls-private-key-file=""
Jan 31 03:46:47 crc kubenswrapper[4827]: I0131 03:46:47.843387 4827 flags.go:64] FLAG: --topology-manager-policy="none"
Jan 31 03:46:47 crc kubenswrapper[4827]: I0131 03:46:47.843399 4827 flags.go:64] FLAG: --topology-manager-policy-options=""
Jan 31 03:46:47 crc kubenswrapper[4827]: I0131 03:46:47.843410 4827 flags.go:64] FLAG: --topology-manager-scope="container"
Jan 31 03:46:47 crc kubenswrapper[4827]: I0131 03:46:47.843423 4827 flags.go:64] FLAG: --v="2"
Jan 31 03:46:47 crc kubenswrapper[4827]: I0131 03:46:47.843438 4827 flags.go:64] FLAG: --version="false"
Jan 31 03:46:47 crc kubenswrapper[4827]: I0131 03:46:47.843452 4827 flags.go:64] FLAG: --vmodule=""
Jan 31 03:46:47 crc kubenswrapper[4827]: I0131 03:46:47.843465 4827 flags.go:64] FLAG: --volume-plugin-dir="/etc/kubernetes/kubelet-plugins/volume/exec"
Jan 31 03:46:47 crc kubenswrapper[4827]: I0131 03:46:47.843478 4827 flags.go:64] FLAG: --volume-stats-agg-period="1m0s"
Jan 31 03:46:47 crc kubenswrapper[4827]: W0131 03:46:47.843729 4827 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation
Jan 31 03:46:47 crc kubenswrapper[4827]: W0131 03:46:47.843745 4827 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration
Jan 31 03:46:47 crc kubenswrapper[4827]: W0131 03:46:47.843757 4827 feature_gate.go:330] unrecognized feature gate: GatewayAPI
Jan 31 03:46:47 crc kubenswrapper[4827]: W0131 03:46:47.843770 4827 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release.
Jan 31 03:46:47 crc kubenswrapper[4827]: W0131 03:46:47.843784 4827 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs
Jan 31 03:46:47 crc kubenswrapper[4827]: W0131 03:46:47.843796 4827 feature_gate.go:330] unrecognized feature gate: NewOLM
Jan 31 03:46:47 crc kubenswrapper[4827]: W0131 03:46:47.843807 4827 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles
Jan 31 03:46:47 crc kubenswrapper[4827]: W0131 03:46:47.843817 4827 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer
Jan 31 03:46:47 crc kubenswrapper[4827]: W0131 03:46:47.843829 4827 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS
Jan 31 03:46:47 crc kubenswrapper[4827]: W0131 03:46:47.843840 4827 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets
Jan 31 03:46:47 crc kubenswrapper[4827]: W0131 03:46:47.843851 4827 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor
Jan 31 03:46:47 crc kubenswrapper[4827]: W0131 03:46:47.843862 4827 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes
Jan 31 03:46:47 crc kubenswrapper[4827]: W0131 03:46:47.843873 4827 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack
Jan 31 03:46:47 crc kubenswrapper[4827]: W0131 03:46:47.843924 4827 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization
Jan 31 03:46:47 crc kubenswrapper[4827]: W0131 03:46:47.843936 4827 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI
Jan 31 03:46:47 crc kubenswrapper[4827]: W0131 03:46:47.843947 4827 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup
Jan 31 03:46:47 crc kubenswrapper[4827]: W0131 03:46:47.843960 4827 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release.
Jan 31 03:46:47 crc kubenswrapper[4827]: W0131 03:46:47.843972 4827 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation
Jan 31 03:46:47 crc kubenswrapper[4827]: W0131 03:46:47.843983 4827 feature_gate.go:330] unrecognized feature gate: ManagedBootImages
Jan 31 03:46:47 crc kubenswrapper[4827]: W0131 03:46:47.843994 4827 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig
Jan 31 03:46:47 crc kubenswrapper[4827]: W0131 03:46:47.844004 4827 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags
Jan 31 03:46:47 crc kubenswrapper[4827]: W0131 03:46:47.844014 4827 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode
Jan 31 03:46:47 crc kubenswrapper[4827]: W0131 03:46:47.844024 4827 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS
Jan 31 03:46:47 crc kubenswrapper[4827]: W0131 03:46:47.844033 4827 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy
Jan 31 03:46:47 crc kubenswrapper[4827]: W0131 03:46:47.844043 4827 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements
Jan 31 03:46:47 crc kubenswrapper[4827]: W0131 03:46:47.844053 4827 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform
Jan 31 03:46:47 crc kubenswrapper[4827]: W0131 03:46:47.844065 4827 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet
Jan 31 03:46:47 crc kubenswrapper[4827]: W0131 03:46:47.844076 4827 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS
Jan 31 03:46:47 crc kubenswrapper[4827]: W0131 03:46:47.844085 4827 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure
Jan 31 03:46:47 crc kubenswrapper[4827]: W0131 03:46:47.844095 4827 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud
Jan 31 03:46:47 crc kubenswrapper[4827]: W0131 03:46:47.844105 4827 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota
Jan 31 03:46:47 crc kubenswrapper[4827]: W0131 03:46:47.844115 4827 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall
Jan 31 03:46:47 crc kubenswrapper[4827]: W0131 03:46:47.844128 4827 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release.
Jan 31 03:46:47 crc kubenswrapper[4827]: W0131 03:46:47.844139 4827 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics
Jan 31 03:46:47 crc kubenswrapper[4827]: W0131 03:46:47.844151 4827 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration
Jan 31 03:46:47 crc kubenswrapper[4827]: W0131 03:46:47.844162 4827 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities
Jan 31 03:46:47 crc kubenswrapper[4827]: W0131 03:46:47.844173 4827 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS
Jan 31 03:46:47 crc kubenswrapper[4827]: W0131 03:46:47.844199 4827 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion
Jan 31 03:46:47 crc kubenswrapper[4827]: W0131 03:46:47.844213 4827 feature_gate.go:330] unrecognized feature gate: HardwareSpeed
Jan 31 03:46:47 crc kubenswrapper[4827]: W0131 03:46:47.844225 4827 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release.
Jan 31 03:46:47 crc kubenswrapper[4827]: W0131 03:46:47.844239 4827 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes
Jan 31 03:46:47 crc kubenswrapper[4827]: W0131 03:46:47.844252 4827 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission
Jan 31 03:46:47 crc kubenswrapper[4827]: W0131 03:46:47.844263 4827 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource
Jan 31 03:46:47 crc kubenswrapper[4827]: W0131 03:46:47.844274 4827 feature_gate.go:330] unrecognized feature gate: PlatformOperators
Jan 31 03:46:47 crc kubenswrapper[4827]: W0131 03:46:47.844284 4827 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather
Jan 31 03:46:47 crc kubenswrapper[4827]: W0131 03:46:47.844299 4827 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy
Jan 31 03:46:47 crc kubenswrapper[4827]: W0131 03:46:47.844309 4827 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS
Jan 31 03:46:47 crc kubenswrapper[4827]: W0131 03:46:47.844319 4827 feature_gate.go:330] unrecognized feature gate: Example
Jan 31 03:46:47 crc kubenswrapper[4827]: W0131 03:46:47.844329 4827 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot
Jan 31 03:46:47 crc kubenswrapper[4827]: W0131 03:46:47.844339 4827 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS
Jan 31 03:46:47 crc kubenswrapper[4827]: W0131 03:46:47.844349 4827 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration
Jan 31 03:46:47 crc kubenswrapper[4827]: W0131 03:46:47.844359 4827 feature_gate.go:330] unrecognized feature gate: SignatureStores
Jan 31 03:46:47 crc kubenswrapper[4827]: W0131 03:46:47.844369 4827 feature_gate.go:330] unrecognized feature gate: OnClusterBuild
Jan 31 03:46:47 crc kubenswrapper[4827]: W0131 03:46:47.844379 4827 feature_gate.go:330] unrecognized feature gate: OVNObservability
Jan 31 03:46:47 crc kubenswrapper[4827]: W0131 03:46:47.844390 4827 feature_gate.go:330] unrecognized feature gate: InsightsConfig
Jan 31 03:46:47 crc kubenswrapper[4827]: W0131 03:46:47.844400 4827 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement
Jan 31 03:46:47 crc kubenswrapper[4827]: W0131 03:46:47.844409 4827 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy
Jan 31 03:46:47 crc kubenswrapper[4827]: W0131 03:46:47.844419 4827 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks
Jan 31 03:46:47 crc kubenswrapper[4827]: W0131 03:46:47.844429 4827 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController
Jan 31 03:46:47 crc kubenswrapper[4827]: W0131 03:46:47.844439 4827 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity
Jan 31 03:46:47 crc kubenswrapper[4827]: W0131 03:46:47.844449 4827 feature_gate.go:330] unrecognized feature gate: ExternalOIDC
Jan 31 03:46:47 crc kubenswrapper[4827]: W0131 03:46:47.844459 4827 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP
Jan 31 03:46:47 crc kubenswrapper[4827]: W0131 03:46:47.844468 4827 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig
Jan 31 03:46:47 crc kubenswrapper[4827]: W0131 03:46:47.844479 4827 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification
Jan 31 03:46:47 crc kubenswrapper[4827]: W0131 03:46:47.844489 4827 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters
Jan 31 03:46:47 crc kubenswrapper[4827]: W0131 03:46:47.844499 4827 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController
Jan 31 03:46:47 crc kubenswrapper[4827]: W0131 03:46:47.844508 4827 feature_gate.go:330] unrecognized feature gate: DNSNameResolver
Jan 31 03:46:47 crc kubenswrapper[4827]: W0131 03:46:47.844519 4827 feature_gate.go:330] unrecognized feature gate: UpgradeStatus
Jan 31 03:46:47 crc kubenswrapper[4827]: W0131 03:46:47.844529 4827 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB
Jan 31 03:46:47 crc kubenswrapper[4827]: W0131 03:46:47.844544 4827 feature_gate.go:330] unrecognized feature gate: PinnedImages
Jan 31 03:46:47 crc kubenswrapper[4827]: W0131 03:46:47.844553 4827 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager
Jan 31 03:46:47 crc kubenswrapper[4827]: I0131 03:46:47.844585 4827 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]}
Jan 31 03:46:47 crc kubenswrapper[4827]: I0131 03:46:47.856571 4827 server.go:491] "Kubelet version" kubeletVersion="v1.31.5"
Jan 31 03:46:47 crc kubenswrapper[4827]: I0131 03:46:47.856625 4827 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Jan 31 03:46:47 crc kubenswrapper[4827]: W0131 03:46:47.856761 4827 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud
Jan 31 03:46:47 crc kubenswrapper[4827]: W0131 03:46:47.856776 4827 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration
Jan 31 03:46:47 crc kubenswrapper[4827]: W0131 03:46:47.856785 4827 feature_gate.go:330] unrecognized feature gate: GatewayAPI
Jan 31 03:46:47 crc kubenswrapper[4827]: W0131 03:46:47.856794 4827 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS
Jan 31 03:46:47 crc kubenswrapper[4827]: W0131 03:46:47.856802 4827 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController
Jan 31 03:46:47 crc kubenswrapper[4827]: W0131 03:46:47.856810 4827 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration
Jan 31 03:46:47 crc kubenswrapper[4827]: W0131 03:46:47.856818 4827 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB
Jan 31 03:46:47 crc kubenswrapper[4827]: W0131 03:46:47.856826 4827 feature_gate.go:330] unrecognized feature gate: Example
Jan 31 03:46:47 crc kubenswrapper[4827]: W0131 03:46:47.856834 4827 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS
Jan 31 03:46:47 crc kubenswrapper[4827]: W0131 03:46:47.856842 4827 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager
Jan 31 03:46:47 crc kubenswrapper[4827]: W0131 03:46:47.856849 4827 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy
Jan 31 03:46:47 crc kubenswrapper[4827]: W0131 03:46:47.856857 4827 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure
Jan 31 03:46:47 crc kubenswrapper[4827]: W0131 03:46:47.856864 4827 feature_gate.go:330] unrecognized feature gate: SignatureStores
Jan 31 03:46:47 crc kubenswrapper[4827]: W0131 03:46:47.856873 4827 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs
Jan 31 03:46:47 crc kubenswrapper[4827]: W0131 03:46:47.856910 4827 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet
Jan 31 03:46:47 crc kubenswrapper[4827]: W0131 03:46:47.856918 4827 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation
Jan 31 03:46:47 crc kubenswrapper[4827]: W0131 03:46:47.856926 4827 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization
Jan 31 03:46:47 crc kubenswrapper[4827]: W0131 03:46:47.856934 4827 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource
Jan 31 03:46:47 crc kubenswrapper[4827]: W0131 03:46:47.856942 4827 feature_gate.go:330] unrecognized feature gate: OVNObservability
Jan 31 03:46:47 crc kubenswrapper[4827]: W0131 03:46:47.856950 4827 feature_gate.go:330] unrecognized feature gate: DNSNameResolver
Jan 31 03:46:47 crc kubenswrapper[4827]: W0131 03:46:47.856958 4827 feature_gate.go:330] unrecognized
feature gate: ImageStreamImportMode Jan 31 03:46:47 crc kubenswrapper[4827]: W0131 03:46:47.856965 4827 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Jan 31 03:46:47 crc kubenswrapper[4827]: W0131 03:46:47.856973 4827 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Jan 31 03:46:47 crc kubenswrapper[4827]: W0131 03:46:47.856981 4827 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Jan 31 03:46:47 crc kubenswrapper[4827]: W0131 03:46:47.856989 4827 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Jan 31 03:46:47 crc kubenswrapper[4827]: W0131 03:46:47.856997 4827 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Jan 31 03:46:47 crc kubenswrapper[4827]: W0131 03:46:47.857004 4827 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Jan 31 03:46:47 crc kubenswrapper[4827]: W0131 03:46:47.857012 4827 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Jan 31 03:46:47 crc kubenswrapper[4827]: W0131 03:46:47.857020 4827 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Jan 31 03:46:47 crc kubenswrapper[4827]: W0131 03:46:47.857029 4827 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Jan 31 03:46:47 crc kubenswrapper[4827]: W0131 03:46:47.857037 4827 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Jan 31 03:46:47 crc kubenswrapper[4827]: W0131 03:46:47.857044 4827 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Jan 31 03:46:47 crc kubenswrapper[4827]: W0131 03:46:47.857052 4827 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Jan 31 03:46:47 crc kubenswrapper[4827]: W0131 03:46:47.857060 4827 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Jan 31 03:46:47 crc kubenswrapper[4827]: W0131 03:46:47.857069 4827 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Jan 31 03:46:47 crc 
kubenswrapper[4827]: W0131 03:46:47.857077 4827 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Jan 31 03:46:47 crc kubenswrapper[4827]: W0131 03:46:47.857085 4827 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Jan 31 03:46:47 crc kubenswrapper[4827]: W0131 03:46:47.857093 4827 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Jan 31 03:46:47 crc kubenswrapper[4827]: W0131 03:46:47.857100 4827 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Jan 31 03:46:47 crc kubenswrapper[4827]: W0131 03:46:47.857110 4827 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Jan 31 03:46:47 crc kubenswrapper[4827]: W0131 03:46:47.857117 4827 feature_gate.go:330] unrecognized feature gate: InsightsConfig Jan 31 03:46:47 crc kubenswrapper[4827]: W0131 03:46:47.857128 4827 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. Jan 31 03:46:47 crc kubenswrapper[4827]: W0131 03:46:47.857141 4827 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Jan 31 03:46:47 crc kubenswrapper[4827]: W0131 03:46:47.857150 4827 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Jan 31 03:46:47 crc kubenswrapper[4827]: W0131 03:46:47.857158 4827 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Jan 31 03:46:47 crc kubenswrapper[4827]: W0131 03:46:47.857166 4827 feature_gate.go:330] unrecognized feature gate: PinnedImages Jan 31 03:46:47 crc kubenswrapper[4827]: W0131 03:46:47.857175 4827 feature_gate.go:330] unrecognized feature gate: NewOLM Jan 31 03:46:47 crc kubenswrapper[4827]: W0131 03:46:47.857182 4827 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Jan 31 03:46:47 crc kubenswrapper[4827]: W0131 03:46:47.857191 4827 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Jan 31 03:46:47 crc kubenswrapper[4827]: W0131 03:46:47.857199 4827 feature_gate.go:330] 
unrecognized feature gate: IngressControllerLBSubnetsAWS Jan 31 03:46:47 crc kubenswrapper[4827]: W0131 03:46:47.857207 4827 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Jan 31 03:46:47 crc kubenswrapper[4827]: W0131 03:46:47.857214 4827 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Jan 31 03:46:47 crc kubenswrapper[4827]: W0131 03:46:47.857222 4827 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Jan 31 03:46:47 crc kubenswrapper[4827]: W0131 03:46:47.857230 4827 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Jan 31 03:46:47 crc kubenswrapper[4827]: W0131 03:46:47.857237 4827 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Jan 31 03:46:47 crc kubenswrapper[4827]: W0131 03:46:47.857245 4827 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Jan 31 03:46:47 crc kubenswrapper[4827]: W0131 03:46:47.857255 4827 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. 
Jan 31 03:46:47 crc kubenswrapper[4827]: W0131 03:46:47.857265 4827 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController
Jan 31 03:46:47 crc kubenswrapper[4827]: W0131 03:46:47.857273 4827 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup
Jan 31 03:46:47 crc kubenswrapper[4827]: W0131 03:46:47.857281 4827 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks
Jan 31 03:46:47 crc kubenswrapper[4827]: W0131 03:46:47.857289 4827 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes
Jan 31 03:46:47 crc kubenswrapper[4827]: W0131 03:46:47.857298 4827 feature_gate.go:330] unrecognized feature gate: PlatformOperators
Jan 31 03:46:47 crc kubenswrapper[4827]: W0131 03:46:47.857306 4827 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags
Jan 31 03:46:47 crc kubenswrapper[4827]: W0131 03:46:47.857313 4827 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion
Jan 31 03:46:47 crc kubenswrapper[4827]: W0131 03:46:47.857323 4827 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release.
Jan 31 03:46:47 crc kubenswrapper[4827]: W0131 03:46:47.857333 4827 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS
Jan 31 03:46:47 crc kubenswrapper[4827]: W0131 03:46:47.857342 4827 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS
Jan 31 03:46:47 crc kubenswrapper[4827]: W0131 03:46:47.857352 4827 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot
Jan 31 03:46:47 crc kubenswrapper[4827]: W0131 03:46:47.857360 4827 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission
Jan 31 03:46:47 crc kubenswrapper[4827]: W0131 03:46:47.857371 4827 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release.
Jan 31 03:46:47 crc kubenswrapper[4827]: W0131 03:46:47.857381 4827 feature_gate.go:330] unrecognized feature gate: ManagedBootImages
Jan 31 03:46:47 crc kubenswrapper[4827]: I0131 03:46:47.857395 4827 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]}
Jan 31 03:46:47 crc kubenswrapper[4827]: W0131 03:46:47.857678 4827 feature_gate.go:330] unrecognized feature gate: ExternalOIDC
Jan 31 03:46:47 crc kubenswrapper[4827]: W0131 03:46:47.857693 4827 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs
Jan 31 03:46:47 crc kubenswrapper[4827]: W0131 03:46:47.857702 4827 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission
Jan 31 03:46:47 crc kubenswrapper[4827]: W0131 03:46:47.857711 4827 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup
Jan 31 03:46:47 crc kubenswrapper[4827]: W0131 03:46:47.857719 4827 feature_gate.go:330] unrecognized feature gate: DNSNameResolver
Jan 31 03:46:47 crc kubenswrapper[4827]: W0131 03:46:47.857727 4827 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota
Jan 31 03:46:47 crc kubenswrapper[4827]: W0131 03:46:47.857735 4827 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS
Jan 31 03:46:47 crc kubenswrapper[4827]: W0131 03:46:47.857745 4827 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release.
Jan 31 03:46:47 crc kubenswrapper[4827]: W0131 03:46:47.857755 4827 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration
Jan 31 03:46:47 crc kubenswrapper[4827]: W0131 03:46:47.857762 4827 feature_gate.go:330] unrecognized feature gate: NewOLM
Jan 31 03:46:47 crc kubenswrapper[4827]: W0131 03:46:47.857770 4827 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS
Jan 31 03:46:47 crc kubenswrapper[4827]: W0131 03:46:47.857778 4827 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager
Jan 31 03:46:47 crc kubenswrapper[4827]: W0131 03:46:47.857786 4827 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS
Jan 31 03:46:47 crc kubenswrapper[4827]: W0131 03:46:47.857793 4827 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization
Jan 31 03:46:47 crc kubenswrapper[4827]: W0131 03:46:47.857801 4827 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather
Jan 31 03:46:47 crc kubenswrapper[4827]: W0131 03:46:47.857809 4827 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS
Jan 31 03:46:47 crc kubenswrapper[4827]: W0131 03:46:47.857816 4827 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB
Jan 31 03:46:47 crc kubenswrapper[4827]: W0131 03:46:47.857824 4827 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode
Jan 31 03:46:47 crc kubenswrapper[4827]: W0131 03:46:47.857831 4827 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack
Jan 31 03:46:47 crc kubenswrapper[4827]: W0131 03:46:47.857839 4827 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets
Jan 31 03:46:47 crc kubenswrapper[4827]: W0131 03:46:47.857849 4827 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity
Jan 31 03:46:47 crc kubenswrapper[4827]: W0131 03:46:47.857857 4827 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks
Jan 31 03:46:47 crc kubenswrapper[4827]: W0131 03:46:47.857865 4827 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles
Jan 31 03:46:47 crc kubenswrapper[4827]: W0131 03:46:47.857873 4827 feature_gate.go:330] unrecognized feature gate: Example
Jan 31 03:46:47 crc kubenswrapper[4827]: W0131 03:46:47.857913 4827 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer
Jan 31 03:46:47 crc kubenswrapper[4827]: W0131 03:46:47.857924 4827 feature_gate.go:330] unrecognized feature gate: PinnedImages
Jan 31 03:46:47 crc kubenswrapper[4827]: W0131 03:46:47.857934 4827 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes
Jan 31 03:46:47 crc kubenswrapper[4827]: W0131 03:46:47.857943 4827 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig
Jan 31 03:46:47 crc kubenswrapper[4827]: W0131 03:46:47.857953 4827 feature_gate.go:330] unrecognized feature gate: OVNObservability
Jan 31 03:46:47 crc kubenswrapper[4827]: W0131 03:46:47.857967 4827 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release.
Jan 31 03:46:47 crc kubenswrapper[4827]: W0131 03:46:47.857978 4827 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP
Jan 31 03:46:47 crc kubenswrapper[4827]: W0131 03:46:47.857987 4827 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration
Jan 31 03:46:47 crc kubenswrapper[4827]: W0131 03:46:47.857995 4827 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities
Jan 31 03:46:47 crc kubenswrapper[4827]: W0131 03:46:47.858005 4827 feature_gate.go:330] unrecognized feature gate: ManagedBootImages
Jan 31 03:46:47 crc kubenswrapper[4827]: W0131 03:46:47.858016 4827 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS
Jan 31 03:46:47 crc kubenswrapper[4827]: W0131 03:46:47.858026 4827 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor
Jan 31 03:46:47 crc kubenswrapper[4827]: W0131 03:46:47.858035 4827 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS
Jan 31 03:46:47 crc kubenswrapper[4827]: W0131 03:46:47.858045 4827 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags
Jan 31 03:46:47 crc kubenswrapper[4827]: W0131 03:46:47.858055 4827 feature_gate.go:330] unrecognized feature gate: UpgradeStatus
Jan 31 03:46:47 crc kubenswrapper[4827]: W0131 03:46:47.858064 4827 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy
Jan 31 03:46:47 crc kubenswrapper[4827]: W0131 03:46:47.858075 4827 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation
Jan 31 03:46:47 crc kubenswrapper[4827]: W0131 03:46:47.858087 4827 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release.
Jan 31 03:46:47 crc kubenswrapper[4827]: W0131 03:46:47.858099 4827 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController
Jan 31 03:46:47 crc kubenswrapper[4827]: W0131 03:46:47.858111 4827 feature_gate.go:330] unrecognized feature gate: HardwareSpeed
Jan 31 03:46:47 crc kubenswrapper[4827]: W0131 03:46:47.858122 4827 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification
Jan 31 03:46:47 crc kubenswrapper[4827]: W0131 03:46:47.858134 4827 feature_gate.go:330] unrecognized feature gate: OnClusterBuild
Jan 31 03:46:47 crc kubenswrapper[4827]: W0131 03:46:47.858144 4827 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig
Jan 31 03:46:47 crc kubenswrapper[4827]: W0131 03:46:47.858154 4827 feature_gate.go:330] unrecognized feature gate: SignatureStores
Jan 31 03:46:47 crc kubenswrapper[4827]: W0131 03:46:47.858164 4827 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud
Jan 31 03:46:47 crc kubenswrapper[4827]: W0131 03:46:47.858174 4827 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters
Jan 31 03:46:47 crc kubenswrapper[4827]: W0131 03:46:47.858185 4827 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration
Jan 31 03:46:47 crc kubenswrapper[4827]: W0131 03:46:47.858195 4827 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall
Jan 31 03:46:47 crc kubenswrapper[4827]: W0131 03:46:47.858205 4827 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure
Jan 31 03:46:47 crc kubenswrapper[4827]: W0131 03:46:47.858215 4827 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource
Jan 31 03:46:47 crc kubenswrapper[4827]: W0131 03:46:47.858226 4827 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements
Jan 31 03:46:47 crc kubenswrapper[4827]: W0131 03:46:47.858236 4827 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet
Jan 31 03:46:47 crc kubenswrapper[4827]: W0131 03:46:47.858245 4827 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot
Jan 31 03:46:47 crc kubenswrapper[4827]: W0131 03:46:47.858253 4827 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics
Jan 31 03:46:47 crc kubenswrapper[4827]: W0131 03:46:47.858260 4827 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement
Jan 31 03:46:47 crc kubenswrapper[4827]: W0131 03:46:47.858270 4827 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI
Jan 31 03:46:47 crc kubenswrapper[4827]: W0131 03:46:47.858280 4827 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion
Jan 31 03:46:47 crc kubenswrapper[4827]: W0131 03:46:47.858293 4827 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release.
Jan 31 03:46:47 crc kubenswrapper[4827]: W0131 03:46:47.858306 4827 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes
Jan 31 03:46:47 crc kubenswrapper[4827]: W0131 03:46:47.858317 4827 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy
Jan 31 03:46:47 crc kubenswrapper[4827]: W0131 03:46:47.858327 4827 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy
Jan 31 03:46:47 crc kubenswrapper[4827]: W0131 03:46:47.858339 4827 feature_gate.go:330] unrecognized feature gate: GatewayAPI
Jan 31 03:46:47 crc kubenswrapper[4827]: W0131 03:46:47.858349 4827 feature_gate.go:330] unrecognized feature gate: PlatformOperators
Jan 31 03:46:47 crc kubenswrapper[4827]: W0131 03:46:47.858359 4827 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation
Jan 31 03:46:47 crc kubenswrapper[4827]: W0131 03:46:47.858369 4827 feature_gate.go:330] unrecognized feature gate: InsightsConfig
Jan 31 03:46:47 crc kubenswrapper[4827]: W0131 03:46:47.858378 4827 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform
Jan 31 03:46:47 crc kubenswrapper[4827]: W0131 03:46:47.858390 4827 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController
Jan 31 03:46:47 crc kubenswrapper[4827]: I0131 03:46:47.858405 4827 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]}
Jan 31 03:46:47 crc kubenswrapper[4827]: I0131 03:46:47.858738 4827 server.go:940] "Client rotation is on, will bootstrap in background"
Jan 31 03:46:47 crc kubenswrapper[4827]: I0131 03:46:47.865623 4827 bootstrap.go:85] "Current kubeconfig file contents are still valid, no bootstrap necessary"
Jan 31 03:46:47 crc kubenswrapper[4827]: I0131 03:46:47.865754 4827 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
Jan 31 03:46:47 crc kubenswrapper[4827]: I0131 03:46:47.868136 4827 server.go:997] "Starting client certificate rotation"
Jan 31 03:46:47 crc kubenswrapper[4827]: I0131 03:46:47.868228 4827 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate rotation is enabled
Jan 31 03:46:47 crc kubenswrapper[4827]: I0131 03:46:47.871392 4827 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate expiration is 2026-02-24 05:52:08 +0000 UTC, rotation deadline is 2025-12-26 12:19:41.816873271 +0000 UTC
Jan 31 03:46:47 crc kubenswrapper[4827]: I0131 03:46:47.871563 4827 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates
Jan 31 03:46:47 crc kubenswrapper[4827]: I0131 03:46:47.903053 4827 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt"
Jan 31 03:46:47 crc kubenswrapper[4827]: E0131 03:46:47.907835 4827 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 38.102.83.80:6443: connect: connection refused" logger="UnhandledError"
Jan 31 03:46:47 crc kubenswrapper[4827]: I0131 03:46:47.908527 4827 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt"
Jan 31 03:46:47 crc kubenswrapper[4827]: I0131 03:46:47.930118 4827 log.go:25] "Validated CRI v1 runtime API"
Jan 31 03:46:47 crc kubenswrapper[4827]: I0131 03:46:47.967447 4827 log.go:25] "Validated CRI v1 image API"
Jan 31 03:46:47 crc kubenswrapper[4827]: I0131 03:46:47.969558 4827 server.go:1437] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd"
Jan 31 03:46:47 crc kubenswrapper[4827]: I0131 03:46:47.977010 4827 fs.go:133] Filesystem UUIDs: map[0b076daa-c26a-46d2-b3a6-72a8dbc6e257:/dev/vda4 2026-01-31-03-41-55-00:/dev/sr0 7B77-95E7:/dev/vda2 de0497b0-db1b-465a-b278-03db02455c71:/dev/vda3]
Jan 31 03:46:47 crc kubenswrapper[4827]: I0131 03:46:47.977061 4827 fs.go:134] Filesystem partitions: map[/dev/shm:{mountpoint:/dev/shm major:0 minor:22 fsType:tmpfs blockSize:0} /dev/vda3:{mountpoint:/boot major:252 minor:3 fsType:ext4 blockSize:0} /dev/vda4:{mountpoint:/var major:252 minor:4 fsType:xfs blockSize:0} /run:{mountpoint:/run major:0 minor:24 fsType:tmpfs blockSize:0} /run/user/1000:{mountpoint:/run/user/1000 major:0 minor:42 fsType:tmpfs blockSize:0} /tmp:{mountpoint:/tmp major:0 minor:30 fsType:tmpfs blockSize:0} /var/lib/etcd:{mountpoint:/var/lib/etcd major:0 minor:43 fsType:tmpfs blockSize:0}]
Jan 31 03:46:48 crc kubenswrapper[4827]: I0131 03:46:48.005270 4827 manager.go:217] Machine: {Timestamp:2026-01-31 03:46:48.000324934 +0000 UTC m=+0.687405413 CPUVendorID:AuthenticAMD NumCores:12 NumPhysicalCores:1 NumSockets:12 CpuFrequency:2800000 MemoryCapacity:33654132736 SwapCapacity:0 MemoryByType:map[] NVMInfo:{MemoryModeCapacity:0 AppDirectModeCapacity:0 AvgPowerBudget:0} HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] MachineID:21801e6708c44f15b81395eb736a7cec SystemUUID:9b087aa6-4510-46ee-bc39-2317e4ea4d1d BootID:0f4c9cc6-4be7-45e8-ab30-471f307c1c16 Filesystems:[{Device:/tmp DeviceMajor:0 DeviceMinor:30 Capacity:16827068416 Type:vfs Inodes:1048576 HasInodes:true} {Device:/dev/vda3 DeviceMajor:252 DeviceMinor:3 Capacity:366869504 Type:vfs Inodes:98304 HasInodes:true} {Device:/run/user/1000 DeviceMajor:0 DeviceMinor:42 Capacity:3365412864 Type:vfs Inodes:821634 HasInodes:true} {Device:/var/lib/etcd DeviceMajor:0 DeviceMinor:43 Capacity:1073741824 Type:vfs Inodes:4108170 HasInodes:true} {Device:/dev/shm DeviceMajor:0 DeviceMinor:22 Capacity:16827064320 Type:vfs Inodes:4108170 HasInodes:true} {Device:/run DeviceMajor:0 DeviceMinor:24 Capacity:6730829824 Type:vfs Inodes:819200 HasInodes:true} {Device:/dev/vda4 DeviceMajor:252 DeviceMinor:4 Capacity:85292941312 Type:vfs Inodes:41679680 HasInodes:true}] DiskMap:map[252:0:{Name:vda Major:252 Minor:0 Size:214748364800 Scheduler:none}] NetworkDevices:[{Name:br-ex MacAddress:fa:16:3e:8b:0f:50 Speed:0 Mtu:1500} {Name:br-int MacAddress:d6:39:55:2e:22:71 Speed:0 Mtu:1400} {Name:ens3 MacAddress:fa:16:3e:8b:0f:50 Speed:-1 Mtu:1500} {Name:ens7 MacAddress:fa:16:3e:28:f3:7d Speed:-1 Mtu:1500} {Name:ens7.20 MacAddress:52:54:00:60:07:fc Speed:-1 Mtu:1496} {Name:ens7.21 MacAddress:52:54:00:94:a4:8a Speed:-1 Mtu:1496} {Name:ens7.22 MacAddress:52:54:00:1c:62:83 Speed:-1 Mtu:1496} {Name:ens7.23 MacAddress:52:54:00:f5:97:ca Speed:-1 Mtu:1496} {Name:eth10 MacAddress:32:8e:9f:12:82:2c Speed:0 Mtu:1500} {Name:ovn-k8s-mp0 MacAddress:0a:58:0a:d9:00:02 Speed:0 Mtu:1400} {Name:ovs-system MacAddress:a6:ac:f9:73:ff:5e Speed:0 Mtu:1500}] Topology:[{Id:0 Memory:33654132736 HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] Cores:[{Id:0 Threads:[0] Caches:[{Id:0 Size:32768 Type:Data Level:1} {Id:0 Size:32768 Type:Instruction Level:1} {Id:0 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:0 Size:16777216 Type:Unified Level:3}] SocketID:0 BookID: DrawerID:} {Id:0 Threads:[1] Caches:[{Id:1 Size:32768 Type:Data Level:1} {Id:1 Size:32768 Type:Instruction Level:1} {Id:1 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:1 Size:16777216 Type:Unified Level:3}] SocketID:1 BookID: DrawerID:} {Id:0 Threads:[10] Caches:[{Id:10 Size:32768 Type:Data Level:1} {Id:10 Size:32768 Type:Instruction Level:1} {Id:10 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:10 Size:16777216 Type:Unified Level:3}] SocketID:10 BookID: DrawerID:} {Id:0 Threads:[11] Caches:[{Id:11 Size:32768 Type:Data Level:1} {Id:11 Size:32768 Type:Instruction Level:1} {Id:11 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:11 Size:16777216 Type:Unified Level:3}] SocketID:11 BookID: DrawerID:} {Id:0 Threads:[2] Caches:[{Id:2 Size:32768 Type:Data Level:1} {Id:2 Size:32768 Type:Instruction Level:1} {Id:2 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:2 Size:16777216 Type:Unified Level:3}] SocketID:2 BookID: DrawerID:} {Id:0 Threads:[3] Caches:[{Id:3 Size:32768 Type:Data Level:1} {Id:3 Size:32768 Type:Instruction Level:1} {Id:3 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:3 Size:16777216 Type:Unified Level:3}] SocketID:3 BookID: DrawerID:} {Id:0 Threads:[4] Caches:[{Id:4 Size:32768 Type:Data Level:1} {Id:4 Size:32768 Type:Instruction Level:1} {Id:4 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:4 Size:16777216 Type:Unified Level:3}] SocketID:4 BookID: DrawerID:} {Id:0 Threads:[5] Caches:[{Id:5 Size:32768 Type:Data Level:1} {Id:5 Size:32768 Type:Instruction Level:1} {Id:5 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:5 Size:16777216 Type:Unified Level:3}] SocketID:5 BookID: DrawerID:} {Id:0 Threads:[6] Caches:[{Id:6 Size:32768 Type:Data Level:1} {Id:6 Size:32768 Type:Instruction Level:1} {Id:6 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:6 Size:16777216 Type:Unified Level:3}] SocketID:6 BookID: DrawerID:} {Id:0 Threads:[7] Caches:[{Id:7 Size:32768 Type:Data Level:1} {Id:7 Size:32768 Type:Instruction Level:1} {Id:7 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:7 Size:16777216 Type:Unified Level:3}] SocketID:7 BookID: DrawerID:} {Id:0 Threads:[8] Caches:[{Id:8 Size:32768 Type:Data Level:1} {Id:8 Size:32768 Type:Instruction Level:1} {Id:8 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:8 Size:16777216 Type:Unified Level:3}] SocketID:8 BookID: DrawerID:} {Id:0 Threads:[9] Caches:[{Id:9 Size:32768 Type:Data Level:1} {Id:9 Size:32768 Type:Instruction Level:1} {Id:9 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:9 Size:16777216 Type:Unified Level:3}] SocketID:9 BookID: DrawerID:}] Caches:[] Distances:[10]}] CloudProvider:Unknown InstanceType:Unknown InstanceID:None}
Jan 31 03:46:48 crc kubenswrapper[4827]: I0131 03:46:48.005857 4827 manager_no_libpfm.go:29] cAdvisor is build without cgo and/or libpfm support. Perf event counters are not available.
Jan 31 03:46:48 crc kubenswrapper[4827]: I0131 03:46:48.006136 4827 manager.go:233] Version: {KernelVersion:5.14.0-427.50.2.el9_4.x86_64 ContainerOsVersion:Red Hat Enterprise Linux CoreOS 418.94.202502100215-0 DockerVersion: DockerAPIVersion: CadvisorVersion: CadvisorRevision:}
Jan 31 03:46:48 crc kubenswrapper[4827]: I0131 03:46:48.008187 4827 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority"
Jan 31 03:46:48 crc kubenswrapper[4827]: I0131 03:46:48.008479 4827 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Jan 31 03:46:48 crc kubenswrapper[4827]: I0131 03:46:48.008531 4827 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"crc","RuntimeCgroupsName":"/system.slice/crio.service","SystemCgroupsName":"/system.slice","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":true,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":{"cpu":"200m","ephemeral-storage":"350Mi","memory":"350Mi"},"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":4096,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Jan 31 03:46:48 crc kubenswrapper[4827]: I0131 03:46:48.008860 4827 topology_manager.go:138] "Creating topology manager with none policy"
Jan 31 03:46:48 crc kubenswrapper[4827]: I0131 03:46:48.008905 4827 container_manager_linux.go:303] "Creating device plugin manager"
Jan 31 03:46:48 crc kubenswrapper[4827]: I0131 03:46:48.009622 4827 manager.go:142] "Creating Device Plugin manager" path="/var/lib/kubelet/device-plugins/kubelet.sock"
Jan 31 03:46:48 crc kubenswrapper[4827]: I0131 03:46:48.010399 4827 server.go:66] "Creating device plugin registration server" version="v1beta1" socket="/var/lib/kubelet/device-plugins/kubelet.sock"
Jan 31 03:46:48 crc kubenswrapper[4827]: I0131 03:46:48.010609 4827 state_mem.go:36] "Initialized new in-memory state store"
Jan 31 03:46:48 crc kubenswrapper[4827]: I0131 03:46:48.010746 4827 server.go:1245] "Using root directory" path="/var/lib/kubelet"
Jan 31 03:46:48 crc kubenswrapper[4827]: I0131 03:46:48.016526 4827 kubelet.go:418] "Attempting to sync node with API server"
Jan 31 03:46:48 crc kubenswrapper[4827]: I0131 03:46:48.016559 4827 kubelet.go:313] "Adding static pod path" path="/etc/kubernetes/manifests"
Jan 31 03:46:48 crc kubenswrapper[4827]: I0131 03:46:48.016584 4827 file.go:69] "Watching path" path="/etc/kubernetes/manifests"
Jan 31 03:46:48 crc kubenswrapper[4827]: I0131 03:46:48.016604 4827 kubelet.go:324] "Adding apiserver pod source"
Jan 31 03:46:48 crc kubenswrapper[4827]: I0131 03:46:48.016622 4827 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Jan 31 03:46:48 crc kubenswrapper[4827]: W0131 03:46:48.021583 4827 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.102.83.80:6443: connect: connection refused
Jan 31 03:46:48 crc kubenswrapper[4827]: E0131 03:46:48.021716 4827 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.80:6443: connect: connection refused" logger="UnhandledError"
Jan 31 03:46:48 crc kubenswrapper[4827]: W0131 03:46:48.023170 4827 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.102.83.80:6443: connect: connection refused
Jan 31 03:46:48 crc kubenswrapper[4827]: E0131 03:46:48.023268 4827 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.80:6443: connect: connection refused" logger="UnhandledError"
Jan 31 03:46:48 crc kubenswrapper[4827]: I0131 03:46:48.024978 4827 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="cri-o" version="1.31.5-4.rhaos4.18.gitdad78d5.el9" apiVersion="v1"
Jan 31 03:46:48 crc kubenswrapper[4827]: I0131 03:46:48.026175 4827 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-server-current.pem".
Jan 31 03:46:48 crc kubenswrapper[4827]: I0131 03:46:48.028104 4827 kubelet.go:854] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jan 31 03:46:48 crc kubenswrapper[4827]: I0131 03:46:48.029958 4827 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/portworx-volume" Jan 31 03:46:48 crc kubenswrapper[4827]: I0131 03:46:48.029997 4827 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/empty-dir" Jan 31 03:46:48 crc kubenswrapper[4827]: I0131 03:46:48.030012 4827 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/git-repo" Jan 31 03:46:48 crc kubenswrapper[4827]: I0131 03:46:48.030026 4827 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/host-path" Jan 31 03:46:48 crc kubenswrapper[4827]: I0131 03:46:48.030046 4827 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/nfs" Jan 31 03:46:48 crc kubenswrapper[4827]: I0131 03:46:48.030060 4827 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/secret" Jan 31 03:46:48 crc kubenswrapper[4827]: I0131 03:46:48.030073 4827 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/iscsi" Jan 31 03:46:48 crc kubenswrapper[4827]: I0131 03:46:48.030094 4827 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/downward-api" Jan 31 03:46:48 crc kubenswrapper[4827]: I0131 03:46:48.030112 4827 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/fc" Jan 31 03:46:48 crc kubenswrapper[4827]: I0131 03:46:48.030125 4827 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/configmap" Jan 31 03:46:48 crc kubenswrapper[4827]: I0131 03:46:48.030142 4827 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/projected" Jan 31 03:46:48 crc kubenswrapper[4827]: I0131 03:46:48.030156 4827 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/local-volume" Jan 31 03:46:48 crc kubenswrapper[4827]: I0131 03:46:48.035097 4827 plugins.go:603] "Loaded volume plugin" 
pluginName="kubernetes.io/csi" Jan 31 03:46:48 crc kubenswrapper[4827]: I0131 03:46:48.036904 4827 server.go:1280] "Started kubelet" Jan 31 03:46:48 crc kubenswrapper[4827]: I0131 03:46:48.039178 4827 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.80:6443: connect: connection refused Jan 31 03:46:48 crc kubenswrapper[4827]: I0131 03:46:48.039093 4827 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Jan 31 03:46:48 crc systemd[1]: Started Kubernetes Kubelet. Jan 31 03:46:48 crc kubenswrapper[4827]: I0131 03:46:48.039079 4827 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 31 03:46:48 crc kubenswrapper[4827]: I0131 03:46:48.040586 4827 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 31 03:46:48 crc kubenswrapper[4827]: I0131 03:46:48.044077 4827 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate rotation is enabled Jan 31 03:46:48 crc kubenswrapper[4827]: I0131 03:46:48.044385 4827 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 31 03:46:48 crc kubenswrapper[4827]: I0131 03:46:48.044435 4827 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-19 09:01:10.284402471 +0000 UTC Jan 31 03:46:48 crc kubenswrapper[4827]: I0131 03:46:48.044892 4827 volume_manager.go:287] "The desired_state_of_world populator starts" Jan 31 03:46:48 crc kubenswrapper[4827]: I0131 03:46:48.044910 4827 volume_manager.go:289] "Starting Kubelet Volume Manager" Jan 31 03:46:48 crc kubenswrapper[4827]: E0131 03:46:48.044957 4827 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Jan 31 03:46:48 crc kubenswrapper[4827]: I0131 
03:46:48.045082 4827 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Jan 31 03:46:48 crc kubenswrapper[4827]: W0131 03:46:48.045681 4827 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.102.83.80:6443: connect: connection refused Jan 31 03:46:48 crc kubenswrapper[4827]: E0131 03:46:48.045755 4827 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.80:6443: connect: connection refused" logger="UnhandledError" Jan 31 03:46:48 crc kubenswrapper[4827]: I0131 03:46:48.046591 4827 server.go:460] "Adding debug handlers to kubelet server" Jan 31 03:46:48 crc kubenswrapper[4827]: E0131 03:46:48.046980 4827 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": dial tcp 38.102.83.80:6443: connect: connection refused" event="&Event{ObjectMeta:{crc.188fb41b536f826b default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-31 03:46:48.036827755 +0000 UTC m=+0.723908244,LastTimestamp:2026-01-31 03:46:48.036827755 +0000 UTC m=+0.723908244,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 31 03:46:48 crc kubenswrapper[4827]: I0131 03:46:48.051394 4827 factory.go:55] Registering systemd factory Jan 31 03:46:48 crc kubenswrapper[4827]: I0131 03:46:48.051549 4827 
factory.go:221] Registration of the systemd container factory successfully Jan 31 03:46:48 crc kubenswrapper[4827]: I0131 03:46:48.052188 4827 factory.go:153] Registering CRI-O factory Jan 31 03:46:48 crc kubenswrapper[4827]: I0131 03:46:48.052413 4827 factory.go:221] Registration of the crio container factory successfully Jan 31 03:46:48 crc kubenswrapper[4827]: I0131 03:46:48.052639 4827 factory.go:219] Registration of the containerd container factory failed: unable to create containerd client: containerd: cannot unix dial containerd api service: dial unix /run/containerd/containerd.sock: connect: no such file or directory Jan 31 03:46:48 crc kubenswrapper[4827]: I0131 03:46:48.052790 4827 factory.go:103] Registering Raw factory Jan 31 03:46:48 crc kubenswrapper[4827]: I0131 03:46:48.052962 4827 manager.go:1196] Started watching for new ooms in manager Jan 31 03:46:48 crc kubenswrapper[4827]: I0131 03:46:48.054093 4827 manager.go:319] Starting recovery of all containers Jan 31 03:46:48 crc kubenswrapper[4827]: E0131 03:46:48.055229 4827 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.80:6443: connect: connection refused" interval="200ms" Jan 31 03:46:48 crc kubenswrapper[4827]: I0131 03:46:48.058194 4827 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images" seLinuxMountContext="" Jan 31 03:46:48 crc kubenswrapper[4827]: I0131 03:46:48.058243 4827 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client" seLinuxMountContext="" Jan 31 03:46:48 crc kubenswrapper[4827]: I0131 
03:46:48.058256 4827 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf" seLinuxMountContext="" Jan 31 03:46:48 crc kubenswrapper[4827]: I0131 03:46:48.058268 4827 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls" seLinuxMountContext="" Jan 31 03:46:48 crc kubenswrapper[4827]: I0131 03:46:48.058280 4827 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert" seLinuxMountContext="" Jan 31 03:46:48 crc kubenswrapper[4827]: I0131 03:46:48.058290 4827 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login" seLinuxMountContext="" Jan 31 03:46:48 crc kubenswrapper[4827]: I0131 03:46:48.058301 4827 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy" seLinuxMountContext="" Jan 31 03:46:48 crc kubenswrapper[4827]: I0131 03:46:48.058313 4827 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh" seLinuxMountContext="" Jan 31 03:46:48 crc kubenswrapper[4827]: I0131 03:46:48.058326 4827 reconstruct.go:130] "Volume is 
marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides" seLinuxMountContext="" Jan 31 03:46:48 crc kubenswrapper[4827]: I0131 03:46:48.058336 4827 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg" seLinuxMountContext="" Jan 31 03:46:48 crc kubenswrapper[4827]: I0131 03:46:48.058347 4827 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca" seLinuxMountContext="" Jan 31 03:46:48 crc kubenswrapper[4827]: I0131 03:46:48.058359 4827 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config" seLinuxMountContext="" Jan 31 03:46:48 crc kubenswrapper[4827]: I0131 03:46:48.058370 4827 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content" seLinuxMountContext="" Jan 31 03:46:48 crc kubenswrapper[4827]: I0131 03:46:48.058385 4827 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d75a4c96-2883-4a0b-bab2-0fab2b6c0b49" volumeName="kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script" seLinuxMountContext="" Jan 31 03:46:48 crc kubenswrapper[4827]: I0131 03:46:48.058396 4827 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="d75a4c96-2883-4a0b-bab2-0fab2b6c0b49" volumeName="kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb" seLinuxMountContext="" Jan 31 03:46:48 crc kubenswrapper[4827]: I0131 03:46:48.058408 4827 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config" seLinuxMountContext="" Jan 31 03:46:48 crc kubenswrapper[4827]: I0131 03:46:48.058420 4827 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities" seLinuxMountContext="" Jan 31 03:46:48 crc kubenswrapper[4827]: I0131 03:46:48.058430 4827 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities" seLinuxMountContext="" Jan 31 03:46:48 crc kubenswrapper[4827]: I0131 03:46:48.058442 4827 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls" seLinuxMountContext="" Jan 31 03:46:48 crc kubenswrapper[4827]: I0131 03:46:48.058453 4827 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config" seLinuxMountContext="" Jan 31 03:46:48 crc kubenswrapper[4827]: I0131 03:46:48.058463 4827 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" 
volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca" seLinuxMountContext="" Jan 31 03:46:48 crc kubenswrapper[4827]: I0131 03:46:48.058475 4827 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images" seLinuxMountContext="" Jan 31 03:46:48 crc kubenswrapper[4827]: I0131 03:46:48.058485 4827 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca" seLinuxMountContext="" Jan 31 03:46:48 crc kubenswrapper[4827]: I0131 03:46:48.058496 4827 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert" seLinuxMountContext="" Jan 31 03:46:48 crc kubenswrapper[4827]: I0131 03:46:48.058510 4827 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert" seLinuxMountContext="" Jan 31 03:46:48 crc kubenswrapper[4827]: I0131 03:46:48.058522 4827 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config" seLinuxMountContext="" Jan 31 03:46:48 crc kubenswrapper[4827]: I0131 03:46:48.058536 4827 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities" seLinuxMountContext="" Jan 31 
03:46:48 crc kubenswrapper[4827]: I0131 03:46:48.058586 4827 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert" seLinuxMountContext="" Jan 31 03:46:48 crc kubenswrapper[4827]: I0131 03:46:48.058599 4827 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c" seLinuxMountContext="" Jan 31 03:46:48 crc kubenswrapper[4827]: I0131 03:46:48.058610 4827 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets" seLinuxMountContext="" Jan 31 03:46:48 crc kubenswrapper[4827]: I0131 03:46:48.058621 4827 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bd23aa5c-e532-4e53-bccf-e79f130c5ae8" volumeName="kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2" seLinuxMountContext="" Jan 31 03:46:48 crc kubenswrapper[4827]: I0131 03:46:48.058634 4827 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access" seLinuxMountContext="" Jan 31 03:46:48 crc kubenswrapper[4827]: I0131 03:46:48.058645 4827 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit" seLinuxMountContext="" Jan 31 03:46:48 crc kubenswrapper[4827]: I0131 03:46:48.058656 4827 reconstruct.go:130] 
"Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls" seLinuxMountContext="" Jan 31 03:46:48 crc kubenswrapper[4827]: I0131 03:46:48.058667 4827 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config" seLinuxMountContext="" Jan 31 03:46:48 crc kubenswrapper[4827]: I0131 03:46:48.058679 4827 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb" seLinuxMountContext="" Jan 31 03:46:48 crc kubenswrapper[4827]: I0131 03:46:48.058691 4827 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52" seLinuxMountContext="" Jan 31 03:46:48 crc kubenswrapper[4827]: I0131 03:46:48.058704 4827 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert" seLinuxMountContext="" Jan 31 03:46:48 crc kubenswrapper[4827]: I0131 03:46:48.058718 4827 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert" seLinuxMountContext="" Jan 31 03:46:48 crc kubenswrapper[4827]: I0131 03:46:48.058730 4827 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted" seLinuxMountContext="" Jan 31 03:46:48 crc kubenswrapper[4827]: I0131 03:46:48.058741 4827 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh" seLinuxMountContext="" Jan 31 03:46:48 crc kubenswrapper[4827]: I0131 03:46:48.058753 4827 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert" seLinuxMountContext="" Jan 31 03:46:48 crc kubenswrapper[4827]: I0131 03:46:48.058771 4827 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access" seLinuxMountContext="" Jan 31 03:46:48 crc kubenswrapper[4827]: I0131 03:46:48.058782 4827 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca" seLinuxMountContext="" Jan 31 03:46:48 crc kubenswrapper[4827]: I0131 03:46:48.058794 4827 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz" seLinuxMountContext="" Jan 31 03:46:48 crc kubenswrapper[4827]: I0131 03:46:48.058807 4827 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" 
volumeName="kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4" seLinuxMountContext="" Jan 31 03:46:48 crc kubenswrapper[4827]: I0131 03:46:48.058818 4827 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5" seLinuxMountContext="" Jan 31 03:46:48 crc kubenswrapper[4827]: I0131 03:46:48.058828 4827 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" volumeName="kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8" seLinuxMountContext="" Jan 31 03:46:48 crc kubenswrapper[4827]: I0131 03:46:48.058841 4827 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49ef4625-1d3a-4a9f-b595-c2433d32326d" volumeName="kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v" seLinuxMountContext="" Jan 31 03:46:48 crc kubenswrapper[4827]: I0131 03:46:48.058851 4827 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" volumeName="kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls" seLinuxMountContext="" Jan 31 03:46:48 crc kubenswrapper[4827]: I0131 03:46:48.058863 4827 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token" seLinuxMountContext="" Jan 31 03:46:48 crc kubenswrapper[4827]: I0131 03:46:48.058874 4827 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" 
volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate" seLinuxMountContext="" Jan 31 03:46:48 crc kubenswrapper[4827]: I0131 03:46:48.058917 4827 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="efdd0498-1daa-4136-9a4a-3b948c2293fc" volumeName="kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt" seLinuxMountContext="" Jan 31 03:46:48 crc kubenswrapper[4827]: I0131 03:46:48.058931 4827 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access" seLinuxMountContext="" Jan 31 03:46:48 crc kubenswrapper[4827]: I0131 03:46:48.058944 4827 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs" seLinuxMountContext="" Jan 31 03:46:48 crc kubenswrapper[4827]: I0131 03:46:48.058958 4827 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert" seLinuxMountContext="" Jan 31 03:46:48 crc kubenswrapper[4827]: I0131 03:46:48.058973 4827 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config" seLinuxMountContext="" Jan 31 03:46:48 crc kubenswrapper[4827]: I0131 03:46:48.058985 4827 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" 
volumeName="kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j" seLinuxMountContext="" Jan 31 03:46:48 crc kubenswrapper[4827]: I0131 03:46:48.058998 4827 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca" seLinuxMountContext="" Jan 31 03:46:48 crc kubenswrapper[4827]: I0131 03:46:48.059046 4827 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20b0d48f-5fd6-431c-a545-e3c800c7b866" volumeName="kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert" seLinuxMountContext="" Jan 31 03:46:48 crc kubenswrapper[4827]: I0131 03:46:48.059059 4827 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5b88f790-22fa-440e-b583-365168c0b23d" volumeName="kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn" seLinuxMountContext="" Jan 31 03:46:48 crc kubenswrapper[4827]: I0131 03:46:48.059070 4827 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="efdd0498-1daa-4136-9a4a-3b948c2293fc" volumeName="kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs" seLinuxMountContext="" Jan 31 03:46:48 crc kubenswrapper[4827]: I0131 03:46:48.059082 4827 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert" seLinuxMountContext="" Jan 31 03:46:48 crc kubenswrapper[4827]: I0131 03:46:48.059095 4827 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config" 
seLinuxMountContext="" Jan 31 03:46:48 crc kubenswrapper[4827]: I0131 03:46:48.059106 4827 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct" seLinuxMountContext="" Jan 31 03:46:48 crc kubenswrapper[4827]: I0131 03:46:48.059117 4827 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert" seLinuxMountContext="" Jan 31 03:46:48 crc kubenswrapper[4827]: I0131 03:46:48.059129 4827 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert" seLinuxMountContext="" Jan 31 03:46:48 crc kubenswrapper[4827]: I0131 03:46:48.059141 4827 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config" seLinuxMountContext="" Jan 31 03:46:48 crc kubenswrapper[4827]: I0131 03:46:48.059153 4827 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config" seLinuxMountContext="" Jan 31 03:46:48 crc kubenswrapper[4827]: I0131 03:46:48.059163 4827 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp" seLinuxMountContext="" Jan 31 03:46:48 crc kubenswrapper[4827]: I0131 03:46:48.059177 4827 
reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk" seLinuxMountContext="" Jan 31 03:46:48 crc kubenswrapper[4827]: I0131 03:46:48.059189 4827 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls" seLinuxMountContext="" Jan 31 03:46:48 crc kubenswrapper[4827]: I0131 03:46:48.059202 4827 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" volumeName="kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85" seLinuxMountContext="" Jan 31 03:46:48 crc kubenswrapper[4827]: I0131 03:46:48.059214 4827 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles" seLinuxMountContext="" Jan 31 03:46:48 crc kubenswrapper[4827]: I0131 03:46:48.059226 4827 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config" seLinuxMountContext="" Jan 31 03:46:48 crc kubenswrapper[4827]: I0131 03:46:48.059239 4827 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle" seLinuxMountContext="" Jan 31 03:46:48 crc kubenswrapper[4827]: I0131 03:46:48.059251 4827 reconstruct.go:130] "Volume is marked as uncertain and added 
into the actual state" pod="" podName="20b0d48f-5fd6-431c-a545-e3c800c7b866" volumeName="kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds" seLinuxMountContext="" Jan 31 03:46:48 crc kubenswrapper[4827]: I0131 03:46:48.059263 4827 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv" seLinuxMountContext="" Jan 31 03:46:48 crc kubenswrapper[4827]: I0131 03:46:48.059276 4827 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8" seLinuxMountContext="" Jan 31 03:46:48 crc kubenswrapper[4827]: I0131 03:46:48.059288 4827 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert" seLinuxMountContext="" Jan 31 03:46:48 crc kubenswrapper[4827]: I0131 03:46:48.059299 4827 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert" seLinuxMountContext="" Jan 31 03:46:48 crc kubenswrapper[4827]: I0131 03:46:48.059310 4827 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config" seLinuxMountContext="" Jan 31 03:46:48 crc kubenswrapper[4827]: I0131 03:46:48.059321 4827 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" volumeName="kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7" seLinuxMountContext="" Jan 31 03:46:48 crc kubenswrapper[4827]: I0131 03:46:48.059332 4827 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh" seLinuxMountContext="" Jan 31 03:46:48 crc kubenswrapper[4827]: I0131 03:46:48.059343 4827 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca" seLinuxMountContext="" Jan 31 03:46:48 crc kubenswrapper[4827]: I0131 03:46:48.059355 4827 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config" seLinuxMountContext="" Jan 31 03:46:48 crc kubenswrapper[4827]: I0131 03:46:48.059367 4827 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert" seLinuxMountContext="" Jan 31 03:46:48 crc kubenswrapper[4827]: I0131 03:46:48.059379 4827 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config" seLinuxMountContext="" Jan 31 03:46:48 crc kubenswrapper[4827]: I0131 03:46:48.059391 4827 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3ab1a177-2de0-46d9-b765-d0d0649bb42e" 
volumeName="kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert" seLinuxMountContext="" Jan 31 03:46:48 crc kubenswrapper[4827]: I0131 03:46:48.059402 4827 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle" seLinuxMountContext="" Jan 31 03:46:48 crc kubenswrapper[4827]: I0131 03:46:48.059415 4827 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token" seLinuxMountContext="" Jan 31 03:46:48 crc kubenswrapper[4827]: I0131 03:46:48.059426 4827 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs" seLinuxMountContext="" Jan 31 03:46:48 crc kubenswrapper[4827]: I0131 03:46:48.059437 4827 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5" seLinuxMountContext="" Jan 31 03:46:48 crc kubenswrapper[4827]: I0131 03:46:48.059448 4827 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert" seLinuxMountContext="" Jan 31 03:46:48 crc kubenswrapper[4827]: I0131 03:46:48.059460 4827 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" 
volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client" seLinuxMountContext="" Jan 31 03:46:48 crc kubenswrapper[4827]: I0131 03:46:48.059470 4827 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle" seLinuxMountContext="" Jan 31 03:46:48 crc kubenswrapper[4827]: I0131 03:46:48.059481 4827 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib" seLinuxMountContext="" Jan 31 03:46:48 crc kubenswrapper[4827]: I0131 03:46:48.059493 4827 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates" seLinuxMountContext="" Jan 31 03:46:48 crc kubenswrapper[4827]: I0131 03:46:48.059504 4827 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx" seLinuxMountContext="" Jan 31 03:46:48 crc kubenswrapper[4827]: I0131 03:46:48.059515 4827 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies" seLinuxMountContext="" Jan 31 03:46:48 crc kubenswrapper[4827]: I0131 03:46:48.059528 4827 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" 
volumeName="kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv" seLinuxMountContext="" Jan 31 03:46:48 crc kubenswrapper[4827]: I0131 03:46:48.059539 4827 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content" seLinuxMountContext="" Jan 31 03:46:48 crc kubenswrapper[4827]: I0131 03:46:48.059550 4827 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle" seLinuxMountContext="" Jan 31 03:46:48 crc kubenswrapper[4827]: I0131 03:46:48.059562 4827 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies" seLinuxMountContext="" Jan 31 03:46:48 crc kubenswrapper[4827]: I0131 03:46:48.059579 4827 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" volumeName="kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf" seLinuxMountContext="" Jan 31 03:46:48 crc kubenswrapper[4827]: I0131 03:46:48.059591 4827 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config" seLinuxMountContext="" Jan 31 03:46:48 crc kubenswrapper[4827]: I0131 03:46:48.059610 4827 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides" 
seLinuxMountContext="" Jan 31 03:46:48 crc kubenswrapper[4827]: I0131 03:46:48.059623 4827 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn" seLinuxMountContext="" Jan 31 03:46:48 crc kubenswrapper[4827]: I0131 03:46:48.059636 4827 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp" seLinuxMountContext="" Jan 31 03:46:48 crc kubenswrapper[4827]: I0131 03:46:48.059649 4827 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template" seLinuxMountContext="" Jan 31 03:46:48 crc kubenswrapper[4827]: I0131 03:46:48.059662 4827 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config" seLinuxMountContext="" Jan 31 03:46:48 crc kubenswrapper[4827]: I0131 03:46:48.059673 4827 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist" seLinuxMountContext="" Jan 31 03:46:48 crc kubenswrapper[4827]: I0131 03:46:48.059686 4827 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls" seLinuxMountContext="" Jan 31 03:46:48 crc 
kubenswrapper[4827]: I0131 03:46:48.059700 4827 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert" seLinuxMountContext="" Jan 31 03:46:48 crc kubenswrapper[4827]: I0131 03:46:48.059713 4827 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" volumeName="kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m" seLinuxMountContext="" Jan 31 03:46:48 crc kubenswrapper[4827]: I0131 03:46:48.059724 4827 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert" seLinuxMountContext="" Jan 31 03:46:48 crc kubenswrapper[4827]: I0131 03:46:48.059738 4827 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle" seLinuxMountContext="" Jan 31 03:46:48 crc kubenswrapper[4827]: I0131 03:46:48.059753 4827 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy" seLinuxMountContext="" Jan 31 03:46:48 crc kubenswrapper[4827]: I0131 03:46:48.059766 4827 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle" seLinuxMountContext="" Jan 31 03:46:48 crc kubenswrapper[4827]: I0131 03:46:48.059780 4827 reconstruct.go:130] 
"Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client" seLinuxMountContext="" Jan 31 03:46:48 crc kubenswrapper[4827]: I0131 03:46:48.059791 4827 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7" seLinuxMountContext="" Jan 31 03:46:48 crc kubenswrapper[4827]: I0131 03:46:48.059803 4827 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config" seLinuxMountContext="" Jan 31 03:46:48 crc kubenswrapper[4827]: I0131 03:46:48.059815 4827 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="37a5e44f-9a88-4405-be8a-b645485e7312" volumeName="kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls" seLinuxMountContext="" Jan 31 03:46:48 crc kubenswrapper[4827]: I0131 03:46:48.059829 4827 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz" seLinuxMountContext="" Jan 31 03:46:48 crc kubenswrapper[4827]: I0131 03:46:48.059841 4827 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume" seLinuxMountContext="" Jan 31 03:46:48 crc kubenswrapper[4827]: I0131 03:46:48.059853 4827 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs" seLinuxMountContext="" Jan 31 03:46:48 crc kubenswrapper[4827]: I0131 03:46:48.059866 4827 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert" seLinuxMountContext="" Jan 31 03:46:48 crc kubenswrapper[4827]: I0131 03:46:48.059899 4827 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr" seLinuxMountContext="" Jan 31 03:46:48 crc kubenswrapper[4827]: I0131 03:46:48.059940 4827 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token" seLinuxMountContext="" Jan 31 03:46:48 crc kubenswrapper[4827]: I0131 03:46:48.059955 4827 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config" seLinuxMountContext="" Jan 31 03:46:48 crc kubenswrapper[4827]: I0131 03:46:48.059966 4827 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3b6479f0-333b-4a96-9adf-2099afdc2447" volumeName="kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr" seLinuxMountContext="" Jan 31 03:46:48 crc kubenswrapper[4827]: I0131 03:46:48.060072 4827 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" 
volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error" seLinuxMountContext="" Jan 31 03:46:48 crc kubenswrapper[4827]: I0131 03:46:48.060086 4827 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" volumeName="kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert" seLinuxMountContext="" Jan 31 03:46:48 crc kubenswrapper[4827]: I0131 03:46:48.060098 4827 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert" seLinuxMountContext="" Jan 31 03:46:48 crc kubenswrapper[4827]: I0131 03:46:48.060110 4827 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca" seLinuxMountContext="" Jan 31 03:46:48 crc kubenswrapper[4827]: I0131 03:46:48.060159 4827 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates" seLinuxMountContext="" Jan 31 03:46:48 crc kubenswrapper[4827]: I0131 03:46:48.060171 4827 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls" seLinuxMountContext="" Jan 31 03:46:48 crc kubenswrapper[4827]: I0131 03:46:48.060184 4827 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" 
volumeName="kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key" seLinuxMountContext="" Jan 31 03:46:48 crc kubenswrapper[4827]: I0131 03:46:48.060198 4827 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config" seLinuxMountContext="" Jan 31 03:46:48 crc kubenswrapper[4827]: I0131 03:46:48.060210 4827 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="44663579-783b-4372-86d6-acf235a62d72" volumeName="kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc" seLinuxMountContext="" Jan 31 03:46:48 crc kubenswrapper[4827]: I0131 03:46:48.060222 4827 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls" seLinuxMountContext="" Jan 31 03:46:48 crc kubenswrapper[4827]: I0131 03:46:48.060235 4827 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7" seLinuxMountContext="" Jan 31 03:46:48 crc kubenswrapper[4827]: I0131 03:46:48.060246 4827 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm" seLinuxMountContext="" Jan 31 03:46:48 crc kubenswrapper[4827]: I0131 03:46:48.060281 4827 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" 
volumeName="kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz" seLinuxMountContext="" Jan 31 03:46:48 crc kubenswrapper[4827]: I0131 03:46:48.060292 4827 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7" seLinuxMountContext="" Jan 31 03:46:48 crc kubenswrapper[4827]: I0131 03:46:48.060305 4827 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" seLinuxMountContext="" Jan 31 03:46:48 crc kubenswrapper[4827]: I0131 03:46:48.063875 4827 reconstruct.go:144] "Volume is marked device as uncertain and added into the actual state" volumeName="kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" deviceMountPath="/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount" Jan 31 03:46:48 crc kubenswrapper[4827]: I0131 03:46:48.063932 4827 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca" seLinuxMountContext="" Jan 31 03:46:48 crc kubenswrapper[4827]: I0131 03:46:48.063951 4827 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca" seLinuxMountContext="" Jan 31 03:46:48 crc kubenswrapper[4827]: I0131 03:46:48.063970 4827 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config" seLinuxMountContext="" Jan 31 03:46:48 crc kubenswrapper[4827]: I0131 03:46:48.063990 4827 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" volumeName="kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca" seLinuxMountContext="" Jan 31 03:46:48 crc kubenswrapper[4827]: I0131 03:46:48.064004 4827 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config" seLinuxMountContext="" Jan 31 03:46:48 crc kubenswrapper[4827]: I0131 03:46:48.064019 4827 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert" seLinuxMountContext="" Jan 31 03:46:48 crc kubenswrapper[4827]: I0131 03:46:48.064036 4827 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert" seLinuxMountContext="" Jan 31 03:46:48 crc kubenswrapper[4827]: I0131 03:46:48.064051 4827 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert" seLinuxMountContext="" Jan 31 03:46:48 crc kubenswrapper[4827]: I0131 03:46:48.064066 4827 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" 
volumeName="kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert" seLinuxMountContext="" Jan 31 03:46:48 crc kubenswrapper[4827]: I0131 03:46:48.064081 4827 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3ab1a177-2de0-46d9-b765-d0d0649bb42e" volumeName="kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj" seLinuxMountContext="" Jan 31 03:46:48 crc kubenswrapper[4827]: I0131 03:46:48.064098 4827 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca" seLinuxMountContext="" Jan 31 03:46:48 crc kubenswrapper[4827]: I0131 03:46:48.064113 4827 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb" seLinuxMountContext="" Jan 31 03:46:48 crc kubenswrapper[4827]: I0131 03:46:48.064156 4827 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d751cbb-f2e2-430d-9754-c882a5e924a5" volumeName="kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl" seLinuxMountContext="" Jan 31 03:46:48 crc kubenswrapper[4827]: I0131 03:46:48.064173 4827 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl" seLinuxMountContext="" Jan 31 03:46:48 crc kubenswrapper[4827]: I0131 03:46:48.064190 4827 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" 
volumeName="kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config" seLinuxMountContext="" Jan 31 03:46:48 crc kubenswrapper[4827]: I0131 03:46:48.064208 4827 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88" seLinuxMountContext="" Jan 31 03:46:48 crc kubenswrapper[4827]: I0131 03:46:48.064224 4827 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca" seLinuxMountContext="" Jan 31 03:46:48 crc kubenswrapper[4827]: I0131 03:46:48.064240 4827 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config" seLinuxMountContext="" Jan 31 03:46:48 crc kubenswrapper[4827]: I0131 03:46:48.064256 4827 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5" seLinuxMountContext="" Jan 31 03:46:48 crc kubenswrapper[4827]: I0131 03:46:48.064273 4827 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5b88f790-22fa-440e-b583-365168c0b23d" volumeName="kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs" seLinuxMountContext="" Jan 31 03:46:48 crc kubenswrapper[4827]: I0131 03:46:48.064290 4827 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" 
volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides" seLinuxMountContext="" Jan 31 03:46:48 crc kubenswrapper[4827]: I0131 03:46:48.064306 4827 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" volumeName="kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls" seLinuxMountContext="" Jan 31 03:46:48 crc kubenswrapper[4827]: I0131 03:46:48.064322 4827 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca" seLinuxMountContext="" Jan 31 03:46:48 crc kubenswrapper[4827]: I0131 03:46:48.064340 4827 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782" seLinuxMountContext="" Jan 31 03:46:48 crc kubenswrapper[4827]: I0131 03:46:48.064356 4827 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert" seLinuxMountContext="" Jan 31 03:46:48 crc kubenswrapper[4827]: I0131 03:46:48.064373 4827 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs" seLinuxMountContext="" Jan 31 03:46:48 crc kubenswrapper[4827]: I0131 03:46:48.064389 4827 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" 
volumeName="kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config" seLinuxMountContext="" Jan 31 03:46:48 crc kubenswrapper[4827]: I0131 03:46:48.064429 4827 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth" seLinuxMountContext="" Jan 31 03:46:48 crc kubenswrapper[4827]: I0131 03:46:48.064445 4827 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca" seLinuxMountContext="" Jan 31 03:46:48 crc kubenswrapper[4827]: I0131 03:46:48.064460 4827 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="37a5e44f-9a88-4405-be8a-b645485e7312" volumeName="kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf" seLinuxMountContext="" Jan 31 03:46:48 crc kubenswrapper[4827]: I0131 03:46:48.064477 4827 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token" seLinuxMountContext="" Jan 31 03:46:48 crc kubenswrapper[4827]: I0131 03:46:48.064494 4827 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6731426b-95fe-49ff-bb5f-40441049fde2" volumeName="kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh" seLinuxMountContext="" Jan 31 03:46:48 crc kubenswrapper[4827]: I0131 03:46:48.064509 4827 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6731426b-95fe-49ff-bb5f-40441049fde2" 
volumeName="kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls" seLinuxMountContext="" Jan 31 03:46:48 crc kubenswrapper[4827]: I0131 03:46:48.064525 4827 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6" seLinuxMountContext="" Jan 31 03:46:48 crc kubenswrapper[4827]: I0131 03:46:48.064539 4827 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities" seLinuxMountContext="" Jan 31 03:46:48 crc kubenswrapper[4827]: I0131 03:46:48.064556 4827 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert" seLinuxMountContext="" Jan 31 03:46:48 crc kubenswrapper[4827]: I0131 03:46:48.064571 4827 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert" seLinuxMountContext="" Jan 31 03:46:48 crc kubenswrapper[4827]: I0131 03:46:48.064587 4827 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz" seLinuxMountContext="" Jan 31 03:46:48 crc kubenswrapper[4827]: I0131 03:46:48.064604 4827 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" 
volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config" seLinuxMountContext="" Jan 31 03:46:48 crc kubenswrapper[4827]: I0131 03:46:48.064620 4827 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert" seLinuxMountContext="" Jan 31 03:46:48 crc kubenswrapper[4827]: I0131 03:46:48.064637 4827 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data" seLinuxMountContext="" Jan 31 03:46:48 crc kubenswrapper[4827]: I0131 03:46:48.064654 4827 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content" seLinuxMountContext="" Jan 31 03:46:48 crc kubenswrapper[4827]: I0131 03:46:48.064670 4827 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content" seLinuxMountContext="" Jan 31 03:46:48 crc kubenswrapper[4827]: I0131 03:46:48.064690 4827 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config" seLinuxMountContext="" Jan 31 03:46:48 crc kubenswrapper[4827]: I0131 03:46:48.064707 4827 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" 
volumeName="kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf" seLinuxMountContext="" Jan 31 03:46:48 crc kubenswrapper[4827]: I0131 03:46:48.064726 4827 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp" seLinuxMountContext="" Jan 31 03:46:48 crc kubenswrapper[4827]: I0131 03:46:48.064742 4827 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs" seLinuxMountContext="" Jan 31 03:46:48 crc kubenswrapper[4827]: I0131 03:46:48.064758 4827 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig" seLinuxMountContext="" Jan 31 03:46:48 crc kubenswrapper[4827]: I0131 03:46:48.064774 4827 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd" seLinuxMountContext="" Jan 31 03:46:48 crc kubenswrapper[4827]: I0131 03:46:48.064790 4827 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls" seLinuxMountContext="" Jan 31 03:46:48 crc kubenswrapper[4827]: I0131 03:46:48.064807 4827 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" 
volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca" seLinuxMountContext="" Jan 31 03:46:48 crc kubenswrapper[4827]: I0131 03:46:48.064825 4827 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access" seLinuxMountContext="" Jan 31 03:46:48 crc kubenswrapper[4827]: I0131 03:46:48.064843 4827 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection" seLinuxMountContext="" Jan 31 03:46:48 crc kubenswrapper[4827]: I0131 03:46:48.064860 4827 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session" seLinuxMountContext="" Jan 31 03:46:48 crc kubenswrapper[4827]: I0131 03:46:48.064877 4827 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls" seLinuxMountContext="" Jan 31 03:46:48 crc kubenswrapper[4827]: I0131 03:46:48.064917 4827 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert" seLinuxMountContext="" Jan 31 03:46:48 crc kubenswrapper[4827]: I0131 03:46:48.064934 4827 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" 
volumeName="kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config" seLinuxMountContext="" Jan 31 03:46:48 crc kubenswrapper[4827]: I0131 03:46:48.064951 4827 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle" seLinuxMountContext="" Jan 31 03:46:48 crc kubenswrapper[4827]: I0131 03:46:48.064966 4827 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca" seLinuxMountContext="" Jan 31 03:46:48 crc kubenswrapper[4827]: I0131 03:46:48.064983 4827 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics" seLinuxMountContext="" Jan 31 03:46:48 crc kubenswrapper[4827]: I0131 03:46:48.065001 4827 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle" seLinuxMountContext="" Jan 31 03:46:48 crc kubenswrapper[4827]: I0131 03:46:48.065016 4827 reconstruct.go:97] "Volume reconstruction finished" Jan 31 03:46:48 crc kubenswrapper[4827]: I0131 03:46:48.065027 4827 reconciler.go:26] "Reconciler: start to sync state" Jan 31 03:46:48 crc kubenswrapper[4827]: I0131 03:46:48.090678 4827 manager.go:324] Recovery completed Jan 31 03:46:48 crc kubenswrapper[4827]: I0131 03:46:48.104662 4827 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv4" Jan 31 03:46:48 crc kubenswrapper[4827]: I0131 03:46:48.108613 4827 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Jan 31 03:46:48 crc kubenswrapper[4827]: I0131 03:46:48.108683 4827 status_manager.go:217] "Starting to sync pod status with apiserver" Jan 31 03:46:48 crc kubenswrapper[4827]: I0131 03:46:48.108723 4827 kubelet.go:2335] "Starting kubelet main sync loop" Jan 31 03:46:48 crc kubenswrapper[4827]: E0131 03:46:48.108812 4827 kubelet.go:2359] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 31 03:46:48 crc kubenswrapper[4827]: I0131 03:46:48.110634 4827 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 31 03:46:48 crc kubenswrapper[4827]: W0131 03:46:48.112066 4827 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.102.83.80:6443: connect: connection refused Jan 31 03:46:48 crc kubenswrapper[4827]: E0131 03:46:48.112156 4827 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.80:6443: connect: connection refused" logger="UnhandledError" Jan 31 03:46:48 crc kubenswrapper[4827]: I0131 03:46:48.112757 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:46:48 crc kubenswrapper[4827]: I0131 03:46:48.112837 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:46:48 crc kubenswrapper[4827]: I0131 03:46:48.112850 4827 kubelet_node_status.go:724] "Recording event 
message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:46:48 crc kubenswrapper[4827]: I0131 03:46:48.113953 4827 cpu_manager.go:225] "Starting CPU manager" policy="none" Jan 31 03:46:48 crc kubenswrapper[4827]: I0131 03:46:48.113991 4827 cpu_manager.go:226] "Reconciling" reconcilePeriod="10s" Jan 31 03:46:48 crc kubenswrapper[4827]: I0131 03:46:48.114025 4827 state_mem.go:36] "Initialized new in-memory state store" Jan 31 03:46:48 crc kubenswrapper[4827]: I0131 03:46:48.127553 4827 policy_none.go:49] "None policy: Start" Jan 31 03:46:48 crc kubenswrapper[4827]: I0131 03:46:48.129289 4827 memory_manager.go:170] "Starting memorymanager" policy="None" Jan 31 03:46:48 crc kubenswrapper[4827]: I0131 03:46:48.129326 4827 state_mem.go:35] "Initializing new in-memory state store" Jan 31 03:46:48 crc kubenswrapper[4827]: E0131 03:46:48.145409 4827 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Jan 31 03:46:48 crc kubenswrapper[4827]: I0131 03:46:48.182817 4827 manager.go:334] "Starting Device Plugin manager" Jan 31 03:46:48 crc kubenswrapper[4827]: I0131 03:46:48.183171 4827 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jan 31 03:46:48 crc kubenswrapper[4827]: I0131 03:46:48.183188 4827 server.go:79] "Starting device plugin registration server" Jan 31 03:46:48 crc kubenswrapper[4827]: I0131 03:46:48.183617 4827 eviction_manager.go:189] "Eviction manager: starting control loop" Jan 31 03:46:48 crc kubenswrapper[4827]: I0131 03:46:48.183635 4827 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 31 03:46:48 crc kubenswrapper[4827]: I0131 03:46:48.183959 4827 plugin_watcher.go:51] "Plugin Watcher Start" path="/var/lib/kubelet/plugins_registry" Jan 31 03:46:48 crc kubenswrapper[4827]: I0131 03:46:48.184061 4827 plugin_manager.go:116] "The desired_state_of_world populator 
(plugin watcher) starts" Jan 31 03:46:48 crc kubenswrapper[4827]: I0131 03:46:48.184074 4827 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 31 03:46:48 crc kubenswrapper[4827]: E0131 03:46:48.191135 4827 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Jan 31 03:46:48 crc kubenswrapper[4827]: I0131 03:46:48.209037 4827 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-etcd/etcd-crc","openshift-kube-apiserver/kube-apiserver-crc","openshift-kube-controller-manager/kube-controller-manager-crc","openshift-kube-scheduler/openshift-kube-scheduler-crc","openshift-machine-config-operator/kube-rbac-proxy-crio-crc"] Jan 31 03:46:48 crc kubenswrapper[4827]: I0131 03:46:48.209150 4827 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 31 03:46:48 crc kubenswrapper[4827]: I0131 03:46:48.211265 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:46:48 crc kubenswrapper[4827]: I0131 03:46:48.211320 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:46:48 crc kubenswrapper[4827]: I0131 03:46:48.211333 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:46:48 crc kubenswrapper[4827]: I0131 03:46:48.211504 4827 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 31 03:46:48 crc kubenswrapper[4827]: I0131 03:46:48.211971 4827 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-etcd/etcd-crc" Jan 31 03:46:48 crc kubenswrapper[4827]: I0131 03:46:48.212048 4827 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 31 03:46:48 crc kubenswrapper[4827]: I0131 03:46:48.212454 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:46:48 crc kubenswrapper[4827]: I0131 03:46:48.212477 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:46:48 crc kubenswrapper[4827]: I0131 03:46:48.212488 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:46:48 crc kubenswrapper[4827]: I0131 03:46:48.212571 4827 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 31 03:46:48 crc kubenswrapper[4827]: I0131 03:46:48.212758 4827 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 31 03:46:48 crc kubenswrapper[4827]: I0131 03:46:48.212821 4827 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 31 03:46:48 crc kubenswrapper[4827]: I0131 03:46:48.213091 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:46:48 crc kubenswrapper[4827]: I0131 03:46:48.213137 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:46:48 crc kubenswrapper[4827]: I0131 03:46:48.213160 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:46:48 crc kubenswrapper[4827]: I0131 03:46:48.213161 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:46:48 crc kubenswrapper[4827]: I0131 03:46:48.213227 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:46:48 crc kubenswrapper[4827]: I0131 03:46:48.213240 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:46:48 crc kubenswrapper[4827]: I0131 03:46:48.213367 4827 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 31 03:46:48 crc kubenswrapper[4827]: I0131 03:46:48.213466 4827 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 31 03:46:48 crc kubenswrapper[4827]: I0131 03:46:48.213520 4827 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 31 03:46:48 crc kubenswrapper[4827]: I0131 03:46:48.214332 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:46:48 crc kubenswrapper[4827]: I0131 03:46:48.214352 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:46:48 crc kubenswrapper[4827]: I0131 03:46:48.214359 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:46:48 crc kubenswrapper[4827]: I0131 03:46:48.214366 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:46:48 crc kubenswrapper[4827]: I0131 03:46:48.214370 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:46:48 crc kubenswrapper[4827]: I0131 03:46:48.214377 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:46:48 crc kubenswrapper[4827]: I0131 03:46:48.214586 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:46:48 crc kubenswrapper[4827]: I0131 03:46:48.214612 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:46:48 crc kubenswrapper[4827]: I0131 03:46:48.214624 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:46:48 crc kubenswrapper[4827]: I0131 03:46:48.214767 4827 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 31 03:46:48 crc 
kubenswrapper[4827]: I0131 03:46:48.215003 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 31 03:46:48 crc kubenswrapper[4827]: I0131 03:46:48.215035 4827 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 31 03:46:48 crc kubenswrapper[4827]: I0131 03:46:48.216030 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:46:48 crc kubenswrapper[4827]: I0131 03:46:48.216049 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:46:48 crc kubenswrapper[4827]: I0131 03:46:48.216034 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:46:48 crc kubenswrapper[4827]: I0131 03:46:48.216068 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:46:48 crc kubenswrapper[4827]: I0131 03:46:48.216092 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:46:48 crc kubenswrapper[4827]: I0131 03:46:48.216056 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:46:48 crc kubenswrapper[4827]: I0131 03:46:48.216249 4827 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 31 03:46:48 crc kubenswrapper[4827]: I0131 03:46:48.216266 4827 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 31 03:46:48 crc kubenswrapper[4827]: I0131 03:46:48.217655 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:46:48 crc kubenswrapper[4827]: I0131 03:46:48.217702 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:46:48 crc kubenswrapper[4827]: I0131 03:46:48.217713 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:46:48 crc kubenswrapper[4827]: E0131 03:46:48.256210 4827 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.80:6443: connect: connection refused" interval="400ms" Jan 31 03:46:48 crc kubenswrapper[4827]: I0131 03:46:48.268321 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 31 03:46:48 crc kubenswrapper[4827]: I0131 03:46:48.268361 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 31 03:46:48 crc kubenswrapper[4827]: I0131 03:46:48.268385 4827 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 31 03:46:48 crc kubenswrapper[4827]: I0131 03:46:48.268402 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 31 03:46:48 crc kubenswrapper[4827]: I0131 03:46:48.268419 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 31 03:46:48 crc kubenswrapper[4827]: I0131 03:46:48.268440 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 31 03:46:48 crc kubenswrapper[4827]: I0131 03:46:48.268509 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 31 03:46:48 crc kubenswrapper[4827]: I0131 03:46:48.268587 4827 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 31 03:46:48 crc kubenswrapper[4827]: I0131 03:46:48.268619 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 31 03:46:48 crc kubenswrapper[4827]: I0131 03:46:48.268695 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 31 03:46:48 crc kubenswrapper[4827]: I0131 03:46:48.268740 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 31 03:46:48 crc kubenswrapper[4827]: I0131 03:46:48.268762 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 31 03:46:48 crc kubenswrapper[4827]: I0131 03:46:48.268803 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: 
\"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 31 03:46:48 crc kubenswrapper[4827]: I0131 03:46:48.268824 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 31 03:46:48 crc kubenswrapper[4827]: I0131 03:46:48.268941 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 31 03:46:48 crc kubenswrapper[4827]: I0131 03:46:48.284705 4827 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 31 03:46:48 crc kubenswrapper[4827]: I0131 03:46:48.286355 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:46:48 crc kubenswrapper[4827]: I0131 03:46:48.286411 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:46:48 crc kubenswrapper[4827]: I0131 03:46:48.286422 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:46:48 crc kubenswrapper[4827]: I0131 03:46:48.286451 4827 kubelet_node_status.go:76] "Attempting to register node" node="crc" Jan 31 03:46:48 crc kubenswrapper[4827]: E0131 03:46:48.287117 4827 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 
38.102.83.80:6443: connect: connection refused" node="crc" Jan 31 03:46:48 crc kubenswrapper[4827]: I0131 03:46:48.369779 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 31 03:46:48 crc kubenswrapper[4827]: I0131 03:46:48.369849 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 31 03:46:48 crc kubenswrapper[4827]: I0131 03:46:48.369905 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 31 03:46:48 crc kubenswrapper[4827]: I0131 03:46:48.369944 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 31 03:46:48 crc kubenswrapper[4827]: I0131 03:46:48.369975 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 31 03:46:48 crc kubenswrapper[4827]: I0131 03:46:48.370001 4827 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 31 03:46:48 crc kubenswrapper[4827]: I0131 03:46:48.370030 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 31 03:46:48 crc kubenswrapper[4827]: I0131 03:46:48.370058 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 31 03:46:48 crc kubenswrapper[4827]: I0131 03:46:48.370113 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 31 03:46:48 crc kubenswrapper[4827]: I0131 03:46:48.370174 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 31 03:46:48 crc kubenswrapper[4827]: I0131 03:46:48.370197 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " 
pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 31 03:46:48 crc kubenswrapper[4827]: I0131 03:46:48.370235 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 31 03:46:48 crc kubenswrapper[4827]: I0131 03:46:48.370091 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 31 03:46:48 crc kubenswrapper[4827]: I0131 03:46:48.370269 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 31 03:46:48 crc kubenswrapper[4827]: I0131 03:46:48.370312 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 31 03:46:48 crc kubenswrapper[4827]: I0131 03:46:48.370300 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 31 03:46:48 crc kubenswrapper[4827]: I0131 03:46:48.370260 4827 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 31 03:46:48 crc kubenswrapper[4827]: I0131 03:46:48.370341 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 31 03:46:48 crc kubenswrapper[4827]: I0131 03:46:48.370449 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 31 03:46:48 crc kubenswrapper[4827]: I0131 03:46:48.370416 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 31 03:46:48 crc kubenswrapper[4827]: I0131 03:46:48.370491 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 31 03:46:48 crc kubenswrapper[4827]: I0131 03:46:48.370532 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: 
\"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 31 03:46:48 crc kubenswrapper[4827]: I0131 03:46:48.370562 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 31 03:46:48 crc kubenswrapper[4827]: I0131 03:46:48.370583 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 31 03:46:48 crc kubenswrapper[4827]: I0131 03:46:48.370615 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 31 03:46:48 crc kubenswrapper[4827]: I0131 03:46:48.370632 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 31 03:46:48 crc kubenswrapper[4827]: I0131 03:46:48.370587 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " 
pod="openshift-etcd/etcd-crc" Jan 31 03:46:48 crc kubenswrapper[4827]: I0131 03:46:48.370696 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 31 03:46:48 crc kubenswrapper[4827]: I0131 03:46:48.370711 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 31 03:46:48 crc kubenswrapper[4827]: I0131 03:46:48.370779 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 31 03:46:48 crc kubenswrapper[4827]: I0131 03:46:48.488309 4827 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 31 03:46:48 crc kubenswrapper[4827]: I0131 03:46:48.489672 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:46:48 crc kubenswrapper[4827]: I0131 03:46:48.489709 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:46:48 crc kubenswrapper[4827]: I0131 03:46:48.489721 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:46:48 crc kubenswrapper[4827]: I0131 03:46:48.489746 4827 kubelet_node_status.go:76] "Attempting to register node" node="crc" Jan 31 03:46:48 crc kubenswrapper[4827]: E0131 03:46:48.490268 4827 kubelet_node_status.go:99] "Unable to register node with API server" err="Post 
\"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.80:6443: connect: connection refused" node="crc" Jan 31 03:46:48 crc kubenswrapper[4827]: I0131 03:46:48.543056 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/etcd-crc" Jan 31 03:46:48 crc kubenswrapper[4827]: I0131 03:46:48.549512 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 31 03:46:48 crc kubenswrapper[4827]: I0131 03:46:48.566102 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 31 03:46:48 crc kubenswrapper[4827]: I0131 03:46:48.570620 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 31 03:46:48 crc kubenswrapper[4827]: I0131 03:46:48.595795 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 31 03:46:48 crc kubenswrapper[4827]: W0131 03:46:48.607234 4827 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2139d3e2895fc6797b9c76a1b4c9886d.slice/crio-b226bce0263d9d1447c4e92fabb92cba812690710ebc3b941435bba44e0e7a5a WatchSource:0}: Error finding container b226bce0263d9d1447c4e92fabb92cba812690710ebc3b941435bba44e0e7a5a: Status 404 returned error can't find the container with id b226bce0263d9d1447c4e92fabb92cba812690710ebc3b941435bba44e0e7a5a Jan 31 03:46:48 crc kubenswrapper[4827]: W0131 03:46:48.613143 4827 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf614b9022728cf315e60c057852e563e.slice/crio-3681525c14d1a6a9147c7f8a874de3b58d2b75f207a40a212298e492d5773c36 WatchSource:0}: Error finding container 
3681525c14d1a6a9147c7f8a874de3b58d2b75f207a40a212298e492d5773c36: Status 404 returned error can't find the container with id 3681525c14d1a6a9147c7f8a874de3b58d2b75f207a40a212298e492d5773c36 Jan 31 03:46:48 crc kubenswrapper[4827]: W0131 03:46:48.614008 4827 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3dcd261975c3d6b9a6ad6367fd4facd3.slice/crio-0ecccd17555400d7fdd7aa614c611ddb45f7a1d754acafc9784bd07219d258bf WatchSource:0}: Error finding container 0ecccd17555400d7fdd7aa614c611ddb45f7a1d754acafc9784bd07219d258bf: Status 404 returned error can't find the container with id 0ecccd17555400d7fdd7aa614c611ddb45f7a1d754acafc9784bd07219d258bf Jan 31 03:46:48 crc kubenswrapper[4827]: W0131 03:46:48.620413 4827 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf4b27818a5e8e43d0dc095d08835c792.slice/crio-e40fbd2a6c48425d4adb70948bc3e957c200db12a5e17dc15c83008720ad517b WatchSource:0}: Error finding container e40fbd2a6c48425d4adb70948bc3e957c200db12a5e17dc15c83008720ad517b: Status 404 returned error can't find the container with id e40fbd2a6c48425d4adb70948bc3e957c200db12a5e17dc15c83008720ad517b Jan 31 03:46:48 crc kubenswrapper[4827]: W0131 03:46:48.622630 4827 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd1b160f5dda77d281dd8e69ec8d817f9.slice/crio-a1d374a4a91754fcddebcd85f966036788d02749d5fcd644cfa79673db69ae02 WatchSource:0}: Error finding container a1d374a4a91754fcddebcd85f966036788d02749d5fcd644cfa79673db69ae02: Status 404 returned error can't find the container with id a1d374a4a91754fcddebcd85f966036788d02749d5fcd644cfa79673db69ae02 Jan 31 03:46:48 crc kubenswrapper[4827]: E0131 03:46:48.657145 4827 controller.go:145] "Failed to ensure lease exists, will retry" err="Get 
\"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.80:6443: connect: connection refused" interval="800ms" Jan 31 03:46:48 crc kubenswrapper[4827]: I0131 03:46:48.890647 4827 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 31 03:46:48 crc kubenswrapper[4827]: I0131 03:46:48.892776 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:46:48 crc kubenswrapper[4827]: I0131 03:46:48.892830 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:46:48 crc kubenswrapper[4827]: I0131 03:46:48.892848 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:46:48 crc kubenswrapper[4827]: I0131 03:46:48.892910 4827 kubelet_node_status.go:76] "Attempting to register node" node="crc" Jan 31 03:46:48 crc kubenswrapper[4827]: E0131 03:46:48.893448 4827 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.80:6443: connect: connection refused" node="crc" Jan 31 03:46:48 crc kubenswrapper[4827]: W0131 03:46:48.923514 4827 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.102.83.80:6443: connect: connection refused Jan 31 03:46:48 crc kubenswrapper[4827]: E0131 03:46:48.923647 4827 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.80:6443: connect: connection refused" logger="UnhandledError" 
Jan 31 03:46:49 crc kubenswrapper[4827]: W0131 03:46:49.011687 4827 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.102.83.80:6443: connect: connection refused Jan 31 03:46:49 crc kubenswrapper[4827]: E0131 03:46:49.011855 4827 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.80:6443: connect: connection refused" logger="UnhandledError" Jan 31 03:46:49 crc kubenswrapper[4827]: I0131 03:46:49.040654 4827 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.80:6443: connect: connection refused Jan 31 03:46:49 crc kubenswrapper[4827]: I0131 03:46:49.045861 4827 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-29 23:25:01.129302823 +0000 UTC Jan 31 03:46:49 crc kubenswrapper[4827]: W0131 03:46:49.050734 4827 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.102.83.80:6443: connect: connection refused Jan 31 03:46:49 crc kubenswrapper[4827]: E0131 03:46:49.050797 4827 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.80:6443: connect: connection refused" 
logger="UnhandledError" Jan 31 03:46:49 crc kubenswrapper[4827]: I0131 03:46:49.116547 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"0ecccd17555400d7fdd7aa614c611ddb45f7a1d754acafc9784bd07219d258bf"} Jan 31 03:46:49 crc kubenswrapper[4827]: I0131 03:46:49.117760 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"e40fbd2a6c48425d4adb70948bc3e957c200db12a5e17dc15c83008720ad517b"} Jan 31 03:46:49 crc kubenswrapper[4827]: I0131 03:46:49.118899 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"3681525c14d1a6a9147c7f8a874de3b58d2b75f207a40a212298e492d5773c36"} Jan 31 03:46:49 crc kubenswrapper[4827]: I0131 03:46:49.122275 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"b226bce0263d9d1447c4e92fabb92cba812690710ebc3b941435bba44e0e7a5a"} Jan 31 03:46:49 crc kubenswrapper[4827]: I0131 03:46:49.124942 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerStarted","Data":"a1d374a4a91754fcddebcd85f966036788d02749d5fcd644cfa79673db69ae02"} Jan 31 03:46:49 crc kubenswrapper[4827]: W0131 03:46:49.313952 4827 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.102.83.80:6443: connect: connection refused Jan 31 03:46:49 crc kubenswrapper[4827]: E0131 
03:46:49.314039 4827 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.80:6443: connect: connection refused" logger="UnhandledError" Jan 31 03:46:49 crc kubenswrapper[4827]: E0131 03:46:49.458337 4827 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.80:6443: connect: connection refused" interval="1.6s" Jan 31 03:46:49 crc kubenswrapper[4827]: I0131 03:46:49.694572 4827 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 31 03:46:49 crc kubenswrapper[4827]: I0131 03:46:49.696076 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:46:49 crc kubenswrapper[4827]: I0131 03:46:49.696171 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:46:49 crc kubenswrapper[4827]: I0131 03:46:49.696192 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:46:49 crc kubenswrapper[4827]: I0131 03:46:49.696256 4827 kubelet_node_status.go:76] "Attempting to register node" node="crc" Jan 31 03:46:49 crc kubenswrapper[4827]: E0131 03:46:49.696966 4827 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.80:6443: connect: connection refused" node="crc" Jan 31 03:46:50 crc kubenswrapper[4827]: I0131 03:46:50.018737 4827 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates Jan 31 03:46:50 crc kubenswrapper[4827]: E0131 03:46:50.019812 4827 
certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 38.102.83.80:6443: connect: connection refused" logger="UnhandledError" Jan 31 03:46:50 crc kubenswrapper[4827]: I0131 03:46:50.040784 4827 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.80:6443: connect: connection refused Jan 31 03:46:50 crc kubenswrapper[4827]: I0131 03:46:50.046319 4827 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-10 10:36:18.848076569 +0000 UTC Jan 31 03:46:50 crc kubenswrapper[4827]: I0131 03:46:50.132136 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"ce197f7aa8a7bebdecdbd760e9029786331bd9497f724b0a57f57278a5b1b04c"} Jan 31 03:46:50 crc kubenswrapper[4827]: I0131 03:46:50.132215 4827 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 31 03:46:50 crc kubenswrapper[4827]: I0131 03:46:50.132226 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"73a7c7196fc77ac970ea3114b593db0c9bef943c760526721ab5492975884237"} Jan 31 03:46:50 crc kubenswrapper[4827]: I0131 03:46:50.132248 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" 
event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"ff49646239dea28804978f1b3b721dcc9454572ff2301d0c92e146dacb2a3e5a"} Jan 31 03:46:50 crc kubenswrapper[4827]: I0131 03:46:50.132264 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"fb9a8fcde75a532b1906ceea4704ddfecbffeb17456377272e5792f200ee67d5"} Jan 31 03:46:50 crc kubenswrapper[4827]: I0131 03:46:50.133399 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:46:50 crc kubenswrapper[4827]: I0131 03:46:50.133448 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:46:50 crc kubenswrapper[4827]: I0131 03:46:50.133465 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:46:50 crc kubenswrapper[4827]: I0131 03:46:50.135617 4827 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="93d9cbfbbaa1a57bc880c1eadef877e52c66b8645127830242a33ad4fe9eecad" exitCode=0 Jan 31 03:46:50 crc kubenswrapper[4827]: I0131 03:46:50.135702 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"93d9cbfbbaa1a57bc880c1eadef877e52c66b8645127830242a33ad4fe9eecad"} Jan 31 03:46:50 crc kubenswrapper[4827]: I0131 03:46:50.135851 4827 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 31 03:46:50 crc kubenswrapper[4827]: I0131 03:46:50.137441 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:46:50 crc kubenswrapper[4827]: I0131 03:46:50.137483 4827 kubelet_node_status.go:724] "Recording event 
message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:46:50 crc kubenswrapper[4827]: I0131 03:46:50.137504 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:46:50 crc kubenswrapper[4827]: I0131 03:46:50.138131 4827 generic.go:334] "Generic (PLEG): container finished" podID="d1b160f5dda77d281dd8e69ec8d817f9" containerID="e35cf3948fcca6a410837ad274e5f47621a1287617f9f126b0638fc41634b722" exitCode=0 Jan 31 03:46:50 crc kubenswrapper[4827]: I0131 03:46:50.138199 4827 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 31 03:46:50 crc kubenswrapper[4827]: I0131 03:46:50.138236 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerDied","Data":"e35cf3948fcca6a410837ad274e5f47621a1287617f9f126b0638fc41634b722"} Jan 31 03:46:50 crc kubenswrapper[4827]: I0131 03:46:50.140280 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:46:50 crc kubenswrapper[4827]: I0131 03:46:50.140327 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:46:50 crc kubenswrapper[4827]: I0131 03:46:50.140343 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:46:50 crc kubenswrapper[4827]: I0131 03:46:50.141632 4827 generic.go:334] "Generic (PLEG): container finished" podID="3dcd261975c3d6b9a6ad6367fd4facd3" containerID="6f17307ab85479e60722f1f8484fb415c3609584ed9a20ebf17677a86558dc07" exitCode=0 Jan 31 03:46:50 crc kubenswrapper[4827]: I0131 03:46:50.141723 4827 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 31 03:46:50 crc kubenswrapper[4827]: I0131 03:46:50.141733 4827 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerDied","Data":"6f17307ab85479e60722f1f8484fb415c3609584ed9a20ebf17677a86558dc07"} Jan 31 03:46:50 crc kubenswrapper[4827]: I0131 03:46:50.143063 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:46:50 crc kubenswrapper[4827]: I0131 03:46:50.143090 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:46:50 crc kubenswrapper[4827]: I0131 03:46:50.143105 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:46:50 crc kubenswrapper[4827]: I0131 03:46:50.147949 4827 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="f40eed75b4eac164f95a4e1a719fe3e1da60abf85d533da313ceeb6f4fb07efd" exitCode=0 Jan 31 03:46:50 crc kubenswrapper[4827]: I0131 03:46:50.148205 4827 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 31 03:46:50 crc kubenswrapper[4827]: I0131 03:46:50.148009 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"f40eed75b4eac164f95a4e1a719fe3e1da60abf85d533da313ceeb6f4fb07efd"} Jan 31 03:46:50 crc kubenswrapper[4827]: I0131 03:46:50.149604 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:46:50 crc kubenswrapper[4827]: I0131 03:46:50.149642 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:46:50 crc kubenswrapper[4827]: I0131 03:46:50.149658 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientPID"
Jan 31 03:46:50 crc kubenswrapper[4827]: I0131 03:46:50.152769 4827 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 31 03:46:50 crc kubenswrapper[4827]: I0131 03:46:50.153581 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 31 03:46:50 crc kubenswrapper[4827]: I0131 03:46:50.153712 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 31 03:46:50 crc kubenswrapper[4827]: I0131 03:46:50.153814 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 31 03:46:51 crc kubenswrapper[4827]: I0131 03:46:51.040466 4827 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.80:6443: connect: connection refused
Jan 31 03:46:51 crc kubenswrapper[4827]: I0131 03:46:51.047083 4827 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-24 14:20:35.254029867 +0000 UTC
Jan 31 03:46:51 crc kubenswrapper[4827]: E0131 03:46:51.059681 4827 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.80:6443: connect: connection refused" interval="3.2s"
Jan 31 03:46:51 crc kubenswrapper[4827]: I0131 03:46:51.154412 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"7b5f30db796321372897f564fda3a031f0714999315931e2a77b21fde674f4fe"}
Jan 31 03:46:51 crc kubenswrapper[4827]: I0131 03:46:51.154502 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"3d0070a036855a06f83caec5b6ee636f1a7a1e7f3246b779e0abbcf6aebf259a"}
Jan 31 03:46:51 crc kubenswrapper[4827]: I0131 03:46:51.154527 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"19a6d75598a153bb0745bb06aa53961f13fa3ff1fd4addb84b3ecdc66d07ee4e"}
Jan 31 03:46:51 crc kubenswrapper[4827]: I0131 03:46:51.154435 4827 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 31 03:46:51 crc kubenswrapper[4827]: I0131 03:46:51.155914 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 31 03:46:51 crc kubenswrapper[4827]: I0131 03:46:51.155952 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 31 03:46:51 crc kubenswrapper[4827]: I0131 03:46:51.155966 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 31 03:46:51 crc kubenswrapper[4827]: I0131 03:46:51.159298 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"e3b2757893ba910c36404f09734ea47eee3ac5a8cfcc4c06ee4ed2ada48937b1"}
Jan 31 03:46:51 crc kubenswrapper[4827]: I0131 03:46:51.159365 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"6b86ba80bd25344b29df1636aa3d8cc4cecb0e5f9c0005b4896d3ee86cd80626"}
Jan 31 03:46:51 crc kubenswrapper[4827]: I0131 03:46:51.159380 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"8ae08b52cf4a0a6cc4589bbf0f0fbc8527ebac3455b90185928f6fa0b6c52874"}
Jan 31 03:46:51 crc kubenswrapper[4827]: I0131 03:46:51.159390 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"4146d39a1a819389ddaee9e11df66783f6b1b98ee5fb2a244d8ba9ff2ecc88f9"}
Jan 31 03:46:51 crc kubenswrapper[4827]: I0131 03:46:51.161110 4827 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="34a0f079ed201b286f30a3febf53506456336b1b3393890ca39917e49adca571" exitCode=0
Jan 31 03:46:51 crc kubenswrapper[4827]: I0131 03:46:51.161234 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"34a0f079ed201b286f30a3febf53506456336b1b3393890ca39917e49adca571"}
Jan 31 03:46:51 crc kubenswrapper[4827]: I0131 03:46:51.161355 4827 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 31 03:46:51 crc kubenswrapper[4827]: I0131 03:46:51.162626 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 31 03:46:51 crc kubenswrapper[4827]: I0131 03:46:51.162656 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 31 03:46:51 crc kubenswrapper[4827]: I0131 03:46:51.162668 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 31 03:46:51 crc kubenswrapper[4827]: I0131 03:46:51.163813 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerStarted","Data":"d529d76d3e5d8974e4c40a38910f933f23fcac20fc41884ff762a482e5d2ea4f"}
Jan 31 03:46:51 crc kubenswrapper[4827]: I0131 03:46:51.167223 4827 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 31 03:46:51 crc kubenswrapper[4827]: I0131 03:46:51.167262 4827 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 31 03:46:51 crc kubenswrapper[4827]: I0131 03:46:51.169272 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 31 03:46:51 crc kubenswrapper[4827]: I0131 03:46:51.169389 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 31 03:46:51 crc kubenswrapper[4827]: I0131 03:46:51.169559 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 31 03:46:51 crc kubenswrapper[4827]: I0131 03:46:51.169655 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 31 03:46:51 crc kubenswrapper[4827]: I0131 03:46:51.169732 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 31 03:46:51 crc kubenswrapper[4827]: I0131 03:46:51.170024 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 31 03:46:51 crc kubenswrapper[4827]: I0131 03:46:51.297550 4827 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 31 03:46:51 crc kubenswrapper[4827]: I0131 03:46:51.299146 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 31 03:46:51 crc kubenswrapper[4827]: I0131 03:46:51.299189 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 31 03:46:51 crc kubenswrapper[4827]: I0131 03:46:51.299203 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 31 03:46:51 crc kubenswrapper[4827]: I0131 03:46:51.299236 4827 kubelet_node_status.go:76] "Attempting to register node" node="crc"
Jan 31 03:46:51 crc kubenswrapper[4827]: E0131 03:46:51.299981 4827 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.80:6443: connect: connection refused" node="crc"
Jan 31 03:46:51 crc kubenswrapper[4827]: E0131 03:46:51.302551 4827 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": dial tcp 38.102.83.80:6443: connect: connection refused" event="&Event{ObjectMeta:{crc.188fb41b536f826b default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-31 03:46:48.036827755 +0000 UTC m=+0.723908244,LastTimestamp:2026-01-31 03:46:48.036827755 +0000 UTC m=+0.723908244,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Jan 31 03:46:51 crc kubenswrapper[4827]: W0131 03:46:51.328670 4827 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.102.83.80:6443: connect: connection refused
Jan 31 03:46:51 crc kubenswrapper[4827]: E0131 03:46:51.328819 4827 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.80:6443: connect: connection refused" logger="UnhandledError"
Jan 31 03:46:51 crc kubenswrapper[4827]: W0131 03:46:51.514269 4827 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.102.83.80:6443: connect: connection refused
Jan 31 03:46:51 crc kubenswrapper[4827]: E0131 03:46:51.514383 4827 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.80:6443: connect: connection refused" logger="UnhandledError"
Jan 31 03:46:51 crc kubenswrapper[4827]: W0131 03:46:51.591972 4827 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.102.83.80:6443: connect: connection refused
Jan 31 03:46:51 crc kubenswrapper[4827]: E0131 03:46:51.592099 4827 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.80:6443: connect: connection refused" logger="UnhandledError"
Jan 31 03:46:51 crc kubenswrapper[4827]: W0131 03:46:51.726326 4827 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.102.83.80:6443: connect: connection refused
Jan 31 03:46:51 crc kubenswrapper[4827]: E0131 03:46:51.726470 4827 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.80:6443: connect: connection refused" logger="UnhandledError"
Jan 31 03:46:52 crc kubenswrapper[4827]: I0131 03:46:52.047840 4827 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-19 05:13:52.986793993 +0000 UTC
Jan 31 03:46:52 crc kubenswrapper[4827]: I0131 03:46:52.170531 4827 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="0b40907772b53aa24e77d0da09972ea1ebf4bfb3451bd294d7071e477878b8c6" exitCode=0
Jan 31 03:46:52 crc kubenswrapper[4827]: I0131 03:46:52.170625 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"0b40907772b53aa24e77d0da09972ea1ebf4bfb3451bd294d7071e477878b8c6"}
Jan 31 03:46:52 crc kubenswrapper[4827]: I0131 03:46:52.170765 4827 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 31 03:46:52 crc kubenswrapper[4827]: I0131 03:46:52.172372 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 31 03:46:52 crc kubenswrapper[4827]: I0131 03:46:52.172429 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 31 03:46:52 crc kubenswrapper[4827]: I0131 03:46:52.172450 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 31 03:46:52 crc kubenswrapper[4827]: I0131 03:46:52.175492 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"cfc1fe8805b7b20bc4e718ed74d4c2f265e69ed3fdc328c0f1da81450bb78bde"}
Jan 31 03:46:52 crc kubenswrapper[4827]: I0131 03:46:52.175547 4827 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Jan 31 03:46:52 crc kubenswrapper[4827]: I0131 03:46:52.175648 4827 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 31 03:46:52 crc kubenswrapper[4827]: I0131 03:46:52.175711 4827 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 31 03:46:52 crc kubenswrapper[4827]: I0131 03:46:52.175722 4827 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 31 03:46:52 crc kubenswrapper[4827]: I0131 03:46:52.177418 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 31 03:46:52 crc kubenswrapper[4827]: I0131 03:46:52.177467 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 31 03:46:52 crc kubenswrapper[4827]: I0131 03:46:52.177488 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 31 03:46:52 crc kubenswrapper[4827]: I0131 03:46:52.177546 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 31 03:46:52 crc kubenswrapper[4827]: I0131 03:46:52.177584 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 31 03:46:52 crc kubenswrapper[4827]: I0131 03:46:52.177602 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 31 03:46:52 crc kubenswrapper[4827]: I0131 03:46:52.178025 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 31 03:46:52 crc kubenswrapper[4827]: I0131 03:46:52.178088 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 31 03:46:52 crc kubenswrapper[4827]: I0131 03:46:52.178109 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 31 03:46:52 crc kubenswrapper[4827]: I0131 03:46:52.650217 4827 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 31 03:46:53 crc kubenswrapper[4827]: I0131 03:46:53.048457 4827 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-08 13:14:44.281277543 +0000 UTC
Jan 31 03:46:53 crc kubenswrapper[4827]: I0131 03:46:53.184213 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"753f15aab7eadd5e66325daed9c1b711867547bb0e7f7a66a625333020771c36"}
Jan 31 03:46:53 crc kubenswrapper[4827]: I0131 03:46:53.184264 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"a6326f7e45db29abaa4d868250cbaee4f7c0253cf1d97a1fa52342c889e7cce9"}
Jan 31 03:46:53 crc kubenswrapper[4827]: I0131 03:46:53.184276 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"5e6eb640d651ffa7c4ea6ebf12f3e5ed25c8c3ed6a266bd7d00ced163351fdc0"}
Jan 31 03:46:53 crc kubenswrapper[4827]: I0131 03:46:53.184396 4827 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Jan 31 03:46:53 crc kubenswrapper[4827]: I0131 03:46:53.184478 4827 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 31 03:46:53 crc kubenswrapper[4827]: I0131 03:46:53.186014 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 31 03:46:53 crc kubenswrapper[4827]: I0131 03:46:53.186085 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 31 03:46:53 crc kubenswrapper[4827]: I0131 03:46:53.186104 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 31 03:46:53 crc kubenswrapper[4827]: I0131 03:46:53.699542 4827 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Jan 31 03:46:53 crc kubenswrapper[4827]: I0131 03:46:53.699755 4827 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 31 03:46:53 crc kubenswrapper[4827]: I0131 03:46:53.701138 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 31 03:46:53 crc kubenswrapper[4827]: I0131 03:46:53.701224 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 31 03:46:53 crc kubenswrapper[4827]: I0131 03:46:53.701294 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 31 03:46:53 crc kubenswrapper[4827]: I0131 03:46:53.759775 4827 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Jan 31 03:46:53 crc kubenswrapper[4827]: I0131 03:46:53.913857 4827 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 31 03:46:54 crc kubenswrapper[4827]: I0131 03:46:54.048858 4827 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-08 19:28:18.074548174 +0000 UTC
Jan 31 03:46:54 crc kubenswrapper[4827]: I0131 03:46:54.193357 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"92df2693729c1dd3423580206df4937a7e955514f7403a844700f88c6522457e"}
Jan 31 03:46:54 crc kubenswrapper[4827]: I0131 03:46:54.193763 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"2c4c0d8f3e2e4e5aa0d4856dadd352d9d7492f5d95bb15f65752c0c47ff66fff"}
Jan 31 03:46:54 crc kubenswrapper[4827]: I0131 03:46:54.193450 4827 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 31 03:46:54 crc kubenswrapper[4827]: I0131 03:46:54.193558 4827 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 31 03:46:54 crc kubenswrapper[4827]: I0131 03:46:54.193441 4827 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 31 03:46:54 crc kubenswrapper[4827]: I0131 03:46:54.195651 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 31 03:46:54 crc kubenswrapper[4827]: I0131 03:46:54.195699 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 31 03:46:54 crc kubenswrapper[4827]: I0131 03:46:54.195718 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 31 03:46:54 crc kubenswrapper[4827]: I0131 03:46:54.195765 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 31 03:46:54 crc kubenswrapper[4827]: I0131 03:46:54.195801 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 31 03:46:54 crc kubenswrapper[4827]: I0131 03:46:54.195817 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 31 03:46:54 crc kubenswrapper[4827]: I0131 03:46:54.196837 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 31 03:46:54 crc kubenswrapper[4827]: I0131 03:46:54.196948 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 31 03:46:54 crc kubenswrapper[4827]: I0131 03:46:54.196964 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 31 03:46:54 crc kubenswrapper[4827]: I0131 03:46:54.201434 4827 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates
Jan 31 03:46:54 crc kubenswrapper[4827]: I0131 03:46:54.500393 4827 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 31 03:46:54 crc kubenswrapper[4827]: I0131 03:46:54.502701 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 31 03:46:54 crc kubenswrapper[4827]: I0131 03:46:54.502785 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 31 03:46:54 crc kubenswrapper[4827]: I0131 03:46:54.502807 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 31 03:46:54 crc kubenswrapper[4827]: I0131 03:46:54.502848 4827 kubelet_node_status.go:76] "Attempting to register node" node="crc"
Jan 31 03:46:55 crc kubenswrapper[4827]: I0131 03:46:55.050264 4827 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-04 19:12:27.278791206 +0000 UTC
Jan 31 03:46:55 crc kubenswrapper[4827]: I0131 03:46:55.196611 4827 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 31 03:46:55 crc kubenswrapper[4827]: I0131 03:46:55.196764 4827 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 31 03:46:55 crc kubenswrapper[4827]: I0131 03:46:55.197633 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 31 03:46:55 crc kubenswrapper[4827]: I0131 03:46:55.197691 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 31 03:46:55 crc kubenswrapper[4827]: I0131 03:46:55.197709 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 31 03:46:55 crc kubenswrapper[4827]: I0131 03:46:55.198277 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 31 03:46:55 crc kubenswrapper[4827]: I0131 03:46:55.198300 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 31 03:46:55 crc kubenswrapper[4827]: I0131 03:46:55.198317 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 31 03:46:56 crc kubenswrapper[4827]: I0131 03:46:56.051343 4827 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-26 18:08:02.18218928 +0000 UTC
Jan 31 03:46:56 crc kubenswrapper[4827]: I0131 03:46:56.265957 4827 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-etcd/etcd-crc"
Jan 31 03:46:56 crc kubenswrapper[4827]: I0131 03:46:56.266232 4827 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 31 03:46:56 crc kubenswrapper[4827]: I0131 03:46:56.268079 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 31 03:46:56 crc kubenswrapper[4827]: I0131 03:46:56.268134 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 31 03:46:56 crc kubenswrapper[4827]: I0131 03:46:56.268154 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 31 03:46:56 crc kubenswrapper[4827]: I0131 03:46:56.284466 4827 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 31 03:46:56 crc kubenswrapper[4827]: I0131 03:46:56.284811 4827 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 31 03:46:56 crc kubenswrapper[4827]: I0131 03:46:56.286755 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 31 03:46:56 crc kubenswrapper[4827]: I0131 03:46:56.286824 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 31 03:46:56 crc kubenswrapper[4827]: I0131 03:46:56.286861 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 31 03:46:56 crc kubenswrapper[4827]: I0131 03:46:56.700633 4827 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10357/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body=
Jan 31 03:46:56 crc kubenswrapper[4827]: I0131 03:46:56.700761 4827 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://192.168.126.11:10357/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Jan 31 03:46:57 crc kubenswrapper[4827]: I0131 03:46:57.052059 4827 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-08 11:24:33.661574628 +0000 UTC
Jan 31 03:46:57 crc kubenswrapper[4827]: I0131 03:46:57.722147 4827 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Jan 31 03:46:57 crc kubenswrapper[4827]: I0131 03:46:57.722788 4827 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 31 03:46:57 crc kubenswrapper[4827]: I0131 03:46:57.724726 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 31 03:46:57 crc kubenswrapper[4827]: I0131 03:46:57.724826 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 31 03:46:57 crc kubenswrapper[4827]: I0131 03:46:57.724860 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 31 03:46:58 crc kubenswrapper[4827]: I0131 03:46:58.052652 4827 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-01 13:47:01.581253014 +0000 UTC
Jan 31 03:46:58 crc kubenswrapper[4827]: E0131 03:46:58.191864 4827 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found"
Jan 31 03:46:58 crc kubenswrapper[4827]: I0131 03:46:58.230736 4827 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc"
Jan 31 03:46:58 crc kubenswrapper[4827]: I0131 03:46:58.231103 4827 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 31 03:46:58 crc kubenswrapper[4827]: I0131 03:46:58.233229 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 31 03:46:58 crc kubenswrapper[4827]: I0131 03:46:58.233309 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 31 03:46:58 crc kubenswrapper[4827]: I0131 03:46:58.233331 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 31 03:46:58 crc kubenswrapper[4827]: I0131 03:46:58.823525 4827 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Jan 31 03:46:58 crc kubenswrapper[4827]: I0131 03:46:58.823936 4827 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 31 03:46:58 crc kubenswrapper[4827]: I0131 03:46:58.825968 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 31 03:46:58 crc kubenswrapper[4827]: I0131 03:46:58.826042 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 31 03:46:58 crc kubenswrapper[4827]: I0131 03:46:58.826080 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 31 03:46:58 crc kubenswrapper[4827]: I0131 03:46:58.829822 4827 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Jan 31 03:46:59 crc kubenswrapper[4827]: I0131 03:46:59.054485 4827 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-03 17:38:40.929780486 +0000 UTC
Jan 31 03:46:59 crc kubenswrapper[4827]: I0131 03:46:59.207996 4827 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 31 03:46:59 crc kubenswrapper[4827]: I0131 03:46:59.209200 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 31 03:46:59 crc kubenswrapper[4827]: I0131 03:46:59.209298 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 31 03:46:59 crc kubenswrapper[4827]: I0131 03:46:59.209313 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 31 03:46:59 crc kubenswrapper[4827]: I0131 03:46:59.214232 4827 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Jan 31 03:47:00 crc kubenswrapper[4827]: I0131 03:47:00.055206 4827 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-29 16:35:50.663950313 +0000 UTC
Jan 31 03:47:00 crc kubenswrapper[4827]: I0131 03:47:00.211097 4827 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 31 03:47:00 crc kubenswrapper[4827]: I0131 03:47:00.212922 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 31 03:47:00 crc kubenswrapper[4827]: I0131 03:47:00.213001 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 31 03:47:00 crc kubenswrapper[4827]: I0131 03:47:00.213027 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 31 03:47:01 crc kubenswrapper[4827]: I0131 03:47:01.056098 4827 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-10 10:26:00.706853361 +0000 UTC
Jan 31 03:47:02 crc kubenswrapper[4827]: I0131 03:47:02.040723 4827 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": net/http: TLS handshake timeout
Jan 31 03:47:02 crc kubenswrapper[4827]: I0131 03:47:02.057126 4827 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-10 13:24:02.450088116 +0000 UTC
Jan 31 03:47:02 crc kubenswrapper[4827]: I0131 03:47:02.219476 4827 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log"
Jan 31 03:47:02 crc kubenswrapper[4827]: I0131 03:47:02.221301 4827 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="cfc1fe8805b7b20bc4e718ed74d4c2f265e69ed3fdc328c0f1da81450bb78bde" exitCode=255
Jan 31 03:47:02 crc kubenswrapper[4827]: I0131 03:47:02.221336 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"cfc1fe8805b7b20bc4e718ed74d4c2f265e69ed3fdc328c0f1da81450bb78bde"}
Jan 31 03:47:02 crc kubenswrapper[4827]: I0131 03:47:02.221469 4827 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 31 03:47:02 crc kubenswrapper[4827]: I0131 03:47:02.222179 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 31 03:47:02 crc kubenswrapper[4827]: I0131 03:47:02.222205 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 31 03:47:02 crc kubenswrapper[4827]: I0131 03:47:02.222215 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 31 03:47:02 crc kubenswrapper[4827]: I0131 03:47:02.222641 4827 scope.go:117] "RemoveContainer" containerID="cfc1fe8805b7b20bc4e718ed74d4c2f265e69ed3fdc328c0f1da81450bb78bde"
Jan 31 03:47:02 crc kubenswrapper[4827]: I0131 03:47:02.650609 4827 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="Get \"https://192.168.126.11:6443/livez\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body=
Jan 31 03:47:02 crc kubenswrapper[4827]: I0131 03:47:02.650736 4827 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" probeResult="failure" output="Get \"https://192.168.126.11:6443/livez\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Jan 31 03:47:02 crc kubenswrapper[4827]: I0131 03:47:02.991196 4827 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 403" start-of-body={"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\"","reason":"Forbidden","details":{},"code":403}
Jan 31 03:47:02 crc kubenswrapper[4827]: I0131 03:47:02.991321 4827 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 403"
Jan 31 03:47:03 crc kubenswrapper[4827]: I0131 03:47:03.058181 4827 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-15 23:25:39.718916434 +0000 UTC
Jan 31 03:47:03 crc kubenswrapper[4827]: I0131 03:47:03.225489 4827 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log"
Jan 31 03:47:03 crc kubenswrapper[4827]: I0131 03:47:03.227122 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"125b31c980495c37837eedbbbd38b795fd6e568a429da5c2914799306e7b3a46"}
Jan 31 03:47:03 crc kubenswrapper[4827]: I0131 03:47:03.227295 4827 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 31 03:47:03 crc kubenswrapper[4827]: I0131 03:47:03.228175 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 31 03:47:03 crc kubenswrapper[4827]: I0131 03:47:03.228213 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 31 03:47:03 crc kubenswrapper[4827]: I0131 03:47:03.228224 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 31 03:47:03 crc kubenswrapper[4827]: I0131 03:47:03.914284 4827 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 31 03:47:04 crc kubenswrapper[4827]: I0131 03:47:04.058780 4827 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-09 14:05:53.806215056 +0000 UTC
Jan 31 03:47:04 crc kubenswrapper[4827]: I0131 03:47:04.119395 4827 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-etcd/etcd-crc"
Jan 31 03:47:04 crc kubenswrapper[4827]: I0131 03:47:04.119784 4827 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 31 03:47:04 crc kubenswrapper[4827]: I0131 03:47:04.121158 4827 kubelet_node_status.go:724] "Recording 
event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:47:04 crc kubenswrapper[4827]: I0131 03:47:04.121202 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:47:04 crc kubenswrapper[4827]: I0131 03:47:04.121218 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:47:04 crc kubenswrapper[4827]: I0131 03:47:04.141981 4827 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-etcd/etcd-crc" Jan 31 03:47:04 crc kubenswrapper[4827]: I0131 03:47:04.229344 4827 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 31 03:47:04 crc kubenswrapper[4827]: I0131 03:47:04.229356 4827 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 31 03:47:04 crc kubenswrapper[4827]: I0131 03:47:04.230486 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:47:04 crc kubenswrapper[4827]: I0131 03:47:04.230553 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:47:04 crc kubenswrapper[4827]: I0131 03:47:04.230594 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:47:04 crc kubenswrapper[4827]: I0131 03:47:04.230612 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:47:04 crc kubenswrapper[4827]: I0131 03:47:04.230564 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:47:04 crc kubenswrapper[4827]: I0131 03:47:04.230671 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:47:04 crc kubenswrapper[4827]: I0131 
03:47:04.241785 4827 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-etcd/etcd-crc" Jan 31 03:47:05 crc kubenswrapper[4827]: I0131 03:47:05.059706 4827 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-12 20:47:10.519988491 +0000 UTC Jan 31 03:47:05 crc kubenswrapper[4827]: I0131 03:47:05.232239 4827 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 31 03:47:05 crc kubenswrapper[4827]: I0131 03:47:05.233665 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:47:05 crc kubenswrapper[4827]: I0131 03:47:05.233717 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:47:05 crc kubenswrapper[4827]: I0131 03:47:05.233735 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:47:06 crc kubenswrapper[4827]: I0131 03:47:06.060046 4827 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-27 13:48:31.500623334 +0000 UTC Jan 31 03:47:06 crc kubenswrapper[4827]: I0131 03:47:06.700429 4827 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 31 03:47:06 crc kubenswrapper[4827]: I0131 03:47:06.700503 4827 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="cluster-policy-controller" probeResult="failure" 
output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 31 03:47:07 crc kubenswrapper[4827]: I0131 03:47:07.060771 4827 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-07 23:00:06.458625734 +0000 UTC Jan 31 03:47:07 crc kubenswrapper[4827]: I0131 03:47:07.660180 4827 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 31 03:47:07 crc kubenswrapper[4827]: I0131 03:47:07.660484 4827 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 31 03:47:07 crc kubenswrapper[4827]: I0131 03:47:07.662030 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:47:07 crc kubenswrapper[4827]: I0131 03:47:07.662090 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:47:07 crc kubenswrapper[4827]: I0131 03:47:07.662110 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:47:07 crc kubenswrapper[4827]: I0131 03:47:07.667229 4827 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 31 03:47:07 crc kubenswrapper[4827]: E0131 03:47:07.980485 4827 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": context deadline exceeded" interval="6.4s" Jan 31 03:47:07 crc kubenswrapper[4827]: E0131 03:47:07.988205 4827 kubelet_node_status.go:99] "Unable to register node with API server" err="nodes \"crc\" is forbidden: autoscaling.openshift.io/ManagedNode infra config cache not 
synchronized" node="crc" Jan 31 03:47:07 crc kubenswrapper[4827]: I0131 03:47:07.988733 4827 trace.go:236] Trace[1922080329]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (31-Jan-2026 03:46:55.912) (total time: 12076ms): Jan 31 03:47:07 crc kubenswrapper[4827]: Trace[1922080329]: ---"Objects listed" error: 12075ms (03:47:07.988) Jan 31 03:47:07 crc kubenswrapper[4827]: Trace[1922080329]: [12.076008249s] [12.076008249s] END Jan 31 03:47:07 crc kubenswrapper[4827]: I0131 03:47:07.988775 4827 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160 Jan 31 03:47:07 crc kubenswrapper[4827]: I0131 03:47:07.989780 4827 trace.go:236] Trace[1482844774]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (31-Jan-2026 03:46:56.535) (total time: 11453ms): Jan 31 03:47:07 crc kubenswrapper[4827]: Trace[1482844774]: ---"Objects listed" error: 11453ms (03:47:07.989) Jan 31 03:47:07 crc kubenswrapper[4827]: Trace[1482844774]: [11.453856025s] [11.453856025s] END Jan 31 03:47:07 crc kubenswrapper[4827]: I0131 03:47:07.989826 4827 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160 Jan 31 03:47:07 crc kubenswrapper[4827]: I0131 03:47:07.990515 4827 reconstruct.go:205] "DevicePaths of reconstructed volumes updated" Jan 31 03:47:07 crc kubenswrapper[4827]: I0131 03:47:07.990530 4827 trace.go:236] Trace[613951735]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (31-Jan-2026 03:46:55.899) (total time: 12091ms): Jan 31 03:47:07 crc kubenswrapper[4827]: Trace[613951735]: ---"Objects listed" error: 12091ms (03:47:07.990) Jan 31 03:47:07 crc kubenswrapper[4827]: Trace[613951735]: [12.091355289s] [12.091355289s] END Jan 31 03:47:07 crc kubenswrapper[4827]: I0131 03:47:07.990618 4827 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160 Jan 31 03:47:07 crc kubenswrapper[4827]: I0131 
03:47:07.991004 4827 trace.go:236] Trace[2088076006]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (31-Jan-2026 03:46:55.175) (total time: 12814ms): Jan 31 03:47:07 crc kubenswrapper[4827]: Trace[2088076006]: ---"Objects listed" error: 12814ms (03:47:07.990) Jan 31 03:47:07 crc kubenswrapper[4827]: Trace[2088076006]: [12.814977876s] [12.814977876s] END Jan 31 03:47:07 crc kubenswrapper[4827]: I0131 03:47:07.991036 4827 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160 Jan 31 03:47:08 crc kubenswrapper[4827]: I0131 03:47:08.003916 4827 reflector.go:368] Caches populated for *v1.CertificateSigningRequest from k8s.io/client-go/tools/watch/informerwatcher.go:146 Jan 31 03:47:08 crc kubenswrapper[4827]: I0131 03:47:08.027839 4827 apiserver.go:52] "Watching apiserver" Jan 31 03:47:08 crc kubenswrapper[4827]: I0131 03:47:08.033914 4827 reflector.go:368] Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66 Jan 31 03:47:08 crc kubenswrapper[4827]: I0131 03:47:08.034382 4827 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-network-console/networking-console-plugin-85b44fc459-gdk6g","openshift-network-diagnostics/network-check-source-55646444c4-trplf","openshift-network-diagnostics/network-check-target-xd92c","openshift-network-node-identity/network-node-identity-vrzqb","openshift-network-operator/iptables-alerter-4ln5h","openshift-network-operator/network-operator-58b4c7f79c-55gtf"] Jan 31 03:47:08 crc kubenswrapper[4827]: I0131 03:47:08.035364 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 03:47:08 crc kubenswrapper[4827]: I0131 03:47:08.035508 4827 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 31 03:47:08 crc kubenswrapper[4827]: I0131 03:47:08.035567 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 31 03:47:08 crc kubenswrapper[4827]: I0131 03:47:08.035600 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 31 03:47:08 crc kubenswrapper[4827]: I0131 03:47:08.035613 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 31 03:47:08 crc kubenswrapper[4827]: I0131 03:47:08.035573 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 31 03:47:08 crc kubenswrapper[4827]: E0131 03:47:08.035562 4827 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 31 03:47:08 crc kubenswrapper[4827]: E0131 03:47:08.035724 4827 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 31 03:47:08 crc kubenswrapper[4827]: E0131 03:47:08.035939 4827 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 31 03:47:08 crc kubenswrapper[4827]: I0131 03:47:08.039159 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls" Jan 31 03:47:08 crc kubenswrapper[4827]: I0131 03:47:08.039609 4827 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides" Jan 31 03:47:08 crc kubenswrapper[4827]: I0131 03:47:08.040086 4827 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt" Jan 31 03:47:08 crc kubenswrapper[4827]: I0131 03:47:08.041291 4827 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt" Jan 31 03:47:08 crc kubenswrapper[4827]: I0131 03:47:08.041433 4827 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt" Jan 31 03:47:08 crc kubenswrapper[4827]: I0131 03:47:08.041643 4827 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script" Jan 31 03:47:08 crc kubenswrapper[4827]: I0131 03:47:08.041701 4827 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt" Jan 31 03:47:08 crc kubenswrapper[4827]: I0131 03:47:08.045402 4827 reflector.go:368] Caches populated for 
*v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert" Jan 31 03:47:08 crc kubenswrapper[4827]: I0131 03:47:08.048638 4827 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm" Jan 31 03:47:08 crc kubenswrapper[4827]: I0131 03:47:08.049171 4827 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Jan 31 03:47:08 crc kubenswrapper[4827]: I0131 03:47:08.061202 4827 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-06 13:58:15.761180809 +0000 UTC Jan 31 03:47:08 crc kubenswrapper[4827]: I0131 03:47:08.090659 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:08Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 31 03:47:08 crc kubenswrapper[4827]: I0131 03:47:08.091289 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2d4wz\" (UniqueName: \"kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Jan 31 03:47:08 crc kubenswrapper[4827]: I0131 03:47:08.091376 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls\") pod \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\" (UID: \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\") " Jan 31 03:47:08 crc kubenswrapper[4827]: I0131 03:47:08.091434 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d6qdx\" (UniqueName: \"kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Jan 31 03:47:08 crc kubenswrapper[4827]: I0131 03:47:08.091501 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Jan 31 03:47:08 crc kubenswrapper[4827]: I0131 03:47:08.091558 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 31 03:47:08 crc kubenswrapper[4827]: I0131 03:47:08.091610 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Jan 31 03:47:08 crc kubenswrapper[4827]: I0131 03:47:08.091709 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") 
" Jan 31 03:47:08 crc kubenswrapper[4827]: I0131 03:47:08.091773 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Jan 31 03:47:08 crc kubenswrapper[4827]: I0131 03:47:08.091824 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Jan 31 03:47:08 crc kubenswrapper[4827]: I0131 03:47:08.091874 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Jan 31 03:47:08 crc kubenswrapper[4827]: I0131 03:47:08.091969 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Jan 31 03:47:08 crc kubenswrapper[4827]: I0131 03:47:08.092002 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz" (OuterVolumeSpecName: "kube-api-access-2d4wz") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "kube-api-access-2d4wz". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 03:47:08 crc kubenswrapper[4827]: I0131 03:47:08.092029 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Jan 31 03:47:08 crc kubenswrapper[4827]: I0131 03:47:08.092106 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 31 03:47:08 crc kubenswrapper[4827]: I0131 03:47:08.092146 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls" (OuterVolumeSpecName: "samples-operator-tls") pod "a0128f3a-b052-44ed-a84e-c4c8aaf17c13" (UID: "a0128f3a-b052-44ed-a84e-c4c8aaf17c13"). InnerVolumeSpecName "samples-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 03:47:08 crc kubenswrapper[4827]: I0131 03:47:08.092171 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Jan 31 03:47:08 crc kubenswrapper[4827]: I0131 03:47:08.092342 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "webhook-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 03:47:08 crc kubenswrapper[4827]: I0131 03:47:08.092399 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 31 03:47:08 crc kubenswrapper[4827]: I0131 03:47:08.092449 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 31 03:47:08 crc kubenswrapper[4827]: I0131 03:47:08.092458 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx" (OuterVolumeSpecName: "kube-api-access-d6qdx") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "kube-api-access-d6qdx". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 03:47:08 crc kubenswrapper[4827]: I0131 03:47:08.092493 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Jan 31 03:47:08 crc kubenswrapper[4827]: I0131 03:47:08.092527 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Jan 31 03:47:08 crc kubenswrapper[4827]: I0131 03:47:08.092539 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key" (OuterVolumeSpecName: "signing-key") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "signing-key". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 03:47:08 crc kubenswrapper[4827]: I0131 03:47:08.092566 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Jan 31 03:47:08 crc kubenswrapper[4827]: I0131 03:47:08.092613 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s4n52\" (UniqueName: \"kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Jan 31 03:47:08 crc kubenswrapper[4827]: I0131 03:47:08.092648 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 31 03:47:08 crc kubenswrapper[4827]: I0131 03:47:08.092692 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pcxfs\" (UniqueName: \"kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Jan 31 03:47:08 crc kubenswrapper[4827]: I0131 03:47:08.092743 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Jan 31 03:47:08 crc kubenswrapper[4827]: I0131 03:47:08.092796 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" 
(UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Jan 31 03:47:08 crc kubenswrapper[4827]: I0131 03:47:08.092842 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Jan 31 03:47:08 crc kubenswrapper[4827]: I0131 03:47:08.092936 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pjr6v\" (UniqueName: \"kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v\") pod \"49ef4625-1d3a-4a9f-b595-c2433d32326d\" (UID: \"49ef4625-1d3a-4a9f-b595-c2433d32326d\") " Jan 31 03:47:08 crc kubenswrapper[4827]: I0131 03:47:08.092995 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 31 03:47:08 crc kubenswrapper[4827]: I0131 03:47:08.093047 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Jan 31 03:47:08 crc kubenswrapper[4827]: I0131 03:47:08.093101 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Jan 31 
03:47:08 crc kubenswrapper[4827]: I0131 03:47:08.093150 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Jan 31 03:47:08 crc kubenswrapper[4827]: I0131 03:47:08.093198 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Jan 31 03:47:08 crc kubenswrapper[4827]: I0131 03:47:08.093238 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tk88c\" (UniqueName: \"kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Jan 31 03:47:08 crc kubenswrapper[4827]: I0131 03:47:08.093274 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 31 03:47:08 crc kubenswrapper[4827]: I0131 03:47:08.093320 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 31 03:47:08 crc kubenswrapper[4827]: I0131 03:47:08.093372 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config\") pod 
\"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Jan 31 03:47:08 crc kubenswrapper[4827]: I0131 03:47:08.093424 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 31 03:47:08 crc kubenswrapper[4827]: I0131 03:47:08.093473 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 31 03:47:08 crc kubenswrapper[4827]: I0131 03:47:08.093564 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Jan 31 03:47:08 crc kubenswrapper[4827]: I0131 03:47:08.093711 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Jan 31 03:47:08 crc kubenswrapper[4827]: I0131 03:47:08.093764 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Jan 31 03:47:08 crc kubenswrapper[4827]: I0131 03:47:08.093809 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"kube-api-access-fcqwp\" (UniqueName: \"kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Jan 31 03:47:08 crc kubenswrapper[4827]: I0131 03:47:08.093849 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert\") pod \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\" (UID: \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\") " Jan 31 03:47:08 crc kubenswrapper[4827]: I0131 03:47:08.093912 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 31 03:47:08 crc kubenswrapper[4827]: I0131 03:47:08.093945 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Jan 31 03:47:08 crc kubenswrapper[4827]: I0131 03:47:08.093980 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Jan 31 03:47:08 crc kubenswrapper[4827]: I0131 03:47:08.094013 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: 
\"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Jan 31 03:47:08 crc kubenswrapper[4827]: I0131 03:47:08.094045 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 31 03:47:08 crc kubenswrapper[4827]: I0131 03:47:08.094078 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Jan 31 03:47:08 crc kubenswrapper[4827]: I0131 03:47:08.094109 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Jan 31 03:47:08 crc kubenswrapper[4827]: I0131 03:47:08.094142 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zkvpv\" (UniqueName: \"kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 31 03:47:08 crc kubenswrapper[4827]: I0131 03:47:08.094174 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Jan 31 03:47:08 crc kubenswrapper[4827]: I0131 03:47:08.094218 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Jan 31 03:47:08 crc kubenswrapper[4827]: I0131 03:47:08.094250 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 31 03:47:08 crc kubenswrapper[4827]: I0131 03:47:08.094285 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Jan 31 03:47:08 crc kubenswrapper[4827]: I0131 03:47:08.094318 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Jan 31 03:47:08 crc kubenswrapper[4827]: I0131 03:47:08.094353 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Jan 31 03:47:08 crc kubenswrapper[4827]: I0131 03:47:08.092762 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: 
"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "profile-collector-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 03:47:08 crc kubenswrapper[4827]: I0131 03:47:08.094389 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Jan 31 03:47:08 crc kubenswrapper[4827]: I0131 03:47:08.094416 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 03:47:08 crc kubenswrapper[4827]: I0131 03:47:08.094424 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "installation-pull-secrets". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 03:47:08 crc kubenswrapper[4827]: I0131 03:47:08.094469 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Jan 31 03:47:08 crc kubenswrapper[4827]: I0131 03:47:08.094558 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Jan 31 03:47:08 crc kubenswrapper[4827]: I0131 03:47:08.094616 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rnphk\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Jan 31 03:47:08 crc kubenswrapper[4827]: I0131 03:47:08.094674 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 31 03:47:08 crc kubenswrapper[4827]: I0131 03:47:08.094734 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Jan 31 03:47:08 crc kubenswrapper[4827]: I0131 03:47:08.094804 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume 
started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Jan 31 03:47:08 crc kubenswrapper[4827]: I0131 03:47:08.094859 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wxkg8\" (UniqueName: \"kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8\") pod \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\" (UID: \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\") " Jan 31 03:47:08 crc kubenswrapper[4827]: I0131 03:47:08.095004 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d4lsv\" (UniqueName: \"kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Jan 31 03:47:08 crc kubenswrapper[4827]: I0131 03:47:08.095108 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nzwt7\" (UniqueName: \"kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7\") pod \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\" (UID: \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\") " Jan 31 03:47:08 crc kubenswrapper[4827]: I0131 03:47:08.095161 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sb6h7\" (UniqueName: \"kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 31 03:47:08 crc kubenswrapper[4827]: I0131 03:47:08.095214 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error\") pod 
\"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 31 03:47:08 crc kubenswrapper[4827]: I0131 03:47:08.095757 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 31 03:47:08 crc kubenswrapper[4827]: I0131 03:47:08.095848 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qs4fp\" (UniqueName: \"kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Jan 31 03:47:08 crc kubenswrapper[4827]: I0131 03:47:08.095991 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Jan 31 03:47:08 crc kubenswrapper[4827]: I0131 03:47:08.096054 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Jan 31 03:47:08 crc kubenswrapper[4827]: I0131 03:47:08.096539 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Jan 31 03:47:08 crc kubenswrapper[4827]: I0131 03:47:08.096624 4827 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 31 03:47:08 crc kubenswrapper[4827]: I0131 03:47:08.096686 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Jan 31 03:47:08 crc kubenswrapper[4827]: I0131 03:47:08.096743 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Jan 31 03:47:08 crc kubenswrapper[4827]: I0131 03:47:08.096920 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 31 03:47:08 crc kubenswrapper[4827]: I0131 03:47:08.097016 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 31 03:47:08 crc kubenswrapper[4827]: I0131 03:47:08.097056 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: 
\"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Jan 31 03:47:08 crc kubenswrapper[4827]: I0131 03:47:08.097096 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 31 03:47:08 crc kubenswrapper[4827]: I0131 03:47:08.097133 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lz9wn\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Jan 31 03:47:08 crc kubenswrapper[4827]: I0131 03:47:08.097170 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-249nr\" (UniqueName: \"kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Jan 31 03:47:08 crc kubenswrapper[4827]: I0131 03:47:08.097206 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-htfz6\" (UniqueName: \"kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Jan 31 03:47:08 crc kubenswrapper[4827]: I0131 03:47:08.097238 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Jan 31 03:47:08 crc kubenswrapper[4827]: I0131 03:47:08.097274 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Jan 31 03:47:08 crc kubenswrapper[4827]: I0131 03:47:08.097308 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 31 03:47:08 crc kubenswrapper[4827]: I0131 03:47:08.097341 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Jan 31 03:47:08 crc kubenswrapper[4827]: I0131 03:47:08.097375 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 31 03:47:08 crc kubenswrapper[4827]: I0131 03:47:08.097409 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 31 03:47:08 crc kubenswrapper[4827]: I0131 03:47:08.097442 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2w9zh\" (UniqueName: \"kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Jan 31 03:47:08 crc kubenswrapper[4827]: 
I0131 03:47:08.097478 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Jan 31 03:47:08 crc kubenswrapper[4827]: I0131 03:47:08.097514 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Jan 31 03:47:08 crc kubenswrapper[4827]: I0131 03:47:08.097547 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mg5zb\" (UniqueName: \"kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Jan 31 03:47:08 crc kubenswrapper[4827]: I0131 03:47:08.097618 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Jan 31 03:47:08 crc kubenswrapper[4827]: I0131 03:47:08.097663 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ngvvp\" (UniqueName: \"kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 31 03:47:08 crc kubenswrapper[4827]: I0131 03:47:08.097721 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gf66m\" (UniqueName: 
\"kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m\") pod \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\" (UID: \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\") " Jan 31 03:47:08 crc kubenswrapper[4827]: I0131 03:47:08.097775 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 31 03:47:08 crc kubenswrapper[4827]: I0131 03:47:08.097829 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xcphl\" (UniqueName: \"kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Jan 31 03:47:08 crc kubenswrapper[4827]: I0131 03:47:08.097908 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls\") pod \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\" (UID: \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\") " Jan 31 03:47:08 crc kubenswrapper[4827]: I0131 03:47:08.097962 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 31 03:47:08 crc kubenswrapper[4827]: I0131 03:47:08.098034 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-279lb\" (UniqueName: \"kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: 
\"7bb08738-c794-4ee8-9972-3a62ca171029\") " Jan 31 03:47:08 crc kubenswrapper[4827]: I0131 03:47:08.098097 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Jan 31 03:47:08 crc kubenswrapper[4827]: I0131 03:47:08.098145 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8tdtz\" (UniqueName: \"kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Jan 31 03:47:08 crc kubenswrapper[4827]: I0131 03:47:08.098190 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Jan 31 03:47:08 crc kubenswrapper[4827]: I0131 03:47:08.098253 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x7zkh\" (UniqueName: \"kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh\") pod \"6731426b-95fe-49ff-bb5f-40441049fde2\" (UID: \"6731426b-95fe-49ff-bb5f-40441049fde2\") " Jan 31 03:47:08 crc kubenswrapper[4827]: I0131 03:47:08.098317 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Jan 31 03:47:08 crc kubenswrapper[4827]: I0131 03:47:08.098377 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" 
(UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Jan 31 03:47:08 crc kubenswrapper[4827]: I0131 03:47:08.098441 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Jan 31 03:47:08 crc kubenswrapper[4827]: I0131 03:47:08.098491 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x4zgh\" (UniqueName: \"kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Jan 31 03:47:08 crc kubenswrapper[4827]: I0131 03:47:08.098551 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs\") pod \"efdd0498-1daa-4136-9a4a-3b948c2293fc\" (UID: \"efdd0498-1daa-4136-9a4a-3b948c2293fc\") " Jan 31 03:47:08 crc kubenswrapper[4827]: I0131 03:47:08.098608 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Jan 31 03:47:08 crc kubenswrapper[4827]: I0131 03:47:08.098663 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Jan 31 03:47:08 crc kubenswrapper[4827]: I0131 
03:47:08.098723 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kfwg7\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 31 03:47:08 crc kubenswrapper[4827]: I0131 03:47:08.098785 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 31 03:47:08 crc kubenswrapper[4827]: I0131 03:47:08.098845 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 31 03:47:08 crc kubenswrapper[4827]: I0131 03:47:08.098949 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w4xd4\" (UniqueName: \"kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Jan 31 03:47:08 crc kubenswrapper[4827]: I0131 03:47:08.099016 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x2m85\" (UniqueName: \"kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85\") pod \"cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d\" (UID: \"cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d\") " Jan 31 03:47:08 crc kubenswrapper[4827]: I0131 03:47:08.099079 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls\") pod \"6731426b-95fe-49ff-bb5f-40441049fde2\" (UID: \"6731426b-95fe-49ff-bb5f-40441049fde2\") " Jan 31 03:47:08 crc kubenswrapper[4827]: I0131 03:47:08.099141 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Jan 31 03:47:08 crc kubenswrapper[4827]: I0131 03:47:08.099203 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Jan 31 03:47:08 crc kubenswrapper[4827]: I0131 03:47:08.099301 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 31 03:47:08 crc kubenswrapper[4827]: I0131 03:47:08.100514 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Jan 31 03:47:08 crc kubenswrapper[4827]: I0131 03:47:08.100596 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zgdk5\" (UniqueName: \"kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: 
\"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Jan 31 03:47:08 crc kubenswrapper[4827]: I0131 03:47:08.101659 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jkwtn\" (UniqueName: \"kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn\") pod \"5b88f790-22fa-440e-b583-365168c0b23d\" (UID: \"5b88f790-22fa-440e-b583-365168c0b23d\") " Jan 31 03:47:08 crc kubenswrapper[4827]: I0131 03:47:08.101737 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Jan 31 03:47:08 crc kubenswrapper[4827]: I0131 03:47:08.101796 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Jan 31 03:47:08 crc kubenswrapper[4827]: I0131 03:47:08.101861 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w9rds\" (UniqueName: \"kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds\") pod \"20b0d48f-5fd6-431c-a545-e3c800c7b866\" (UID: \"20b0d48f-5fd6-431c-a545-e3c800c7b866\") " Jan 31 03:47:08 crc kubenswrapper[4827]: I0131 03:47:08.101954 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Jan 31 03:47:08 crc kubenswrapper[4827]: I0131 03:47:08.102023 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"kube-api-access-w7l8j\" (UniqueName: \"kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Jan 31 03:47:08 crc kubenswrapper[4827]: I0131 03:47:08.102083 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xcgwh\" (UniqueName: \"kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Jan 31 03:47:08 crc kubenswrapper[4827]: I0131 03:47:08.102138 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Jan 31 03:47:08 crc kubenswrapper[4827]: I0131 03:47:08.102191 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dbsvg\" (UniqueName: \"kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Jan 31 03:47:08 crc kubenswrapper[4827]: I0131 03:47:08.102250 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Jan 31 03:47:08 crc kubenswrapper[4827]: I0131 03:47:08.102312 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6g6sz\" (UniqueName: \"kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: 
\"6509e943-70c6-444c-bc41-48a544e36fbd\") " Jan 31 03:47:08 crc kubenswrapper[4827]: I0131 03:47:08.102371 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Jan 31 03:47:08 crc kubenswrapper[4827]: I0131 03:47:08.102429 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lzf88\" (UniqueName: \"kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Jan 31 03:47:08 crc kubenswrapper[4827]: I0131 03:47:08.102485 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cfbct\" (UniqueName: \"kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Jan 31 03:47:08 crc kubenswrapper[4827]: I0131 03:47:08.102552 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 31 03:47:08 crc kubenswrapper[4827]: I0131 03:47:08.102617 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 31 03:47:08 crc kubenswrapper[4827]: I0131 03:47:08.102676 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started 
for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Jan 31 03:47:08 crc kubenswrapper[4827]: I0131 03:47:08.102733 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 31 03:47:08 crc kubenswrapper[4827]: I0131 03:47:08.102797 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Jan 31 03:47:08 crc kubenswrapper[4827]: I0131 03:47:08.102853 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Jan 31 03:47:08 crc kubenswrapper[4827]: I0131 03:47:08.102956 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 31 03:47:08 crc kubenswrapper[4827]: I0131 03:47:08.103019 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Jan 31 03:47:08 crc 
kubenswrapper[4827]: I0131 03:47:08.103070 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4d4hj\" (UniqueName: \"kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj\") pod \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\" (UID: \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\") " Jan 31 03:47:08 crc kubenswrapper[4827]: I0131 03:47:08.103121 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Jan 31 03:47:08 crc kubenswrapper[4827]: I0131 03:47:08.103191 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Jan 31 03:47:08 crc kubenswrapper[4827]: I0131 03:47:08.103255 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Jan 31 03:47:08 crc kubenswrapper[4827]: I0131 03:47:08.103312 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Jan 31 03:47:08 crc kubenswrapper[4827]: I0131 03:47:08.103372 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vt5rc\" (UniqueName: 
\"kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc\") pod \"44663579-783b-4372-86d6-acf235a62d72\" (UID: \"44663579-783b-4372-86d6-acf235a62d72\") " Jan 31 03:47:08 crc kubenswrapper[4827]: I0131 03:47:08.103426 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Jan 31 03:47:08 crc kubenswrapper[4827]: I0131 03:47:08.103486 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Jan 31 03:47:08 crc kubenswrapper[4827]: I0131 03:47:08.103538 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Jan 31 03:47:08 crc kubenswrapper[4827]: I0131 03:47:08.103603 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 31 03:47:08 crc kubenswrapper[4827]: I0131 03:47:08.103667 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Jan 31 03:47:08 crc kubenswrapper[4827]: 
I0131 03:47:08.103720 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Jan 31 03:47:08 crc kubenswrapper[4827]: I0131 03:47:08.103775 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Jan 31 03:47:08 crc kubenswrapper[4827]: I0131 03:47:08.103834 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jhbk2\" (UniqueName: \"kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2\") pod \"bd23aa5c-e532-4e53-bccf-e79f130c5ae8\" (UID: \"bd23aa5c-e532-4e53-bccf-e79f130c5ae8\") " Jan 31 03:47:08 crc kubenswrapper[4827]: I0131 03:47:08.103991 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Jan 31 03:47:08 crc kubenswrapper[4827]: I0131 03:47:08.104065 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Jan 31 03:47:08 crc kubenswrapper[4827]: I0131 03:47:08.104121 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7c4vf\" (UniqueName: 
\"kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Jan 31 03:47:08 crc kubenswrapper[4827]: I0131 03:47:08.104289 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 31 03:47:08 crc kubenswrapper[4827]: I0131 03:47:08.104367 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Jan 31 03:47:08 crc kubenswrapper[4827]: I0131 03:47:08.104406 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert\") pod \"20b0d48f-5fd6-431c-a545-e3c800c7b866\" (UID: \"20b0d48f-5fd6-431c-a545-e3c800c7b866\") " Jan 31 03:47:08 crc kubenswrapper[4827]: I0131 03:47:08.104442 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Jan 31 03:47:08 crc kubenswrapper[4827]: I0131 03:47:08.104481 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9xfj7\" (UniqueName: \"kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Jan 31 03:47:08 crc kubenswrapper[4827]: I0131 03:47:08.104515 4827 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Jan 31 03:47:08 crc kubenswrapper[4827]: I0131 03:47:08.104550 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Jan 31 03:47:08 crc kubenswrapper[4827]: I0131 03:47:08.104593 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Jan 31 03:47:08 crc kubenswrapper[4827]: I0131 03:47:08.104645 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fqsjt\" (UniqueName: \"kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt\") pod \"efdd0498-1daa-4136-9a4a-3b948c2293fc\" (UID: \"efdd0498-1daa-4136-9a4a-3b948c2293fc\") " Jan 31 03:47:08 crc kubenswrapper[4827]: I0131 03:47:08.104689 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6ccd8\" (UniqueName: \"kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Jan 31 03:47:08 crc kubenswrapper[4827]: I0131 03:47:08.104726 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config\") pod 
\"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 31 03:47:08 crc kubenswrapper[4827]: I0131 03:47:08.104760 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 31 03:47:08 crc kubenswrapper[4827]: I0131 03:47:08.104795 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 31 03:47:08 crc kubenswrapper[4827]: I0131 03:47:08.104829 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Jan 31 03:47:08 crc kubenswrapper[4827]: I0131 03:47:08.104863 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs\") pod \"5b88f790-22fa-440e-b583-365168c0b23d\" (UID: \"5b88f790-22fa-440e-b583-365168c0b23d\") " Jan 31 03:47:08 crc kubenswrapper[4827]: I0131 03:47:08.104940 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Jan 31 03:47:08 crc kubenswrapper[4827]: I0131 03:47:08.104977 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started 
for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Jan 31 03:47:08 crc kubenswrapper[4827]: I0131 03:47:08.105014 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Jan 31 03:47:08 crc kubenswrapper[4827]: I0131 03:47:08.105049 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca\") pod \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\" (UID: \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\") " Jan 31 03:47:08 crc kubenswrapper[4827]: I0131 03:47:08.105085 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v47cf\" (UniqueName: \"kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Jan 31 03:47:08 crc kubenswrapper[4827]: I0131 03:47:08.105119 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Jan 31 03:47:08 crc kubenswrapper[4827]: I0131 03:47:08.105155 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: 
\"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Jan 31 03:47:08 crc kubenswrapper[4827]: I0131 03:47:08.105193 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 31 03:47:08 crc kubenswrapper[4827]: I0131 03:47:08.105228 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Jan 31 03:47:08 crc kubenswrapper[4827]: I0131 03:47:08.105268 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bf2bz\" (UniqueName: \"kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Jan 31 03:47:08 crc kubenswrapper[4827]: I0131 03:47:08.105305 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Jan 31 03:47:08 crc kubenswrapper[4827]: I0131 03:47:08.105341 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qg5z5\" (UniqueName: \"kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 31 03:47:08 crc kubenswrapper[4827]: I0131 03:47:08.105377 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"kube-api-access-pj782\" (UniqueName: \"kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Jan 31 03:47:08 crc kubenswrapper[4827]: I0131 03:47:08.105412 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 31 03:47:08 crc kubenswrapper[4827]: I0131 03:47:08.105446 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Jan 31 03:47:08 crc kubenswrapper[4827]: I0131 03:47:08.105498 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mnrrd\" (UniqueName: \"kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Jan 31 03:47:08 crc kubenswrapper[4827]: I0131 03:47:08.105533 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Jan 31 03:47:08 crc kubenswrapper[4827]: I0131 03:47:08.105599 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Jan 31 03:47:08 
crc kubenswrapper[4827]: I0131 03:47:08.105675 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 31 03:47:08 crc kubenswrapper[4827]: I0131 03:47:08.105726 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 03:47:08 crc kubenswrapper[4827]: I0131 03:47:08.105765 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 31 03:47:08 crc kubenswrapper[4827]: I0131 03:47:08.105805 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 31 03:47:08 crc kubenswrapper[4827]: I0131 03:47:08.105847 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm\") pod \"network-node-identity-vrzqb\" (UID: 
\"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 31 03:47:08 crc kubenswrapper[4827]: I0131 03:47:08.105929 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 31 03:47:08 crc kubenswrapper[4827]: I0131 03:47:08.105985 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 31 03:47:08 crc kubenswrapper[4827]: I0131 03:47:08.106042 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 31 03:47:08 crc kubenswrapper[4827]: I0131 03:47:08.106099 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2kz5\" (UniqueName: \"kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 31 03:47:08 crc kubenswrapper[4827]: I0131 03:47:08.106160 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rczfb\" (UniqueName: 
\"kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 31 03:47:08 crc kubenswrapper[4827]: I0131 03:47:08.106220 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rdwmf\" (UniqueName: \"kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 31 03:47:08 crc kubenswrapper[4827]: I0131 03:47:08.106273 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 31 03:47:08 crc kubenswrapper[4827]: I0131 03:47:08.106322 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 31 03:47:08 crc kubenswrapper[4827]: I0131 03:47:08.106368 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 03:47:08 crc kubenswrapper[4827]: 
I0131 03:47:08.106498 4827 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2d4wz\" (UniqueName: \"kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz\") on node \"crc\" DevicePath \"\"" Jan 31 03:47:08 crc kubenswrapper[4827]: I0131 03:47:08.106531 4827 reconciler_common.go:293] "Volume detached for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls\") on node \"crc\" DevicePath \"\"" Jan 31 03:47:08 crc kubenswrapper[4827]: I0131 03:47:08.106554 4827 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d6qdx\" (UniqueName: \"kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx\") on node \"crc\" DevicePath \"\"" Jan 31 03:47:08 crc kubenswrapper[4827]: I0131 03:47:08.106577 4827 reconciler_common.go:293] "Volume detached for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert\") on node \"crc\" DevicePath \"\"" Jan 31 03:47:08 crc kubenswrapper[4827]: I0131 03:47:08.106598 4827 reconciler_common.go:293] "Volume detached for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert\") on node \"crc\" DevicePath \"\"" Jan 31 03:47:08 crc kubenswrapper[4827]: I0131 03:47:08.106618 4827 reconciler_common.go:293] "Volume detached for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key\") on node \"crc\" DevicePath \"\"" Jan 31 03:47:08 crc kubenswrapper[4827]: I0131 03:47:08.106638 4827 reconciler_common.go:293] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets\") on node \"crc\" DevicePath \"\"" Jan 31 03:47:08 crc kubenswrapper[4827]: I0131 03:47:08.106658 4827 reconciler_common.go:293] "Volume detached 
for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 31 03:47:08 crc kubenswrapper[4827]: I0131 03:47:08.113089 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 31 03:47:08 crc kubenswrapper[4827]: I0131 03:47:08.115516 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 31 03:47:08 crc kubenswrapper[4827]: I0131 03:47:08.119496 4827 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. 
Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory" Jan 31 03:47:08 crc kubenswrapper[4827]: I0131 03:47:08.123250 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:08Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 31 03:47:08 crc kubenswrapper[4827]: I0131 03:47:08.129599 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 31 03:47:08 crc kubenswrapper[4827]: I0131 03:47:08.093120 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 03:47:08 crc kubenswrapper[4827]: I0131 03:47:08.093333 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "etcd-serving-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 03:47:08 crc kubenswrapper[4827]: I0131 03:47:08.093358 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52" (OuterVolumeSpecName: "kube-api-access-s4n52") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "kube-api-access-s4n52". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 03:47:08 crc kubenswrapper[4827]: I0131 03:47:08.093348 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls" (OuterVolumeSpecName: "machine-api-operator-tls") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "machine-api-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 03:47:08 crc kubenswrapper[4827]: I0131 03:47:08.094710 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates" (OuterVolumeSpecName: "available-featuregates") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "available-featuregates". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 03:47:08 crc kubenswrapper[4827]: I0131 03:47:08.095717 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth" (OuterVolumeSpecName: "stats-auth") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "stats-auth". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 03:47:08 crc kubenswrapper[4827]: I0131 03:47:08.095955 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 03:47:08 crc kubenswrapper[4827]: I0131 03:47:08.096050 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-oauth-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 03:47:08 crc kubenswrapper[4827]: I0131 03:47:08.096114 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 03:47:08 crc kubenswrapper[4827]: I0131 03:47:08.097132 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 03:47:08 crc kubenswrapper[4827]: I0131 03:47:08.092794 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs" (OuterVolumeSpecName: "certs") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 03:47:08 crc kubenswrapper[4827]: I0131 03:47:08.097306 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images" (OuterVolumeSpecName: "images") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "images". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 03:47:08 crc kubenswrapper[4827]: I0131 03:47:08.097360 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v" (OuterVolumeSpecName: "kube-api-access-pjr6v") pod "49ef4625-1d3a-4a9f-b595-c2433d32326d" (UID: "49ef4625-1d3a-4a9f-b595-c2433d32326d"). InnerVolumeSpecName "kube-api-access-pjr6v". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 03:47:08 crc kubenswrapper[4827]: I0131 03:47:08.097965 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config" (OuterVolumeSpecName: "mcc-auth-proxy-config") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "mcc-auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 03:47:08 crc kubenswrapper[4827]: I0131 03:47:08.098173 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "etcd-serving-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 03:47:08 crc kubenswrapper[4827]: I0131 03:47:08.098256 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "registry-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 03:47:08 crc kubenswrapper[4827]: I0131 03:47:08.098254 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-router-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 03:47:08 crc kubenswrapper[4827]: I0131 03:47:08.098066 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 03:47:08 crc kubenswrapper[4827]: I0131 03:47:08.099030 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images" (OuterVolumeSpecName: "images") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "images". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 03:47:08 crc kubenswrapper[4827]: I0131 03:47:08.099964 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 03:47:08 crc kubenswrapper[4827]: I0131 03:47:08.100031 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert" (OuterVolumeSpecName: "package-server-manager-serving-cert") pod "3ab1a177-2de0-46d9-b765-d0d0649bb42e" (UID: "3ab1a177-2de0-46d9-b765-d0d0649bb42e"). InnerVolumeSpecName "package-server-manager-serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 03:47:08 crc kubenswrapper[4827]: I0131 03:47:08.100126 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities" (OuterVolumeSpecName: "utilities") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 03:47:08 crc kubenswrapper[4827]: I0131 03:47:08.100407 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv" (OuterVolumeSpecName: "kube-api-access-zkvpv") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "kube-api-access-zkvpv". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 03:47:08 crc kubenswrapper[4827]: I0131 03:47:08.100776 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 03:47:08 crc kubenswrapper[4827]: I0131 03:47:08.100699 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 03:47:08 crc kubenswrapper[4827]: I0131 03:47:08.101226 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token" (OuterVolumeSpecName: "node-bootstrap-token") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "node-bootstrap-token". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 03:47:08 crc kubenswrapper[4827]: I0131 03:47:08.101253 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs" (OuterVolumeSpecName: "kube-api-access-pcxfs") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "kube-api-access-pcxfs". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 03:47:08 crc kubenswrapper[4827]: I0131 03:47:08.101139 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities" (OuterVolumeSpecName: "utilities") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 03:47:08 crc kubenswrapper[4827]: I0131 03:47:08.101332 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5" (OuterVolumeSpecName: "kube-api-access-zgdk5") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "kube-api-access-zgdk5". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 03:47:08 crc kubenswrapper[4827]: I0131 03:47:08.101541 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 03:47:08 crc kubenswrapper[4827]: I0131 03:47:08.101786 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-login". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 03:47:08 crc kubenswrapper[4827]: I0131 03:47:08.102221 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "metrics-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 03:47:08 crc kubenswrapper[4827]: I0131 03:47:08.102536 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 03:47:08 crc kubenswrapper[4827]: I0131 03:47:08.102560 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 03:47:08 crc kubenswrapper[4827]: I0131 03:47:08.103421 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist" (OuterVolumeSpecName: "cni-sysctl-allowlist") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "cni-sysctl-allowlist". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 03:47:08 crc kubenswrapper[4827]: I0131 03:47:08.103673 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config" (OuterVolumeSpecName: "config") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 03:47:08 crc kubenswrapper[4827]: I0131 03:47:08.103795 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "audit-policies". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 03:47:08 crc kubenswrapper[4827]: I0131 03:47:08.104409 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 03:47:08 crc kubenswrapper[4827]: I0131 03:47:08.106446 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 03:47:08 crc kubenswrapper[4827]: E0131 03:47:08.106791 4827 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 31 03:47:08 crc kubenswrapper[4827]: I0131 03:47:08.106817 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz" (OuterVolumeSpecName: "kube-api-access-6g6sz") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "kube-api-access-6g6sz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 03:47:08 crc kubenswrapper[4827]: I0131 03:47:08.107144 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). 
InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 03:47:08 crc kubenswrapper[4827]: I0131 03:47:08.107333 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 03:47:08 crc kubenswrapper[4827]: I0131 03:47:08.107359 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config" (OuterVolumeSpecName: "config") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 03:47:08 crc kubenswrapper[4827]: I0131 03:47:08.107613 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88" (OuterVolumeSpecName: "kube-api-access-lzf88") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "kube-api-access-lzf88". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 03:47:08 crc kubenswrapper[4827]: I0131 03:47:08.108346 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct" (OuterVolumeSpecName: "kube-api-access-cfbct") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "kube-api-access-cfbct". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 03:47:08 crc kubenswrapper[4827]: I0131 03:47:08.108413 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config" (OuterVolumeSpecName: "config") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 03:47:08 crc kubenswrapper[4827]: I0131 03:47:08.108529 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config" (OuterVolumeSpecName: "mcd-auth-proxy-config") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "mcd-auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 03:47:08 crc kubenswrapper[4827]: I0131 03:47:08.108800 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 03:47:08 crc kubenswrapper[4827]: I0131 03:47:08.108825 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782" (OuterVolumeSpecName: "kube-api-access-pj782") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "kube-api-access-pj782". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 03:47:08 crc kubenswrapper[4827]: I0131 03:47:08.108428 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c" (OuterVolumeSpecName: "kube-api-access-tk88c") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "kube-api-access-tk88c". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 03:47:08 crc kubenswrapper[4827]: I0131 03:47:08.110868 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd" (OuterVolumeSpecName: "kube-api-access-mnrrd") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "kube-api-access-mnrrd". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 03:47:08 crc kubenswrapper[4827]: I0131 03:47:08.111077 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz" (OuterVolumeSpecName: "kube-api-access-bf2bz") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "kube-api-access-bf2bz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 03:47:08 crc kubenswrapper[4827]: I0131 03:47:08.111208 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate" (OuterVolumeSpecName: "default-certificate") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "default-certificate". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 03:47:08 crc kubenswrapper[4827]: I0131 03:47:08.111766 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "profile-collector-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 03:47:08 crc kubenswrapper[4827]: I0131 03:47:08.111843 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 03:47:08 crc kubenswrapper[4827]: I0131 03:47:08.111948 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "service-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 03:47:08 crc kubenswrapper[4827]: I0131 03:47:08.112079 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config" (OuterVolumeSpecName: "config") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 03:47:08 crc kubenswrapper[4827]: I0131 03:47:08.112809 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config" (OuterVolumeSpecName: "config") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 03:47:08 crc kubenswrapper[4827]: I0131 03:47:08.113519 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf" (OuterVolumeSpecName: "kube-api-access-v47cf") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "kube-api-access-v47cf". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 03:47:08 crc kubenswrapper[4827]: I0131 03:47:08.113525 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 03:47:08 crc kubenswrapper[4827]: I0131 03:47:08.113774 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 03:47:08 crc kubenswrapper[4827]: I0131 03:47:08.114694 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt" (OuterVolumeSpecName: "kube-api-access-fqsjt") pod "efdd0498-1daa-4136-9a4a-3b948c2293fc" (UID: "efdd0498-1daa-4136-9a4a-3b948c2293fc"). InnerVolumeSpecName "kube-api-access-fqsjt". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 03:47:08 crc kubenswrapper[4827]: I0131 03:47:08.114733 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8" (OuterVolumeSpecName: "kube-api-access-6ccd8") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "kube-api-access-6ccd8". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 03:47:08 crc kubenswrapper[4827]: E0131 03:47:08.114987 4827 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 31 03:47:08 crc kubenswrapper[4827]: I0131 03:47:08.115334 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "cni-binary-copy". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 03:47:08 crc kubenswrapper[4827]: I0131 03:47:08.115746 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 03:47:08 crc kubenswrapper[4827]: I0131 03:47:08.115767 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 03:47:08 crc kubenswrapper[4827]: I0131 03:47:08.116067 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 03:47:08 crc kubenswrapper[4827]: I0131 03:47:08.116117 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config" (OuterVolumeSpecName: "config") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 03:47:08 crc kubenswrapper[4827]: E0131 03:47:08.137870 4827 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-31 03:47:08.637801707 +0000 UTC m=+21.324882196 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 31 03:47:08 crc kubenswrapper[4827]: I0131 03:47:08.116166 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert" (OuterVolumeSpecName: "cert") pod "20b0d48f-5fd6-431c-a545-e3c800c7b866" (UID: "20b0d48f-5fd6-431c-a545-e3c800c7b866"). InnerVolumeSpecName "cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 03:47:08 crc kubenswrapper[4827]: I0131 03:47:08.116156 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit" (OuterVolumeSpecName: "audit") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "audit". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 03:47:08 crc kubenswrapper[4827]: I0131 03:47:08.116387 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "srv-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 03:47:08 crc kubenswrapper[4827]: I0131 03:47:08.116530 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 03:47:08 crc kubenswrapper[4827]: I0131 03:47:08.117487 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovn-node-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 03:47:08 crc kubenswrapper[4827]: I0131 03:47:08.117792 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 03:47:08 crc kubenswrapper[4827]: I0131 03:47:08.118951 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert" (OuterVolumeSpecName: "ovn-control-plane-metrics-cert") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "ovn-control-plane-metrics-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 03:47:08 crc kubenswrapper[4827]: I0131 03:47:08.119322 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp" (OuterVolumeSpecName: "kube-api-access-fcqwp") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "kube-api-access-fcqwp". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 03:47:08 crc kubenswrapper[4827]: I0131 03:47:08.119363 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8" (OuterVolumeSpecName: "kube-api-access-wxkg8") pod "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" (UID: "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59"). InnerVolumeSpecName "kube-api-access-wxkg8". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 03:47:08 crc kubenswrapper[4827]: I0131 03:47:08.119687 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 03:47:08 crc kubenswrapper[4827]: I0131 03:47:08.119771 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv" (OuterVolumeSpecName: "kube-api-access-d4lsv") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "kube-api-access-d4lsv". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 03:47:08 crc kubenswrapper[4827]: I0131 03:47:08.119950 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 03:47:08 crc kubenswrapper[4827]: I0131 03:47:08.122933 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "srv-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 03:47:08 crc kubenswrapper[4827]: I0131 03:47:08.123238 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb" (OuterVolumeSpecName: "kube-api-access-279lb") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "kube-api-access-279lb". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 03:47:08 crc kubenswrapper[4827]: I0131 03:47:08.123489 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6" (OuterVolumeSpecName: "kube-api-access-htfz6") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "kube-api-access-htfz6". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 03:47:08 crc kubenswrapper[4827]: I0131 03:47:08.125475 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca" (OuterVolumeSpecName: "service-ca") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 03:47:08 crc kubenswrapper[4827]: I0131 03:47:08.126870 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 03:47:08 crc kubenswrapper[4827]: I0131 03:47:08.126783 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "96b93a3a-6083-4aea-8eab-fe1aa8245ad9" (UID: "96b93a3a-6083-4aea-8eab-fe1aa8245ad9"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 03:47:08 crc kubenswrapper[4827]: I0131 03:47:08.127480 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config" (OuterVolumeSpecName: "config") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 03:47:08 crc kubenswrapper[4827]: I0131 03:47:08.127559 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 03:47:08 crc kubenswrapper[4827]: I0131 03:47:08.127654 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovnkube-script-lib". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 03:47:08 crc kubenswrapper[4827]: I0131 03:47:08.128056 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities" (OuterVolumeSpecName: "utilities") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 03:47:08 crc kubenswrapper[4827]: I0131 03:47:08.128271 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls" (OuterVolumeSpecName: "control-plane-machine-set-operator-tls") pod "6731426b-95fe-49ff-bb5f-40441049fde2" (UID: "6731426b-95fe-49ff-bb5f-40441049fde2"). InnerVolumeSpecName "control-plane-machine-set-operator-tls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 03:47:08 crc kubenswrapper[4827]: I0131 03:47:08.128818 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 03:47:08 crc kubenswrapper[4827]: I0131 03:47:08.129067 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 03:47:08 crc kubenswrapper[4827]: I0131 03:47:08.129114 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca" (OuterVolumeSpecName: "client-ca") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 03:47:08 crc kubenswrapper[4827]: I0131 03:47:08.129617 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh" (OuterVolumeSpecName: "kube-api-access-x7zkh") pod "6731426b-95fe-49ff-bb5f-40441049fde2" (UID: "6731426b-95fe-49ff-bb5f-40441049fde2"). InnerVolumeSpecName "kube-api-access-x7zkh". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 03:47:08 crc kubenswrapper[4827]: I0131 03:47:08.130044 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds" (OuterVolumeSpecName: "kube-api-access-w9rds") pod "20b0d48f-5fd6-431c-a545-e3c800c7b866" (UID: "20b0d48f-5fd6-431c-a545-e3c800c7b866"). InnerVolumeSpecName "kube-api-access-w9rds". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 03:47:08 crc kubenswrapper[4827]: I0131 03:47:08.130182 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 03:47:08 crc kubenswrapper[4827]: I0131 03:47:08.130463 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config" (OuterVolumeSpecName: "config") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 03:47:08 crc kubenswrapper[4827]: I0131 03:47:08.130723 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh" (OuterVolumeSpecName: "kube-api-access-2w9zh") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "kube-api-access-2w9zh". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 03:47:08 crc kubenswrapper[4827]: I0131 03:47:08.130517 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config" (OuterVolumeSpecName: "config") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 03:47:08 crc kubenswrapper[4827]: I0131 03:47:08.131317 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 03:47:08 crc kubenswrapper[4827]: I0131 03:47:08.131598 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb" (OuterVolumeSpecName: "kube-api-access-mg5zb") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "kube-api-access-mg5zb". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 03:47:08 crc kubenswrapper[4827]: I0131 03:47:08.131719 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume" (OuterVolumeSpecName: "config-volume") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "config-volume". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 03:47:08 crc kubenswrapper[4827]: I0131 03:47:08.132105 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 03:47:08 crc kubenswrapper[4827]: I0131 03:47:08.132323 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j" (OuterVolumeSpecName: "kube-api-access-w7l8j") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "kube-api-access-w7l8j". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 03:47:08 crc kubenswrapper[4827]: I0131 03:47:08.132419 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config" (OuterVolumeSpecName: "config") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 03:47:08 crc kubenswrapper[4827]: I0131 03:47:08.132665 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh" (OuterVolumeSpecName: "kube-api-access-xcgwh") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "kube-api-access-xcgwh". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 03:47:08 crc kubenswrapper[4827]: I0131 03:47:08.132872 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh" (OuterVolumeSpecName: "kube-api-access-x4zgh") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "kube-api-access-x4zgh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 03:47:08 crc kubenswrapper[4827]: I0131 03:47:08.133242 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs" (OuterVolumeSpecName: "webhook-certs") pod "efdd0498-1daa-4136-9a4a-3b948c2293fc" (UID: "efdd0498-1daa-4136-9a4a-3b948c2293fc"). InnerVolumeSpecName "webhook-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 03:47:08 crc kubenswrapper[4827]: I0131 03:47:08.134041 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle" (OuterVolumeSpecName: "signing-cabundle") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "signing-cabundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 03:47:08 crc kubenswrapper[4827]: I0131 03:47:08.134385 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca" (OuterVolumeSpecName: "service-ca") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "service-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 03:47:08 crc kubenswrapper[4827]: I0131 03:47:08.134567 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 03:47:08 crc kubenswrapper[4827]: I0131 03:47:08.134968 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7" (OuterVolumeSpecName: "kube-api-access-kfwg7") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "kube-api-access-kfwg7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 03:47:08 crc kubenswrapper[4827]: I0131 03:47:08.135208 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m" (OuterVolumeSpecName: "kube-api-access-gf66m") pod "a0128f3a-b052-44ed-a84e-c4c8aaf17c13" (UID: "a0128f3a-b052-44ed-a84e-c4c8aaf17c13"). InnerVolumeSpecName "kube-api-access-gf66m". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 03:47:08 crc kubenswrapper[4827]: I0131 03:47:08.135439 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl" (OuterVolumeSpecName: "kube-api-access-xcphl") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "kube-api-access-xcphl". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 03:47:08 crc kubenswrapper[4827]: I0131 03:47:08.135594 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config" (OuterVolumeSpecName: "console-config") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 03:47:08 crc kubenswrapper[4827]: I0131 03:47:08.135788 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg" (OuterVolumeSpecName: "kube-api-access-dbsvg") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "kube-api-access-dbsvg". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 03:47:08 crc kubenswrapper[4827]: I0131 03:47:08.136332 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 03:47:08 crc kubenswrapper[4827]: I0131 03:47:08.136453 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2" (OuterVolumeSpecName: "kube-api-access-jhbk2") pod "bd23aa5c-e532-4e53-bccf-e79f130c5ae8" (UID: "bd23aa5c-e532-4e53-bccf-e79f130c5ae8"). InnerVolumeSpecName "kube-api-access-jhbk2". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 03:47:08 crc kubenswrapper[4827]: E0131 03:47:08.136633 4827 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 31 03:47:08 crc kubenswrapper[4827]: I0131 03:47:08.136859 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4" (OuterVolumeSpecName: "kube-api-access-w4xd4") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "kube-api-access-w4xd4". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 03:47:08 crc kubenswrapper[4827]: I0131 03:47:08.137146 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85" (OuterVolumeSpecName: "kube-api-access-x2m85") pod "cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" (UID: "cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d"). InnerVolumeSpecName "kube-api-access-x2m85". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 03:47:08 crc kubenswrapper[4827]: I0131 03:47:08.138040 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7" (OuterVolumeSpecName: "kube-api-access-sb6h7") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "kube-api-access-sb6h7". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 03:47:08 crc kubenswrapper[4827]: I0131 03:47:08.138160 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config" (OuterVolumeSpecName: "multus-daemon-config") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "multus-daemon-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 03:47:08 crc kubenswrapper[4827]: I0131 03:47:08.138183 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7" (OuterVolumeSpecName: "kube-api-access-nzwt7") pod "96b93a3a-6083-4aea-8eab-fe1aa8245ad9" (UID: "96b93a3a-6083-4aea-8eab-fe1aa8245ad9"). InnerVolumeSpecName "kube-api-access-nzwt7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 03:47:08 crc kubenswrapper[4827]: E0131 03:47:08.138532 4827 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 31 03:47:08 crc kubenswrapper[4827]: E0131 03:47:08.138564 4827 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 31 03:47:08 crc kubenswrapper[4827]: I0131 03:47:08.138610 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5" (OuterVolumeSpecName: "kube-api-access-qg5z5") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). 
InnerVolumeSpecName "kube-api-access-qg5z5". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 03:47:08 crc kubenswrapper[4827]: I0131 03:47:08.138980 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert" (OuterVolumeSpecName: "apiservice-cert") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "apiservice-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 03:47:08 crc kubenswrapper[4827]: I0131 03:47:08.139485 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-session". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 03:47:08 crc kubenswrapper[4827]: I0131 03:47:08.139812 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca" (OuterVolumeSpecName: "image-import-ca") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "image-import-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 03:47:08 crc kubenswrapper[4827]: I0131 03:47:08.140422 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "trusted-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 03:47:08 crc kubenswrapper[4827]: I0131 03:47:08.140485 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 31 03:47:08 crc kubenswrapper[4827]: E0131 03:47:08.140559 4827 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-31 03:47:08.640529868 +0000 UTC m=+21.327610337 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 31 03:47:08 crc kubenswrapper[4827]: I0131 03:47:08.141159 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "cni-binary-copy". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 03:47:08 crc kubenswrapper[4827]: I0131 03:47:08.141367 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 03:47:08 crc kubenswrapper[4827]: I0131 03:47:08.142024 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 03:47:08 crc kubenswrapper[4827]: I0131 03:47:08.142240 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp" (OuterVolumeSpecName: "kube-api-access-qs4fp") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "kube-api-access-qs4fp". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 03:47:08 crc kubenswrapper[4827]: I0131 03:47:08.142423 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:08Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 31 03:47:08 crc kubenswrapper[4827]: I0131 03:47:08.142837 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca" (OuterVolumeSpecName: "etcd-service-ca") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 03:47:08 crc kubenswrapper[4827]: I0131 03:47:08.142966 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 03:47:08 crc kubenswrapper[4827]: I0131 03:47:08.143210 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config" (OuterVolumeSpecName: "config") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 03:47:08 crc kubenswrapper[4827]: I0131 03:47:08.143253 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca" (OuterVolumeSpecName: "etcd-ca") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 03:47:08 crc kubenswrapper[4827]: I0131 03:47:08.143316 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7" (OuterVolumeSpecName: "kube-api-access-9xfj7") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "kube-api-access-9xfj7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 03:47:08 crc kubenswrapper[4827]: E0131 03:47:08.143466 4827 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-31 03:47:08.643440385 +0000 UTC m=+21.330520844 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 31 03:47:08 crc kubenswrapper[4827]: I0131 03:47:08.143969 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 31 03:47:08 crc kubenswrapper[4827]: I0131 03:47:08.144411 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2kz5\" (UniqueName: \"kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 31 03:47:08 crc kubenswrapper[4827]: E0131 03:47:08.145416 4827 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-31 03:47:08.645397223 +0000 UTC m=+21.332477892 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 03:47:08 crc kubenswrapper[4827]: I0131 03:47:08.145838 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 03:47:08 crc kubenswrapper[4827]: I0131 03:47:08.145912 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "encryption-config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 03:47:08 crc kubenswrapper[4827]: I0131 03:47:08.146167 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rdwmf\" (UniqueName: \"kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 31 03:47:08 crc kubenswrapper[4827]: I0131 03:47:08.146606 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 03:47:08 crc kubenswrapper[4827]: I0131 03:47:08.146639 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc" (OuterVolumeSpecName: "kube-api-access-vt5rc") pod "44663579-783b-4372-86d6-acf235a62d72" (UID: "44663579-783b-4372-86d6-acf235a62d72"). InnerVolumeSpecName "kube-api-access-vt5rc". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 03:47:08 crc kubenswrapper[4827]: I0131 03:47:08.147053 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz" (OuterVolumeSpecName: "kube-api-access-8tdtz") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "kube-api-access-8tdtz". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 03:47:08 crc kubenswrapper[4827]: I0131 03:47:08.147774 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 03:47:08 crc kubenswrapper[4827]: I0131 03:47:08.147853 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 03:47:08 crc kubenswrapper[4827]: I0131 03:47:08.148218 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 03:47:08 crc kubenswrapper[4827]: I0131 03:47:08.148288 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 03:47:08 crc kubenswrapper[4827]: E0131 03:47:08.152384 4827 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 31 03:47:08 crc kubenswrapper[4827]: E0131 03:47:08.152415 4827 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 31 03:47:08 crc kubenswrapper[4827]: E0131 03:47:08.152437 4827 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 31 03:47:08 crc kubenswrapper[4827]: E0131 03:47:08.152734 4827 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-31 03:47:08.652522837 +0000 UTC m=+21.339603516 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 31 03:47:08 crc kubenswrapper[4827]: I0131 03:47:08.156423 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca" (OuterVolumeSpecName: "serviceca") pod "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" (UID: "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59"). InnerVolumeSpecName "serviceca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 03:47:08 crc kubenswrapper[4827]: I0131 03:47:08.156477 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config" (OuterVolumeSpecName: "config") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 03:47:08 crc kubenswrapper[4827]: I0131 03:47:08.157625 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "marketplace-operator-metrics". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 03:47:08 crc kubenswrapper[4827]: I0131 03:47:08.157909 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "marketplace-trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 03:47:08 crc kubenswrapper[4827]: I0131 03:47:08.163431 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rczfb\" (UniqueName: \"kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 31 03:47:08 crc kubenswrapper[4827]: I0131 03:47:08.163586 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca" (OuterVolumeSpecName: "client-ca") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 03:47:08 crc kubenswrapper[4827]: I0131 03:47:08.163800 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities" (OuterVolumeSpecName: "utilities") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 03:47:08 crc kubenswrapper[4827]: I0131 03:47:08.163799 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 03:47:08 crc kubenswrapper[4827]: I0131 03:47:08.164303 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "5b88f790-22fa-440e-b583-365168c0b23d" (UID: "5b88f790-22fa-440e-b583-365168c0b23d"). InnerVolumeSpecName "metrics-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 03:47:08 crc kubenswrapper[4827]: I0131 03:47:08.164385 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 03:47:08 crc kubenswrapper[4827]: I0131 03:47:08.164797 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs" (OuterVolumeSpecName: "tmpfs") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "tmpfs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 03:47:08 crc kubenswrapper[4827]: I0131 03:47:08.164967 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-cliconfig". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 03:47:08 crc kubenswrapper[4827]: I0131 03:47:08.165155 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf" (OuterVolumeSpecName: "kube-api-access-7c4vf") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "kube-api-access-7c4vf". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 03:47:08 crc kubenswrapper[4827]: I0131 03:47:08.165379 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "registry-certificates". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 03:47:08 crc kubenswrapper[4827]: I0131 03:47:08.165415 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls" (OuterVolumeSpecName: "machine-approver-tls") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "machine-approver-tls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 03:47:08 crc kubenswrapper[4827]: I0131 03:47:08.165384 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:08Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 31 03:47:08 crc kubenswrapper[4827]: I0131 03:47:08.165474 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn" (OuterVolumeSpecName: "kube-api-access-jkwtn") pod "5b88f790-22fa-440e-b583-365168c0b23d" (UID: "5b88f790-22fa-440e-b583-365168c0b23d"). InnerVolumeSpecName "kube-api-access-jkwtn". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 03:47:08 crc kubenswrapper[4827]: I0131 03:47:08.165507 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "trusted-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 03:47:08 crc kubenswrapper[4827]: I0131 03:47:08.165926 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp" (OuterVolumeSpecName: "kube-api-access-ngvvp") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "kube-api-access-ngvvp". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 03:47:08 crc kubenswrapper[4827]: I0131 03:47:08.165979 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config" (OuterVolumeSpecName: "config") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 03:47:08 crc kubenswrapper[4827]: I0131 03:47:08.168082 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "encryption-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 03:47:08 crc kubenswrapper[4827]: I0131 03:47:08.168682 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "service-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 03:47:08 crc kubenswrapper[4827]: I0131 03:47:08.171084 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 03:47:08 crc kubenswrapper[4827]: I0131 03:47:08.171648 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj" (OuterVolumeSpecName: "kube-api-access-4d4hj") pod "3ab1a177-2de0-46d9-b765-d0d0649bb42e" (UID: "3ab1a177-2de0-46d9-b765-d0d0649bb42e"). InnerVolumeSpecName "kube-api-access-4d4hj". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 03:47:08 crc kubenswrapper[4827]: I0131 03:47:08.172062 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 03:47:08 crc kubenswrapper[4827]: I0131 03:47:08.172390 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config" (OuterVolumeSpecName: "config") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 03:47:08 crc kubenswrapper[4827]: I0131 03:47:08.174705 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk" (OuterVolumeSpecName: "kube-api-access-rnphk") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "kube-api-access-rnphk". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 03:47:08 crc kubenswrapper[4827]: I0131 03:47:08.174815 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls" (OuterVolumeSpecName: "image-registry-operator-tls") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "image-registry-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 03:47:08 crc kubenswrapper[4827]: I0131 03:47:08.177691 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 03:47:08 crc kubenswrapper[4827]: I0131 03:47:08.178604 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn" (OuterVolumeSpecName: "kube-api-access-lz9wn") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "kube-api-access-lz9wn". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 03:47:08 crc kubenswrapper[4827]: I0131 03:47:08.180373 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 03:47:08 crc kubenswrapper[4827]: I0131 03:47:08.181630 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-error". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 03:47:08 crc kubenswrapper[4827]: I0131 03:47:08.182529 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr" (OuterVolumeSpecName: "kube-api-access-249nr") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "kube-api-access-249nr". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 03:47:08 crc kubenswrapper[4827]: I0131 03:47:08.182539 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-service-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 03:47:08 crc kubenswrapper[4827]: I0131 03:47:08.183232 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 03:47:08 crc kubenswrapper[4827]: I0131 03:47:08.182813 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 03:47:08 crc kubenswrapper[4827]: I0131 03:47:08.182614 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config" (OuterVolumeSpecName: "config") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 03:47:08 crc kubenswrapper[4827]: I0131 03:47:08.182925 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "etcd-client". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 03:47:08 crc kubenswrapper[4827]: I0131 03:47:08.183539 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:08Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 31 03:47:08 crc kubenswrapper[4827]: I0131 03:47:08.196113 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:08Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook 
approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 31 03:47:08 crc kubenswrapper[4827]: I0131 03:47:08.200299 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 03:47:08 crc kubenswrapper[4827]: I0131 03:47:08.207244 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 31 03:47:08 crc kubenswrapper[4827]: I0131 03:47:08.207398 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 31 03:47:08 crc kubenswrapper[4827]: I0131 03:47:08.207570 4827 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fcqwp\" (UniqueName: \"kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp\") on node \"crc\" DevicePath \"\"" Jan 31 03:47:08 crc kubenswrapper[4827]: I0131 03:47:08.207648 4827 reconciler_common.go:293] "Volume detached for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 31 03:47:08 crc kubenswrapper[4827]: I0131 03:47:08.207693 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 31 03:47:08 crc kubenswrapper[4827]: I0131 03:47:08.207819 4827 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: 
\"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies\") on node \"crc\" DevicePath \"\"" Jan 31 03:47:08 crc kubenswrapper[4827]: I0131 03:47:08.208012 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 31 03:47:08 crc kubenswrapper[4827]: I0131 03:47:08.208140 4827 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 31 03:47:08 crc kubenswrapper[4827]: I0131 03:47:08.208207 4827 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls\") on node \"crc\" DevicePath \"\"" Jan 31 03:47:08 crc kubenswrapper[4827]: I0131 03:47:08.208312 4827 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities\") on node \"crc\" DevicePath \"\"" Jan 31 03:47:08 crc kubenswrapper[4827]: I0131 03:47:08.208341 4827 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\"" Jan 31 03:47:08 crc kubenswrapper[4827]: I0131 03:47:08.208363 4827 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 31 03:47:08 crc kubenswrapper[4827]: I0131 03:47:08.208380 4827 reconciler_common.go:293] "Volume detached for volume 
\"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs\") on node \"crc\" DevicePath \"\"" Jan 31 03:47:08 crc kubenswrapper[4827]: I0131 03:47:08.208398 4827 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zkvpv\" (UniqueName: \"kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv\") on node \"crc\" DevicePath \"\"" Jan 31 03:47:08 crc kubenswrapper[4827]: I0131 03:47:08.208413 4827 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 31 03:47:08 crc kubenswrapper[4827]: I0131 03:47:08.208428 4827 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls\") on node \"crc\" DevicePath \"\"" Jan 31 03:47:08 crc kubenswrapper[4827]: I0131 03:47:08.208442 4827 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config\") on node \"crc\" DevicePath \"\"" Jan 31 03:47:08 crc kubenswrapper[4827]: I0131 03:47:08.208456 4827 reconciler_common.go:293] "Volume detached for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist\") on node \"crc\" DevicePath \"\"" Jan 31 03:47:08 crc kubenswrapper[4827]: I0131 03:47:08.208473 4827 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 31 03:47:08 crc kubenswrapper[4827]: I0131 03:47:08.208486 4827 reconciler_common.go:293] "Volume detached for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token\") on node 
\"crc\" DevicePath \"\"" Jan 31 03:47:08 crc kubenswrapper[4827]: I0131 03:47:08.208499 4827 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\"" Jan 31 03:47:08 crc kubenswrapper[4827]: I0131 03:47:08.208512 4827 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config\") on node \"crc\" DevicePath \"\"" Jan 31 03:47:08 crc kubenswrapper[4827]: I0131 03:47:08.208527 4827 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 31 03:47:08 crc kubenswrapper[4827]: I0131 03:47:08.208541 4827 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rnphk\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk\") on node \"crc\" DevicePath \"\"" Jan 31 03:47:08 crc kubenswrapper[4827]: I0131 03:47:08.208556 4827 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\"" Jan 31 03:47:08 crc kubenswrapper[4827]: I0131 03:47:08.208569 4827 reconciler_common.go:293] "Volume detached for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Jan 31 03:47:08 crc kubenswrapper[4827]: I0131 03:47:08.208579 4827 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config\") on node \"crc\" DevicePath \"\"" Jan 31 03:47:08 crc 
kubenswrapper[4827]: I0131 03:47:08.208588 4827 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wxkg8\" (UniqueName: \"kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8\") on node \"crc\" DevicePath \"\"" Jan 31 03:47:08 crc kubenswrapper[4827]: I0131 03:47:08.208598 4827 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d4lsv\" (UniqueName: \"kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv\") on node \"crc\" DevicePath \"\"" Jan 31 03:47:08 crc kubenswrapper[4827]: I0131 03:47:08.208609 4827 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca\") on node \"crc\" DevicePath \"\"" Jan 31 03:47:08 crc kubenswrapper[4827]: I0131 03:47:08.208622 4827 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sb6h7\" (UniqueName: \"kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7\") on node \"crc\" DevicePath \"\"" Jan 31 03:47:08 crc kubenswrapper[4827]: I0131 03:47:08.208634 4827 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\"" Jan 31 03:47:08 crc kubenswrapper[4827]: I0131 03:47:08.208650 4827 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qs4fp\" (UniqueName: \"kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp\") on node \"crc\" DevicePath \"\"" Jan 31 03:47:08 crc kubenswrapper[4827]: I0131 03:47:08.208664 4827 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nzwt7\" (UniqueName: \"kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7\") on node \"crc\" DevicePath \"\"" Jan 31 03:47:08 crc kubenswrapper[4827]: I0131 
03:47:08.208675 4827 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config\") on node \"crc\" DevicePath \"\"" Jan 31 03:47:08 crc kubenswrapper[4827]: I0131 03:47:08.208686 4827 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides\") on node \"crc\" DevicePath \"\"" Jan 31 03:47:08 crc kubenswrapper[4827]: I0131 03:47:08.208699 4827 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 31 03:47:08 crc kubenswrapper[4827]: I0131 03:47:08.208712 4827 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 31 03:47:08 crc kubenswrapper[4827]: I0131 03:47:08.208724 4827 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 31 03:47:08 crc kubenswrapper[4827]: I0131 03:47:08.208737 4827 reconciler_common.go:293] "Volume detached for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca\") on node \"crc\" DevicePath \"\"" Jan 31 03:47:08 crc kubenswrapper[4827]: I0131 03:47:08.208749 4827 reconciler_common.go:293] "Volume detached for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls\") on node \"crc\" DevicePath \"\"" Jan 31 03:47:08 crc kubenswrapper[4827]: I0131 03:47:08.208783 4827 reconciler_common.go:293] "Volume detached for volume \"encryption-config\" (UniqueName: 
\"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config\") on node \"crc\" DevicePath \"\"" Jan 31 03:47:08 crc kubenswrapper[4827]: I0131 03:47:08.208797 4827 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lz9wn\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn\") on node \"crc\" DevicePath \"\"" Jan 31 03:47:08 crc kubenswrapper[4827]: I0131 03:47:08.208813 4827 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-249nr\" (UniqueName: \"kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr\") on node \"crc\" DevicePath \"\"" Jan 31 03:47:08 crc kubenswrapper[4827]: I0131 03:47:08.208828 4827 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-htfz6\" (UniqueName: \"kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6\") on node \"crc\" DevicePath \"\"" Jan 31 03:47:08 crc kubenswrapper[4827]: I0131 03:47:08.208841 4827 reconciler_common.go:293] "Volume detached for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert\") on node \"crc\" DevicePath \"\"" Jan 31 03:47:08 crc kubenswrapper[4827]: I0131 03:47:08.208852 4827 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client\") on node \"crc\" DevicePath \"\"" Jan 31 03:47:08 crc kubenswrapper[4827]: I0131 03:47:08.208865 4827 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config\") on node \"crc\" DevicePath \"\"" Jan 31 03:47:08 crc kubenswrapper[4827]: I0131 03:47:08.208895 4827 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert\") on node \"crc\" 
DevicePath \"\"" Jan 31 03:47:08 crc kubenswrapper[4827]: I0131 03:47:08.208909 4827 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 31 03:47:08 crc kubenswrapper[4827]: I0131 03:47:08.208979 4827 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca\") on node \"crc\" DevicePath \"\"" Jan 31 03:47:08 crc kubenswrapper[4827]: I0131 03:47:08.208995 4827 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config\") on node \"crc\" DevicePath \"\"" Jan 31 03:47:08 crc kubenswrapper[4827]: I0131 03:47:08.209007 4827 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities\") on node \"crc\" DevicePath \"\"" Jan 31 03:47:08 crc kubenswrapper[4827]: I0131 03:47:08.209018 4827 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 31 03:47:08 crc kubenswrapper[4827]: I0131 03:47:08.209030 4827 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca\") on node \"crc\" DevicePath \"\"" Jan 31 03:47:08 crc kubenswrapper[4827]: I0131 03:47:08.209042 4827 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mg5zb\" (UniqueName: \"kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb\") on node \"crc\" DevicePath \"\"" Jan 31 03:47:08 crc kubenswrapper[4827]: I0131 03:47:08.209059 4827 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 31 03:47:08 crc kubenswrapper[4827]: I0131 03:47:08.209073 4827 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2w9zh\" (UniqueName: \"kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh\") on node \"crc\" DevicePath \"\"" Jan 31 03:47:08 crc kubenswrapper[4827]: I0131 03:47:08.209086 4827 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gf66m\" (UniqueName: \"kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m\") on node \"crc\" DevicePath \"\"" Jan 31 03:47:08 crc kubenswrapper[4827]: I0131 03:47:08.209101 4827 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 31 03:47:08 crc kubenswrapper[4827]: I0131 03:47:08.209115 4827 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xcphl\" (UniqueName: \"kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl\") on node \"crc\" DevicePath \"\"" Jan 31 03:47:08 crc kubenswrapper[4827]: I0131 03:47:08.209129 4827 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls\") on node \"crc\" DevicePath \"\"" Jan 31 03:47:08 crc kubenswrapper[4827]: I0131 03:47:08.209314 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:08Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 31 03:47:08 crc kubenswrapper[4827]: I0131 03:47:08.209599 4827 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\"" Jan 31 03:47:08 crc kubenswrapper[4827]: I0131 03:47:08.209620 4827 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ngvvp\" (UniqueName: \"kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp\") on node \"crc\" DevicePath \"\"" Jan 31 03:47:08 crc kubenswrapper[4827]: I0131 03:47:08.209638 4827 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca\") on node \"crc\" DevicePath \"\"" Jan 31 03:47:08 crc kubenswrapper[4827]: I0131 03:47:08.209651 4827 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8tdtz\" (UniqueName: \"kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz\") on node \"crc\" DevicePath \"\"" Jan 31 03:47:08 crc 
kubenswrapper[4827]: I0131 03:47:08.209664 4827 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-279lb\" (UniqueName: \"kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb\") on node \"crc\" DevicePath \"\"" Jan 31 03:47:08 crc kubenswrapper[4827]: I0131 03:47:08.209717 4827 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x7zkh\" (UniqueName: \"kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh\") on node \"crc\" DevicePath \"\"" Jan 31 03:47:08 crc kubenswrapper[4827]: I0131 03:47:08.209731 4827 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 31 03:47:08 crc kubenswrapper[4827]: I0131 03:47:08.209744 4827 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 31 03:47:08 crc kubenswrapper[4827]: I0131 03:47:08.209756 4827 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config\") on node \"crc\" DevicePath \"\"" Jan 31 03:47:08 crc kubenswrapper[4827]: I0131 03:47:08.209783 4827 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config\") on node \"crc\" DevicePath \"\"" Jan 31 03:47:08 crc kubenswrapper[4827]: I0131 03:47:08.209800 4827 reconciler_common.go:293] "Volume detached for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs\") on node \"crc\" DevicePath \"\"" Jan 31 03:47:08 crc kubenswrapper[4827]: I0131 03:47:08.209813 4827 reconciler_common.go:293] "Volume detached for volume \"signing-cabundle\" 
(UniqueName: \"kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle\") on node \"crc\" DevicePath \"\"" Jan 31 03:47:08 crc kubenswrapper[4827]: I0131 03:47:08.209827 4827 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x4zgh\" (UniqueName: \"kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh\") on node \"crc\" DevicePath \"\"" Jan 31 03:47:08 crc kubenswrapper[4827]: I0131 03:47:08.209841 4827 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kfwg7\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7\") on node \"crc\" DevicePath \"\"" Jan 31 03:47:08 crc kubenswrapper[4827]: I0131 03:47:08.209864 4827 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config\") on node \"crc\" DevicePath \"\"" Jan 31 03:47:08 crc kubenswrapper[4827]: I0131 03:47:08.209896 4827 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\"" Jan 31 03:47:08 crc kubenswrapper[4827]: I0131 03:47:08.209942 4827 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 31 03:47:08 crc kubenswrapper[4827]: I0131 03:47:08.209959 4827 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x2m85\" (UniqueName: \"kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85\") on node \"crc\" DevicePath \"\"" Jan 31 03:47:08 crc kubenswrapper[4827]: I0131 03:47:08.210002 4827 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w4xd4\" (UniqueName: 
\"kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4\") on node \"crc\" DevicePath \"\"" Jan 31 03:47:08 crc kubenswrapper[4827]: I0131 03:47:08.210018 4827 reconciler_common.go:293] "Volume detached for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls\") on node \"crc\" DevicePath \"\"" Jan 31 03:47:08 crc kubenswrapper[4827]: I0131 03:47:08.210031 4827 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\"" Jan 31 03:47:08 crc kubenswrapper[4827]: I0131 03:47:08.210051 4827 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config\") on node \"crc\" DevicePath \"\"" Jan 31 03:47:08 crc kubenswrapper[4827]: I0131 03:47:08.210064 4827 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zgdk5\" (UniqueName: \"kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5\") on node \"crc\" DevicePath \"\"" Jan 31 03:47:08 crc kubenswrapper[4827]: I0131 03:47:08.210076 4827 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jkwtn\" (UniqueName: \"kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn\") on node \"crc\" DevicePath \"\"" Jan 31 03:47:08 crc kubenswrapper[4827]: I0131 03:47:08.210089 4827 reconciler_common.go:293] "Volume detached for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config\") on node \"crc\" DevicePath \"\"" Jan 31 03:47:08 crc kubenswrapper[4827]: I0131 03:47:08.210101 4827 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config\") on node \"crc\" DevicePath \"\"" Jan 31 03:47:08 crc kubenswrapper[4827]: I0131 03:47:08.210113 4827 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w9rds\" (UniqueName: \"kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds\") on node \"crc\" DevicePath \"\"" Jan 31 03:47:08 crc kubenswrapper[4827]: I0131 03:47:08.210126 4827 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities\") on node \"crc\" DevicePath \"\"" Jan 31 03:47:08 crc kubenswrapper[4827]: I0131 03:47:08.210138 4827 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w7l8j\" (UniqueName: \"kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j\") on node \"crc\" DevicePath \"\"" Jan 31 03:47:08 crc kubenswrapper[4827]: I0131 03:47:08.210151 4827 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xcgwh\" (UniqueName: \"kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh\") on node \"crc\" DevicePath \"\"" Jan 31 03:47:08 crc kubenswrapper[4827]: I0131 03:47:08.210166 4827 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume\") on node \"crc\" DevicePath \"\"" Jan 31 03:47:08 crc kubenswrapper[4827]: I0131 03:47:08.210184 4827 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 31 03:47:08 crc kubenswrapper[4827]: I0131 03:47:08.210197 4827 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides\") on node \"crc\" 
DevicePath \"\"" Jan 31 03:47:08 crc kubenswrapper[4827]: I0131 03:47:08.210209 4827 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6g6sz\" (UniqueName: \"kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz\") on node \"crc\" DevicePath \"\"" Jan 31 03:47:08 crc kubenswrapper[4827]: I0131 03:47:08.210221 4827 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token\") on node \"crc\" DevicePath \"\"" Jan 31 03:47:08 crc kubenswrapper[4827]: I0131 03:47:08.210233 4827 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lzf88\" (UniqueName: \"kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88\") on node \"crc\" DevicePath \"\"" Jan 31 03:47:08 crc kubenswrapper[4827]: I0131 03:47:08.210245 4827 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dbsvg\" (UniqueName: \"kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg\") on node \"crc\" DevicePath \"\"" Jan 31 03:47:08 crc kubenswrapper[4827]: I0131 03:47:08.210265 4827 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 31 03:47:08 crc kubenswrapper[4827]: I0131 03:47:08.210282 4827 reconciler_common.go:293] "Volume detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates\") on node \"crc\" DevicePath \"\"" Jan 31 03:47:08 crc kubenswrapper[4827]: I0131 03:47:08.210316 4827 reconciler_common.go:293] "Volume detached for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs\") on node \"crc\" DevicePath \"\"" Jan 31 03:47:08 crc kubenswrapper[4827]: I0131 03:47:08.210329 
4827 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session\") on node \"crc\" DevicePath \"\"" Jan 31 03:47:08 crc kubenswrapper[4827]: I0131 03:47:08.210343 4827 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cfbct\" (UniqueName: \"kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct\") on node \"crc\" DevicePath \"\"" Jan 31 03:47:08 crc kubenswrapper[4827]: I0131 03:47:08.210356 4827 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token\") on node \"crc\" DevicePath \"\"" Jan 31 03:47:08 crc kubenswrapper[4827]: I0131 03:47:08.210375 4827 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 31 03:47:08 crc kubenswrapper[4827]: I0131 03:47:08.210388 4827 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config\") on node \"crc\" DevicePath \"\"" Jan 31 03:47:08 crc kubenswrapper[4827]: I0131 03:47:08.210400 4827 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4d4hj\" (UniqueName: \"kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj\") on node \"crc\" DevicePath \"\"" Jan 31 03:47:08 crc kubenswrapper[4827]: I0131 03:47:08.210413 4827 reconciler_common.go:293] "Volume detached for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert\") on node \"crc\" DevicePath \"\"" Jan 31 03:47:08 crc kubenswrapper[4827]: I0131 03:47:08.210425 4827 reconciler_common.go:293] "Volume detached for volume \"cni-binary-copy\" (UniqueName: 
\"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy\") on node \"crc\" DevicePath \"\"" Jan 31 03:47:08 crc kubenswrapper[4827]: I0131 03:47:08.210438 4827 reconciler_common.go:293] "Volume detached for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy\") on node \"crc\" DevicePath \"\"" Jan 31 03:47:08 crc kubenswrapper[4827]: I0131 03:47:08.210450 4827 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client\") on node \"crc\" DevicePath \"\"" Jan 31 03:47:08 crc kubenswrapper[4827]: I0131 03:47:08.210465 4827 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vt5rc\" (UniqueName: \"kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc\") on node \"crc\" DevicePath \"\"" Jan 31 03:47:08 crc kubenswrapper[4827]: I0131 03:47:08.210477 4827 reconciler_common.go:293] "Volume detached for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 31 03:47:08 crc kubenswrapper[4827]: I0131 03:47:08.210490 4827 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 31 03:47:08 crc kubenswrapper[4827]: I0131 03:47:08.210528 4827 reconciler_common.go:293] "Volume detached for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert\") on node \"crc\" DevicePath \"\"" Jan 31 03:47:08 crc kubenswrapper[4827]: I0131 03:47:08.210541 4827 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies\") on node 
\"crc\" DevicePath \"\"" Jan 31 03:47:08 crc kubenswrapper[4827]: I0131 03:47:08.210553 4827 reconciler_common.go:293] "Volume detached for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert\") on node \"crc\" DevicePath \"\"" Jan 31 03:47:08 crc kubenswrapper[4827]: I0131 03:47:08.210566 4827 reconciler_common.go:293] "Volume detached for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate\") on node \"crc\" DevicePath \"\"" Jan 31 03:47:08 crc kubenswrapper[4827]: I0131 03:47:08.210579 4827 reconciler_common.go:293] "Volume detached for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca\") on node \"crc\" DevicePath \"\"" Jan 31 03:47:08 crc kubenswrapper[4827]: I0131 03:47:08.210593 4827 reconciler_common.go:293] "Volume detached for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls\") on node \"crc\" DevicePath \"\"" Jan 31 03:47:08 crc kubenswrapper[4827]: I0131 03:47:08.210605 4827 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 31 03:47:08 crc kubenswrapper[4827]: I0131 03:47:08.210617 4827 reconciler_common.go:293] "Volume detached for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 31 03:47:08 crc kubenswrapper[4827]: I0131 03:47:08.210629 4827 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jhbk2\" (UniqueName: \"kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2\") on node \"crc\" DevicePath \"\"" Jan 31 03:47:08 crc kubenswrapper[4827]: I0131 03:47:08.210641 4827 
reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 31 03:47:08 crc kubenswrapper[4827]: I0131 03:47:08.210660 4827 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config\") on node \"crc\" DevicePath \"\"" Jan 31 03:47:08 crc kubenswrapper[4827]: I0131 03:47:08.210672 4827 reconciler_common.go:293] "Volume detached for volume \"cert\" (UniqueName: \"kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert\") on node \"crc\" DevicePath \"\"" Jan 31 03:47:08 crc kubenswrapper[4827]: I0131 03:47:08.210685 4827 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7c4vf\" (UniqueName: \"kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf\") on node \"crc\" DevicePath \"\"" Jan 31 03:47:08 crc kubenswrapper[4827]: I0131 03:47:08.210698 4827 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9xfj7\" (UniqueName: \"kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7\") on node \"crc\" DevicePath \"\"" Jan 31 03:47:08 crc kubenswrapper[4827]: I0131 03:47:08.210709 4827 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config\") on node \"crc\" DevicePath \"\"" Jan 31 03:47:08 crc kubenswrapper[4827]: I0131 03:47:08.210720 4827 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 31 03:47:08 crc kubenswrapper[4827]: I0131 03:47:08.210738 4827 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 31 03:47:08 crc kubenswrapper[4827]: I0131 03:47:08.210751 4827 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fqsjt\" (UniqueName: \"kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt\") on node \"crc\" DevicePath \"\"" Jan 31 03:47:08 crc kubenswrapper[4827]: I0131 03:47:08.210764 4827 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6ccd8\" (UniqueName: \"kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8\") on node \"crc\" DevicePath \"\"" Jan 31 03:47:08 crc kubenswrapper[4827]: I0131 03:47:08.210777 4827 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls\") on node \"crc\" DevicePath \"\"" Jan 31 03:47:08 crc kubenswrapper[4827]: I0131 03:47:08.210788 4827 reconciler_common.go:293] "Volume detached for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit\") on node \"crc\" DevicePath \"\"" Jan 31 03:47:08 crc kubenswrapper[4827]: I0131 03:47:08.211051 4827 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 31 03:47:08 crc kubenswrapper[4827]: I0131 03:47:08.211070 4827 reconciler_common.go:293] "Volume detached for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs\") on node \"crc\" DevicePath \"\"" Jan 31 03:47:08 crc kubenswrapper[4827]: I0131 03:47:08.211118 4827 reconciler_common.go:293] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\"" Jan 
31 03:47:08 crc kubenswrapper[4827]: I0131 03:47:08.211134 4827 reconciler_common.go:293] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config\") on node \"crc\" DevicePath \"\"" Jan 31 03:47:08 crc kubenswrapper[4827]: I0131 03:47:08.211146 4827 reconciler_common.go:293] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib\") on node \"crc\" DevicePath \"\"" Jan 31 03:47:08 crc kubenswrapper[4827]: I0131 03:47:08.211157 4827 reconciler_common.go:293] "Volume detached for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca\") on node \"crc\" DevicePath \"\"" Jan 31 03:47:08 crc kubenswrapper[4827]: I0131 03:47:08.211168 4827 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v47cf\" (UniqueName: \"kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf\") on node \"crc\" DevicePath \"\"" Jan 31 03:47:08 crc kubenswrapper[4827]: I0131 03:47:08.211181 4827 reconciler_common.go:293] "Volume detached for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert\") on node \"crc\" DevicePath \"\"" Jan 31 03:47:08 crc kubenswrapper[4827]: I0131 03:47:08.211193 4827 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config\") on node \"crc\" DevicePath \"\"" Jan 31 03:47:08 crc kubenswrapper[4827]: I0131 03:47:08.211210 4827 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 31 03:47:08 crc kubenswrapper[4827]: I0131 03:47:08.211221 4827 
reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 31 03:47:08 crc kubenswrapper[4827]: I0131 03:47:08.211240 4827 reconciler_common.go:293] "Volume detached for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 31 03:47:08 crc kubenswrapper[4827]: I0131 03:47:08.211255 4827 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bf2bz\" (UniqueName: \"kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz\") on node \"crc\" DevicePath \"\"" Jan 31 03:47:08 crc kubenswrapper[4827]: I0131 03:47:08.211268 4827 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qg5z5\" (UniqueName: \"kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5\") on node \"crc\" DevicePath \"\"" Jan 31 03:47:08 crc kubenswrapper[4827]: I0131 03:47:08.211282 4827 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pj782\" (UniqueName: \"kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782\") on node \"crc\" DevicePath \"\"" Jan 31 03:47:08 crc kubenswrapper[4827]: I0131 03:47:08.211293 4827 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 31 03:47:08 crc kubenswrapper[4827]: I0131 03:47:08.211320 4827 reconciler_common.go:293] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\"" Jan 31 03:47:08 crc kubenswrapper[4827]: I0131 03:47:08.211342 4827 reconciler_common.go:293] "Volume detached for volume 
\"kube-api-access-mnrrd\" (UniqueName: \"kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd\") on node \"crc\" DevicePath \"\"" Jan 31 03:47:08 crc kubenswrapper[4827]: I0131 03:47:08.211354 4827 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client\") on node \"crc\" DevicePath \"\"" Jan 31 03:47:08 crc kubenswrapper[4827]: I0131 03:47:08.211366 4827 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls\") on node \"crc\" DevicePath \"\"" Jan 31 03:47:08 crc kubenswrapper[4827]: I0131 03:47:08.211380 4827 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config\") on node \"crc\" DevicePath \"\"" Jan 31 03:47:08 crc kubenswrapper[4827]: I0131 03:47:08.211393 4827 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 31 03:47:08 crc kubenswrapper[4827]: I0131 03:47:08.211405 4827 reconciler_common.go:293] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca\") on node \"crc\" DevicePath \"\"" Jan 31 03:47:08 crc kubenswrapper[4827]: I0131 03:47:08.211418 4827 reconciler_common.go:293] "Volume detached for volume \"certs\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs\") on node \"crc\" DevicePath \"\"" Jan 31 03:47:08 crc kubenswrapper[4827]: I0131 03:47:08.211431 4827 reconciler_common.go:293] "Volume detached for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls\") on node \"crc\" DevicePath \"\"" Jan 31 
03:47:08 crc kubenswrapper[4827]: I0131 03:47:08.211444 4827 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 31 03:47:08 crc kubenswrapper[4827]: I0131 03:47:08.211456 4827 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 31 03:47:08 crc kubenswrapper[4827]: I0131 03:47:08.211471 4827 reconciler_common.go:293] "Volume detached for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates\") on node \"crc\" DevicePath \"\"" Jan 31 03:47:08 crc kubenswrapper[4827]: I0131 03:47:08.211485 4827 reconciler_common.go:293] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca\") on node \"crc\" DevicePath \"\"" Jan 31 03:47:08 crc kubenswrapper[4827]: I0131 03:47:08.211497 4827 reconciler_common.go:293] "Volume detached for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth\") on node \"crc\" DevicePath \"\"" Jan 31 03:47:08 crc kubenswrapper[4827]: I0131 03:47:08.211509 4827 reconciler_common.go:293] "Volume detached for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca\") on node \"crc\" DevicePath \"\"" Jan 31 03:47:08 crc kubenswrapper[4827]: I0131 03:47:08.211522 4827 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s4n52\" (UniqueName: \"kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52\") on node \"crc\" DevicePath \"\"" Jan 31 03:47:08 crc kubenswrapper[4827]: I0131 03:47:08.211534 4827 reconciler_common.go:293] "Volume detached for volume 
\"serving-cert\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 31 03:47:08 crc kubenswrapper[4827]: I0131 03:47:08.211547 4827 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pcxfs\" (UniqueName: \"kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs\") on node \"crc\" DevicePath \"\"" Jan 31 03:47:08 crc kubenswrapper[4827]: I0131 03:47:08.211559 4827 reconciler_common.go:293] "Volume detached for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images\") on node \"crc\" DevicePath \"\"" Jan 31 03:47:08 crc kubenswrapper[4827]: I0131 03:47:08.211571 4827 reconciler_common.go:293] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Jan 31 03:47:08 crc kubenswrapper[4827]: I0131 03:47:08.211583 4827 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls\") on node \"crc\" DevicePath \"\"" Jan 31 03:47:08 crc kubenswrapper[4827]: I0131 03:47:08.211594 4827 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 31 03:47:08 crc kubenswrapper[4827]: I0131 03:47:08.211605 4827 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config\") on node \"crc\" DevicePath \"\"" Jan 31 03:47:08 crc kubenswrapper[4827]: I0131 03:47:08.211618 4827 reconciler_common.go:293] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config\") on node 
\"crc\" DevicePath \"\"" Jan 31 03:47:08 crc kubenswrapper[4827]: I0131 03:47:08.211630 4827 reconciler_common.go:293] "Volume detached for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Jan 31 03:47:08 crc kubenswrapper[4827]: I0131 03:47:08.211642 4827 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities\") on node \"crc\" DevicePath \"\"" Jan 31 03:47:08 crc kubenswrapper[4827]: I0131 03:47:08.211654 4827 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pjr6v\" (UniqueName: \"kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v\") on node \"crc\" DevicePath \"\"" Jan 31 03:47:08 crc kubenswrapper[4827]: I0131 03:47:08.211676 4827 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 31 03:47:08 crc kubenswrapper[4827]: I0131 03:47:08.211695 4827 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token\") on node \"crc\" DevicePath \"\"" Jan 31 03:47:08 crc kubenswrapper[4827]: I0131 03:47:08.211708 4827 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 31 03:47:08 crc kubenswrapper[4827]: I0131 03:47:08.211722 4827 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tk88c\" (UniqueName: \"kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c\") on node \"crc\" DevicePath \"\"" Jan 31 03:47:08 crc kubenswrapper[4827]: I0131 03:47:08.211735 4827 
reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config\") on node \"crc\" DevicePath \"\"" Jan 31 03:47:08 crc kubenswrapper[4827]: I0131 03:47:08.211750 4827 reconciler_common.go:293] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls\") on node \"crc\" DevicePath \"\"" Jan 31 03:47:08 crc kubenswrapper[4827]: I0131 03:47:08.211763 4827 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\"" Jan 31 03:47:08 crc kubenswrapper[4827]: I0131 03:47:08.211775 4827 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config\") on node \"crc\" DevicePath \"\"" Jan 31 03:47:08 crc kubenswrapper[4827]: I0131 03:47:08.211791 4827 reconciler_common.go:293] "Volume detached for volume \"images\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images\") on node \"crc\" DevicePath \"\"" Jan 31 03:47:08 crc kubenswrapper[4827]: I0131 03:47:08.213303 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 03:47:08 crc kubenswrapper[4827]: I0131 03:47:08.213924 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "ca-trust-extracted". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 03:47:08 crc kubenswrapper[4827]: I0131 03:47:08.220637 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:08Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 31 03:47:08 crc kubenswrapper[4827]: I0131 03:47:08.233655 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:08Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 31 03:47:08 crc kubenswrapper[4827]: I0131 03:47:08.245849 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:08Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 31 03:47:08 crc kubenswrapper[4827]: I0131 03:47:08.254453 4827 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Jan 31 03:47:08 crc kubenswrapper[4827]: I0131 03:47:08.258247 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook 
approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:08Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 31 03:47:08 crc kubenswrapper[4827]: I0131 03:47:08.269153 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:08Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 31 03:47:08 crc kubenswrapper[4827]: I0131 03:47:08.280257 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:08Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 31 03:47:08 crc kubenswrapper[4827]: I0131 03:47:08.312773 4827 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 31 03:47:08 crc kubenswrapper[4827]: I0131 03:47:08.312822 4827 reconciler_common.go:293] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted\") on node \"crc\" DevicePath \"\"" Jan 31 03:47:08 crc kubenswrapper[4827]: I0131 03:47:08.360611 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 31 03:47:08 crc kubenswrapper[4827]: I0131 03:47:08.378495 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 31 03:47:08 crc kubenswrapper[4827]: I0131 03:47:08.384545 4827 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 31 03:47:08 crc kubenswrapper[4827]: W0131 03:47:08.402960 4827 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod37a5e44f_9a88_4405_be8a_b645485e7312.slice/crio-e053b46eb320ea3f981f99944819e4fe72e9791ae14fa66b6fcf58eb4695226e WatchSource:0}: Error finding container e053b46eb320ea3f981f99944819e4fe72e9791ae14fa66b6fcf58eb4695226e: Status 404 returned error can't find the container with id e053b46eb320ea3f981f99944819e4fe72e9791ae14fa66b6fcf58eb4695226e Jan 31 03:47:08 crc kubenswrapper[4827]: I0131 03:47:08.715869 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 31 03:47:08 crc kubenswrapper[4827]: I0131 03:47:08.715968 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 03:47:08 crc kubenswrapper[4827]: I0131 03:47:08.715994 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 31 03:47:08 crc kubenswrapper[4827]: E0131 03:47:08.716021 4827 
nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-31 03:47:09.715991688 +0000 UTC m=+22.403072137 (durationBeforeRetry 1s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 03:47:08 crc kubenswrapper[4827]: I0131 03:47:08.716064 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 03:47:08 crc kubenswrapper[4827]: E0131 03:47:08.716092 4827 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 31 03:47:08 crc kubenswrapper[4827]: E0131 03:47:08.716109 4827 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 31 03:47:08 crc kubenswrapper[4827]: I0131 03:47:08.716110 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: 
\"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 31 03:47:08 crc kubenswrapper[4827]: E0131 03:47:08.716121 4827 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 31 03:47:08 crc kubenswrapper[4827]: E0131 03:47:08.716171 4827 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-31 03:47:09.716157783 +0000 UTC m=+22.403238222 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 31 03:47:08 crc kubenswrapper[4827]: E0131 03:47:08.716235 4827 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 31 03:47:08 crc kubenswrapper[4827]: E0131 03:47:08.716286 4827 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 31 03:47:08 crc kubenswrapper[4827]: E0131 03:47:08.716321 4827 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf 
podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-31 03:47:09.716314968 +0000 UTC m=+22.403395417 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 31 03:47:08 crc kubenswrapper[4827]: E0131 03:47:08.716370 4827 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-31 03:47:09.716332398 +0000 UTC m=+22.403412847 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 31 03:47:08 crc kubenswrapper[4827]: E0131 03:47:08.716262 4827 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 31 03:47:08 crc kubenswrapper[4827]: E0131 03:47:08.716411 4827 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 31 03:47:08 crc kubenswrapper[4827]: E0131 03:47:08.716429 4827 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object 
"openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 31 03:47:08 crc kubenswrapper[4827]: E0131 03:47:08.716468 4827 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-31 03:47:09.716462052 +0000 UTC m=+22.403542501 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 31 03:47:08 crc kubenswrapper[4827]: I0131 03:47:08.742220 4827 csr.go:261] certificate signing request csr-c9p95 is approved, waiting to be issued Jan 31 03:47:08 crc kubenswrapper[4827]: I0131 03:47:08.778902 4827 csr.go:257] certificate signing request csr-c9p95 is issued Jan 31 03:47:09 crc kubenswrapper[4827]: I0131 03:47:09.061711 4827 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-05 15:06:42.415165147 +0000 UTC Jan 31 03:47:09 crc kubenswrapper[4827]: I0131 03:47:09.109130 4827 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 31 03:47:09 crc kubenswrapper[4827]: E0131 03:47:09.109593 4827 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 31 03:47:09 crc kubenswrapper[4827]: I0131 03:47:09.243916 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" event={"ID":"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49","Type":"ContainerStarted","Data":"5a1af1022f0737e14f718c12e7787d7c89e1ed2709b57c1690dbb1d421ac8868"} Jan 31 03:47:09 crc kubenswrapper[4827]: I0131 03:47:09.246162 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"65119775a6ab2e8f81a0443d8093e0570c2c49a1b081b7b694780d07eba7ef4d"} Jan 31 03:47:09 crc kubenswrapper[4827]: I0131 03:47:09.246322 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"0fdf29d063e0940532d696d183771e56ce37b7b75f82548bf7ff5165ab20ccff"} Jan 31 03:47:09 crc kubenswrapper[4827]: I0131 03:47:09.246426 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"2766a14c5407ebc7e5afdfab7de00596d21f424258fbcb80751cb9b7ad915435"} Jan 31 03:47:09 crc kubenswrapper[4827]: I0131 03:47:09.247471 4827 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" event={"ID":"37a5e44f-9a88-4405-be8a-b645485e7312","Type":"ContainerStarted","Data":"a0fbdd57c20e0d5bda95b44f22f117b07306ef48d75d372d958559048474af9b"} Jan 31 03:47:09 crc kubenswrapper[4827]: I0131 03:47:09.247527 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" event={"ID":"37a5e44f-9a88-4405-be8a-b645485e7312","Type":"ContainerStarted","Data":"e053b46eb320ea3f981f99944819e4fe72e9791ae14fa66b6fcf58eb4695226e"} Jan 31 03:47:09 crc kubenswrapper[4827]: I0131 03:47:09.273602 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:08Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not 
be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:47:09Z is after 2025-08-24T17:21:41Z" Jan 31 03:47:09 crc kubenswrapper[4827]: I0131 03:47:09.296074 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://65119775a6ab2e8f81a0443d8093e0570c2c49a1b081b7b694780d07eba7ef4d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:47:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0fdf29d063e0940532d696d183771e56ce37b7b75f82548bf7ff5165ab20ccff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:47:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:47:09Z is after 2025-08-24T17:21:41Z" Jan 31 03:47:09 crc kubenswrapper[4827]: I0131 03:47:09.308113 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:08Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:47:09Z is after 2025-08-24T17:21:41Z" Jan 31 03:47:09 crc kubenswrapper[4827]: I0131 03:47:09.323723 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4273172d-ec24-4540-85cb-efc58aed3421\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:46:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:46:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:46:48Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:46:48Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:46:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4146d39a1a819389ddaee9e11df66783f6b1b98ee5fb2a244d8ba9ff2ecc88f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:46:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",
\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6b86ba80bd25344b29df1636aa3d8cc4cecb0e5f9c0005b4896d3ee86cd80626\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:46:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8ae08b52cf4a0a6cc4589bbf0f0fbc8527ebac3455b90185928f6fa0b6c52874\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:46:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://125b31c980495c37837eedbbbd38b795fd6e568a429da5c2914799306e7b3a46\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e277
53fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cfc1fe8805b7b20bc4e718ed74d4c2f265e69ed3fdc328c0f1da81450bb78bde\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-31T03:47:02Z\\\",\\\"message\\\":\\\"W0131 03:46:51.337568 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0131 03:46:51.338014 1 crypto.go:601] Generating new CA for check-endpoints-signer@1769831211 cert, and key in /tmp/serving-cert-4099090867/serving-signer.crt, /tmp/serving-cert-4099090867/serving-signer.key\\\\nI0131 03:46:51.617433 1 observer_polling.go:159] Starting file observer\\\\nW0131 03:46:51.621205 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0131 03:46:51.621653 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0131 03:46:51.625463 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-4099090867/tls.crt::/tmp/serving-cert-4099090867/tls.key\\\\\\\"\\\\nF0131 03:47:02.085590 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake 
timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T03:46:51Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:47:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e3b2757893ba910c36404f09734ea47eee3ac5a8cfcc4c06ee4ed2ada48937b1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:46:51Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f40eed75b4eac164f95a4e1a719fe3e1da60abf85d533da313ceeb6f4fb07efd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f40eed75b4eac164f95a4e1a719fe3e1da60abf85d533da313ceeb6f4fb07efd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T03:46:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"
startedAt\\\":\\\"2026-01-31T03:46:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T03:46:48Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:47:09Z is after 2025-08-24T17:21:41Z" Jan 31 03:47:09 crc kubenswrapper[4827]: I0131 03:47:09.335921 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:08Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:47:09Z is after 2025-08-24T17:21:41Z" Jan 31 03:47:09 crc kubenswrapper[4827]: I0131 03:47:09.351928 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:08Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:47:09Z is after 2025-08-24T17:21:41Z" Jan 31 03:47:09 crc kubenswrapper[4827]: I0131 03:47:09.367535 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:08Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:47:09Z is after 2025-08-24T17:21:41Z" Jan 31 03:47:09 crc kubenswrapper[4827]: I0131 03:47:09.387670 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:08Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:47:09Z is after 2025-08-24T17:21:41Z" Jan 31 03:47:09 crc kubenswrapper[4827]: I0131 03:47:09.403783 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://65119775a6ab2e8f81a0443d8093e0570c2c49a1b081b7b694780d07eba7ef4d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:47:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0fdf29d063e0940532d696d183771e56ce37b7b75f82548bf7ff5165ab20ccff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:47:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:47:09Z is after 2025-08-24T17:21:41Z" Jan 31 03:47:09 crc kubenswrapper[4827]: I0131 03:47:09.417323 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:08Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:47:09Z is after 2025-08-24T17:21:41Z" Jan 31 03:47:09 crc kubenswrapper[4827]: I0131 03:47:09.433928 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:08Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:47:09Z is after 2025-08-24T17:21:41Z" Jan 31 03:47:09 crc kubenswrapper[4827]: I0131 03:47:09.448371 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4273172d-ec24-4540-85cb-efc58aed3421\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:46:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:46:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:46:48Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:46:48Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:46:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4146d39a1a819389ddaee9e11df66783f6b1b98ee5fb2a244d8ba9ff2ecc88f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:46:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6b86ba80bd25344b29df1636aa3d8cc4cecb0e5f9c0005b4896d3ee86cd80626\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"
restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:46:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8ae08b52cf4a0a6cc4589bbf0f0fbc8527ebac3455b90185928f6fa0b6c52874\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:46:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://125b31c980495c37837eedbbbd38b795fd6e568a429da5c2914799306e7b3a46\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cfc1fe8805b7b20bc4e718ed74d4c2f265e69ed3fdc328c0f1da81450bb78bde\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-31T03:47:02Z\\\",\\\"message\\\":\\\"W0131 03:46:51.337568 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0131 03:46:51.338014 1 crypto.go:601] Generating new CA for check-endpoints-signer@1769831211 cert, and key in /tmp/serving-cert-4099090867/serving-signer.crt, 
/tmp/serving-cert-4099090867/serving-signer.key\\\\nI0131 03:46:51.617433 1 observer_polling.go:159] Starting file observer\\\\nW0131 03:46:51.621205 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0131 03:46:51.621653 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0131 03:46:51.625463 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-4099090867/tls.crt::/tmp/serving-cert-4099090867/tls.key\\\\\\\"\\\\nF0131 03:47:02.085590 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake 
timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T03:46:51Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:47:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e3b2757893ba910c36404f09734ea47eee3ac5a8cfcc4c06ee4ed2ada48937b1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:46:51Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f40eed75b4eac164f95a4e1a719fe3e1da60abf85d533da313ceeb6f4fb07efd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f40eed75b4eac164f95a4e1a719fe3e1da60abf85d533da313ceeb6f4fb07efd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T03:46:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"
startedAt\\\":\\\"2026-01-31T03:46:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T03:46:48Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:47:09Z is after 2025-08-24T17:21:41Z" Jan 31 03:47:09 crc kubenswrapper[4827]: I0131 03:47:09.467637 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:08Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:47:09Z is after 2025-08-24T17:21:41Z" Jan 31 03:47:09 crc kubenswrapper[4827]: I0131 03:47:09.515282 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a0fbdd57c20e0d5bda95b44f22f117b07306ef48d75d372d958559048474af9b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:47:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-31T03:47:09Z is after 2025-08-24T17:21:41Z" Jan 31 03:47:09 crc kubenswrapper[4827]: I0131 03:47:09.658926 4827 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-daemon-jxh94"] Jan 31 03:47:09 crc kubenswrapper[4827]: I0131 03:47:09.659435 4827 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-additional-cni-plugins-gjc5t"] Jan 31 03:47:09 crc kubenswrapper[4827]: I0131 03:47:09.659568 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-jxh94" Jan 31 03:47:09 crc kubenswrapper[4827]: I0131 03:47:09.660275 4827 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-q9q8q"] Jan 31 03:47:09 crc kubenswrapper[4827]: I0131 03:47:09.660407 4827 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-hj2zw"] Jan 31 03:47:09 crc kubenswrapper[4827]: I0131 03:47:09.663485 4827 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns/node-resolver-w7v8l"] Jan 31 03:47:09 crc kubenswrapper[4827]: I0131 03:47:09.663803 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-q9q8q" Jan 31 03:47:09 crc kubenswrapper[4827]: I0131 03:47:09.664201 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-hj2zw" Jan 31 03:47:09 crc kubenswrapper[4827]: I0131 03:47:09.664502 4827 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns/node-resolver-w7v8l" Jan 31 03:47:09 crc kubenswrapper[4827]: I0131 03:47:09.666452 4827 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt" Jan 31 03:47:09 crc kubenswrapper[4827]: I0131 03:47:09.666773 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-r5tcq" Jan 31 03:47:09 crc kubenswrapper[4827]: I0131 03:47:09.666801 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-gjc5t" Jan 31 03:47:09 crc kubenswrapper[4827]: W0131 03:47:09.667092 4827 reflector.go:561] object-"openshift-dns"/"openshift-service-ca.crt": failed to list *v1.ConfigMap: configmaps "openshift-service-ca.crt" is forbidden: User "system:node:crc" cannot list resource "configmaps" in API group "" in the namespace "openshift-dns": no relationship found between node 'crc' and this object Jan 31 03:47:09 crc kubenswrapper[4827]: W0131 03:47:09.667124 4827 reflector.go:561] object-"openshift-ovn-kubernetes"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:crc" cannot list resource "configmaps" in API group "" in the namespace "openshift-ovn-kubernetes": no relationship found between node 'crc' and this object Jan 31 03:47:09 crc kubenswrapper[4827]: E0131 03:47:09.667151 4827 reflector.go:158] "Unhandled Error" err="object-\"openshift-ovn-kubernetes\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:crc\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openshift-ovn-kubernetes\": no relationship found between node 'crc' and this object" logger="UnhandledError" Jan 31 03:47:09 crc kubenswrapper[4827]: E0131 03:47:09.667124 4827 
reflector.go:158] "Unhandled Error" err="object-\"openshift-dns\"/\"openshift-service-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"openshift-service-ca.crt\" is forbidden: User \"system:node:crc\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openshift-dns\": no relationship found between node 'crc' and this object" logger="UnhandledError" Jan 31 03:47:09 crc kubenswrapper[4827]: I0131 03:47:09.667167 4827 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt" Jan 31 03:47:09 crc kubenswrapper[4827]: I0131 03:47:09.667196 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls" Jan 31 03:47:09 crc kubenswrapper[4827]: I0131 03:47:09.668546 4827 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources" Jan 31 03:47:09 crc kubenswrapper[4827]: I0131 03:47:09.672288 4827 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy" Jan 31 03:47:09 crc kubenswrapper[4827]: I0131 03:47:09.672354 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"default-dockercfg-2q5b6" Jan 31 03:47:09 crc kubenswrapper[4827]: I0131 03:47:09.672440 4827 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config" Jan 31 03:47:09 crc kubenswrapper[4827]: I0131 03:47:09.672529 4827 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt" Jan 31 03:47:09 crc kubenswrapper[4827]: W0131 03:47:09.672552 4827 reflector.go:561] object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert": failed to list *v1.Secret: secrets "ovn-node-metrics-cert" is forbidden: User "system:node:crc" cannot list resource "secrets" in API group "" in the namespace "openshift-ovn-kubernetes": no 
relationship found between node 'crc' and this object Jan 31 03:47:09 crc kubenswrapper[4827]: E0131 03:47:09.672585 4827 reflector.go:158] "Unhandled Error" err="object-\"openshift-ovn-kubernetes\"/\"ovn-node-metrics-cert\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"ovn-node-metrics-cert\" is forbidden: User \"system:node:crc\" cannot list resource \"secrets\" in API group \"\" in the namespace \"openshift-ovn-kubernetes\": no relationship found between node 'crc' and this object" logger="UnhandledError" Jan 31 03:47:09 crc kubenswrapper[4827]: I0131 03:47:09.672599 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-node-dockercfg-pwtwl" Jan 31 03:47:09 crc kubenswrapper[4827]: I0131 03:47:09.672744 4827 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides" Jan 31 03:47:09 crc kubenswrapper[4827]: I0131 03:47:09.672755 4827 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config" Jan 31 03:47:09 crc kubenswrapper[4827]: I0131 03:47:09.672989 4827 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt" Jan 31 03:47:09 crc kubenswrapper[4827]: I0131 03:47:09.674918 4827 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib" Jan 31 03:47:09 crc kubenswrapper[4827]: I0131 03:47:09.674981 4827 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt" Jan 31 03:47:09 crc kubenswrapper[4827]: I0131 03:47:09.675149 4827 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist" Jan 31 03:47:09 crc kubenswrapper[4827]: I0131 03:47:09.675188 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"node-resolver-dockercfg-kz9s7" Jan 31 03:47:09 
crc kubenswrapper[4827]: I0131 03:47:09.675393 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ancillary-tools-dockercfg-vnmsz" Jan 31 03:47:09 crc kubenswrapper[4827]: I0131 03:47:09.675410 4827 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt" Jan 31 03:47:09 crc kubenswrapper[4827]: I0131 03:47:09.688402 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4273172d-ec24-4540-85cb-efc58aed3421\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:46:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:46:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:46:48Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:46:48Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:46:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4146d39a1a819389ddaee9e11df66783f6b1b98ee5fb2a244d8ba9ff2ecc88f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:46:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6b86ba80bd25344b29df1636aa3d8cc4cecb0e5f9c0005b4896d3ee86cd80626\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:46:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://8ae08b52cf4a0a6cc4589bbf0f0fbc8527ebac3455b90185928f6fa0b6c52874\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:46:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://125b31c980495c37837eedbbbd38b795fd6e568a429da5c2914799306e7b3a46\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cfc1fe8805b7b20bc4e718ed74d4c2f265e69ed3fdc328c0f1da81450bb78bde\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-31T03:47:02Z\\\",\\\"message\\\":\\\"W0131 03:46:51.337568 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0131 03:46:51.338014 1 crypto.go:601] Generating new CA for check-endpoints-signer@1769831211 cert, and key in /tmp/serving-cert-4099090867/serving-signer.crt, /tmp/serving-cert-4099090867/serving-signer.key\\\\nI0131 03:46:51.617433 1 observer_polling.go:159] Starting file observer\\\\nW0131 03:46:51.621205 1 builder.go:272] unable to get owner reference (falling back to namespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0131 03:46:51.621653 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0131 03:46:51.625463 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-4099090867/tls.crt::/tmp/serving-cert-4099090867/tls.key\\\\\\\"\\\\nF0131 03:47:02.085590 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T03:46:51Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:47:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e3b2757893ba910c36404f09734ea47eee3ac5a8cfcc4c06ee4ed2ada48937b1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:46:51Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.
11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f40eed75b4eac164f95a4e1a719fe3e1da60abf85d533da313ceeb6f4fb07efd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f40eed75b4eac164f95a4e1a719fe3e1da60abf85d533da313ceeb6f4fb07efd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T03:46:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T03:46:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T03:46:48Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:47:09Z is after 2025-08-24T17:21:41Z" Jan 31 03:47:09 crc kubenswrapper[4827]: I0131 03:47:09.704235 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:08Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:47:09Z is after 2025-08-24T17:21:41Z" Jan 31 03:47:09 crc kubenswrapper[4827]: I0131 03:47:09.719597 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a0fbdd57c20e0d5bda95b44f22f117b07306ef48d75d372d958559048474af9b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:47:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-31T03:47:09Z is after 2025-08-24T17:21:41Z" Jan 31 03:47:09 crc kubenswrapper[4827]: I0131 03:47:09.726899 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 31 03:47:09 crc kubenswrapper[4827]: I0131 03:47:09.727001 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/b5dbff7a-4ed0-4c17-bd01-1888199225b3-system-cni-dir\") pod \"multus-additional-cni-plugins-gjc5t\" (UID: \"b5dbff7a-4ed0-4c17-bd01-1888199225b3\") " pod="openshift-multus/multus-additional-cni-plugins-gjc5t" Jan 31 03:47:09 crc kubenswrapper[4827]: I0131 03:47:09.727030 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/da9e7773-a24b-4e8d-b479-97e2594db0d4-run-openvswitch\") pod \"ovnkube-node-hj2zw\" (UID: \"da9e7773-a24b-4e8d-b479-97e2594db0d4\") " pod="openshift-ovn-kubernetes/ovnkube-node-hj2zw" Jan 31 03:47:09 crc kubenswrapper[4827]: I0131 03:47:09.727048 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/da9e7773-a24b-4e8d-b479-97e2594db0d4-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-hj2zw\" (UID: \"da9e7773-a24b-4e8d-b479-97e2594db0d4\") " pod="openshift-ovn-kubernetes/ovnkube-node-hj2zw" Jan 31 03:47:09 crc kubenswrapper[4827]: E0131 03:47:09.727088 4827 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-31 03:47:11.727045907 +0000 UTC m=+24.414126346 (durationBeforeRetry 2s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 03:47:09 crc kubenswrapper[4827]: I0131 03:47:09.727130 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/a696063c-4553-4032-8038-9900f09d4031-system-cni-dir\") pod \"multus-q9q8q\" (UID: \"a696063c-4553-4032-8038-9900f09d4031\") " pod="openshift-multus/multus-q9q8q" Jan 31 03:47:09 crc kubenswrapper[4827]: I0131 03:47:09.727159 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/a696063c-4553-4032-8038-9900f09d4031-cnibin\") pod \"multus-q9q8q\" (UID: \"a696063c-4553-4032-8038-9900f09d4031\") " pod="openshift-multus/multus-q9q8q" Jan 31 03:47:09 crc kubenswrapper[4827]: I0131 03:47:09.727177 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/a696063c-4553-4032-8038-9900f09d4031-host-var-lib-cni-bin\") pod \"multus-q9q8q\" (UID: \"a696063c-4553-4032-8038-9900f09d4031\") " pod="openshift-multus/multus-q9q8q" Jan 31 03:47:09 crc kubenswrapper[4827]: I0131 03:47:09.727195 4827 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/da9e7773-a24b-4e8d-b479-97e2594db0d4-host-run-ovn-kubernetes\") pod \"ovnkube-node-hj2zw\" (UID: \"da9e7773-a24b-4e8d-b479-97e2594db0d4\") " pod="openshift-ovn-kubernetes/ovnkube-node-hj2zw" Jan 31 03:47:09 crc kubenswrapper[4827]: I0131 03:47:09.727212 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/a696063c-4553-4032-8038-9900f09d4031-multus-conf-dir\") pod \"multus-q9q8q\" (UID: \"a696063c-4553-4032-8038-9900f09d4031\") " pod="openshift-multus/multus-q9q8q" Jan 31 03:47:09 crc kubenswrapper[4827]: I0131 03:47:09.727227 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ckwx4\" (UniqueName: \"kubernetes.io/projected/a696063c-4553-4032-8038-9900f09d4031-kube-api-access-ckwx4\") pod \"multus-q9q8q\" (UID: \"a696063c-4553-4032-8038-9900f09d4031\") " pod="openshift-multus/multus-q9q8q" Jan 31 03:47:09 crc kubenswrapper[4827]: I0131 03:47:09.727243 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/b5dbff7a-4ed0-4c17-bd01-1888199225b3-tuning-conf-dir\") pod \"multus-additional-cni-plugins-gjc5t\" (UID: \"b5dbff7a-4ed0-4c17-bd01-1888199225b3\") " pod="openshift-multus/multus-additional-cni-plugins-gjc5t" Jan 31 03:47:09 crc kubenswrapper[4827]: I0131 03:47:09.727257 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4ch9g\" (UniqueName: \"kubernetes.io/projected/c10db775-d306-4f15-97dd-b1dfed7c89e5-kube-api-access-4ch9g\") pod \"node-resolver-w7v8l\" (UID: \"c10db775-d306-4f15-97dd-b1dfed7c89e5\") " pod="openshift-dns/node-resolver-w7v8l" Jan 31 03:47:09 crc kubenswrapper[4827]: I0131 
03:47:09.727280 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/a696063c-4553-4032-8038-9900f09d4031-os-release\") pod \"multus-q9q8q\" (UID: \"a696063c-4553-4032-8038-9900f09d4031\") " pod="openshift-multus/multus-q9q8q" Jan 31 03:47:09 crc kubenswrapper[4827]: I0131 03:47:09.727302 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/a696063c-4553-4032-8038-9900f09d4031-etc-kubernetes\") pod \"multus-q9q8q\" (UID: \"a696063c-4553-4032-8038-9900f09d4031\") " pod="openshift-multus/multus-q9q8q" Jan 31 03:47:09 crc kubenswrapper[4827]: I0131 03:47:09.727324 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/da9e7773-a24b-4e8d-b479-97e2594db0d4-host-slash\") pod \"ovnkube-node-hj2zw\" (UID: \"da9e7773-a24b-4e8d-b479-97e2594db0d4\") " pod="openshift-ovn-kubernetes/ovnkube-node-hj2zw" Jan 31 03:47:09 crc kubenswrapper[4827]: I0131 03:47:09.727365 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 03:47:09 crc kubenswrapper[4827]: I0131 03:47:09.727387 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/b5dbff7a-4ed0-4c17-bd01-1888199225b3-os-release\") pod \"multus-additional-cni-plugins-gjc5t\" (UID: \"b5dbff7a-4ed0-4c17-bd01-1888199225b3\") " pod="openshift-multus/multus-additional-cni-plugins-gjc5t" Jan 31 
03:47:09 crc kubenswrapper[4827]: I0131 03:47:09.727403 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/e63dbb73-e1a2-4796-83c5-2a88e55566b5-rootfs\") pod \"machine-config-daemon-jxh94\" (UID: \"e63dbb73-e1a2-4796-83c5-2a88e55566b5\") " pod="openshift-machine-config-operator/machine-config-daemon-jxh94" Jan 31 03:47:09 crc kubenswrapper[4827]: I0131 03:47:09.727480 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 03:47:09 crc kubenswrapper[4827]: I0131 03:47:09.727502 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 31 03:47:09 crc kubenswrapper[4827]: E0131 03:47:09.727509 4827 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 31 03:47:09 crc kubenswrapper[4827]: I0131 03:47:09.727520 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/c10db775-d306-4f15-97dd-b1dfed7c89e5-hosts-file\") pod \"node-resolver-w7v8l\" (UID: \"c10db775-d306-4f15-97dd-b1dfed7c89e5\") " pod="openshift-dns/node-resolver-w7v8l" Jan 31 03:47:09 crc kubenswrapper[4827]: I0131 03:47:09.727541 4827 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/a696063c-4553-4032-8038-9900f09d4031-cni-binary-copy\") pod \"multus-q9q8q\" (UID: \"a696063c-4553-4032-8038-9900f09d4031\") " pod="openshift-multus/multus-q9q8q" Jan 31 03:47:09 crc kubenswrapper[4827]: E0131 03:47:09.727546 4827 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 31 03:47:09 crc kubenswrapper[4827]: E0131 03:47:09.727557 4827 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-31 03:47:11.727542822 +0000 UTC m=+24.414623271 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 31 03:47:09 crc kubenswrapper[4827]: I0131 03:47:09.727636 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/a696063c-4553-4032-8038-9900f09d4031-host-var-lib-cni-multus\") pod \"multus-q9q8q\" (UID: \"a696063c-4553-4032-8038-9900f09d4031\") " pod="openshift-multus/multus-q9q8q" Jan 31 03:47:09 crc kubenswrapper[4827]: I0131 03:47:09.727657 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/a696063c-4553-4032-8038-9900f09d4031-host-var-lib-kubelet\") pod \"multus-q9q8q\" (UID: 
\"a696063c-4553-4032-8038-9900f09d4031\") " pod="openshift-multus/multus-q9q8q" Jan 31 03:47:09 crc kubenswrapper[4827]: E0131 03:47:09.727670 4827 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 31 03:47:09 crc kubenswrapper[4827]: E0131 03:47:09.727724 4827 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 31 03:47:09 crc kubenswrapper[4827]: E0131 03:47:09.727752 4827 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 31 03:47:09 crc kubenswrapper[4827]: E0131 03:47:09.727694 4827 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-31 03:47:11.727677736 +0000 UTC m=+24.414758185 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 31 03:47:09 crc kubenswrapper[4827]: E0131 03:47:09.727786 4827 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. 
No retries permitted until 2026-01-31 03:47:11.727779109 +0000 UTC m=+24.414859558 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 31 03:47:09 crc kubenswrapper[4827]: I0131 03:47:09.727789 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/da9e7773-a24b-4e8d-b479-97e2594db0d4-run-ovn\") pod \"ovnkube-node-hj2zw\" (UID: \"da9e7773-a24b-4e8d-b479-97e2594db0d4\") " pod="openshift-ovn-kubernetes/ovnkube-node-hj2zw" Jan 31 03:47:09 crc kubenswrapper[4827]: I0131 03:47:09.727815 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/da9e7773-a24b-4e8d-b479-97e2594db0d4-host-cni-bin\") pod \"ovnkube-node-hj2zw\" (UID: \"da9e7773-a24b-4e8d-b479-97e2594db0d4\") " pod="openshift-ovn-kubernetes/ovnkube-node-hj2zw" Jan 31 03:47:09 crc kubenswrapper[4827]: I0131 03:47:09.727837 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/da9e7773-a24b-4e8d-b479-97e2594db0d4-systemd-units\") pod \"ovnkube-node-hj2zw\" (UID: \"da9e7773-a24b-4e8d-b479-97e2594db0d4\") " pod="openshift-ovn-kubernetes/ovnkube-node-hj2zw" Jan 31 03:47:09 crc kubenswrapper[4827]: I0131 03:47:09.727854 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/da9e7773-a24b-4e8d-b479-97e2594db0d4-run-systemd\") pod 
\"ovnkube-node-hj2zw\" (UID: \"da9e7773-a24b-4e8d-b479-97e2594db0d4\") " pod="openshift-ovn-kubernetes/ovnkube-node-hj2zw" Jan 31 03:47:09 crc kubenswrapper[4827]: I0131 03:47:09.727873 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/da9e7773-a24b-4e8d-b479-97e2594db0d4-etc-openvswitch\") pod \"ovnkube-node-hj2zw\" (UID: \"da9e7773-a24b-4e8d-b479-97e2594db0d4\") " pod="openshift-ovn-kubernetes/ovnkube-node-hj2zw" Jan 31 03:47:09 crc kubenswrapper[4827]: I0131 03:47:09.727918 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/da9e7773-a24b-4e8d-b479-97e2594db0d4-var-lib-openvswitch\") pod \"ovnkube-node-hj2zw\" (UID: \"da9e7773-a24b-4e8d-b479-97e2594db0d4\") " pod="openshift-ovn-kubernetes/ovnkube-node-hj2zw" Jan 31 03:47:09 crc kubenswrapper[4827]: I0131 03:47:09.727937 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/da9e7773-a24b-4e8d-b479-97e2594db0d4-log-socket\") pod \"ovnkube-node-hj2zw\" (UID: \"da9e7773-a24b-4e8d-b479-97e2594db0d4\") " pod="openshift-ovn-kubernetes/ovnkube-node-hj2zw" Jan 31 03:47:09 crc kubenswrapper[4827]: I0131 03:47:09.727952 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/da9e7773-a24b-4e8d-b479-97e2594db0d4-ovnkube-config\") pod \"ovnkube-node-hj2zw\" (UID: \"da9e7773-a24b-4e8d-b479-97e2594db0d4\") " pod="openshift-ovn-kubernetes/ovnkube-node-hj2zw" Jan 31 03:47:09 crc kubenswrapper[4827]: I0131 03:47:09.727965 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: 
\"kubernetes.io/configmap/da9e7773-a24b-4e8d-b479-97e2594db0d4-env-overrides\") pod \"ovnkube-node-hj2zw\" (UID: \"da9e7773-a24b-4e8d-b479-97e2594db0d4\") " pod="openshift-ovn-kubernetes/ovnkube-node-hj2zw" Jan 31 03:47:09 crc kubenswrapper[4827]: I0131 03:47:09.728028 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/b5dbff7a-4ed0-4c17-bd01-1888199225b3-cnibin\") pod \"multus-additional-cni-plugins-gjc5t\" (UID: \"b5dbff7a-4ed0-4c17-bd01-1888199225b3\") " pod="openshift-multus/multus-additional-cni-plugins-gjc5t" Jan 31 03:47:09 crc kubenswrapper[4827]: I0131 03:47:09.728056 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/a696063c-4553-4032-8038-9900f09d4031-host-run-multus-certs\") pod \"multus-q9q8q\" (UID: \"a696063c-4553-4032-8038-9900f09d4031\") " pod="openshift-multus/multus-q9q8q" Jan 31 03:47:09 crc kubenswrapper[4827]: I0131 03:47:09.728852 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/da9e7773-a24b-4e8d-b479-97e2594db0d4-host-cni-netd\") pod \"ovnkube-node-hj2zw\" (UID: \"da9e7773-a24b-4e8d-b479-97e2594db0d4\") " pod="openshift-ovn-kubernetes/ovnkube-node-hj2zw" Jan 31 03:47:09 crc kubenswrapper[4827]: I0131 03:47:09.728899 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/da9e7773-a24b-4e8d-b479-97e2594db0d4-ovnkube-script-lib\") pod \"ovnkube-node-hj2zw\" (UID: \"da9e7773-a24b-4e8d-b479-97e2594db0d4\") " pod="openshift-ovn-kubernetes/ovnkube-node-hj2zw" Jan 31 03:47:09 crc kubenswrapper[4827]: I0131 03:47:09.728925 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-fs4bx\" (UniqueName: \"kubernetes.io/projected/b5dbff7a-4ed0-4c17-bd01-1888199225b3-kube-api-access-fs4bx\") pod \"multus-additional-cni-plugins-gjc5t\" (UID: \"b5dbff7a-4ed0-4c17-bd01-1888199225b3\") " pod="openshift-multus/multus-additional-cni-plugins-gjc5t" Jan 31 03:47:09 crc kubenswrapper[4827]: I0131 03:47:09.728944 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/a696063c-4553-4032-8038-9900f09d4031-multus-socket-dir-parent\") pod \"multus-q9q8q\" (UID: \"a696063c-4553-4032-8038-9900f09d4031\") " pod="openshift-multus/multus-q9q8q" Jan 31 03:47:09 crc kubenswrapper[4827]: I0131 03:47:09.728961 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/a696063c-4553-4032-8038-9900f09d4031-host-run-k8s-cni-cncf-io\") pod \"multus-q9q8q\" (UID: \"a696063c-4553-4032-8038-9900f09d4031\") " pod="openshift-multus/multus-q9q8q" Jan 31 03:47:09 crc kubenswrapper[4827]: I0131 03:47:09.728977 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mt4z5\" (UniqueName: \"kubernetes.io/projected/da9e7773-a24b-4e8d-b479-97e2594db0d4-kube-api-access-mt4z5\") pod \"ovnkube-node-hj2zw\" (UID: \"da9e7773-a24b-4e8d-b479-97e2594db0d4\") " pod="openshift-ovn-kubernetes/ovnkube-node-hj2zw" Jan 31 03:47:09 crc kubenswrapper[4827]: I0131 03:47:09.728994 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/e63dbb73-e1a2-4796-83c5-2a88e55566b5-proxy-tls\") pod \"machine-config-daemon-jxh94\" (UID: \"e63dbb73-e1a2-4796-83c5-2a88e55566b5\") " pod="openshift-machine-config-operator/machine-config-daemon-jxh94" Jan 31 03:47:09 crc kubenswrapper[4827]: I0131 03:47:09.729010 4827 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/a696063c-4553-4032-8038-9900f09d4031-hostroot\") pod \"multus-q9q8q\" (UID: \"a696063c-4553-4032-8038-9900f09d4031\") " pod="openshift-multus/multus-q9q8q" Jan 31 03:47:09 crc kubenswrapper[4827]: I0131 03:47:09.729025 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/da9e7773-a24b-4e8d-b479-97e2594db0d4-host-run-netns\") pod \"ovnkube-node-hj2zw\" (UID: \"da9e7773-a24b-4e8d-b479-97e2594db0d4\") " pod="openshift-ovn-kubernetes/ovnkube-node-hj2zw" Jan 31 03:47:09 crc kubenswrapper[4827]: I0131 03:47:09.729040 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/da9e7773-a24b-4e8d-b479-97e2594db0d4-node-log\") pod \"ovnkube-node-hj2zw\" (UID: \"da9e7773-a24b-4e8d-b479-97e2594db0d4\") " pod="openshift-ovn-kubernetes/ovnkube-node-hj2zw" Jan 31 03:47:09 crc kubenswrapper[4827]: I0131 03:47:09.729056 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-94gkn\" (UniqueName: \"kubernetes.io/projected/e63dbb73-e1a2-4796-83c5-2a88e55566b5-kube-api-access-94gkn\") pod \"machine-config-daemon-jxh94\" (UID: \"e63dbb73-e1a2-4796-83c5-2a88e55566b5\") " pod="openshift-machine-config-operator/machine-config-daemon-jxh94" Jan 31 03:47:09 crc kubenswrapper[4827]: I0131 03:47:09.729074 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/b5dbff7a-4ed0-4c17-bd01-1888199225b3-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-gjc5t\" (UID: \"b5dbff7a-4ed0-4c17-bd01-1888199225b3\") " pod="openshift-multus/multus-additional-cni-plugins-gjc5t" Jan 31 03:47:09 
crc kubenswrapper[4827]: I0131 03:47:09.729091 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/a696063c-4553-4032-8038-9900f09d4031-multus-daemon-config\") pod \"multus-q9q8q\" (UID: \"a696063c-4553-4032-8038-9900f09d4031\") " pod="openshift-multus/multus-q9q8q" Jan 31 03:47:09 crc kubenswrapper[4827]: I0131 03:47:09.729106 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/da9e7773-a24b-4e8d-b479-97e2594db0d4-ovn-node-metrics-cert\") pod \"ovnkube-node-hj2zw\" (UID: \"da9e7773-a24b-4e8d-b479-97e2594db0d4\") " pod="openshift-ovn-kubernetes/ovnkube-node-hj2zw" Jan 31 03:47:09 crc kubenswrapper[4827]: I0131 03:47:09.729131 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 31 03:47:09 crc kubenswrapper[4827]: I0131 03:47:09.729149 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/b5dbff7a-4ed0-4c17-bd01-1888199225b3-cni-binary-copy\") pod \"multus-additional-cni-plugins-gjc5t\" (UID: \"b5dbff7a-4ed0-4c17-bd01-1888199225b3\") " pod="openshift-multus/multus-additional-cni-plugins-gjc5t" Jan 31 03:47:09 crc kubenswrapper[4827]: I0131 03:47:09.729163 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/a696063c-4553-4032-8038-9900f09d4031-multus-cni-dir\") pod \"multus-q9q8q\" (UID: \"a696063c-4553-4032-8038-9900f09d4031\") " 
pod="openshift-multus/multus-q9q8q" Jan 31 03:47:09 crc kubenswrapper[4827]: I0131 03:47:09.729177 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/a696063c-4553-4032-8038-9900f09d4031-host-run-netns\") pod \"multus-q9q8q\" (UID: \"a696063c-4553-4032-8038-9900f09d4031\") " pod="openshift-multus/multus-q9q8q" Jan 31 03:47:09 crc kubenswrapper[4827]: I0131 03:47:09.729192 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/da9e7773-a24b-4e8d-b479-97e2594db0d4-host-kubelet\") pod \"ovnkube-node-hj2zw\" (UID: \"da9e7773-a24b-4e8d-b479-97e2594db0d4\") " pod="openshift-ovn-kubernetes/ovnkube-node-hj2zw" Jan 31 03:47:09 crc kubenswrapper[4827]: I0131 03:47:09.729207 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/e63dbb73-e1a2-4796-83c5-2a88e55566b5-mcd-auth-proxy-config\") pod \"machine-config-daemon-jxh94\" (UID: \"e63dbb73-e1a2-4796-83c5-2a88e55566b5\") " pod="openshift-machine-config-operator/machine-config-daemon-jxh94" Jan 31 03:47:09 crc kubenswrapper[4827]: E0131 03:47:09.729403 4827 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 31 03:47:09 crc kubenswrapper[4827]: E0131 03:47:09.729415 4827 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 31 03:47:09 crc kubenswrapper[4827]: E0131 03:47:09.729423 4827 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object 
"openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 31 03:47:09 crc kubenswrapper[4827]: E0131 03:47:09.729449 4827 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-31 03:47:11.729440969 +0000 UTC m=+24.416521418 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 31 03:47:09 crc kubenswrapper[4827]: I0131 03:47:09.735346 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:08Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:47:09Z is after 2025-08-24T17:21:41Z" Jan 31 03:47:09 crc kubenswrapper[4827]: I0131 03:47:09.748020 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-jxh94" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e63dbb73-e1a2-4796-83c5-2a88e55566b5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:09Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:09Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:09Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-94gkn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-94gkn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T03:47:09Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-jxh94\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:47:09Z is after 2025-08-24T17:21:41Z" Jan 31 03:47:09 crc kubenswrapper[4827]: I0131 03:47:09.762988 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:08Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:47:09Z is after 2025-08-24T17:21:41Z" Jan 31 03:47:09 crc kubenswrapper[4827]: I0131 03:47:09.775828 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:08Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:47:09Z is after 2025-08-24T17:21:41Z" Jan 31 03:47:09 crc kubenswrapper[4827]: I0131 03:47:09.779782 4827 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate expiration is 2027-01-31 03:42:08 +0000 UTC, rotation deadline is 2026-11-22 03:42:44.5687241 +0000 UTC Jan 31 03:47:09 crc kubenswrapper[4827]: I0131 03:47:09.779810 4827 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Waiting 7079h55m34.788916956s for next certificate rotation Jan 31 03:47:09 crc kubenswrapper[4827]: I0131 03:47:09.789234 4827 status_manager.go:875] "Failed to update 
status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://65119775a6ab2e8f81a0443d8093e0570c2c49a1b081b7b694780d07eba7ef4d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:47:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0fdf29d063e0940532d696d183771e56ce37b7b75f82548bf7ff5165ab20ccff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imag
eID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:47:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:47:09Z is after 2025-08-24T17:21:41Z" Jan 31 03:47:09 crc kubenswrapper[4827]: I0131 03:47:09.807776 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:08Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:47:09Z is after 2025-08-24T17:21:41Z" Jan 31 03:47:09 crc kubenswrapper[4827]: I0131 03:47:09.825298 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a0fbdd57c20e0d5bda95b44f22f117b07306ef48d75d372d958559048474af9b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:47:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-31T03:47:09Z is after 2025-08-24T17:21:41Z" Jan 31 03:47:09 crc kubenswrapper[4827]: I0131 03:47:09.829545 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-94gkn\" (UniqueName: \"kubernetes.io/projected/e63dbb73-e1a2-4796-83c5-2a88e55566b5-kube-api-access-94gkn\") pod \"machine-config-daemon-jxh94\" (UID: \"e63dbb73-e1a2-4796-83c5-2a88e55566b5\") " pod="openshift-machine-config-operator/machine-config-daemon-jxh94" Jan 31 03:47:09 crc kubenswrapper[4827]: I0131 03:47:09.829588 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/a696063c-4553-4032-8038-9900f09d4031-hostroot\") pod \"multus-q9q8q\" (UID: \"a696063c-4553-4032-8038-9900f09d4031\") " pod="openshift-multus/multus-q9q8q" Jan 31 03:47:09 crc kubenswrapper[4827]: I0131 03:47:09.829609 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/da9e7773-a24b-4e8d-b479-97e2594db0d4-host-run-netns\") pod \"ovnkube-node-hj2zw\" (UID: \"da9e7773-a24b-4e8d-b479-97e2594db0d4\") " pod="openshift-ovn-kubernetes/ovnkube-node-hj2zw" Jan 31 03:47:09 crc kubenswrapper[4827]: I0131 03:47:09.829626 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/da9e7773-a24b-4e8d-b479-97e2594db0d4-node-log\") pod \"ovnkube-node-hj2zw\" (UID: \"da9e7773-a24b-4e8d-b479-97e2594db0d4\") " pod="openshift-ovn-kubernetes/ovnkube-node-hj2zw" Jan 31 03:47:09 crc kubenswrapper[4827]: I0131 03:47:09.829642 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/da9e7773-a24b-4e8d-b479-97e2594db0d4-ovn-node-metrics-cert\") pod \"ovnkube-node-hj2zw\" (UID: \"da9e7773-a24b-4e8d-b479-97e2594db0d4\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-hj2zw" Jan 31 03:47:09 crc kubenswrapper[4827]: I0131 03:47:09.829660 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/b5dbff7a-4ed0-4c17-bd01-1888199225b3-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-gjc5t\" (UID: \"b5dbff7a-4ed0-4c17-bd01-1888199225b3\") " pod="openshift-multus/multus-additional-cni-plugins-gjc5t" Jan 31 03:47:09 crc kubenswrapper[4827]: I0131 03:47:09.829677 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/a696063c-4553-4032-8038-9900f09d4031-multus-daemon-config\") pod \"multus-q9q8q\" (UID: \"a696063c-4553-4032-8038-9900f09d4031\") " pod="openshift-multus/multus-q9q8q" Jan 31 03:47:09 crc kubenswrapper[4827]: I0131 03:47:09.829713 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/a696063c-4553-4032-8038-9900f09d4031-host-run-netns\") pod \"multus-q9q8q\" (UID: \"a696063c-4553-4032-8038-9900f09d4031\") " pod="openshift-multus/multus-q9q8q" Jan 31 03:47:09 crc kubenswrapper[4827]: I0131 03:47:09.829731 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/da9e7773-a24b-4e8d-b479-97e2594db0d4-host-kubelet\") pod \"ovnkube-node-hj2zw\" (UID: \"da9e7773-a24b-4e8d-b479-97e2594db0d4\") " pod="openshift-ovn-kubernetes/ovnkube-node-hj2zw" Jan 31 03:47:09 crc kubenswrapper[4827]: I0131 03:47:09.829746 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/e63dbb73-e1a2-4796-83c5-2a88e55566b5-mcd-auth-proxy-config\") pod \"machine-config-daemon-jxh94\" (UID: \"e63dbb73-e1a2-4796-83c5-2a88e55566b5\") " 
pod="openshift-machine-config-operator/machine-config-daemon-jxh94" Jan 31 03:47:09 crc kubenswrapper[4827]: I0131 03:47:09.829769 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/b5dbff7a-4ed0-4c17-bd01-1888199225b3-cni-binary-copy\") pod \"multus-additional-cni-plugins-gjc5t\" (UID: \"b5dbff7a-4ed0-4c17-bd01-1888199225b3\") " pod="openshift-multus/multus-additional-cni-plugins-gjc5t" Jan 31 03:47:09 crc kubenswrapper[4827]: I0131 03:47:09.829787 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/a696063c-4553-4032-8038-9900f09d4031-multus-cni-dir\") pod \"multus-q9q8q\" (UID: \"a696063c-4553-4032-8038-9900f09d4031\") " pod="openshift-multus/multus-q9q8q" Jan 31 03:47:09 crc kubenswrapper[4827]: I0131 03:47:09.829803 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/b5dbff7a-4ed0-4c17-bd01-1888199225b3-system-cni-dir\") pod \"multus-additional-cni-plugins-gjc5t\" (UID: \"b5dbff7a-4ed0-4c17-bd01-1888199225b3\") " pod="openshift-multus/multus-additional-cni-plugins-gjc5t" Jan 31 03:47:09 crc kubenswrapper[4827]: I0131 03:47:09.829816 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/da9e7773-a24b-4e8d-b479-97e2594db0d4-run-openvswitch\") pod \"ovnkube-node-hj2zw\" (UID: \"da9e7773-a24b-4e8d-b479-97e2594db0d4\") " pod="openshift-ovn-kubernetes/ovnkube-node-hj2zw" Jan 31 03:47:09 crc kubenswrapper[4827]: I0131 03:47:09.829831 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/da9e7773-a24b-4e8d-b479-97e2594db0d4-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-hj2zw\" (UID: 
\"da9e7773-a24b-4e8d-b479-97e2594db0d4\") " pod="openshift-ovn-kubernetes/ovnkube-node-hj2zw" Jan 31 03:47:09 crc kubenswrapper[4827]: I0131 03:47:09.829850 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/a696063c-4553-4032-8038-9900f09d4031-system-cni-dir\") pod \"multus-q9q8q\" (UID: \"a696063c-4553-4032-8038-9900f09d4031\") " pod="openshift-multus/multus-q9q8q" Jan 31 03:47:09 crc kubenswrapper[4827]: I0131 03:47:09.829864 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/a696063c-4553-4032-8038-9900f09d4031-cnibin\") pod \"multus-q9q8q\" (UID: \"a696063c-4553-4032-8038-9900f09d4031\") " pod="openshift-multus/multus-q9q8q" Jan 31 03:47:09 crc kubenswrapper[4827]: I0131 03:47:09.829896 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/a696063c-4553-4032-8038-9900f09d4031-host-var-lib-cni-bin\") pod \"multus-q9q8q\" (UID: \"a696063c-4553-4032-8038-9900f09d4031\") " pod="openshift-multus/multus-q9q8q" Jan 31 03:47:09 crc kubenswrapper[4827]: I0131 03:47:09.829913 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/da9e7773-a24b-4e8d-b479-97e2594db0d4-host-run-ovn-kubernetes\") pod \"ovnkube-node-hj2zw\" (UID: \"da9e7773-a24b-4e8d-b479-97e2594db0d4\") " pod="openshift-ovn-kubernetes/ovnkube-node-hj2zw" Jan 31 03:47:09 crc kubenswrapper[4827]: I0131 03:47:09.829930 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/a696063c-4553-4032-8038-9900f09d4031-multus-conf-dir\") pod \"multus-q9q8q\" (UID: \"a696063c-4553-4032-8038-9900f09d4031\") " pod="openshift-multus/multus-q9q8q" Jan 31 03:47:09 crc kubenswrapper[4827]: I0131 
03:47:09.829945 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ckwx4\" (UniqueName: \"kubernetes.io/projected/a696063c-4553-4032-8038-9900f09d4031-kube-api-access-ckwx4\") pod \"multus-q9q8q\" (UID: \"a696063c-4553-4032-8038-9900f09d4031\") " pod="openshift-multus/multus-q9q8q" Jan 31 03:47:09 crc kubenswrapper[4827]: I0131 03:47:09.829961 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/a696063c-4553-4032-8038-9900f09d4031-etc-kubernetes\") pod \"multus-q9q8q\" (UID: \"a696063c-4553-4032-8038-9900f09d4031\") " pod="openshift-multus/multus-q9q8q" Jan 31 03:47:09 crc kubenswrapper[4827]: I0131 03:47:09.829976 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/da9e7773-a24b-4e8d-b479-97e2594db0d4-host-slash\") pod \"ovnkube-node-hj2zw\" (UID: \"da9e7773-a24b-4e8d-b479-97e2594db0d4\") " pod="openshift-ovn-kubernetes/ovnkube-node-hj2zw" Jan 31 03:47:09 crc kubenswrapper[4827]: I0131 03:47:09.829994 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/b5dbff7a-4ed0-4c17-bd01-1888199225b3-tuning-conf-dir\") pod \"multus-additional-cni-plugins-gjc5t\" (UID: \"b5dbff7a-4ed0-4c17-bd01-1888199225b3\") " pod="openshift-multus/multus-additional-cni-plugins-gjc5t" Jan 31 03:47:09 crc kubenswrapper[4827]: I0131 03:47:09.830014 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4ch9g\" (UniqueName: \"kubernetes.io/projected/c10db775-d306-4f15-97dd-b1dfed7c89e5-kube-api-access-4ch9g\") pod \"node-resolver-w7v8l\" (UID: \"c10db775-d306-4f15-97dd-b1dfed7c89e5\") " pod="openshift-dns/node-resolver-w7v8l" Jan 31 03:47:09 crc kubenswrapper[4827]: I0131 03:47:09.830031 4827 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/a696063c-4553-4032-8038-9900f09d4031-os-release\") pod \"multus-q9q8q\" (UID: \"a696063c-4553-4032-8038-9900f09d4031\") " pod="openshift-multus/multus-q9q8q" Jan 31 03:47:09 crc kubenswrapper[4827]: I0131 03:47:09.830055 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/b5dbff7a-4ed0-4c17-bd01-1888199225b3-os-release\") pod \"multus-additional-cni-plugins-gjc5t\" (UID: \"b5dbff7a-4ed0-4c17-bd01-1888199225b3\") " pod="openshift-multus/multus-additional-cni-plugins-gjc5t" Jan 31 03:47:09 crc kubenswrapper[4827]: I0131 03:47:09.830070 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/e63dbb73-e1a2-4796-83c5-2a88e55566b5-rootfs\") pod \"machine-config-daemon-jxh94\" (UID: \"e63dbb73-e1a2-4796-83c5-2a88e55566b5\") " pod="openshift-machine-config-operator/machine-config-daemon-jxh94" Jan 31 03:47:09 crc kubenswrapper[4827]: I0131 03:47:09.830093 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/c10db775-d306-4f15-97dd-b1dfed7c89e5-hosts-file\") pod \"node-resolver-w7v8l\" (UID: \"c10db775-d306-4f15-97dd-b1dfed7c89e5\") " pod="openshift-dns/node-resolver-w7v8l" Jan 31 03:47:09 crc kubenswrapper[4827]: I0131 03:47:09.830108 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/a696063c-4553-4032-8038-9900f09d4031-cni-binary-copy\") pod \"multus-q9q8q\" (UID: \"a696063c-4553-4032-8038-9900f09d4031\") " pod="openshift-multus/multus-q9q8q" Jan 31 03:47:09 crc kubenswrapper[4827]: I0131 03:47:09.830127 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: 
\"kubernetes.io/host-path/a696063c-4553-4032-8038-9900f09d4031-host-var-lib-cni-multus\") pod \"multus-q9q8q\" (UID: \"a696063c-4553-4032-8038-9900f09d4031\") " pod="openshift-multus/multus-q9q8q" Jan 31 03:47:09 crc kubenswrapper[4827]: I0131 03:47:09.830178 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/a696063c-4553-4032-8038-9900f09d4031-host-var-lib-kubelet\") pod \"multus-q9q8q\" (UID: \"a696063c-4553-4032-8038-9900f09d4031\") " pod="openshift-multus/multus-q9q8q" Jan 31 03:47:09 crc kubenswrapper[4827]: I0131 03:47:09.830195 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/da9e7773-a24b-4e8d-b479-97e2594db0d4-run-ovn\") pod \"ovnkube-node-hj2zw\" (UID: \"da9e7773-a24b-4e8d-b479-97e2594db0d4\") " pod="openshift-ovn-kubernetes/ovnkube-node-hj2zw" Jan 31 03:47:09 crc kubenswrapper[4827]: I0131 03:47:09.830211 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/da9e7773-a24b-4e8d-b479-97e2594db0d4-host-cni-bin\") pod \"ovnkube-node-hj2zw\" (UID: \"da9e7773-a24b-4e8d-b479-97e2594db0d4\") " pod="openshift-ovn-kubernetes/ovnkube-node-hj2zw" Jan 31 03:47:09 crc kubenswrapper[4827]: I0131 03:47:09.830241 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/da9e7773-a24b-4e8d-b479-97e2594db0d4-systemd-units\") pod \"ovnkube-node-hj2zw\" (UID: \"da9e7773-a24b-4e8d-b479-97e2594db0d4\") " pod="openshift-ovn-kubernetes/ovnkube-node-hj2zw" Jan 31 03:47:09 crc kubenswrapper[4827]: I0131 03:47:09.830256 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/da9e7773-a24b-4e8d-b479-97e2594db0d4-run-systemd\") pod \"ovnkube-node-hj2zw\" (UID: 
\"da9e7773-a24b-4e8d-b479-97e2594db0d4\") " pod="openshift-ovn-kubernetes/ovnkube-node-hj2zw" Jan 31 03:47:09 crc kubenswrapper[4827]: I0131 03:47:09.830272 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/da9e7773-a24b-4e8d-b479-97e2594db0d4-etc-openvswitch\") pod \"ovnkube-node-hj2zw\" (UID: \"da9e7773-a24b-4e8d-b479-97e2594db0d4\") " pod="openshift-ovn-kubernetes/ovnkube-node-hj2zw" Jan 31 03:47:09 crc kubenswrapper[4827]: I0131 03:47:09.830276 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/a696063c-4553-4032-8038-9900f09d4031-system-cni-dir\") pod \"multus-q9q8q\" (UID: \"a696063c-4553-4032-8038-9900f09d4031\") " pod="openshift-multus/multus-q9q8q" Jan 31 03:47:09 crc kubenswrapper[4827]: I0131 03:47:09.830289 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/da9e7773-a24b-4e8d-b479-97e2594db0d4-ovnkube-config\") pod \"ovnkube-node-hj2zw\" (UID: \"da9e7773-a24b-4e8d-b479-97e2594db0d4\") " pod="openshift-ovn-kubernetes/ovnkube-node-hj2zw" Jan 31 03:47:09 crc kubenswrapper[4827]: I0131 03:47:09.830348 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/da9e7773-a24b-4e8d-b479-97e2594db0d4-var-lib-openvswitch\") pod \"ovnkube-node-hj2zw\" (UID: \"da9e7773-a24b-4e8d-b479-97e2594db0d4\") " pod="openshift-ovn-kubernetes/ovnkube-node-hj2zw" Jan 31 03:47:09 crc kubenswrapper[4827]: I0131 03:47:09.830371 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/da9e7773-a24b-4e8d-b479-97e2594db0d4-log-socket\") pod \"ovnkube-node-hj2zw\" (UID: \"da9e7773-a24b-4e8d-b479-97e2594db0d4\") " pod="openshift-ovn-kubernetes/ovnkube-node-hj2zw" Jan 31 
03:47:09 crc kubenswrapper[4827]: I0131 03:47:09.830394 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/da9e7773-a24b-4e8d-b479-97e2594db0d4-env-overrides\") pod \"ovnkube-node-hj2zw\" (UID: \"da9e7773-a24b-4e8d-b479-97e2594db0d4\") " pod="openshift-ovn-kubernetes/ovnkube-node-hj2zw" Jan 31 03:47:09 crc kubenswrapper[4827]: I0131 03:47:09.830416 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/da9e7773-a24b-4e8d-b479-97e2594db0d4-host-cni-netd\") pod \"ovnkube-node-hj2zw\" (UID: \"da9e7773-a24b-4e8d-b479-97e2594db0d4\") " pod="openshift-ovn-kubernetes/ovnkube-node-hj2zw" Jan 31 03:47:09 crc kubenswrapper[4827]: I0131 03:47:09.830440 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/da9e7773-a24b-4e8d-b479-97e2594db0d4-ovnkube-script-lib\") pod \"ovnkube-node-hj2zw\" (UID: \"da9e7773-a24b-4e8d-b479-97e2594db0d4\") " pod="openshift-ovn-kubernetes/ovnkube-node-hj2zw" Jan 31 03:47:09 crc kubenswrapper[4827]: I0131 03:47:09.830464 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/b5dbff7a-4ed0-4c17-bd01-1888199225b3-cnibin\") pod \"multus-additional-cni-plugins-gjc5t\" (UID: \"b5dbff7a-4ed0-4c17-bd01-1888199225b3\") " pod="openshift-multus/multus-additional-cni-plugins-gjc5t" Jan 31 03:47:09 crc kubenswrapper[4827]: I0131 03:47:09.830490 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/a696063c-4553-4032-8038-9900f09d4031-host-run-multus-certs\") pod \"multus-q9q8q\" (UID: \"a696063c-4553-4032-8038-9900f09d4031\") " pod="openshift-multus/multus-q9q8q" Jan 31 03:47:09 crc kubenswrapper[4827]: I0131 03:47:09.830513 4827 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mt4z5\" (UniqueName: \"kubernetes.io/projected/da9e7773-a24b-4e8d-b479-97e2594db0d4-kube-api-access-mt4z5\") pod \"ovnkube-node-hj2zw\" (UID: \"da9e7773-a24b-4e8d-b479-97e2594db0d4\") " pod="openshift-ovn-kubernetes/ovnkube-node-hj2zw" Jan 31 03:47:09 crc kubenswrapper[4827]: I0131 03:47:09.830538 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/e63dbb73-e1a2-4796-83c5-2a88e55566b5-proxy-tls\") pod \"machine-config-daemon-jxh94\" (UID: \"e63dbb73-e1a2-4796-83c5-2a88e55566b5\") " pod="openshift-machine-config-operator/machine-config-daemon-jxh94" Jan 31 03:47:09 crc kubenswrapper[4827]: I0131 03:47:09.830560 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fs4bx\" (UniqueName: \"kubernetes.io/projected/b5dbff7a-4ed0-4c17-bd01-1888199225b3-kube-api-access-fs4bx\") pod \"multus-additional-cni-plugins-gjc5t\" (UID: \"b5dbff7a-4ed0-4c17-bd01-1888199225b3\") " pod="openshift-multus/multus-additional-cni-plugins-gjc5t" Jan 31 03:47:09 crc kubenswrapper[4827]: I0131 03:47:09.830583 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/a696063c-4553-4032-8038-9900f09d4031-multus-socket-dir-parent\") pod \"multus-q9q8q\" (UID: \"a696063c-4553-4032-8038-9900f09d4031\") " pod="openshift-multus/multus-q9q8q" Jan 31 03:47:09 crc kubenswrapper[4827]: I0131 03:47:09.830603 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/a696063c-4553-4032-8038-9900f09d4031-host-run-k8s-cni-cncf-io\") pod \"multus-q9q8q\" (UID: \"a696063c-4553-4032-8038-9900f09d4031\") " pod="openshift-multus/multus-q9q8q" Jan 31 03:47:09 crc kubenswrapper[4827]: I0131 03:47:09.830677 4827 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/a696063c-4553-4032-8038-9900f09d4031-host-run-k8s-cni-cncf-io\") pod \"multus-q9q8q\" (UID: \"a696063c-4553-4032-8038-9900f09d4031\") " pod="openshift-multus/multus-q9q8q" Jan 31 03:47:09 crc kubenswrapper[4827]: I0131 03:47:09.830911 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/da9e7773-a24b-4e8d-b479-97e2594db0d4-ovnkube-config\") pod \"ovnkube-node-hj2zw\" (UID: \"da9e7773-a24b-4e8d-b479-97e2594db0d4\") " pod="openshift-ovn-kubernetes/ovnkube-node-hj2zw" Jan 31 03:47:09 crc kubenswrapper[4827]: I0131 03:47:09.830937 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/a696063c-4553-4032-8038-9900f09d4031-hostroot\") pod \"multus-q9q8q\" (UID: \"a696063c-4553-4032-8038-9900f09d4031\") " pod="openshift-multus/multus-q9q8q" Jan 31 03:47:09 crc kubenswrapper[4827]: I0131 03:47:09.830972 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/a696063c-4553-4032-8038-9900f09d4031-cnibin\") pod \"multus-q9q8q\" (UID: \"a696063c-4553-4032-8038-9900f09d4031\") " pod="openshift-multus/multus-q9q8q" Jan 31 03:47:09 crc kubenswrapper[4827]: I0131 03:47:09.830975 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/da9e7773-a24b-4e8d-b479-97e2594db0d4-host-run-netns\") pod \"ovnkube-node-hj2zw\" (UID: \"da9e7773-a24b-4e8d-b479-97e2594db0d4\") " pod="openshift-ovn-kubernetes/ovnkube-node-hj2zw" Jan 31 03:47:09 crc kubenswrapper[4827]: I0131 03:47:09.831000 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-bin\" (UniqueName: 
\"kubernetes.io/host-path/a696063c-4553-4032-8038-9900f09d4031-host-var-lib-cni-bin\") pod \"multus-q9q8q\" (UID: \"a696063c-4553-4032-8038-9900f09d4031\") " pod="openshift-multus/multus-q9q8q" Jan 31 03:47:09 crc kubenswrapper[4827]: I0131 03:47:09.831009 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/da9e7773-a24b-4e8d-b479-97e2594db0d4-node-log\") pod \"ovnkube-node-hj2zw\" (UID: \"da9e7773-a24b-4e8d-b479-97e2594db0d4\") " pod="openshift-ovn-kubernetes/ovnkube-node-hj2zw" Jan 31 03:47:09 crc kubenswrapper[4827]: I0131 03:47:09.831028 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/da9e7773-a24b-4e8d-b479-97e2594db0d4-host-run-ovn-kubernetes\") pod \"ovnkube-node-hj2zw\" (UID: \"da9e7773-a24b-4e8d-b479-97e2594db0d4\") " pod="openshift-ovn-kubernetes/ovnkube-node-hj2zw" Jan 31 03:47:09 crc kubenswrapper[4827]: I0131 03:47:09.831053 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/a696063c-4553-4032-8038-9900f09d4031-multus-conf-dir\") pod \"multus-q9q8q\" (UID: \"a696063c-4553-4032-8038-9900f09d4031\") " pod="openshift-multus/multus-q9q8q" Jan 31 03:47:09 crc kubenswrapper[4827]: I0131 03:47:09.831266 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/a696063c-4553-4032-8038-9900f09d4031-etc-kubernetes\") pod \"multus-q9q8q\" (UID: \"a696063c-4553-4032-8038-9900f09d4031\") " pod="openshift-multus/multus-q9q8q" Jan 31 03:47:09 crc kubenswrapper[4827]: I0131 03:47:09.831296 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/da9e7773-a24b-4e8d-b479-97e2594db0d4-host-slash\") pod \"ovnkube-node-hj2zw\" (UID: \"da9e7773-a24b-4e8d-b479-97e2594db0d4\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-hj2zw" Jan 31 03:47:09 crc kubenswrapper[4827]: I0131 03:47:09.831403 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/a696063c-4553-4032-8038-9900f09d4031-host-var-lib-cni-multus\") pod \"multus-q9q8q\" (UID: \"a696063c-4553-4032-8038-9900f09d4031\") " pod="openshift-multus/multus-q9q8q" Jan 31 03:47:09 crc kubenswrapper[4827]: I0131 03:47:09.831440 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/e63dbb73-e1a2-4796-83c5-2a88e55566b5-rootfs\") pod \"machine-config-daemon-jxh94\" (UID: \"e63dbb73-e1a2-4796-83c5-2a88e55566b5\") " pod="openshift-machine-config-operator/machine-config-daemon-jxh94" Jan 31 03:47:09 crc kubenswrapper[4827]: I0131 03:47:09.831463 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/da9e7773-a24b-4e8d-b479-97e2594db0d4-host-cni-bin\") pod \"ovnkube-node-hj2zw\" (UID: \"da9e7773-a24b-4e8d-b479-97e2594db0d4\") " pod="openshift-ovn-kubernetes/ovnkube-node-hj2zw" Jan 31 03:47:09 crc kubenswrapper[4827]: I0131 03:47:09.831478 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/c10db775-d306-4f15-97dd-b1dfed7c89e5-hosts-file\") pod \"node-resolver-w7v8l\" (UID: \"c10db775-d306-4f15-97dd-b1dfed7c89e5\") " pod="openshift-dns/node-resolver-w7v8l" Jan 31 03:47:09 crc kubenswrapper[4827]: I0131 03:47:09.831508 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/a696063c-4553-4032-8038-9900f09d4031-host-var-lib-kubelet\") pod \"multus-q9q8q\" (UID: \"a696063c-4553-4032-8038-9900f09d4031\") " pod="openshift-multus/multus-q9q8q" Jan 31 03:47:09 crc kubenswrapper[4827]: I0131 03:47:09.831536 4827 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/da9e7773-a24b-4e8d-b479-97e2594db0d4-run-ovn\") pod \"ovnkube-node-hj2zw\" (UID: \"da9e7773-a24b-4e8d-b479-97e2594db0d4\") " pod="openshift-ovn-kubernetes/ovnkube-node-hj2zw" Jan 31 03:47:09 crc kubenswrapper[4827]: I0131 03:47:09.831557 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/da9e7773-a24b-4e8d-b479-97e2594db0d4-var-lib-openvswitch\") pod \"ovnkube-node-hj2zw\" (UID: \"da9e7773-a24b-4e8d-b479-97e2594db0d4\") " pod="openshift-ovn-kubernetes/ovnkube-node-hj2zw" Jan 31 03:47:09 crc kubenswrapper[4827]: I0131 03:47:09.831569 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/b5dbff7a-4ed0-4c17-bd01-1888199225b3-os-release\") pod \"multus-additional-cni-plugins-gjc5t\" (UID: \"b5dbff7a-4ed0-4c17-bd01-1888199225b3\") " pod="openshift-multus/multus-additional-cni-plugins-gjc5t" Jan 31 03:47:09 crc kubenswrapper[4827]: I0131 03:47:09.831814 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/a696063c-4553-4032-8038-9900f09d4031-os-release\") pod \"multus-q9q8q\" (UID: \"a696063c-4553-4032-8038-9900f09d4031\") " pod="openshift-multus/multus-q9q8q" Jan 31 03:47:09 crc kubenswrapper[4827]: I0131 03:47:09.831826 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/b5dbff7a-4ed0-4c17-bd01-1888199225b3-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-gjc5t\" (UID: \"b5dbff7a-4ed0-4c17-bd01-1888199225b3\") " pod="openshift-multus/multus-additional-cni-plugins-gjc5t" Jan 31 03:47:09 crc kubenswrapper[4827]: I0131 03:47:09.831856 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" 
(UniqueName: \"kubernetes.io/host-path/da9e7773-a24b-4e8d-b479-97e2594db0d4-run-systemd\") pod \"ovnkube-node-hj2zw\" (UID: \"da9e7773-a24b-4e8d-b479-97e2594db0d4\") " pod="openshift-ovn-kubernetes/ovnkube-node-hj2zw" Jan 31 03:47:09 crc kubenswrapper[4827]: I0131 03:47:09.831898 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/da9e7773-a24b-4e8d-b479-97e2594db0d4-etc-openvswitch\") pod \"ovnkube-node-hj2zw\" (UID: \"da9e7773-a24b-4e8d-b479-97e2594db0d4\") " pod="openshift-ovn-kubernetes/ovnkube-node-hj2zw" Jan 31 03:47:09 crc kubenswrapper[4827]: I0131 03:47:09.831899 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/da9e7773-a24b-4e8d-b479-97e2594db0d4-log-socket\") pod \"ovnkube-node-hj2zw\" (UID: \"da9e7773-a24b-4e8d-b479-97e2594db0d4\") " pod="openshift-ovn-kubernetes/ovnkube-node-hj2zw" Jan 31 03:47:09 crc kubenswrapper[4827]: I0131 03:47:09.831932 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/a696063c-4553-4032-8038-9900f09d4031-host-run-multus-certs\") pod \"multus-q9q8q\" (UID: \"a696063c-4553-4032-8038-9900f09d4031\") " pod="openshift-multus/multus-q9q8q" Jan 31 03:47:09 crc kubenswrapper[4827]: I0131 03:47:09.831956 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/da9e7773-a24b-4e8d-b479-97e2594db0d4-host-cni-netd\") pod \"ovnkube-node-hj2zw\" (UID: \"da9e7773-a24b-4e8d-b479-97e2594db0d4\") " pod="openshift-ovn-kubernetes/ovnkube-node-hj2zw" Jan 31 03:47:09 crc kubenswrapper[4827]: I0131 03:47:09.832051 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/a696063c-4553-4032-8038-9900f09d4031-cni-binary-copy\") pod \"multus-q9q8q\" (UID: 
\"a696063c-4553-4032-8038-9900f09d4031\") " pod="openshift-multus/multus-q9q8q" Jan 31 03:47:09 crc kubenswrapper[4827]: I0131 03:47:09.832065 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/a696063c-4553-4032-8038-9900f09d4031-multus-daemon-config\") pod \"multus-q9q8q\" (UID: \"a696063c-4553-4032-8038-9900f09d4031\") " pod="openshift-multus/multus-q9q8q" Jan 31 03:47:09 crc kubenswrapper[4827]: I0131 03:47:09.832122 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/a696063c-4553-4032-8038-9900f09d4031-multus-cni-dir\") pod \"multus-q9q8q\" (UID: \"a696063c-4553-4032-8038-9900f09d4031\") " pod="openshift-multus/multus-q9q8q" Jan 31 03:47:09 crc kubenswrapper[4827]: I0131 03:47:09.832147 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/b5dbff7a-4ed0-4c17-bd01-1888199225b3-system-cni-dir\") pod \"multus-additional-cni-plugins-gjc5t\" (UID: \"b5dbff7a-4ed0-4c17-bd01-1888199225b3\") " pod="openshift-multus/multus-additional-cni-plugins-gjc5t" Jan 31 03:47:09 crc kubenswrapper[4827]: I0131 03:47:09.832170 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/da9e7773-a24b-4e8d-b479-97e2594db0d4-run-openvswitch\") pod \"ovnkube-node-hj2zw\" (UID: \"da9e7773-a24b-4e8d-b479-97e2594db0d4\") " pod="openshift-ovn-kubernetes/ovnkube-node-hj2zw" Jan 31 03:47:09 crc kubenswrapper[4827]: I0131 03:47:09.832192 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/da9e7773-a24b-4e8d-b479-97e2594db0d4-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-hj2zw\" (UID: \"da9e7773-a24b-4e8d-b479-97e2594db0d4\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-hj2zw" Jan 31 03:47:09 crc kubenswrapper[4827]: I0131 03:47:09.832219 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/da9e7773-a24b-4e8d-b479-97e2594db0d4-host-kubelet\") pod \"ovnkube-node-hj2zw\" (UID: \"da9e7773-a24b-4e8d-b479-97e2594db0d4\") " pod="openshift-ovn-kubernetes/ovnkube-node-hj2zw" Jan 31 03:47:09 crc kubenswrapper[4827]: I0131 03:47:09.832240 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/a696063c-4553-4032-8038-9900f09d4031-host-run-netns\") pod \"multus-q9q8q\" (UID: \"a696063c-4553-4032-8038-9900f09d4031\") " pod="openshift-multus/multus-q9q8q" Jan 31 03:47:09 crc kubenswrapper[4827]: I0131 03:47:09.832404 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/da9e7773-a24b-4e8d-b479-97e2594db0d4-env-overrides\") pod \"ovnkube-node-hj2zw\" (UID: \"da9e7773-a24b-4e8d-b479-97e2594db0d4\") " pod="openshift-ovn-kubernetes/ovnkube-node-hj2zw" Jan 31 03:47:09 crc kubenswrapper[4827]: I0131 03:47:09.832491 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/b5dbff7a-4ed0-4c17-bd01-1888199225b3-cni-binary-copy\") pod \"multus-additional-cni-plugins-gjc5t\" (UID: \"b5dbff7a-4ed0-4c17-bd01-1888199225b3\") " pod="openshift-multus/multus-additional-cni-plugins-gjc5t" Jan 31 03:47:09 crc kubenswrapper[4827]: I0131 03:47:09.832531 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/da9e7773-a24b-4e8d-b479-97e2594db0d4-ovnkube-script-lib\") pod \"ovnkube-node-hj2zw\" (UID: \"da9e7773-a24b-4e8d-b479-97e2594db0d4\") " pod="openshift-ovn-kubernetes/ovnkube-node-hj2zw" Jan 31 03:47:09 crc kubenswrapper[4827]: I0131 
03:47:09.832568 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/b5dbff7a-4ed0-4c17-bd01-1888199225b3-cnibin\") pod \"multus-additional-cni-plugins-gjc5t\" (UID: \"b5dbff7a-4ed0-4c17-bd01-1888199225b3\") " pod="openshift-multus/multus-additional-cni-plugins-gjc5t" Jan 31 03:47:09 crc kubenswrapper[4827]: I0131 03:47:09.832605 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/a696063c-4553-4032-8038-9900f09d4031-multus-socket-dir-parent\") pod \"multus-q9q8q\" (UID: \"a696063c-4553-4032-8038-9900f09d4031\") " pod="openshift-multus/multus-q9q8q" Jan 31 03:47:09 crc kubenswrapper[4827]: I0131 03:47:09.832734 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/e63dbb73-e1a2-4796-83c5-2a88e55566b5-mcd-auth-proxy-config\") pod \"machine-config-daemon-jxh94\" (UID: \"e63dbb73-e1a2-4796-83c5-2a88e55566b5\") " pod="openshift-machine-config-operator/machine-config-daemon-jxh94" Jan 31 03:47:09 crc kubenswrapper[4827]: I0131 03:47:09.832796 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/da9e7773-a24b-4e8d-b479-97e2594db0d4-systemd-units\") pod \"ovnkube-node-hj2zw\" (UID: \"da9e7773-a24b-4e8d-b479-97e2594db0d4\") " pod="openshift-ovn-kubernetes/ovnkube-node-hj2zw" Jan 31 03:47:09 crc kubenswrapper[4827]: I0131 03:47:09.833259 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/b5dbff7a-4ed0-4c17-bd01-1888199225b3-tuning-conf-dir\") pod \"multus-additional-cni-plugins-gjc5t\" (UID: \"b5dbff7a-4ed0-4c17-bd01-1888199225b3\") " pod="openshift-multus/multus-additional-cni-plugins-gjc5t" Jan 31 03:47:09 crc kubenswrapper[4827]: I0131 03:47:09.837760 4827 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/e63dbb73-e1a2-4796-83c5-2a88e55566b5-proxy-tls\") pod \"machine-config-daemon-jxh94\" (UID: \"e63dbb73-e1a2-4796-83c5-2a88e55566b5\") " pod="openshift-machine-config-operator/machine-config-daemon-jxh94" Jan 31 03:47:09 crc kubenswrapper[4827]: I0131 03:47:09.843927 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:08Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:47:09Z is after 2025-08-24T17:21:41Z" Jan 31 03:47:09 crc kubenswrapper[4827]: I0131 03:47:09.853171 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ckwx4\" (UniqueName: \"kubernetes.io/projected/a696063c-4553-4032-8038-9900f09d4031-kube-api-access-ckwx4\") pod \"multus-q9q8q\" (UID: \"a696063c-4553-4032-8038-9900f09d4031\") " pod="openshift-multus/multus-q9q8q" Jan 31 03:47:09 crc kubenswrapper[4827]: I0131 03:47:09.858573 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fs4bx\" (UniqueName: \"kubernetes.io/projected/b5dbff7a-4ed0-4c17-bd01-1888199225b3-kube-api-access-fs4bx\") pod \"multus-additional-cni-plugins-gjc5t\" (UID: \"b5dbff7a-4ed0-4c17-bd01-1888199225b3\") " pod="openshift-multus/multus-additional-cni-plugins-gjc5t" Jan 31 03:47:09 crc kubenswrapper[4827]: I0131 03:47:09.860975 4827 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-94gkn\" (UniqueName: \"kubernetes.io/projected/e63dbb73-e1a2-4796-83c5-2a88e55566b5-kube-api-access-94gkn\") pod \"machine-config-daemon-jxh94\" (UID: \"e63dbb73-e1a2-4796-83c5-2a88e55566b5\") " pod="openshift-machine-config-operator/machine-config-daemon-jxh94" Jan 31 03:47:09 crc kubenswrapper[4827]: I0131 03:47:09.869756 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://65119775a6ab2e8f81a0443d8093e0570c2c49a1b081b7b694780d07eba7ef4d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:47:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"
mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0fdf29d063e0940532d696d183771e56ce37b7b75f82548bf7ff5165ab20ccff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:47:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:47:09Z is after 2025-08-24T17:21:41Z" Jan 31 03:47:09 crc kubenswrapper[4827]: I0131 03:47:09.896645 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hj2zw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"da9e7773-a24b-4e8d-b479-97e2594db0d4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:09Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:09Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:09Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:09Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mt4z5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mt4z5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mt4z5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mt4z5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mt4z5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mt4z5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mt4z5\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mt4z5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mt4z5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T03:47:09Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-hj2zw\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:47:09Z is after 2025-08-24T17:21:41Z" Jan 31 03:47:09 crc kubenswrapper[4827]: I0131 03:47:09.910748 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-jxh94" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e63dbb73-e1a2-4796-83c5-2a88e55566b5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:09Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:09Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:09Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-94gkn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-94gkn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T03:47:09Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-jxh94\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:47:09Z is after 2025-08-24T17:21:41Z" Jan 31 03:47:09 crc kubenswrapper[4827]: I0131 03:47:09.927849 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-q9q8q" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a696063c-4553-4032-8038-9900f09d4031\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:09Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:09Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:09Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/
kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ckwx4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T03:47:09Z\\\"}}\" for pod \"openshift-multus\"/\"multus-q9q8q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:47:09Z is after 2025-08-24T17:21:41Z" Jan 31 03:47:09 crc kubenswrapper[4827]: I0131 03:47:09.946141 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-gjc5t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b5dbff7a-4ed0-4c17-bd01-1888199225b3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:09Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:09Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:09Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:09Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fs4bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fs4bx\\\",\\\"readOnly\\\":true,\\\"
recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fs4bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fs4bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-de
v@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fs4bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fs4bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{
\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fs4bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T03:47:09Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-gjc5t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:47:09Z is after 2025-08-24T17:21:41Z" Jan 31 03:47:09 crc kubenswrapper[4827]: I0131 03:47:09.967948 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4273172d-ec24-4540-85cb-efc58aed3421\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:46:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:46:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:46:48Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:46:48Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:46:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4146d39a1a819389ddaee9e11df66783f6b1b98ee5fb2a244d8ba9ff2ecc88f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:46:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6b86ba80bd25344b29df1636aa3d8cc4cecb0e5f9c0005b4896d3ee86cd80626\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:46:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://8ae08b52cf4a0a6cc4589bbf0f0fbc8527ebac3455b90185928f6fa0b6c52874\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:46:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://125b31c980495c37837eedbbbd38b795fd6e568a429da5c2914799306e7b3a46\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cfc1fe8805b7b20bc4e718ed74d4c2f265e69ed3fdc328c0f1da81450bb78bde\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-31T03:47:02Z\\\",\\\"message\\\":\\\"W0131 03:46:51.337568 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0131 03:46:51.338014 1 crypto.go:601] Generating new CA for check-endpoints-signer@1769831211 cert, and key in /tmp/serving-cert-4099090867/serving-signer.crt, /tmp/serving-cert-4099090867/serving-signer.key\\\\nI0131 03:46:51.617433 1 observer_polling.go:159] Starting file observer\\\\nW0131 03:46:51.621205 1 builder.go:272] unable to get owner reference (falling back to namespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0131 03:46:51.621653 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0131 03:46:51.625463 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-4099090867/tls.crt::/tmp/serving-cert-4099090867/tls.key\\\\\\\"\\\\nF0131 03:47:02.085590 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T03:46:51Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:47:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e3b2757893ba910c36404f09734ea47eee3ac5a8cfcc4c06ee4ed2ada48937b1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:46:51Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.
11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f40eed75b4eac164f95a4e1a719fe3e1da60abf85d533da313ceeb6f4fb07efd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f40eed75b4eac164f95a4e1a719fe3e1da60abf85d533da313ceeb6f4fb07efd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T03:46:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T03:46:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T03:46:48Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:47:09Z is after 2025-08-24T17:21:41Z" Jan 31 03:47:09 crc kubenswrapper[4827]: I0131 03:47:09.980149 4827 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-jxh94" Jan 31 03:47:09 crc kubenswrapper[4827]: I0131 03:47:09.983698 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:08Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:47:09Z is after 2025-08-24T17:21:41Z" Jan 31 03:47:09 crc kubenswrapper[4827]: I0131 03:47:09.987121 4827 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-q9q8q" Jan 31 03:47:09 crc kubenswrapper[4827]: I0131 03:47:09.996986 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:08Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:47:09Z is after 2025-08-24T17:21:41Z" Jan 31 03:47:10 crc kubenswrapper[4827]: W0131 03:47:10.000038 4827 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda696063c_4553_4032_8038_9900f09d4031.slice/crio-db57966dac8b1078825f3a7379aee8fab5f821822564eda2ed98872e4d5a577b WatchSource:0}: Error finding container db57966dac8b1078825f3a7379aee8fab5f821822564eda2ed98872e4d5a577b: Status 404 returned error can't find the container with id db57966dac8b1078825f3a7379aee8fab5f821822564eda2ed98872e4d5a577b Jan 31 03:47:10 crc kubenswrapper[4827]: I0131 03:47:10.004940 4827 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-gjc5t" Jan 31 03:47:10 crc kubenswrapper[4827]: I0131 03:47:10.012367 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-w7v8l" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c10db775-d306-4f15-97dd-b1dfed7c89e5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:09Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:09Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:09Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4ch9g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T03:47:09Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-w7v8l\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:47:10Z is after 2025-08-24T17:21:41Z" Jan 31 03:47:10 crc kubenswrapper[4827]: W0131 03:47:10.036593 4827 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb5dbff7a_4ed0_4c17_bd01_1888199225b3.slice/crio-5e0f741f76306917c617c9a5d9af03af1403211a6ba0e14c20c29202cba88334 WatchSource:0}: Error finding container 5e0f741f76306917c617c9a5d9af03af1403211a6ba0e14c20c29202cba88334: Status 404 returned error can't find the container with id 5e0f741f76306917c617c9a5d9af03af1403211a6ba0e14c20c29202cba88334 Jan 31 03:47:10 crc kubenswrapper[4827]: I0131 
03:47:10.062017 4827 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-12 20:08:16.025746649 +0000 UTC Jan 31 03:47:10 crc kubenswrapper[4827]: I0131 03:47:10.109385 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 03:47:10 crc kubenswrapper[4827]: E0131 03:47:10.109501 4827 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 31 03:47:10 crc kubenswrapper[4827]: I0131 03:47:10.109558 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 31 03:47:10 crc kubenswrapper[4827]: E0131 03:47:10.109705 4827 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 31 03:47:10 crc kubenswrapper[4827]: I0131 03:47:10.116129 4827 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="01ab3dd5-8196-46d0-ad33-122e2ca51def" path="/var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes" Jan 31 03:47:10 crc kubenswrapper[4827]: I0131 03:47:10.116749 4827 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" path="/var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes" Jan 31 03:47:10 crc kubenswrapper[4827]: I0131 03:47:10.118525 4827 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09efc573-dbb6-4249-bd59-9b87aba8dd28" path="/var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes" Jan 31 03:47:10 crc kubenswrapper[4827]: I0131 03:47:10.119232 4827 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b574797-001e-440a-8f4e-c0be86edad0f" path="/var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes" Jan 31 03:47:10 crc kubenswrapper[4827]: I0131 03:47:10.120838 4827 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b78653f-4ff9-4508-8672-245ed9b561e3" path="/var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes" Jan 31 03:47:10 crc kubenswrapper[4827]: I0131 03:47:10.121422 4827 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1386a44e-36a2-460c-96d0-0359d2b6f0f5" path="/var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes" Jan 31 03:47:10 crc kubenswrapper[4827]: I0131 03:47:10.122359 4827 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1bf7eb37-55a3-4c65-b768-a94c82151e69" path="/var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes" Jan 31 03:47:10 crc kubenswrapper[4827]: I0131 03:47:10.123600 4827 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="1d611f23-29be-4491-8495-bee1670e935f" path="/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes" Jan 31 03:47:10 crc kubenswrapper[4827]: I0131 03:47:10.124529 4827 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="20b0d48f-5fd6-431c-a545-e3c800c7b866" path="/var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/volumes" Jan 31 03:47:10 crc kubenswrapper[4827]: I0131 03:47:10.126863 4827 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" path="/var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes" Jan 31 03:47:10 crc kubenswrapper[4827]: I0131 03:47:10.127468 4827 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="22c825df-677d-4ca6-82db-3454ed06e783" path="/var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes" Jan 31 03:47:10 crc kubenswrapper[4827]: I0131 03:47:10.128610 4827 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="25e176fe-21b4-4974-b1ed-c8b94f112a7f" path="/var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes" Jan 31 03:47:10 crc kubenswrapper[4827]: I0131 03:47:10.129188 4827 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" path="/var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes" Jan 31 03:47:10 crc kubenswrapper[4827]: I0131 03:47:10.130500 4827 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="31d8b7a1-420e-4252-a5b7-eebe8a111292" path="/var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes" Jan 31 03:47:10 crc kubenswrapper[4827]: I0131 03:47:10.132262 4827 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3ab1a177-2de0-46d9-b765-d0d0649bb42e" path="/var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/volumes" Jan 31 03:47:10 crc kubenswrapper[4827]: I0131 03:47:10.133302 4827 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" path="/var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes" Jan 31 03:47:10 crc kubenswrapper[4827]: I0131 03:47:10.134453 4827 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="43509403-f426-496e-be36-56cef71462f5" path="/var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes" Jan 31 03:47:10 crc kubenswrapper[4827]: I0131 03:47:10.134930 4827 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="44663579-783b-4372-86d6-acf235a62d72" path="/var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/volumes" Jan 31 03:47:10 crc kubenswrapper[4827]: I0131 03:47:10.135513 4827 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="496e6271-fb68-4057-954e-a0d97a4afa3f" path="/var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes" Jan 31 03:47:10 crc kubenswrapper[4827]: I0131 03:47:10.136551 4827 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" path="/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes" Jan 31 03:47:10 crc kubenswrapper[4827]: I0131 03:47:10.137086 4827 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49ef4625-1d3a-4a9f-b595-c2433d32326d" path="/var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/volumes" Jan 31 03:47:10 crc kubenswrapper[4827]: I0131 03:47:10.138323 4827 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4bb40260-dbaa-4fb0-84df-5e680505d512" path="/var/lib/kubelet/pods/4bb40260-dbaa-4fb0-84df-5e680505d512/volumes" Jan 31 03:47:10 crc kubenswrapper[4827]: I0131 03:47:10.138925 4827 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5225d0e4-402f-4861-b410-819f433b1803" path="/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes" Jan 31 03:47:10 crc kubenswrapper[4827]: I0131 03:47:10.141328 4827 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="5441d097-087c-4d9a-baa8-b210afa90fc9" path="/var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes" Jan 31 03:47:10 crc kubenswrapper[4827]: I0131 03:47:10.142643 4827 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="57a731c4-ef35-47a8-b875-bfb08a7f8011" path="/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes" Jan 31 03:47:10 crc kubenswrapper[4827]: I0131 03:47:10.143495 4827 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5b88f790-22fa-440e-b583-365168c0b23d" path="/var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/volumes" Jan 31 03:47:10 crc kubenswrapper[4827]: I0131 03:47:10.144961 4827 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5fe579f8-e8a6-4643-bce5-a661393c4dde" path="/var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/volumes" Jan 31 03:47:10 crc kubenswrapper[4827]: I0131 03:47:10.145524 4827 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6402fda4-df10-493c-b4e5-d0569419652d" path="/var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes" Jan 31 03:47:10 crc kubenswrapper[4827]: I0131 03:47:10.146675 4827 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6509e943-70c6-444c-bc41-48a544e36fbd" path="/var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes" Jan 31 03:47:10 crc kubenswrapper[4827]: I0131 03:47:10.148589 4827 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6731426b-95fe-49ff-bb5f-40441049fde2" path="/var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/volumes" Jan 31 03:47:10 crc kubenswrapper[4827]: I0131 03:47:10.149232 4827 kubelet_volumes.go:152] "Cleaned up orphaned volume subpath from pod" podUID="6ea678ab-3438-413e-bfe3-290ae7725660" path="/var/lib/kubelet/pods/6ea678ab-3438-413e-bfe3-290ae7725660/volume-subpaths/run-systemd/ovnkube-controller/6" Jan 31 03:47:10 crc kubenswrapper[4827]: I0131 03:47:10.150027 4827 
kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6ea678ab-3438-413e-bfe3-290ae7725660" path="/var/lib/kubelet/pods/6ea678ab-3438-413e-bfe3-290ae7725660/volumes" Jan 31 03:47:10 crc kubenswrapper[4827]: I0131 03:47:10.153116 4827 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7539238d-5fe0-46ed-884e-1c3b566537ec" path="/var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes" Jan 31 03:47:10 crc kubenswrapper[4827]: I0131 03:47:10.153978 4827 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7583ce53-e0fe-4a16-9e4d-50516596a136" path="/var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes" Jan 31 03:47:10 crc kubenswrapper[4827]: I0131 03:47:10.156102 4827 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7bb08738-c794-4ee8-9972-3a62ca171029" path="/var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes" Jan 31 03:47:10 crc kubenswrapper[4827]: I0131 03:47:10.158753 4827 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="87cf06ed-a83f-41a7-828d-70653580a8cb" path="/var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes" Jan 31 03:47:10 crc kubenswrapper[4827]: I0131 03:47:10.159936 4827 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" path="/var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes" Jan 31 03:47:10 crc kubenswrapper[4827]: I0131 03:47:10.160504 4827 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="925f1c65-6136-48ba-85aa-3a3b50560753" path="/var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes" Jan 31 03:47:10 crc kubenswrapper[4827]: I0131 03:47:10.161626 4827 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" path="/var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/volumes" Jan 31 03:47:10 crc kubenswrapper[4827]: I0131 03:47:10.162338 4827 
kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9d4552c7-cd75-42dd-8880-30dd377c49a4" path="/var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes" Jan 31 03:47:10 crc kubenswrapper[4827]: I0131 03:47:10.163231 4827 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" path="/var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/volumes" Jan 31 03:47:10 crc kubenswrapper[4827]: I0131 03:47:10.163838 4827 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a31745f5-9847-4afe-82a5-3161cc66ca93" path="/var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes" Jan 31 03:47:10 crc kubenswrapper[4827]: I0131 03:47:10.164986 4827 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" path="/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes" Jan 31 03:47:10 crc kubenswrapper[4827]: I0131 03:47:10.165997 4827 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6312bbd-5731-4ea0-a20f-81d5a57df44a" path="/var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/volumes" Jan 31 03:47:10 crc kubenswrapper[4827]: I0131 03:47:10.166487 4827 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" path="/var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes" Jan 31 03:47:10 crc kubenswrapper[4827]: I0131 03:47:10.167514 4827 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" path="/var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes" Jan 31 03:47:10 crc kubenswrapper[4827]: I0131 03:47:10.168096 4827 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bd23aa5c-e532-4e53-bccf-e79f130c5ae8" path="/var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/volumes" Jan 31 03:47:10 crc kubenswrapper[4827]: I0131 03:47:10.169246 4827 
kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bf126b07-da06-4140-9a57-dfd54fc6b486" path="/var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes" Jan 31 03:47:10 crc kubenswrapper[4827]: I0131 03:47:10.169726 4827 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c03ee662-fb2f-4fc4-a2c1-af487c19d254" path="/var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes" Jan 31 03:47:10 crc kubenswrapper[4827]: I0131 03:47:10.170240 4827 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" path="/var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/volumes" Jan 31 03:47:10 crc kubenswrapper[4827]: I0131 03:47:10.171104 4827 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e7e6199b-1264-4501-8953-767f51328d08" path="/var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes" Jan 31 03:47:10 crc kubenswrapper[4827]: I0131 03:47:10.171627 4827 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="efdd0498-1daa-4136-9a4a-3b948c2293fc" path="/var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/volumes" Jan 31 03:47:10 crc kubenswrapper[4827]: I0131 03:47:10.172591 4827 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" path="/var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/volumes" Jan 31 03:47:10 crc kubenswrapper[4827]: I0131 03:47:10.173117 4827 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fda69060-fa79-4696-b1a6-7980f124bf7c" path="/var/lib/kubelet/pods/fda69060-fa79-4696-b1a6-7980f124bf7c/volumes" Jan 31 03:47:10 crc kubenswrapper[4827]: I0131 03:47:10.251764 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-q9q8q" event={"ID":"a696063c-4553-4032-8038-9900f09d4031","Type":"ContainerStarted","Data":"3b899def12de10c70999d453a62d09ed0eec7e6f059a55a7e3f8e4b3ef248205"} Jan 31 
03:47:10 crc kubenswrapper[4827]: I0131 03:47:10.251821 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-q9q8q" event={"ID":"a696063c-4553-4032-8038-9900f09d4031","Type":"ContainerStarted","Data":"db57966dac8b1078825f3a7379aee8fab5f821822564eda2ed98872e4d5a577b"} Jan 31 03:47:10 crc kubenswrapper[4827]: I0131 03:47:10.253211 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-gjc5t" event={"ID":"b5dbff7a-4ed0-4c17-bd01-1888199225b3","Type":"ContainerStarted","Data":"c615ec47f98d56124c11d46f6a3875b5718974234f752615411e14b5d8913dcc"} Jan 31 03:47:10 crc kubenswrapper[4827]: I0131 03:47:10.253258 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-gjc5t" event={"ID":"b5dbff7a-4ed0-4c17-bd01-1888199225b3","Type":"ContainerStarted","Data":"5e0f741f76306917c617c9a5d9af03af1403211a6ba0e14c20c29202cba88334"} Jan 31 03:47:10 crc kubenswrapper[4827]: I0131 03:47:10.255516 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-jxh94" event={"ID":"e63dbb73-e1a2-4796-83c5-2a88e55566b5","Type":"ContainerStarted","Data":"65bf7844256848a99420ff2a52724a942bd9ee07e0e587b818429ac544865232"} Jan 31 03:47:10 crc kubenswrapper[4827]: I0131 03:47:10.255552 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-jxh94" event={"ID":"e63dbb73-e1a2-4796-83c5-2a88e55566b5","Type":"ContainerStarted","Data":"00b8856a5712a7470f8bc58fd07644c58113fde5a273faa89586f36381942c50"} Jan 31 03:47:10 crc kubenswrapper[4827]: I0131 03:47:10.255566 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-jxh94" event={"ID":"e63dbb73-e1a2-4796-83c5-2a88e55566b5","Type":"ContainerStarted","Data":"7518aeab2bf36c273d40ea4ae918c24429af59e6f8b647409455e98c9538648a"} Jan 31 03:47:10 crc 
kubenswrapper[4827]: I0131 03:47:10.286254 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://65119775a6ab2e8f81a0443d8093e0570c2c49a1b081b7b694780d07eba7ef4d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:47:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0fdf29d063e0940532d696d183771e56ce37b7b75f82548bf7ff5165ab20ccff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art
-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:47:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:47:10Z is after 2025-08-24T17:21:41Z" Jan 31 03:47:10 crc kubenswrapper[4827]: I0131 03:47:10.325194 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hj2zw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"da9e7773-a24b-4e8d-b479-97e2594db0d4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:09Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:09Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:09Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:09Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mt4z5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mt4z5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mt4z5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mt4z5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mt4z5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mt4z5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mt4z5\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mt4z5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mt4z5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T03:47:09Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-hj2zw\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:47:10Z is after 2025-08-24T17:21:41Z" Jan 31 03:47:10 crc kubenswrapper[4827]: I0131 03:47:10.342611 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4273172d-ec24-4540-85cb-efc58aed3421\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:46:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:46:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:46:48Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:46:48Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:46:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4146d39a1a819389ddaee9e11df66783f6b1b98ee5fb2a244d8ba9ff2ecc88f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:46:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6b86ba80bd25344b29df1636aa3d8cc4cecb0e5f9c0005b4896d3ee86cd80626\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:46:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://8ae08b52cf4a0a6cc4589bbf0f0fbc8527ebac3455b90185928f6fa0b6c52874\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:46:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://125b31c980495c37837eedbbbd38b795fd6e568a429da5c2914799306e7b3a46\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cfc1fe8805b7b20bc4e718ed74d4c2f265e69ed3fdc328c0f1da81450bb78bde\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-31T03:47:02Z\\\",\\\"message\\\":\\\"W0131 03:46:51.337568 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0131 03:46:51.338014 1 crypto.go:601] Generating new CA for check-endpoints-signer@1769831211 cert, and key in /tmp/serving-cert-4099090867/serving-signer.crt, /tmp/serving-cert-4099090867/serving-signer.key\\\\nI0131 03:46:51.617433 1 observer_polling.go:159] Starting file observer\\\\nW0131 03:46:51.621205 1 builder.go:272] unable to get owner reference (falling back to namespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0131 03:46:51.621653 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0131 03:46:51.625463 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-4099090867/tls.crt::/tmp/serving-cert-4099090867/tls.key\\\\\\\"\\\\nF0131 03:47:02.085590 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T03:46:51Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:47:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e3b2757893ba910c36404f09734ea47eee3ac5a8cfcc4c06ee4ed2ada48937b1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:46:51Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.
11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f40eed75b4eac164f95a4e1a719fe3e1da60abf85d533da313ceeb6f4fb07efd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f40eed75b4eac164f95a4e1a719fe3e1da60abf85d533da313ceeb6f4fb07efd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T03:46:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T03:46:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T03:46:48Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:47:10Z is after 2025-08-24T17:21:41Z" Jan 31 03:47:10 crc kubenswrapper[4827]: I0131 03:47:10.357093 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-jxh94" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e63dbb73-e1a2-4796-83c5-2a88e55566b5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:09Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:09Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:09Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-94gkn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-94gkn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T03:47:09Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-jxh94\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:47:10Z is after 2025-08-24T17:21:41Z" Jan 31 03:47:10 crc kubenswrapper[4827]: I0131 03:47:10.370343 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-q9q8q" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a696063c-4553-4032-8038-9900f09d4031\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3b899def12de10c70999d453a62d09ed0eec7e6f059a55a7e3f8e4b3ef248205\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:47:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"na
me\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ckwx4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T03:47:09Z\\\"}}\" for pod \"openshift-multus\"/\"multus-q9q8q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:47:10Z is after 2025-08-24T17:21:41Z" Jan 31 03:47:10 crc kubenswrapper[4827]: I0131 03:47:10.389625 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-gjc5t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b5dbff7a-4ed0-4c17-bd01-1888199225b3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:09Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:09Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:09Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:09Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fs4bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fs4bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\"
:false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fs4bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fs4bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\
\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fs4bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fs4bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/s
erviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fs4bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T03:47:09Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-gjc5t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:47:10Z is after 2025-08-24T17:21:41Z" Jan 31 03:47:10 crc kubenswrapper[4827]: I0131 03:47:10.404228 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:08Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:47:10Z is after 2025-08-24T17:21:41Z" Jan 31 03:47:10 crc kubenswrapper[4827]: I0131 03:47:10.417118 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:08Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:47:10Z is after 2025-08-24T17:21:41Z" Jan 31 03:47:10 crc kubenswrapper[4827]: I0131 03:47:10.431715 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-w7v8l" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c10db775-d306-4f15-97dd-b1dfed7c89e5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:09Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:09Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:09Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4ch9g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T03:47:09Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-w7v8l\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:47:10Z is after 2025-08-24T17:21:41Z" Jan 31 03:47:10 crc kubenswrapper[4827]: I0131 03:47:10.447867 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:08Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:47:10Z is after 2025-08-24T17:21:41Z" Jan 31 03:47:10 crc kubenswrapper[4827]: I0131 03:47:10.462341 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a0fbdd57c20e0d5bda95b44f22f117b07306ef48d75d372d958559048474af9b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:47:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-31T03:47:10Z is after 2025-08-24T17:21:41Z" Jan 31 03:47:10 crc kubenswrapper[4827]: I0131 03:47:10.485069 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:08Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:47:10Z is after 2025-08-24T17:21:41Z" Jan 31 03:47:10 crc kubenswrapper[4827]: I0131 03:47:10.500245 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://65119775a6ab2e8f81a0443d8093e0570c2c49a1b081b7b694780d07eba7ef4d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:47:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0fdf29d063e0940532d696d183771e56ce37b7b75f82548bf7ff5165ab20ccff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:47:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:47:10Z is after 2025-08-24T17:21:41Z" Jan 31 03:47:10 crc kubenswrapper[4827]: I0131 03:47:10.518893 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hj2zw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"da9e7773-a24b-4e8d-b479-97e2594db0d4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:09Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:09Z\\\",\\\"message\\\":\\\"containers with incomplete status: 
[kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:09Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:09Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mt4z5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":fals
e,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mt4z5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mt4z5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/v
ar/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mt4z5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mt4z5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-op
envswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mt4z5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvs
witch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mt4z5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mt4z5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"las
tState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mt4z5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T03:47:09Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-hj2zw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:47:10Z is after 2025-08-24T17:21:41Z" Jan 31 03:47:10 crc kubenswrapper[4827]: I0131 03:47:10.532286 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4273172d-ec24-4540-85cb-efc58aed3421\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:46:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:46:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:46:48Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:46:48Z\\\",\\\"message\\\":\\\"containers 
with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:46:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4146d39a1a819389ddaee9e11df66783f6b1b98ee5fb2a244d8ba9ff2ecc88f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:46:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6b86ba80bd25344b29df1636aa3d8cc4cecb0e5f9c0005b4896d3ee86cd80626\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:46:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"co
ntainerID\\\":\\\"cri-o://8ae08b52cf4a0a6cc4589bbf0f0fbc8527ebac3455b90185928f6fa0b6c52874\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:46:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://125b31c980495c37837eedbbbd38b795fd6e568a429da5c2914799306e7b3a46\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cfc1fe8805b7b20bc4e718ed74d4c2f265e69ed3fdc328c0f1da81450bb78bde\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-31T03:47:02Z\\\",\\\"message\\\":\\\"W0131 03:46:51.337568 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0131 03:46:51.338014 1 crypto.go:601] Generating new CA for check-endpoints-signer@1769831211 cert, and key in /tmp/serving-cert-4099090867/serving-signer.crt, /tmp/serving-cert-4099090867/serving-signer.key\\\\nI0131 03:46:51.617433 1 observer_polling.go:159] Starting file observer\\\\nW0131 03:46:51.621205 1 builder.go:272] unable to get owner reference (falling back to namespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0131 03:46:51.621653 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0131 03:46:51.625463 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-4099090867/tls.crt::/tmp/serving-cert-4099090867/tls.key\\\\\\\"\\\\nF0131 03:47:02.085590 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T03:46:51Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:47:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e3b2757893ba910c36404f09734ea47eee3ac5a8cfcc4c06ee4ed2ada48937b1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:46:51Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.
11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f40eed75b4eac164f95a4e1a719fe3e1da60abf85d533da313ceeb6f4fb07efd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f40eed75b4eac164f95a4e1a719fe3e1da60abf85d533da313ceeb6f4fb07efd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T03:46:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T03:46:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T03:46:48Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:47:10Z is after 2025-08-24T17:21:41Z" Jan 31 03:47:10 crc kubenswrapper[4827]: I0131 03:47:10.549106 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-jxh94" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e63dbb73-e1a2-4796-83c5-2a88e55566b5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://65bf7844256848a99420ff2a52724a942bd9ee07e0e587b818429ac544865232\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:47:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-94gkn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://00b8856a5712a7470f8bc58fd07644c58113fde5
a273faa89586f36381942c50\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:47:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-94gkn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T03:47:09Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-jxh94\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:47:10Z is after 2025-08-24T17:21:41Z" Jan 31 03:47:10 crc kubenswrapper[4827]: I0131 03:47:10.563522 4827 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt" Jan 31 03:47:10 crc kubenswrapper[4827]: I0131 03:47:10.566129 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-q9q8q" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a696063c-4553-4032-8038-9900f09d4031\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3b899def12de10c70999d453a62d09ed0eec7e6f059a55a7e3f8e4b3ef248205\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:47:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ckwx4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T03:47:09Z\\\"}}\" for pod \"openshift-multus\"/\"multus-q9q8q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:47:10Z is after 2025-08-24T17:21:41Z" Jan 31 03:47:10 crc kubenswrapper[4827]: I0131 03:47:10.578105 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4ch9g\" (UniqueName: 
\"kubernetes.io/projected/c10db775-d306-4f15-97dd-b1dfed7c89e5-kube-api-access-4ch9g\") pod \"node-resolver-w7v8l\" (UID: \"c10db775-d306-4f15-97dd-b1dfed7c89e5\") " pod="openshift-dns/node-resolver-w7v8l" Jan 31 03:47:10 crc kubenswrapper[4827]: I0131 03:47:10.580903 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-gjc5t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b5dbff7a-4ed0-4c17-bd01-1888199225b3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:09Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:09Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:09Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fs4bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c615ec47f98d56124c11d46f6a3875b5718974234f752615411e14b5d8913dcc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:47:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fs4bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"ima
ge\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fs4bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fs4bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69
b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fs4bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fs4bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volum
eMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fs4bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T03:47:09Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-gjc5t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:47:10Z is after 2025-08-24T17:21:41Z" Jan 31 03:47:10 crc kubenswrapper[4827]: I0131 03:47:10.599104 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/node-resolver-w7v8l" Jan 31 03:47:10 crc kubenswrapper[4827]: I0131 03:47:10.605279 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:08Z\\\",\\\"message\\\":\\\"containers 
with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:47:10Z is after 2025-08-24T17:21:41Z" Jan 31 03:47:10 crc kubenswrapper[4827]: I0131 03:47:10.614950 4827 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt" Jan 31 03:47:10 crc kubenswrapper[4827]: W0131 03:47:10.615402 4827 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc10db775_d306_4f15_97dd_b1dfed7c89e5.slice/crio-b0aee941786b3917b71b7ba1b8956aabd16f6ba844cdedeb250827bfd0344f27 WatchSource:0}: Error finding container 
b0aee941786b3917b71b7ba1b8956aabd16f6ba844cdedeb250827bfd0344f27: Status 404 returned error can't find the container with id b0aee941786b3917b71b7ba1b8956aabd16f6ba844cdedeb250827bfd0344f27 Jan 31 03:47:10 crc kubenswrapper[4827]: I0131 03:47:10.621079 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mt4z5\" (UniqueName: \"kubernetes.io/projected/da9e7773-a24b-4e8d-b479-97e2594db0d4-kube-api-access-mt4z5\") pod \"ovnkube-node-hj2zw\" (UID: \"da9e7773-a24b-4e8d-b479-97e2594db0d4\") " pod="openshift-ovn-kubernetes/ovnkube-node-hj2zw" Jan 31 03:47:10 crc kubenswrapper[4827]: I0131 03:47:10.632787 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:08Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container 
could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:47:10Z is after 2025-08-24T17:21:41Z" Jan 31 03:47:10 crc kubenswrapper[4827]: I0131 03:47:10.661853 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-w7v8l" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c10db775-d306-4f15-97dd-b1dfed7c89e5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:09Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:09Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:09Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4ch9g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T03:47:09Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-w7v8l\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:47:10Z is after 2025-08-24T17:21:41Z" Jan 31 03:47:10 crc kubenswrapper[4827]: I0131 03:47:10.675837 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:08Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:47:10Z is after 2025-08-24T17:21:41Z" Jan 31 03:47:10 crc kubenswrapper[4827]: I0131 03:47:10.695867 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a0fbdd57c20e0d5bda95b44f22f117b07306ef48d75d372d958559048474af9b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:47:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-31T03:47:10Z is after 2025-08-24T17:21:41Z" Jan 31 03:47:10 crc kubenswrapper[4827]: I0131 03:47:10.725241 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:08Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:47:10Z is after 2025-08-24T17:21:41Z" Jan 31 03:47:10 crc kubenswrapper[4827]: E0131 03:47:10.831695 4827 secret.go:188] Couldn't get secret openshift-ovn-kubernetes/ovn-node-metrics-cert: failed to sync secret cache: timed out waiting for the condition Jan 31 03:47:10 crc kubenswrapper[4827]: E0131 03:47:10.831803 4827 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/da9e7773-a24b-4e8d-b479-97e2594db0d4-ovn-node-metrics-cert podName:da9e7773-a24b-4e8d-b479-97e2594db0d4 nodeName:}" failed. No retries permitted until 2026-01-31 03:47:11.331775467 +0000 UTC m=+24.018855916 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "ovn-node-metrics-cert" (UniqueName: "kubernetes.io/secret/da9e7773-a24b-4e8d-b479-97e2594db0d4-ovn-node-metrics-cert") pod "ovnkube-node-hj2zw" (UID: "da9e7773-a24b-4e8d-b479-97e2594db0d4") : failed to sync secret cache: timed out waiting for the condition Jan 31 03:47:11 crc kubenswrapper[4827]: I0131 03:47:11.064112 4827 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-10 12:11:34.379141123 +0000 UTC Jan 31 03:47:11 crc kubenswrapper[4827]: I0131 03:47:11.095558 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert" Jan 31 03:47:11 crc kubenswrapper[4827]: I0131 03:47:11.109215 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 31 03:47:11 crc kubenswrapper[4827]: E0131 03:47:11.109349 4827 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 31 03:47:11 crc kubenswrapper[4827]: I0131 03:47:11.262541 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-w7v8l" event={"ID":"c10db775-d306-4f15-97dd-b1dfed7c89e5","Type":"ContainerStarted","Data":"cbc16c20edbd99491eea6e34116e6014fd587c40800c87bff9643e9c0cb6e185"} Jan 31 03:47:11 crc kubenswrapper[4827]: I0131 03:47:11.262614 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-w7v8l" event={"ID":"c10db775-d306-4f15-97dd-b1dfed7c89e5","Type":"ContainerStarted","Data":"b0aee941786b3917b71b7ba1b8956aabd16f6ba844cdedeb250827bfd0344f27"} Jan 31 03:47:11 crc kubenswrapper[4827]: I0131 03:47:11.265157 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" event={"ID":"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49","Type":"ContainerStarted","Data":"955a3f09f24522656e31cfabe08613a5129fe3d1d0693756697e3ef0469e062a"} Jan 31 03:47:11 crc kubenswrapper[4827]: I0131 03:47:11.267739 4827 generic.go:334] "Generic (PLEG): container finished" podID="b5dbff7a-4ed0-4c17-bd01-1888199225b3" containerID="c615ec47f98d56124c11d46f6a3875b5718974234f752615411e14b5d8913dcc" exitCode=0 Jan 31 03:47:11 crc kubenswrapper[4827]: I0131 03:47:11.267788 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-gjc5t" event={"ID":"b5dbff7a-4ed0-4c17-bd01-1888199225b3","Type":"ContainerDied","Data":"c615ec47f98d56124c11d46f6a3875b5718974234f752615411e14b5d8913dcc"} Jan 31 03:47:11 crc kubenswrapper[4827]: I0131 03:47:11.279385 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:08Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:47:11Z is after 2025-08-24T17:21:41Z" Jan 31 03:47:11 crc kubenswrapper[4827]: I0131 03:47:11.306013 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a0fbdd57c20e0d5bda95b44f22f117b07306ef48d75d372d958559048474af9b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:47:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-31T03:47:11Z is after 2025-08-24T17:21:41Z" Jan 31 03:47:11 crc kubenswrapper[4827]: I0131 03:47:11.318663 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:08Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:47:11Z is after 2025-08-24T17:21:41Z" Jan 31 03:47:11 crc kubenswrapper[4827]: I0131 03:47:11.337542 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://65119775a6ab2e8f81a0443d8093e0570c2c49a1b081b7b694780d07eba7ef4d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:47:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0fdf29d063e0940532d696d183771e56ce37b7b75f82548bf7ff5165ab20ccff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:47:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:47:11Z is after 2025-08-24T17:21:41Z" Jan 31 03:47:11 crc kubenswrapper[4827]: I0131 03:47:11.349524 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/da9e7773-a24b-4e8d-b479-97e2594db0d4-ovn-node-metrics-cert\") pod \"ovnkube-node-hj2zw\" (UID: \"da9e7773-a24b-4e8d-b479-97e2594db0d4\") " pod="openshift-ovn-kubernetes/ovnkube-node-hj2zw" Jan 31 03:47:11 crc kubenswrapper[4827]: I0131 03:47:11.353268 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/da9e7773-a24b-4e8d-b479-97e2594db0d4-ovn-node-metrics-cert\") pod \"ovnkube-node-hj2zw\" (UID: \"da9e7773-a24b-4e8d-b479-97e2594db0d4\") " pod="openshift-ovn-kubernetes/ovnkube-node-hj2zw" Jan 31 03:47:11 crc kubenswrapper[4827]: I0131 03:47:11.356451 4827 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-ovn-kubernetes/ovnkube-node-hj2zw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"da9e7773-a24b-4e8d-b479-97e2594db0d4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:09Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:09Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:09Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:09Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mt4z5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mt4z5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mt4z5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mt4z5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mt4z5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mt4z5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mt4z5\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mt4z5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mt4z5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T03:47:09Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-hj2zw\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:47:11Z is after 2025-08-24T17:21:41Z" Jan 31 03:47:11 crc kubenswrapper[4827]: I0131 03:47:11.371124 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-q9q8q" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a696063c-4553-4032-8038-9900f09d4031\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3b899def12de10c70999d453a62d09ed0eec7e6f059a55a7e3f8e4b3ef248205\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:47:10Z\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ckwx4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T03:47:09Z\\\"}}\" for pod \"openshift-multus\"/\"multus-q9q8q\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:47:11Z is after 2025-08-24T17:21:41Z" Jan 31 03:47:11 crc kubenswrapper[4827]: I0131 03:47:11.387306 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-gjc5t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b5dbff7a-4ed0-4c17-bd01-1888199225b3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:09Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:09Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:09Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fs4bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c615ec47f98d56124c11d46f6a3875b5718974234f752615411e14b5d8913dcc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:47:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fs4bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"ima
ge\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fs4bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fs4bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69
b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fs4bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fs4bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volum
eMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fs4bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T03:47:09Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-gjc5t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:47:11Z is after 2025-08-24T17:21:41Z" Jan 31 03:47:11 crc kubenswrapper[4827]: I0131 03:47:11.408279 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4273172d-ec24-4540-85cb-efc58aed3421\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:46:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:46:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:46:48Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:46:48Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:46:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4146d39a1a819389ddaee9e11df66783f6b1b98ee5fb2a244d8ba9ff2ecc88f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:46:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6b86ba80bd25344b29df1636aa3d8cc4cecb0e5f9c0005b4896d3ee86cd80626\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:46:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://8ae08b52cf4a0a6cc4589bbf0f0fbc8527ebac3455b90185928f6fa0b6c52874\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:46:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://125b31c980495c37837eedbbbd38b795fd6e568a429da5c2914799306e7b3a46\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cfc1fe8805b7b20bc4e718ed74d4c2f265e69ed3fdc328c0f1da81450bb78bde\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-31T03:47:02Z\\\",\\\"message\\\":\\\"W0131 03:46:51.337568 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0131 03:46:51.338014 1 crypto.go:601] Generating new CA for check-endpoints-signer@1769831211 cert, and key in /tmp/serving-cert-4099090867/serving-signer.crt, /tmp/serving-cert-4099090867/serving-signer.key\\\\nI0131 03:46:51.617433 1 observer_polling.go:159] Starting file observer\\\\nW0131 03:46:51.621205 1 builder.go:272] unable to get owner reference (falling back to namespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0131 03:46:51.621653 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0131 03:46:51.625463 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-4099090867/tls.crt::/tmp/serving-cert-4099090867/tls.key\\\\\\\"\\\\nF0131 03:47:02.085590 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T03:46:51Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:47:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e3b2757893ba910c36404f09734ea47eee3ac5a8cfcc4c06ee4ed2ada48937b1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:46:51Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.
11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f40eed75b4eac164f95a4e1a719fe3e1da60abf85d533da313ceeb6f4fb07efd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f40eed75b4eac164f95a4e1a719fe3e1da60abf85d533da313ceeb6f4fb07efd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T03:46:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T03:46:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T03:46:48Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:47:11Z is after 2025-08-24T17:21:41Z" Jan 31 03:47:11 crc kubenswrapper[4827]: I0131 03:47:11.423198 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-jxh94" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e63dbb73-e1a2-4796-83c5-2a88e55566b5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://65bf7844256848a99420ff2a52724a942bd9ee07e0e587b818429ac544865232\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:47:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-94gkn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://00b8856a5712a7470f8bc58fd07644c58113fde5
a273faa89586f36381942c50\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:47:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-94gkn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T03:47:09Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-jxh94\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:47:11Z is after 2025-08-24T17:21:41Z" Jan 31 03:47:11 crc kubenswrapper[4827]: I0131 03:47:11.438027 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:08Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:47:11Z is after 2025-08-24T17:21:41Z" Jan 31 03:47:11 crc kubenswrapper[4827]: I0131 03:47:11.451900 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:08Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:47:11Z is after 2025-08-24T17:21:41Z" Jan 31 03:47:11 crc kubenswrapper[4827]: I0131 03:47:11.464531 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-w7v8l" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c10db775-d306-4f15-97dd-b1dfed7c89e5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cbc16c20edbd99491eea6e34116e6014fd587c40800c87bff9643e9c0cb6e185\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:47:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4ch9g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T03:47:09Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-w7v8l\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:47:11Z is after 2025-08-24T17:21:41Z" Jan 31 03:47:11 crc kubenswrapper[4827]: I0131 03:47:11.478240 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-jxh94" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e63dbb73-e1a2-4796-83c5-2a88e55566b5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://65bf7844256848a99420ff2a52724a942bd9ee07e0e587b818429ac544865232\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793
426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:47:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-94gkn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://00b8856a5712a7470f8bc58fd07644c58113fde5a273faa89586f36381942c50\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:47:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-94gkn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T03:47:09Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-jxh94\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:47:11Z is after 2025-08-24T17:21:41Z" Jan 31 03:47:11 crc kubenswrapper[4827]: I0131 03:47:11.491616 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-q9q8q" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a696063c-4553-4032-8038-9900f09d4031\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3b899def12de10c70999d453a62d09ed0eec7e6f059a55a7e3f8e4b3ef248205\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:47:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/hos
t/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ckwx4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T03:47:09Z\\\"}}\" for pod \"openshift-multus\"/\"multus-q9q8q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify 
certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:47:11Z is after 2025-08-24T17:21:41Z" Jan 31 03:47:11 crc kubenswrapper[4827]: I0131 03:47:11.492918 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-hj2zw" Jan 31 03:47:11 crc kubenswrapper[4827]: W0131 03:47:11.509071 4827 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podda9e7773_a24b_4e8d_b479_97e2594db0d4.slice/crio-0c444b232675ca762b9f3eab59ec84cb7f6dfaa929886cbc5df072a80133ff73 WatchSource:0}: Error finding container 0c444b232675ca762b9f3eab59ec84cb7f6dfaa929886cbc5df072a80133ff73: Status 404 returned error can't find the container with id 0c444b232675ca762b9f3eab59ec84cb7f6dfaa929886cbc5df072a80133ff73 Jan 31 03:47:11 crc kubenswrapper[4827]: I0131 03:47:11.521941 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-gjc5t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b5dbff7a-4ed0-4c17-bd01-1888199225b3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:09Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy 
whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:09Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:09Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fs4bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c615ec47f98d56124c11d46f6a3875b5718974234f752615411e14b5d8913dcc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"container
ID\\\":\\\"cri-o://c615ec47f98d56124c11d46f6a3875b5718974234f752615411e14b5d8913dcc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T03:47:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T03:47:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fs4bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fs4bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"rea
dy\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fs4bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fs4bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host
/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fs4bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fs4bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T03:47:09Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-gjc5t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:47:11Z is after 2025-08-24T17:21:41Z" Jan 31 03:47:11 crc kubenswrapper[4827]: I0131 03:47:11.539384 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4273172d-ec24-4540-85cb-efc58aed3421\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:46:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:46:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:46:48Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:46:48Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:46:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4146d39a1a819389ddaee9e11df66783f6b1b98ee5fb2a244d8ba9ff2ecc88f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:46:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",
\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6b86ba80bd25344b29df1636aa3d8cc4cecb0e5f9c0005b4896d3ee86cd80626\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:46:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8ae08b52cf4a0a6cc4589bbf0f0fbc8527ebac3455b90185928f6fa0b6c52874\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:46:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://125b31c980495c37837eedbbbd38b795fd6e568a429da5c2914799306e7b3a46\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e277
53fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cfc1fe8805b7b20bc4e718ed74d4c2f265e69ed3fdc328c0f1da81450bb78bde\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-31T03:47:02Z\\\",\\\"message\\\":\\\"W0131 03:46:51.337568 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0131 03:46:51.338014 1 crypto.go:601] Generating new CA for check-endpoints-signer@1769831211 cert, and key in /tmp/serving-cert-4099090867/serving-signer.crt, /tmp/serving-cert-4099090867/serving-signer.key\\\\nI0131 03:46:51.617433 1 observer_polling.go:159] Starting file observer\\\\nW0131 03:46:51.621205 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0131 03:46:51.621653 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0131 03:46:51.625463 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-4099090867/tls.crt::/tmp/serving-cert-4099090867/tls.key\\\\\\\"\\\\nF0131 03:47:02.085590 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake 
timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T03:46:51Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:47:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e3b2757893ba910c36404f09734ea47eee3ac5a8cfcc4c06ee4ed2ada48937b1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:46:51Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f40eed75b4eac164f95a4e1a719fe3e1da60abf85d533da313ceeb6f4fb07efd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f40eed75b4eac164f95a4e1a719fe3e1da60abf85d533da313ceeb6f4fb07efd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T03:46:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"
startedAt\\\":\\\"2026-01-31T03:46:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T03:46:48Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:47:11Z is after 2025-08-24T17:21:41Z" Jan 31 03:47:11 crc kubenswrapper[4827]: I0131 03:47:11.553115 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-w7v8l" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c10db775-d306-4f15-97dd-b1dfed7c89e5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cbc16c20edbd99491eea6e34116e6014fd587c40800c87bff9643e9c0cb6e185\\\",\\\"image\\\":\\\"quay.io/openshift-rel
ease-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:47:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4ch9g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T03:47:09Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-w7v8l\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:47:11Z is after 2025-08-24T17:21:41Z" Jan 31 03:47:11 crc kubenswrapper[4827]: I0131 03:47:11.571468 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:08Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:47:11Z is after 2025-08-24T17:21:41Z" Jan 31 03:47:11 crc kubenswrapper[4827]: I0131 03:47:11.580159 4827 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/node-ca-cl9c5"] Jan 31 03:47:11 crc kubenswrapper[4827]: I0131 03:47:11.580717 4827 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/node-ca-cl9c5" Jan 31 03:47:11 crc kubenswrapper[4827]: I0131 03:47:11.582710 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"node-ca-dockercfg-4777p" Jan 31 03:47:11 crc kubenswrapper[4827]: I0131 03:47:11.584046 4827 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt" Jan 31 03:47:11 crc kubenswrapper[4827]: I0131 03:47:11.584289 4827 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt" Jan 31 03:47:11 crc kubenswrapper[4827]: I0131 03:47:11.584476 4827 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"image-registry-certificates" Jan 31 03:47:11 crc kubenswrapper[4827]: I0131 03:47:11.587331 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:08Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:47:11Z is after 2025-08-24T17:21:41Z" Jan 31 03:47:11 crc kubenswrapper[4827]: I0131 03:47:11.604172 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a0fbdd57c20e0d5bda95b44f22f117b07306ef48d75d372d958559048474af9b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:47:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-31T03:47:11Z is after 2025-08-24T17:21:41Z" Jan 31 03:47:11 crc kubenswrapper[4827]: I0131 03:47:11.617814 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:11Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:11Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://955a3f09f24522656e31cfabe08613a5129fe3d1d0693756697e3ef0469e062a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:47:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\
\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:47:11Z is after 2025-08-24T17:21:41Z" Jan 31 03:47:11 crc kubenswrapper[4827]: I0131 03:47:11.630655 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:08Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:47:11Z is after 2025-08-24T17:21:41Z" Jan 31 03:47:11 crc kubenswrapper[4827]: I0131 03:47:11.644256 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://65119775a6ab2e8f81a0443d8093e0570c2c49a1b081b7b694780d07eba7ef4d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:47:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0fdf29d063e0940532d696d183771e56ce37b7b75f82548bf7ff5165ab20ccff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:47:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:47:11Z is after 2025-08-24T17:21:41Z" Jan 31 03:47:11 crc kubenswrapper[4827]: I0131 03:47:11.653769 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/77746487-d08f-4da6-82a3-bc7d8845841a-serviceca\") pod \"node-ca-cl9c5\" (UID: \"77746487-d08f-4da6-82a3-bc7d8845841a\") " pod="openshift-image-registry/node-ca-cl9c5" Jan 31 03:47:11 crc kubenswrapper[4827]: I0131 03:47:11.653845 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5xwfq\" (UniqueName: \"kubernetes.io/projected/77746487-d08f-4da6-82a3-bc7d8845841a-kube-api-access-5xwfq\") pod \"node-ca-cl9c5\" (UID: \"77746487-d08f-4da6-82a3-bc7d8845841a\") " pod="openshift-image-registry/node-ca-cl9c5" Jan 31 03:47:11 crc kubenswrapper[4827]: I0131 03:47:11.653876 4827 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/77746487-d08f-4da6-82a3-bc7d8845841a-host\") pod \"node-ca-cl9c5\" (UID: \"77746487-d08f-4da6-82a3-bc7d8845841a\") " pod="openshift-image-registry/node-ca-cl9c5" Jan 31 03:47:11 crc kubenswrapper[4827]: I0131 03:47:11.664756 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hj2zw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"da9e7773-a24b-4e8d-b479-97e2594db0d4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:09Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:09Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:09Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:09Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mt4z5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mt4z5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mt4z5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mt4z5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mt4z5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mt4z5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mt4z5\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mt4z5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mt4z5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T03:47:09Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-hj2zw\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:47:11Z is after 2025-08-24T17:21:41Z" Jan 31 03:47:11 crc kubenswrapper[4827]: I0131 03:47:11.678058 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:08Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:47:11Z is after 2025-08-24T17:21:41Z" Jan 31 03:47:11 crc kubenswrapper[4827]: I0131 03:47:11.695328 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a0fbdd57c20e0d5bda95b44f22f117b07306ef48d75d372d958559048474af9b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:47:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-31T03:47:11Z is after 2025-08-24T17:21:41Z" Jan 31 03:47:11 crc kubenswrapper[4827]: I0131 03:47:11.710483 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:11Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:11Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://955a3f09f24522656e31cfabe08613a5129fe3d1d0693756697e3ef0469e062a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:47:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\
\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:47:11Z is after 2025-08-24T17:21:41Z" Jan 31 03:47:11 crc kubenswrapper[4827]: I0131 03:47:11.728364 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://65119775a6ab2e8f81a0443d8093e0570c2c49a1b081b7b694780d07eba7ef4d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:47:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\
\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0fdf29d063e0940532d696d183771e56ce37b7b75f82548bf7ff5165ab20ccff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:47:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:47:11Z is after 2025-08-24T17:21:41Z" Jan 31 03:47:11 crc kubenswrapper[4827]: I0131 03:47:11.753096 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hj2zw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"da9e7773-a24b-4e8d-b479-97e2594db0d4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:09Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:09Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:09Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:09Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mt4z5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mt4z5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mt4z5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mt4z5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mt4z5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mt4z5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mt4z5\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mt4z5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mt4z5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T03:47:09Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-hj2zw\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:47:11Z is after 2025-08-24T17:21:41Z" Jan 31 03:47:11 crc kubenswrapper[4827]: I0131 03:47:11.754394 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 31 03:47:11 crc kubenswrapper[4827]: E0131 03:47:11.754608 4827 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-31 03:47:15.754576358 +0000 UTC m=+28.441656817 (durationBeforeRetry 4s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 03:47:11 crc kubenswrapper[4827]: I0131 03:47:11.754693 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 03:47:11 crc kubenswrapper[4827]: I0131 03:47:11.754760 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 31 03:47:11 crc kubenswrapper[4827]: I0131 03:47:11.754804 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 03:47:11 crc kubenswrapper[4827]: I0131 03:47:11.754850 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serviceca\" (UniqueName: 
\"kubernetes.io/configmap/77746487-d08f-4da6-82a3-bc7d8845841a-serviceca\") pod \"node-ca-cl9c5\" (UID: \"77746487-d08f-4da6-82a3-bc7d8845841a\") " pod="openshift-image-registry/node-ca-cl9c5" Jan 31 03:47:11 crc kubenswrapper[4827]: I0131 03:47:11.754952 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5xwfq\" (UniqueName: \"kubernetes.io/projected/77746487-d08f-4da6-82a3-bc7d8845841a-kube-api-access-5xwfq\") pod \"node-ca-cl9c5\" (UID: \"77746487-d08f-4da6-82a3-bc7d8845841a\") " pod="openshift-image-registry/node-ca-cl9c5" Jan 31 03:47:11 crc kubenswrapper[4827]: E0131 03:47:11.754953 4827 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 31 03:47:11 crc kubenswrapper[4827]: I0131 03:47:11.755004 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/77746487-d08f-4da6-82a3-bc7d8845841a-host\") pod \"node-ca-cl9c5\" (UID: \"77746487-d08f-4da6-82a3-bc7d8845841a\") " pod="openshift-image-registry/node-ca-cl9c5" Jan 31 03:47:11 crc kubenswrapper[4827]: I0131 03:47:11.755037 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 31 03:47:11 crc kubenswrapper[4827]: E0131 03:47:11.755060 4827 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-31 03:47:15.755032192 +0000 UTC m=+28.442112681 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 31 03:47:11 crc kubenswrapper[4827]: E0131 03:47:11.755285 4827 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 31 03:47:11 crc kubenswrapper[4827]: E0131 03:47:11.755750 4827 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 31 03:47:11 crc kubenswrapper[4827]: E0131 03:47:11.755768 4827 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 31 03:47:11 crc kubenswrapper[4827]: E0131 03:47:11.755823 4827 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-31 03:47:15.755811176 +0000 UTC m=+28.442891635 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 31 03:47:11 crc kubenswrapper[4827]: E0131 03:47:11.755948 4827 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 31 03:47:11 crc kubenswrapper[4827]: I0131 03:47:11.755944 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/77746487-d08f-4da6-82a3-bc7d8845841a-host\") pod \"node-ca-cl9c5\" (UID: \"77746487-d08f-4da6-82a3-bc7d8845841a\") " pod="openshift-image-registry/node-ca-cl9c5" Jan 31 03:47:11 crc kubenswrapper[4827]: E0131 03:47:11.755994 4827 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-31 03:47:15.755982531 +0000 UTC m=+28.443063000 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 31 03:47:11 crc kubenswrapper[4827]: E0131 03:47:11.756055 4827 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 31 03:47:11 crc kubenswrapper[4827]: E0131 03:47:11.756082 4827 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 31 03:47:11 crc kubenswrapper[4827]: E0131 03:47:11.756096 4827 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 31 03:47:11 crc kubenswrapper[4827]: E0131 03:47:11.756166 4827 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-31 03:47:15.756141076 +0000 UTC m=+28.443221525 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 31 03:47:11 crc kubenswrapper[4827]: I0131 03:47:11.758243 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/77746487-d08f-4da6-82a3-bc7d8845841a-serviceca\") pod \"node-ca-cl9c5\" (UID: \"77746487-d08f-4da6-82a3-bc7d8845841a\") " pod="openshift-image-registry/node-ca-cl9c5" Jan 31 03:47:11 crc kubenswrapper[4827]: I0131 03:47:11.770536 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-cl9c5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"77746487-d08f-4da6-82a3-bc7d8845841a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:11Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:11Z\\\",\\\"message\\\":\\\"containers with unready 
status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5xwfq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T03:47:11Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-cl9c5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:47:11Z is after 2025-08-24T17:21:41Z" Jan 31 03:47:11 crc kubenswrapper[4827]: I0131 03:47:11.785512 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5xwfq\" (UniqueName: \"kubernetes.io/projected/77746487-d08f-4da6-82a3-bc7d8845841a-kube-api-access-5xwfq\") pod \"node-ca-cl9c5\" (UID: \"77746487-d08f-4da6-82a3-bc7d8845841a\") " pod="openshift-image-registry/node-ca-cl9c5" Jan 31 03:47:11 crc kubenswrapper[4827]: I0131 03:47:11.792220 4827 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-machine-config-operator/machine-config-daemon-jxh94" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e63dbb73-e1a2-4796-83c5-2a88e55566b5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://65bf7844256848a99420ff2a52724a942bd9ee07e0e587b818429ac544865232\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:47:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-94gkn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\
\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://00b8856a5712a7470f8bc58fd07644c58113fde5a273faa89586f36381942c50\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:47:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-94gkn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T03:47:09Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-jxh94\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:47:11Z is after 2025-08-24T17:21:41Z" Jan 31 03:47:11 crc kubenswrapper[4827]: I0131 03:47:11.811259 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-q9q8q" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a696063c-4553-4032-8038-9900f09d4031\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3b899def12de10c70999d453a62d09ed0eec7e6f059a55a7e3f8e4b3ef248205\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:47:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ckwx4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T03:47:09Z\\\"}}\" for pod \"openshift-multus\"/\"multus-q9q8q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:47:11Z is after 2025-08-24T17:21:41Z" Jan 31 03:47:11 crc kubenswrapper[4827]: I0131 03:47:11.831506 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-gjc5t" err="failed 
to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b5dbff7a-4ed0-4c17-bd01-1888199225b3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:09Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:09Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:09Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fs4bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c615ec47f98d56124c11d46f6a3875b5718974234f752615411e14b5d8913dcc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c615ec47f98d56124c11d46f6a3875b5718974234f752615411e14b5d8913dcc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T03:47:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T03:47:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fs4bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fs4bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube
-api-access-fs4bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fs4bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fs4bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\
\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fs4bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T03:47:09Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-gjc5t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:47:11Z is after 2025-08-24T17:21:41Z" Jan 31 03:47:11 crc kubenswrapper[4827]: I0131 03:47:11.849405 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4273172d-ec24-4540-85cb-efc58aed3421\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:46:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:46:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:46:48Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:46:48Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:46:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4146d39a1a819389ddaee9e11df66783f6b1b98ee5fb2a244d8ba9ff2ecc88f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:46:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6b86ba80bd25344b29df1636aa3d8cc4cecb0e5f9c0005b4896d3ee86cd80626\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"
restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:46:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8ae08b52cf4a0a6cc4589bbf0f0fbc8527ebac3455b90185928f6fa0b6c52874\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:46:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://125b31c980495c37837eedbbbd38b795fd6e568a429da5c2914799306e7b3a46\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cfc1fe8805b7b20bc4e718ed74d4c2f265e69ed3fdc328c0f1da81450bb78bde\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-31T03:47:02Z\\\",\\\"message\\\":\\\"W0131 03:46:51.337568 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0131 03:46:51.338014 1 crypto.go:601] Generating new CA for check-endpoints-signer@1769831211 cert, and key in /tmp/serving-cert-4099090867/serving-signer.crt, 
/tmp/serving-cert-4099090867/serving-signer.key\\\\nI0131 03:46:51.617433 1 observer_polling.go:159] Starting file observer\\\\nW0131 03:46:51.621205 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0131 03:46:51.621653 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0131 03:46:51.625463 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-4099090867/tls.crt::/tmp/serving-cert-4099090867/tls.key\\\\\\\"\\\\nF0131 03:47:02.085590 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake 
timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T03:46:51Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:47:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e3b2757893ba910c36404f09734ea47eee3ac5a8cfcc4c06ee4ed2ada48937b1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:46:51Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f40eed75b4eac164f95a4e1a719fe3e1da60abf85d533da313ceeb6f4fb07efd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f40eed75b4eac164f95a4e1a719fe3e1da60abf85d533da313ceeb6f4fb07efd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T03:46:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"
startedAt\\\":\\\"2026-01-31T03:46:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T03:46:48Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:47:11Z is after 2025-08-24T17:21:41Z" Jan 31 03:47:11 crc kubenswrapper[4827]: I0131 03:47:11.870275 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:08Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:47:11Z is after 2025-08-24T17:21:41Z" Jan 31 03:47:11 crc kubenswrapper[4827]: I0131 03:47:11.886530 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:08Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:47:11Z is after 2025-08-24T17:21:41Z" Jan 31 03:47:11 crc kubenswrapper[4827]: I0131 03:47:11.899632 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-w7v8l" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c10db775-d306-4f15-97dd-b1dfed7c89e5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cbc16c20edbd99491eea6e34116e6014fd587c40800c87bff9643e9c0cb6e185\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:47:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4ch9g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T03:47:09Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-w7v8l\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:47:11Z is after 2025-08-24T17:21:41Z" Jan 31 03:47:12 crc kubenswrapper[4827]: I0131 03:47:12.065338 4827 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-29 15:44:04.728843418 +0000 UTC Jan 31 03:47:12 crc kubenswrapper[4827]: I0131 03:47:12.068089 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/node-ca-cl9c5" Jan 31 03:47:12 crc kubenswrapper[4827]: W0131 03:47:12.088114 4827 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod77746487_d08f_4da6_82a3_bc7d8845841a.slice/crio-3d887971d87d36b8812a0d0894dd601a2c094fa54b65e67807552baf320f1599 WatchSource:0}: Error finding container 3d887971d87d36b8812a0d0894dd601a2c094fa54b65e67807552baf320f1599: Status 404 returned error can't find the container with id 3d887971d87d36b8812a0d0894dd601a2c094fa54b65e67807552baf320f1599 Jan 31 03:47:12 crc kubenswrapper[4827]: I0131 03:47:12.109735 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 31 03:47:12 crc kubenswrapper[4827]: I0131 03:47:12.109906 4827 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 03:47:12 crc kubenswrapper[4827]: E0131 03:47:12.109974 4827 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 31 03:47:12 crc kubenswrapper[4827]: E0131 03:47:12.110087 4827 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 31 03:47:12 crc kubenswrapper[4827]: I0131 03:47:12.274558 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-cl9c5" event={"ID":"77746487-d08f-4da6-82a3-bc7d8845841a","Type":"ContainerStarted","Data":"3d887971d87d36b8812a0d0894dd601a2c094fa54b65e67807552baf320f1599"} Jan 31 03:47:12 crc kubenswrapper[4827]: I0131 03:47:12.279027 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-gjc5t" event={"ID":"b5dbff7a-4ed0-4c17-bd01-1888199225b3","Type":"ContainerStarted","Data":"c9d2a6506fd6b67003ff94a906cb0c74a7eefa9f389d5fe3929fcb743b877f8a"} Jan 31 03:47:12 crc kubenswrapper[4827]: I0131 03:47:12.289747 4827 generic.go:334] "Generic (PLEG): container finished" podID="da9e7773-a24b-4e8d-b479-97e2594db0d4" containerID="96ea399662528dfc8c4bac4c2b710685cb5897e67739704548ca01353d72672c" exitCode=0 Jan 31 03:47:12 crc kubenswrapper[4827]: 
I0131 03:47:12.289841 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hj2zw" event={"ID":"da9e7773-a24b-4e8d-b479-97e2594db0d4","Type":"ContainerDied","Data":"96ea399662528dfc8c4bac4c2b710685cb5897e67739704548ca01353d72672c"} Jan 31 03:47:12 crc kubenswrapper[4827]: I0131 03:47:12.289934 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hj2zw" event={"ID":"da9e7773-a24b-4e8d-b479-97e2594db0d4","Type":"ContainerStarted","Data":"0c444b232675ca762b9f3eab59ec84cb7f6dfaa929886cbc5df072a80133ff73"} Jan 31 03:47:12 crc kubenswrapper[4827]: I0131 03:47:12.298590 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://65119775a6ab2e8f81a0443d8093e0570c2c49a1b081b7b694780d07eba7ef4d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\
"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:47:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0fdf29d063e0940532d696d183771e56ce37b7b75f82548bf7ff5165ab20ccff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:47:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:47:12Z is after 2025-08-24T17:21:41Z" Jan 31 03:47:12 crc kubenswrapper[4827]: I0131 03:47:12.319506 4827 status_manager.go:875] "Failed to update 
status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hj2zw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"da9e7773-a24b-4e8d-b479-97e2594db0d4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:09Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:09Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:09Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:09Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mt4z5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mt4z5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mt4z5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mt4z5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mt4z5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mt4z5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mt4z5\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mt4z5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mt4z5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T03:47:09Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-hj2zw\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:47:12Z is after 2025-08-24T17:21:41Z" Jan 31 03:47:12 crc kubenswrapper[4827]: I0131 03:47:12.351331 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-cl9c5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"77746487-d08f-4da6-82a3-bc7d8845841a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:11Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:11Z\\\",\\\"message\\\":\\\"containers with unready status: 
[node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5xwfq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T03:47:11Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-cl9c5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:47:12Z is after 2025-08-24T17:21:41Z" Jan 31 03:47:12 crc kubenswrapper[4827]: I0131 03:47:12.389293 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-jxh94" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e63dbb73-e1a2-4796-83c5-2a88e55566b5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://65bf7844256848a99420ff2a52724a942bd9ee07e0e587b818429ac544865232\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:47:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-94gkn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://00b8856a5712a7470f8bc58fd07644c58113fde5
a273faa89586f36381942c50\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:47:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-94gkn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T03:47:09Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-jxh94\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:47:12Z is after 2025-08-24T17:21:41Z" Jan 31 03:47:12 crc kubenswrapper[4827]: I0131 03:47:12.410094 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-q9q8q" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a696063c-4553-4032-8038-9900f09d4031\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3b899def12de10c70999d453a62d09ed0eec7e6f059a55a7e3f8e4b3ef248205\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:47:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ckwx4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T03:47:09Z\\\"}}\" for pod \"openshift-multus\"/\"multus-q9q8q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:47:12Z is after 2025-08-24T17:21:41Z" Jan 31 03:47:12 crc kubenswrapper[4827]: I0131 03:47:12.442107 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-gjc5t" err="failed 
to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b5dbff7a-4ed0-4c17-bd01-1888199225b3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:09Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:09Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:09Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fs4bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c615ec47f98d56124c11d46f6a3875b5718974234f752615411e14b5d8913dcc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c615ec47f98d56124c11d46f6a3875b5718974234f752615411e14b5d8913dcc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T03:47:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T03:47:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fs4bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c9d2a6506fd6b67003ff94a906cb0c74a7eefa9f389d5fe3929fcb743b877f8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:47:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fs4bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath
\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fs4bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fs4bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fs4bx\\\",\\\"readOn
ly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fs4bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T03:47:09Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-gjc5t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:47:12Z is after 2025-08-24T17:21:41Z" Jan 31 03:47:12 crc kubenswrapper[4827]: I0131 03:47:12.462721 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4273172d-ec24-4540-85cb-efc58aed3421\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:46:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:46:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:46:48Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:46:48Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:46:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4146d39a1a819389ddaee9e11df66783f6b1b98ee5fb2a244d8ba9ff2ecc88f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:46:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",
\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6b86ba80bd25344b29df1636aa3d8cc4cecb0e5f9c0005b4896d3ee86cd80626\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:46:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8ae08b52cf4a0a6cc4589bbf0f0fbc8527ebac3455b90185928f6fa0b6c52874\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:46:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://125b31c980495c37837eedbbbd38b795fd6e568a429da5c2914799306e7b3a46\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e277
53fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cfc1fe8805b7b20bc4e718ed74d4c2f265e69ed3fdc328c0f1da81450bb78bde\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-31T03:47:02Z\\\",\\\"message\\\":\\\"W0131 03:46:51.337568 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0131 03:46:51.338014 1 crypto.go:601] Generating new CA for check-endpoints-signer@1769831211 cert, and key in /tmp/serving-cert-4099090867/serving-signer.crt, /tmp/serving-cert-4099090867/serving-signer.key\\\\nI0131 03:46:51.617433 1 observer_polling.go:159] Starting file observer\\\\nW0131 03:46:51.621205 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0131 03:46:51.621653 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0131 03:46:51.625463 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-4099090867/tls.crt::/tmp/serving-cert-4099090867/tls.key\\\\\\\"\\\\nF0131 03:47:02.085590 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake 
timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T03:46:51Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:47:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e3b2757893ba910c36404f09734ea47eee3ac5a8cfcc4c06ee4ed2ada48937b1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:46:51Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f40eed75b4eac164f95a4e1a719fe3e1da60abf85d533da313ceeb6f4fb07efd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f40eed75b4eac164f95a4e1a719fe3e1da60abf85d533da313ceeb6f4fb07efd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T03:46:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"
startedAt\\\":\\\"2026-01-31T03:46:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T03:46:48Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:47:12Z is after 2025-08-24T17:21:41Z" Jan 31 03:47:12 crc kubenswrapper[4827]: I0131 03:47:12.481717 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:08Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:47:12Z is after 2025-08-24T17:21:41Z" Jan 31 03:47:12 crc kubenswrapper[4827]: I0131 03:47:12.493602 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:08Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:47:12Z is after 2025-08-24T17:21:41Z" Jan 31 03:47:12 crc kubenswrapper[4827]: I0131 03:47:12.507396 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-w7v8l" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c10db775-d306-4f15-97dd-b1dfed7c89e5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cbc16c20edbd99491eea6e34116e6014fd587c40800c87bff9643e9c0cb6e185\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:47:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4ch9g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T03:47:09Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-w7v8l\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:47:12Z is after 2025-08-24T17:21:41Z" Jan 31 03:47:12 crc kubenswrapper[4827]: I0131 03:47:12.518037 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:08Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:47:12Z is after 2025-08-24T17:21:41Z" Jan 31 03:47:12 crc kubenswrapper[4827]: I0131 03:47:12.530913 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a0fbdd57c20e0d5bda95b44f22f117b07306ef48d75d372d958559048474af9b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:47:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-31T03:47:12Z is after 2025-08-24T17:21:41Z" Jan 31 03:47:12 crc kubenswrapper[4827]: I0131 03:47:12.544516 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:11Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:11Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://955a3f09f24522656e31cfabe08613a5129fe3d1d0693756697e3ef0469e062a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:47:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:47:12Z is after 2025-08-24T17:21:41Z" Jan 31 03:47:12 crc kubenswrapper[4827]: I0131 03:47:12.554635 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-w7v8l" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c10db775-d306-4f15-97dd-b1dfed7c89e5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cbc16c20edbd99491eea6e34116e6014fd587c40800c87bff9643e9c0cb6e185\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:47:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4ch9g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T03:47:09Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-w7v8l\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:47:12Z is after 2025-08-24T17:21:41Z" Jan 31 03:47:12 crc kubenswrapper[4827]: I0131 03:47:12.568485 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:08Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:47:12Z is after 2025-08-24T17:21:41Z" Jan 31 03:47:12 crc kubenswrapper[4827]: I0131 03:47:12.583808 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:08Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:47:12Z is after 2025-08-24T17:21:41Z" Jan 31 03:47:12 crc kubenswrapper[4827]: I0131 03:47:12.599428 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a0fbdd57c20e0d5bda95b44f22f117b07306ef48d75d372d958559048474af9b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:47:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-31T03:47:12Z is after 2025-08-24T17:21:41Z" Jan 31 03:47:12 crc kubenswrapper[4827]: I0131 03:47:12.612075 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:11Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:11Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://955a3f09f24522656e31cfabe08613a5129fe3d1d0693756697e3ef0469e062a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:47:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:47:12Z is after 2025-08-24T17:21:41Z" Jan 31 03:47:12 crc kubenswrapper[4827]: I0131 03:47:12.625541 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:08Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:47:12Z is after 2025-08-24T17:21:41Z" Jan 31 03:47:12 crc kubenswrapper[4827]: I0131 03:47:12.636044 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-cl9c5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"77746487-d08f-4da6-82a3-bc7d8845841a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:11Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:11Z\\\",\\\"message\\\":\\\"containers with unready status: 
[node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5xwfq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T03:47:11Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-cl9c5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:47:12Z is after 2025-08-24T17:21:41Z" Jan 31 03:47:12 crc kubenswrapper[4827]: I0131 03:47:12.648249 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://65119775a6ab2e8f81a0443d8093e0570c2c49a1b081b7b694780d07eba7ef4d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:47:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0fdf29d063e0940532d696d183771e56ce37b7b75f82548bf7ff5165ab20ccff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:47:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:47:12Z is after 2025-08-24T17:21:41Z" Jan 31 03:47:12 crc kubenswrapper[4827]: I0131 03:47:12.668752 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hj2zw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"da9e7773-a24b-4e8d-b479-97e2594db0d4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:09Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:09Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mt4z5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mt4z5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mt4z5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mt4z5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mt4z5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mt4z5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mt4z5\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mt4z5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://96ea399662528dfc8c4bac4c2b710685cb5897e67739704548ca01353d72672c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://96ea399662528dfc8c4bac4c2b710685cb5897e67739704548ca01353d72672c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T03:47:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T03:47:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mt4z5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T03:47:09Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-hj2zw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:47:12Z is after 2025-08-24T17:21:41Z" Jan 31 03:47:12 crc kubenswrapper[4827]: I0131 03:47:12.681555 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-jxh94" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e63dbb73-e1a2-4796-83c5-2a88e55566b5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://65bf7844256848a99420ff2a52724a942bd9ee07e0e587b818429ac544865232\\\",\\\"image\\\":\\\
"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:47:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-94gkn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://00b8856a5712a7470f8bc58fd07644c58113fde5a273faa89586f36381942c50\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:47:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-94gkn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T03:47:09Z\\\"}}\" for pod 
\"openshift-machine-config-operator\"/\"machine-config-daemon-jxh94\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:47:12Z is after 2025-08-24T17:21:41Z" Jan 31 03:47:12 crc kubenswrapper[4827]: I0131 03:47:12.720694 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-q9q8q" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a696063c-4553-4032-8038-9900f09d4031\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3b899def12de10c70999d453a62d09ed0eec7e6f059a55a7e3f8e4b3ef248205\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,
\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:47:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ckwx4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T0
3:47:09Z\\\"}}\" for pod \"openshift-multus\"/\"multus-q9q8q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:47:12Z is after 2025-08-24T17:21:41Z" Jan 31 03:47:12 crc kubenswrapper[4827]: I0131 03:47:12.764168 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-gjc5t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b5dbff7a-4ed0-4c17-bd01-1888199225b3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:09Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:09Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:09Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fs4bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c615ec47f98d56124c11d46f6a3875b5718974234f752615411e14b5d8913dcc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c615ec47f98d56124c11d46f6a3875b5718974234f752615411e14b5d8913dcc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T03:47:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T03:47:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fs4bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c9d2a6506fd6b67003ff94a906cb0c74a7eefa9f389d5fe3929fcb743b877f8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:47:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fs4bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath
\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fs4bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fs4bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fs4bx\\\",\\\"readOn
ly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fs4bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T03:47:09Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-gjc5t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:47:12Z is after 2025-08-24T17:21:41Z" Jan 31 03:47:12 crc kubenswrapper[4827]: I0131 03:47:12.799853 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4273172d-ec24-4540-85cb-efc58aed3421\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:46:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:46:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:46:48Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:46:48Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:46:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4146d39a1a819389ddaee9e11df66783f6b1b98ee5fb2a244d8ba9ff2ecc88f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:46:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",
\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6b86ba80bd25344b29df1636aa3d8cc4cecb0e5f9c0005b4896d3ee86cd80626\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:46:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8ae08b52cf4a0a6cc4589bbf0f0fbc8527ebac3455b90185928f6fa0b6c52874\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:46:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://125b31c980495c37837eedbbbd38b795fd6e568a429da5c2914799306e7b3a46\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e277
53fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cfc1fe8805b7b20bc4e718ed74d4c2f265e69ed3fdc328c0f1da81450bb78bde\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-31T03:47:02Z\\\",\\\"message\\\":\\\"W0131 03:46:51.337568 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0131 03:46:51.338014 1 crypto.go:601] Generating new CA for check-endpoints-signer@1769831211 cert, and key in /tmp/serving-cert-4099090867/serving-signer.crt, /tmp/serving-cert-4099090867/serving-signer.key\\\\nI0131 03:46:51.617433 1 observer_polling.go:159] Starting file observer\\\\nW0131 03:46:51.621205 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0131 03:46:51.621653 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0131 03:46:51.625463 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-4099090867/tls.crt::/tmp/serving-cert-4099090867/tls.key\\\\\\\"\\\\nF0131 03:47:02.085590 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake 
timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T03:46:51Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:47:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e3b2757893ba910c36404f09734ea47eee3ac5a8cfcc4c06ee4ed2ada48937b1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:46:51Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f40eed75b4eac164f95a4e1a719fe3e1da60abf85d533da313ceeb6f4fb07efd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f40eed75b4eac164f95a4e1a719fe3e1da60abf85d533da313ceeb6f4fb07efd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T03:46:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"
startedAt\\\":\\\"2026-01-31T03:46:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T03:46:48Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:47:12Z is after 2025-08-24T17:21:41Z" Jan 31 03:47:13 crc kubenswrapper[4827]: I0131 03:47:13.067526 4827 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-28 00:10:23.135014196 +0000 UTC Jan 31 03:47:13 crc kubenswrapper[4827]: I0131 03:47:13.109273 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 31 03:47:13 crc kubenswrapper[4827]: E0131 03:47:13.109460 4827 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 31 03:47:13 crc kubenswrapper[4827]: I0131 03:47:13.297143 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hj2zw" event={"ID":"da9e7773-a24b-4e8d-b479-97e2594db0d4","Type":"ContainerStarted","Data":"70200918240e4bc9d8f7fcb1d95be3721faa9394058a1a9671da44d9b7a915e6"} Jan 31 03:47:13 crc kubenswrapper[4827]: I0131 03:47:13.297450 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hj2zw" event={"ID":"da9e7773-a24b-4e8d-b479-97e2594db0d4","Type":"ContainerStarted","Data":"32e69930c4476bd3a2f767f012fb4a2e00de7d46da936c6f0d31029994108e48"} Jan 31 03:47:13 crc kubenswrapper[4827]: I0131 03:47:13.297464 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hj2zw" event={"ID":"da9e7773-a24b-4e8d-b479-97e2594db0d4","Type":"ContainerStarted","Data":"2485bd5d17713b9b9184c6b56b1dfda19a229476eabd904d3eda350c30892753"} Jan 31 03:47:13 crc kubenswrapper[4827]: I0131 03:47:13.297477 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hj2zw" event={"ID":"da9e7773-a24b-4e8d-b479-97e2594db0d4","Type":"ContainerStarted","Data":"bb9ced0bc1beedf1c5779410e90fa61227f5df3de7fa950ab69e1ef44b4a4918"} Jan 31 03:47:13 crc kubenswrapper[4827]: I0131 03:47:13.297488 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hj2zw" event={"ID":"da9e7773-a24b-4e8d-b479-97e2594db0d4","Type":"ContainerStarted","Data":"5ae8d3d61d2579c0afddd6c9728935c463fec8760a9fc830473c905de5f40c44"} Jan 31 03:47:13 crc kubenswrapper[4827]: I0131 03:47:13.297500 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hj2zw" 
event={"ID":"da9e7773-a24b-4e8d-b479-97e2594db0d4","Type":"ContainerStarted","Data":"ed1f4882b03bd25a6073ed3ddeb73d5a0da61c8a8af3ebfab5c90a83ce6af80f"} Jan 31 03:47:13 crc kubenswrapper[4827]: I0131 03:47:13.299762 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-cl9c5" event={"ID":"77746487-d08f-4da6-82a3-bc7d8845841a","Type":"ContainerStarted","Data":"5098b5f6a44c89953722806e420dc66eaad66f11faf1c5b526bfc8ebf87364d4"} Jan 31 03:47:13 crc kubenswrapper[4827]: I0131 03:47:13.302499 4827 generic.go:334] "Generic (PLEG): container finished" podID="b5dbff7a-4ed0-4c17-bd01-1888199225b3" containerID="c9d2a6506fd6b67003ff94a906cb0c74a7eefa9f389d5fe3929fcb743b877f8a" exitCode=0 Jan 31 03:47:13 crc kubenswrapper[4827]: I0131 03:47:13.302557 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-gjc5t" event={"ID":"b5dbff7a-4ed0-4c17-bd01-1888199225b3","Type":"ContainerDied","Data":"c9d2a6506fd6b67003ff94a906cb0c74a7eefa9f389d5fe3929fcb743b877f8a"} Jan 31 03:47:13 crc kubenswrapper[4827]: I0131 03:47:13.326896 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://65119775a6ab2e8f81a0443d8093e0570c2c49a1b081b7b694780d07eba7ef4d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:47:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0fdf29d063e0940532d696d183771e56ce37b7b75f82548bf7ff5165ab20ccff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:47:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:47:13Z is after 2025-08-24T17:21:41Z" Jan 31 03:47:13 crc kubenswrapper[4827]: I0131 03:47:13.360693 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hj2zw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"da9e7773-a24b-4e8d-b479-97e2594db0d4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:09Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:09Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mt4z5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mt4z5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mt4z5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mt4z5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mt4z5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mt4z5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mt4z5\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mt4z5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://96ea399662528dfc8c4bac4c2b710685cb5897e67739704548ca01353d72672c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://96ea399662528dfc8c4bac4c2b710685cb5897e67739704548ca01353d72672c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T03:47:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T03:47:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mt4z5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T03:47:09Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-hj2zw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:47:13Z is after 2025-08-24T17:21:41Z" Jan 31 03:47:13 crc kubenswrapper[4827]: I0131 03:47:13.377692 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-cl9c5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"77746487-d08f-4da6-82a3-bc7d8845841a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5098b5f6a44c89953722806e420dc66eaad66f11faf1c5b526bfc8ebf87364d4\\\",\\\"image\\\":\\\"quay.io/openshift-rele
ase-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:47:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5xwfq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T03:47:11Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-cl9c5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:47:13Z is after 2025-08-24T17:21:41Z" Jan 31 03:47:13 crc kubenswrapper[4827]: I0131 03:47:13.402092 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-gjc5t" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b5dbff7a-4ed0-4c17-bd01-1888199225b3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:09Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:09Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:09Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fs4bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c615ec47f98d56124c11d46f6a3875b5718974234f752615411e14b5d8913dcc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c615ec47f98d56124c11d46f6a3875b5718974234f752615411e14b5d8913dcc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T03:47:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T03:47:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fs4bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c9d2a6506fd6b67003ff94a906cb0c74a7eefa9f389d5fe3929fcb743b877f8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:47:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fs4bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath
\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fs4bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fs4bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fs4bx\\\",\\\"readOn
ly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fs4bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T03:47:09Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-gjc5t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:47:13Z is after 2025-08-24T17:21:41Z" Jan 31 03:47:13 crc kubenswrapper[4827]: I0131 03:47:13.423504 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4273172d-ec24-4540-85cb-efc58aed3421\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:46:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:46:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:46:48Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:46:48Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:46:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4146d39a1a819389ddaee9e11df66783f6b1b98ee5fb2a244d8ba9ff2ecc88f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:46:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",
\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6b86ba80bd25344b29df1636aa3d8cc4cecb0e5f9c0005b4896d3ee86cd80626\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:46:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8ae08b52cf4a0a6cc4589bbf0f0fbc8527ebac3455b90185928f6fa0b6c52874\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:46:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://125b31c980495c37837eedbbbd38b795fd6e568a429da5c2914799306e7b3a46\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e277
53fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cfc1fe8805b7b20bc4e718ed74d4c2f265e69ed3fdc328c0f1da81450bb78bde\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-31T03:47:02Z\\\",\\\"message\\\":\\\"W0131 03:46:51.337568 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0131 03:46:51.338014 1 crypto.go:601] Generating new CA for check-endpoints-signer@1769831211 cert, and key in /tmp/serving-cert-4099090867/serving-signer.crt, /tmp/serving-cert-4099090867/serving-signer.key\\\\nI0131 03:46:51.617433 1 observer_polling.go:159] Starting file observer\\\\nW0131 03:46:51.621205 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0131 03:46:51.621653 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0131 03:46:51.625463 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-4099090867/tls.crt::/tmp/serving-cert-4099090867/tls.key\\\\\\\"\\\\nF0131 03:47:02.085590 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake 
timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T03:46:51Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:47:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e3b2757893ba910c36404f09734ea47eee3ac5a8cfcc4c06ee4ed2ada48937b1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:46:51Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f40eed75b4eac164f95a4e1a719fe3e1da60abf85d533da313ceeb6f4fb07efd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f40eed75b4eac164f95a4e1a719fe3e1da60abf85d533da313ceeb6f4fb07efd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T03:46:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"
startedAt\\\":\\\"2026-01-31T03:46:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T03:46:48Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:47:13Z is after 2025-08-24T17:21:41Z" Jan 31 03:47:13 crc kubenswrapper[4827]: I0131 03:47:13.443847 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-jxh94" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e63dbb73-e1a2-4796-83c5-2a88e55566b5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://65bf7844256848a99420ff2a52724a942bd9ee07e0e587b818429ac544865232\\\",\\\"image\\
\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:47:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-94gkn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://00b8856a5712a7470f8bc58fd07644c58113fde5a273faa89586f36381942c50\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:47:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-94gkn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T03:47:09Z\\\"}}\" for pod 
\"openshift-machine-config-operator\"/\"machine-config-daemon-jxh94\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:47:13Z is after 2025-08-24T17:21:41Z" Jan 31 03:47:13 crc kubenswrapper[4827]: I0131 03:47:13.458550 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-q9q8q" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a696063c-4553-4032-8038-9900f09d4031\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3b899def12de10c70999d453a62d09ed0eec7e6f059a55a7e3f8e4b3ef248205\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,
\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:47:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ckwx4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T0
3:47:09Z\\\"}}\" for pod \"openshift-multus\"/\"multus-q9q8q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:47:13Z is after 2025-08-24T17:21:41Z" Jan 31 03:47:13 crc kubenswrapper[4827]: I0131 03:47:13.477196 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:08Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:47:13Z is after 2025-08-24T17:21:41Z" Jan 31 03:47:13 crc kubenswrapper[4827]: I0131 03:47:13.492694 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:08Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:47:13Z is after 2025-08-24T17:21:41Z" Jan 31 03:47:13 crc kubenswrapper[4827]: I0131 03:47:13.504769 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-w7v8l" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c10db775-d306-4f15-97dd-b1dfed7c89e5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cbc16c20edbd99491eea6e34116e6014fd587c40800c87bff9643e9c0cb6e185\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:47:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4ch9g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T03:47:09Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-w7v8l\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:47:13Z is after 2025-08-24T17:21:41Z" Jan 31 03:47:13 crc kubenswrapper[4827]: I0131 03:47:13.518765 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:08Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:47:13Z is after 2025-08-24T17:21:41Z" Jan 31 03:47:13 crc kubenswrapper[4827]: I0131 03:47:13.531169 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a0fbdd57c20e0d5bda95b44f22f117b07306ef48d75d372d958559048474af9b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:47:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-31T03:47:13Z is after 2025-08-24T17:21:41Z" Jan 31 03:47:13 crc kubenswrapper[4827]: I0131 03:47:13.544945 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:11Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:11Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://955a3f09f24522656e31cfabe08613a5129fe3d1d0693756697e3ef0469e062a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:47:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\
\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:47:13Z is after 2025-08-24T17:21:41Z" Jan 31 03:47:13 crc kubenswrapper[4827]: I0131 03:47:13.557212 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:08Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:47:13Z is after 2025-08-24T17:21:41Z" Jan 31 03:47:13 crc kubenswrapper[4827]: I0131 03:47:13.568136 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:08Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:47:13Z is after 2025-08-24T17:21:41Z" Jan 31 03:47:13 crc kubenswrapper[4827]: I0131 03:47:13.578831 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-w7v8l" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c10db775-d306-4f15-97dd-b1dfed7c89e5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cbc16c20edbd99491eea6e34116e6014fd587c40800c87bff9643e9c0cb6e185\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:47:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4ch9g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T03:47:09Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-w7v8l\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:47:13Z is after 2025-08-24T17:21:41Z" Jan 31 03:47:13 crc kubenswrapper[4827]: I0131 03:47:13.591662 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:08Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:47:13Z is after 2025-08-24T17:21:41Z" Jan 31 03:47:13 crc kubenswrapper[4827]: I0131 03:47:13.605284 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a0fbdd57c20e0d5bda95b44f22f117b07306ef48d75d372d958559048474af9b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:47:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-31T03:47:13Z is after 2025-08-24T17:21:41Z" Jan 31 03:47:13 crc kubenswrapper[4827]: I0131 03:47:13.616834 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:11Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:11Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://955a3f09f24522656e31cfabe08613a5129fe3d1d0693756697e3ef0469e062a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:47:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\
\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:47:13Z is after 2025-08-24T17:21:41Z" Jan 31 03:47:13 crc kubenswrapper[4827]: I0131 03:47:13.632527 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://65119775a6ab2e8f81a0443d8093e0570c2c49a1b081b7b694780d07eba7ef4d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:47:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\
\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0fdf29d063e0940532d696d183771e56ce37b7b75f82548bf7ff5165ab20ccff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:47:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:47:13Z is after 2025-08-24T17:21:41Z" Jan 31 03:47:13 crc kubenswrapper[4827]: I0131 03:47:13.661028 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hj2zw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"da9e7773-a24b-4e8d-b479-97e2594db0d4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:09Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:09Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mt4z5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mt4z5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mt4z5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mt4z5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mt4z5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mt4z5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mt4z5\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mt4z5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://96ea399662528dfc8c4bac4c2b710685cb5897e67739704548ca01353d72672c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://96ea399662528dfc8c4bac4c2b710685cb5897e67739704548ca01353d72672c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T03:47:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T03:47:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mt4z5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T03:47:09Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-hj2zw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:47:13Z is after 2025-08-24T17:21:41Z" Jan 31 03:47:13 crc kubenswrapper[4827]: I0131 03:47:13.675385 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-cl9c5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"77746487-d08f-4da6-82a3-bc7d8845841a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5098b5f6a44c89953722806e420dc66eaad66f11faf1c5b526bfc8ebf87364d4\\\",\\\"image\\\":\\\"quay.io/openshift-rele
ase-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:47:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5xwfq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T03:47:11Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-cl9c5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:47:13Z is after 2025-08-24T17:21:41Z" Jan 31 03:47:13 crc kubenswrapper[4827]: I0131 03:47:13.705364 4827 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 31 03:47:13 crc kubenswrapper[4827]: I0131 03:47:13.710569 4827 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 31 03:47:13 crc kubenswrapper[4827]: I0131 03:47:13.718096 4827 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-machine-config-operator/machine-config-daemon-jxh94" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e63dbb73-e1a2-4796-83c5-2a88e55566b5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://65bf7844256848a99420ff2a52724a942bd9ee07e0e587b818429ac544865232\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:47:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-94gkn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\
\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://00b8856a5712a7470f8bc58fd07644c58113fde5a273faa89586f36381942c50\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:47:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-94gkn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T03:47:09Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-jxh94\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:47:13Z is after 2025-08-24T17:21:41Z" Jan 31 03:47:13 crc kubenswrapper[4827]: I0131 03:47:13.738581 4827 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/kube-controller-manager-crc"] Jan 31 03:47:13 crc kubenswrapper[4827]: I0131 03:47:13.780526 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-q9q8q" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a696063c-4553-4032-8038-9900f09d4031\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3b899def12de10c70999d453a62d09ed0eec7e6f059a55a7e3f8e4b3ef248205\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:47:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ckwx4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T03:47:09Z\\\"}}\" for pod \"openshift-multus\"/\"multus-q9q8q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:47:13Z is after 2025-08-24T17:21:41Z" Jan 31 03:47:13 crc kubenswrapper[4827]: I0131 03:47:13.823195 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-gjc5t" err="failed 
to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b5dbff7a-4ed0-4c17-bd01-1888199225b3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:09Z\\\",\\\"message\\\":\\\"containers with incomplete status: [bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:09Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:09Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fs4bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c615ec47f98d56124c11d46f6a3875b5718974234f752615411e14b5d8913dcc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c615ec47f98d56124c11d46f6a3875b5718974234f752615411e14b5d8913dcc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T03:47:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T03:47:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fs4bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c9d2a6506fd6b67003ff94a906cb0c74a7eefa9f389d5fe3929fcb743b877f8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c9d2a6506fd6b67003ff94a906cb0c74a7eefa9f389d5fe3929fcb743b877f8a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T03:47:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T03:47:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fs4bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodIn
itializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fs4bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fs4bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-
release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fs4bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fs4bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T03:47:09Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-gjc5t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:47:13Z is after 2025-08-24T17:21:41Z" Jan 31 03:47:13 crc kubenswrapper[4827]: I0131 03:47:13.862840 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4273172d-ec24-4540-85cb-efc58aed3421\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:46:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:46:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:46:48Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:46:48Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:46:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4146d39a1a819389ddaee9e11df66783f6b1b98ee5fb2a244d8ba9ff2ecc88f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:46:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",
\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6b86ba80bd25344b29df1636aa3d8cc4cecb0e5f9c0005b4896d3ee86cd80626\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:46:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8ae08b52cf4a0a6cc4589bbf0f0fbc8527ebac3455b90185928f6fa0b6c52874\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:46:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://125b31c980495c37837eedbbbd38b795fd6e568a429da5c2914799306e7b3a46\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e277
53fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cfc1fe8805b7b20bc4e718ed74d4c2f265e69ed3fdc328c0f1da81450bb78bde\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-31T03:47:02Z\\\",\\\"message\\\":\\\"W0131 03:46:51.337568 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0131 03:46:51.338014 1 crypto.go:601] Generating new CA for check-endpoints-signer@1769831211 cert, and key in /tmp/serving-cert-4099090867/serving-signer.crt, /tmp/serving-cert-4099090867/serving-signer.key\\\\nI0131 03:46:51.617433 1 observer_polling.go:159] Starting file observer\\\\nW0131 03:46:51.621205 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0131 03:46:51.621653 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0131 03:46:51.625463 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-4099090867/tls.crt::/tmp/serving-cert-4099090867/tls.key\\\\\\\"\\\\nF0131 03:47:02.085590 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake 
timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T03:46:51Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:47:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e3b2757893ba910c36404f09734ea47eee3ac5a8cfcc4c06ee4ed2ada48937b1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:46:51Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f40eed75b4eac164f95a4e1a719fe3e1da60abf85d533da313ceeb6f4fb07efd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f40eed75b4eac164f95a4e1a719fe3e1da60abf85d533da313ceeb6f4fb07efd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T03:46:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"
startedAt\\\":\\\"2026-01-31T03:46:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T03:46:48Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:47:13Z is after 2025-08-24T17:21:41Z" Jan 31 03:47:13 crc kubenswrapper[4827]: I0131 03:47:13.901828 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:08Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:47:13Z is after 2025-08-24T17:21:41Z" Jan 31 03:47:13 crc kubenswrapper[4827]: I0131 03:47:13.923003 4827 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 31 03:47:13 crc kubenswrapper[4827]: I0131 03:47:13.946129 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:08Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:47:13Z is after 2025-08-24T17:21:41Z" Jan 31 03:47:13 crc kubenswrapper[4827]: I0131 03:47:13.975132 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-w7v8l" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c10db775-d306-4f15-97dd-b1dfed7c89e5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cbc16c20edbd99491eea6e34116e6014fd587c40800c87bff9643e9c0cb6e185\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:47:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4ch9g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T03:47:09Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-w7v8l\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:47:13Z is after 2025-08-24T17:21:41Z" Jan 31 03:47:14 crc kubenswrapper[4827]: I0131 03:47:14.016943 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"64294dab-2843-4893-8b19-59f3ae404e02\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:46:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:46:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:46:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ff49646239dea28804978f1b3b721dcc9454572ff2301d0c92e146dacb2a3e5a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluste
r-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:46:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fb9a8fcde75a532b1906ceea4704ddfecbffeb17456377272e5792f200ee67d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:46:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://73a7c7196fc77ac970ea3114b593db0c9bef943c760526721ab5492975884237\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:46:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kuber
netes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ce197f7aa8a7bebdecdbd760e9029786331bd9497f724b0a57f57278a5b1b04c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:46:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T03:46:48Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:47:14Z is after 2025-08-24T17:21:41Z" Jan 31 03:47:14 crc kubenswrapper[4827]: I0131 03:47:14.058740 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:08Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:47:14Z is after 2025-08-24T17:21:41Z" Jan 31 03:47:14 crc kubenswrapper[4827]: I0131 03:47:14.068060 4827 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-30 16:34:39.269553506 +0000 UTC Jan 31 03:47:14 crc kubenswrapper[4827]: I0131 03:47:14.098572 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a0fbdd57c20e0d5bda95b44f22f117b07306ef48d75d372d958559048474af9b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:47:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-31T03:47:14Z is after 2025-08-24T17:21:41Z" Jan 31 03:47:14 crc kubenswrapper[4827]: I0131 03:47:14.109422 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 31 03:47:14 crc kubenswrapper[4827]: I0131 03:47:14.109463 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 03:47:14 crc kubenswrapper[4827]: E0131 03:47:14.109543 4827 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 31 03:47:14 crc kubenswrapper[4827]: E0131 03:47:14.109817 4827 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 31 03:47:14 crc kubenswrapper[4827]: I0131 03:47:14.139919 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:11Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:11Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://955a3f09f24522656e31cfabe08613a5129fe3d1d0693756697e3ef0469e062a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:47:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveRead
Only\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:47:14Z is after 2025-08-24T17:21:41Z" Jan 31 03:47:14 crc kubenswrapper[4827]: I0131 03:47:14.182294 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://65119775a6ab2e8f81a0443d8093e0570c2c49a1b081b7b694780d07eba7ef4d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:47:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountP
ath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0fdf29d063e0940532d696d183771e56ce37b7b75f82548bf7ff5165ab20ccff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:47:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:47:14Z is after 2025-08-24T17:21:41Z" Jan 31 03:47:14 crc kubenswrapper[4827]: I0131 03:47:14.224590 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hj2zw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"da9e7773-a24b-4e8d-b479-97e2594db0d4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:09Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:09Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mt4z5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mt4z5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mt4z5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mt4z5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mt4z5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mt4z5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mt4z5\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mt4z5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://96ea399662528dfc8c4bac4c2b710685cb5897e67739704548ca01353d72672c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://96ea399662528dfc8c4bac4c2b710685cb5897e67739704548ca01353d72672c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T03:47:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T03:47:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mt4z5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T03:47:09Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-hj2zw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:47:14Z is after 2025-08-24T17:21:41Z" Jan 31 03:47:14 crc kubenswrapper[4827]: I0131 03:47:14.260969 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-cl9c5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"77746487-d08f-4da6-82a3-bc7d8845841a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5098b5f6a44c89953722806e420dc66eaad66f11faf1c5b526bfc8ebf87364d4\\\",\\\"image\\\":\\\"quay.io/openshift-rele
ase-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:47:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5xwfq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T03:47:11Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-cl9c5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:47:14Z is after 2025-08-24T17:21:41Z" Jan 31 03:47:14 crc kubenswrapper[4827]: I0131 03:47:14.302327 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4273172d-ec24-4540-85cb-efc58aed3421\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:46:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:46:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:46:48Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:46:48Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:46:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4146d39a1a819389ddaee9e11df66783f6b1b98ee5fb2a244d8ba9ff2ecc88f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:46:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",
\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6b86ba80bd25344b29df1636aa3d8cc4cecb0e5f9c0005b4896d3ee86cd80626\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:46:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8ae08b52cf4a0a6cc4589bbf0f0fbc8527ebac3455b90185928f6fa0b6c52874\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:46:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://125b31c980495c37837eedbbbd38b795fd6e568a429da5c2914799306e7b3a46\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e277
53fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cfc1fe8805b7b20bc4e718ed74d4c2f265e69ed3fdc328c0f1da81450bb78bde\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-31T03:47:02Z\\\",\\\"message\\\":\\\"W0131 03:46:51.337568 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0131 03:46:51.338014 1 crypto.go:601] Generating new CA for check-endpoints-signer@1769831211 cert, and key in /tmp/serving-cert-4099090867/serving-signer.crt, /tmp/serving-cert-4099090867/serving-signer.key\\\\nI0131 03:46:51.617433 1 observer_polling.go:159] Starting file observer\\\\nW0131 03:46:51.621205 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0131 03:46:51.621653 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0131 03:46:51.625463 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-4099090867/tls.crt::/tmp/serving-cert-4099090867/tls.key\\\\\\\"\\\\nF0131 03:47:02.085590 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake 
timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T03:46:51Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:47:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e3b2757893ba910c36404f09734ea47eee3ac5a8cfcc4c06ee4ed2ada48937b1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:46:51Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f40eed75b4eac164f95a4e1a719fe3e1da60abf85d533da313ceeb6f4fb07efd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f40eed75b4eac164f95a4e1a719fe3e1da60abf85d533da313ceeb6f4fb07efd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T03:46:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"
startedAt\\\":\\\"2026-01-31T03:46:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T03:46:48Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:47:14Z is after 2025-08-24T17:21:41Z" Jan 31 03:47:14 crc kubenswrapper[4827]: I0131 03:47:14.310776 4827 generic.go:334] "Generic (PLEG): container finished" podID="b5dbff7a-4ed0-4c17-bd01-1888199225b3" containerID="f5b9fd1e8570701b512508eb10490fa6c4e5fdc63b7fdb0c4810896a46be65a7" exitCode=0 Jan 31 03:47:14 crc kubenswrapper[4827]: I0131 03:47:14.310866 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-gjc5t" event={"ID":"b5dbff7a-4ed0-4c17-bd01-1888199225b3","Type":"ContainerDied","Data":"f5b9fd1e8570701b512508eb10490fa6c4e5fdc63b7fdb0c4810896a46be65a7"} Jan 31 03:47:14 crc kubenswrapper[4827]: I0131 03:47:14.349037 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-jxh94" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e63dbb73-e1a2-4796-83c5-2a88e55566b5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://65bf7844256848a99420ff2a52724a942bd9ee07e0e587b818429ac544865232\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:47:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-94gkn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://00b8856a5712a7470f8bc58fd07644c58113fde5
a273faa89586f36381942c50\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:47:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-94gkn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T03:47:09Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-jxh94\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:47:14Z is after 2025-08-24T17:21:41Z" Jan 31 03:47:14 crc kubenswrapper[4827]: E0131 03:47:14.357051 4827 kubelet.go:1929] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-crc\" already exists" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 31 03:47:14 crc kubenswrapper[4827]: I0131 03:47:14.389250 4827 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 31 03:47:14 crc kubenswrapper[4827]: I0131 03:47:14.396942 4827 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientMemory" Jan 31 03:47:14 crc kubenswrapper[4827]: I0131 03:47:14.397013 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:47:14 crc kubenswrapper[4827]: I0131 03:47:14.397033 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:47:14 crc kubenswrapper[4827]: I0131 03:47:14.397197 4827 kubelet_node_status.go:76] "Attempting to register node" node="crc" Jan 31 03:47:14 crc kubenswrapper[4827]: I0131 03:47:14.402721 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-q9q8q" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a696063c-4553-4032-8038-9900f09d4031\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3b899def12de10c70999d453a62d09ed0eec7e6f059a55a7e3f8e4b3ef248205\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee6
5f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:47:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ckwx4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"R
unning\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T03:47:09Z\\\"}}\" for pod \"openshift-multus\"/\"multus-q9q8q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:47:14Z is after 2025-08-24T17:21:41Z" Jan 31 03:47:14 crc kubenswrapper[4827]: I0131 03:47:14.449907 4827 kubelet_node_status.go:115] "Node was previously registered" node="crc" Jan 31 03:47:14 crc kubenswrapper[4827]: I0131 03:47:14.450239 4827 kubelet_node_status.go:79] "Successfully registered node" node="crc" Jan 31 03:47:14 crc kubenswrapper[4827]: I0131 03:47:14.451822 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:47:14 crc kubenswrapper[4827]: I0131 03:47:14.452047 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:47:14 crc kubenswrapper[4827]: I0131 03:47:14.452067 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:47:14 crc kubenswrapper[4827]: I0131 03:47:14.452089 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:47:14 crc kubenswrapper[4827]: I0131 03:47:14.452183 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:47:14Z","lastTransitionTime":"2026-01-31T03:47:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 03:47:14 crc kubenswrapper[4827]: E0131 03:47:14.468195 4827 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404564Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865364Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T03:47:14Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:14Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T03:47:14Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:14Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T03:47:14Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:14Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T03:47:14Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:14Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"0f4c9cc6-4be7-45e8-ab30-471f307c1c16\\\",\\\"systemUUID\\\":\\\"9b087aa6-4510-46ee-bc39-2317e4ea4d1d\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:47:14Z is after 2025-08-24T17:21:41Z" Jan 31 03:47:14 crc kubenswrapper[4827]: I0131 03:47:14.472783 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:47:14 crc kubenswrapper[4827]: I0131 03:47:14.472832 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:47:14 crc kubenswrapper[4827]: I0131 03:47:14.472850 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:47:14 crc kubenswrapper[4827]: I0131 03:47:14.472874 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:47:14 crc kubenswrapper[4827]: I0131 03:47:14.472923 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:47:14Z","lastTransitionTime":"2026-01-31T03:47:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 03:47:14 crc kubenswrapper[4827]: I0131 03:47:14.482283 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-gjc5t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b5dbff7a-4ed0-4c17-bd01-1888199225b3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:09Z\\\",\\\"message\\\":\\\"containers with incomplete status: [bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:09Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:09Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fs4bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c615ec47f98d56124c11d46f6a3875b5718974234f752615411e14b5d8913dcc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c615ec47f98d56124c11d46f6a3875b5718974234f752615411e14b5d8913dcc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T03:47:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T03:47:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fs4bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c9d2a6506fd6b67003ff94a906cb0c74a7eefa9f389d5fe3929fcb743b877f8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c9d2a6506fd6b67003ff94a906cb0c74a7eefa9f389d5fe3929fcb743b877f8a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T03:47:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T03:47:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fs4bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodIn
itializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fs4bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fs4bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-
release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fs4bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fs4bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T03:47:09Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-gjc5t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:47:14Z is after 2025-08-24T17:21:41Z" Jan 31 03:47:14 crc kubenswrapper[4827]: E0131 03:47:14.492542 4827 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status 
\"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404564Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865364Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T03:47:14Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:14Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T03:47:14Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:14Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T03:47:14Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:14Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T03:47:14Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:14Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"0f4c9cc6-4be7-45e8-ab30-471f307c1c16\\\",\\\"systemUUID\\\":\\\"9b087aa6-4510-46ee-bc39-2317e4ea4d1d\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:47:14Z is after 2025-08-24T17:21:41Z" Jan 31 03:47:14 crc kubenswrapper[4827]: I0131 03:47:14.497505 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:47:14 crc kubenswrapper[4827]: I0131 03:47:14.497540 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:47:14 crc kubenswrapper[4827]: I0131 03:47:14.497552 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:47:14 crc kubenswrapper[4827]: I0131 03:47:14.497571 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:47:14 crc kubenswrapper[4827]: I0131 03:47:14.497582 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:47:14Z","lastTransitionTime":"2026-01-31T03:47:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 03:47:14 crc kubenswrapper[4827]: E0131 03:47:14.513400 4827 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404564Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865364Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T03:47:14Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:14Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T03:47:14Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:14Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T03:47:14Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:14Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T03:47:14Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:14Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"0f4c9cc6-4be7-45e8-ab30-471f307c1c16\\\",\\\"systemUUID\\\":\\\"9b087aa6-4510-46ee-bc39-2317e4ea4d1d\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:47:14Z is after 2025-08-24T17:21:41Z" Jan 31 03:47:14 crc kubenswrapper[4827]: I0131 03:47:14.516907 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:47:14 crc kubenswrapper[4827]: I0131 03:47:14.516944 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:47:14 crc kubenswrapper[4827]: I0131 03:47:14.516957 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:47:14 crc kubenswrapper[4827]: I0131 03:47:14.517018 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:47:14 crc kubenswrapper[4827]: I0131 03:47:14.517031 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:47:14Z","lastTransitionTime":"2026-01-31T03:47:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 03:47:14 crc kubenswrapper[4827]: I0131 03:47:14.521963 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"64294dab-2843-4893-8b19-59f3ae404e02\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:46:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:46:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:46:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ff49646239dea28804978f1b3b721dcc9454572ff2301d0c92e146dacb2a3e5a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:46:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fb9a8fcde75
a532b1906ceea4704ddfecbffeb17456377272e5792f200ee67d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:46:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://73a7c7196fc77ac970ea3114b593db0c9bef943c760526721ab5492975884237\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:46:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ce197f7aa8a7bebdecdbd760e9029786331bd9497f724b0a57f57278a5b1b04c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:850
6ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:46:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T03:46:48Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:47:14Z is after 2025-08-24T17:21:41Z" Jan 31 03:47:14 crc kubenswrapper[4827]: E0131 03:47:14.536723 4827 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404564Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865364Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T03:47:14Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:14Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory 
available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T03:47:14Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:14Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T03:47:14Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:14Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T03:47:14Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:14Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"0f4c9cc6-4be7-45e8-ab30-471f307c1c16\\\",\\\"systemUUID\\\":\\\"9b087aa6-4510-46ee-bc39-2317e4ea4d1d\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:47:14Z is after 2025-08-24T17:21:41Z" Jan 31 03:47:14 crc kubenswrapper[4827]: I0131 03:47:14.540716 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:47:14 crc kubenswrapper[4827]: I0131 03:47:14.540754 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:47:14 crc kubenswrapper[4827]: I0131 03:47:14.540765 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:47:14 crc kubenswrapper[4827]: I0131 03:47:14.540781 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:47:14 crc kubenswrapper[4827]: I0131 03:47:14.540792 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:47:14Z","lastTransitionTime":"2026-01-31T03:47:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 03:47:14 crc kubenswrapper[4827]: E0131 03:47:14.559442 4827 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404564Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865364Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T03:47:14Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:14Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T03:47:14Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:14Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T03:47:14Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:14Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T03:47:14Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:14Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"0f4c9cc6-4be7-45e8-ab30-471f307c1c16\\\",\\\"systemUUID\\\":\\\"9b087aa6-4510-46ee-bc39-2317e4ea4d1d\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:47:14Z is after 2025-08-24T17:21:41Z" Jan 31 03:47:14 crc kubenswrapper[4827]: E0131 03:47:14.559617 4827 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 31 03:47:14 crc kubenswrapper[4827]: I0131 03:47:14.561363 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:47:14 crc kubenswrapper[4827]: I0131 03:47:14.561403 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:47:14 crc kubenswrapper[4827]: I0131 03:47:14.561418 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:47:14 crc kubenswrapper[4827]: I0131 03:47:14.561473 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:47:14 crc kubenswrapper[4827]: I0131 03:47:14.561491 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:47:14Z","lastTransitionTime":"2026-01-31T03:47:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 03:47:14 crc kubenswrapper[4827]: I0131 03:47:14.562821 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:08Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:47:14Z is after 2025-08-24T17:21:41Z" Jan 31 03:47:14 crc kubenswrapper[4827]: I0131 03:47:14.599352 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a0fbdd57c20e0d5bda95b44f22f117b07306ef48d75d372d958559048474af9b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:47:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-31T03:47:14Z is after 2025-08-24T17:21:41Z" Jan 31 03:47:14 crc kubenswrapper[4827]: I0131 03:47:14.641459 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:11Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:11Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://955a3f09f24522656e31cfabe08613a5129fe3d1d0693756697e3ef0469e062a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:47:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\
\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:47:14Z is after 2025-08-24T17:21:41Z" Jan 31 03:47:14 crc kubenswrapper[4827]: I0131 03:47:14.664569 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:47:14 crc kubenswrapper[4827]: I0131 03:47:14.664615 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:47:14 crc kubenswrapper[4827]: I0131 03:47:14.664630 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:47:14 crc kubenswrapper[4827]: I0131 03:47:14.664656 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:47:14 crc kubenswrapper[4827]: I0131 03:47:14.664672 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:47:14Z","lastTransitionTime":"2026-01-31T03:47:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 03:47:14 crc kubenswrapper[4827]: I0131 03:47:14.678028 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://65119775a6ab2e8f81a0443d8093e0570c2c49a1b081b7b694780d07eba7ef4d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:47:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0fdf29d063e0940532d696d183771e56ce37b7b75f82548bf7ff5165ab20ccff\\\",\\\
"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:47:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:47:14Z is after 2025-08-24T17:21:41Z" Jan 31 03:47:14 crc kubenswrapper[4827]: I0131 03:47:14.726053 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hj2zw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"da9e7773-a24b-4e8d-b479-97e2594db0d4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:09Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:09Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mt4z5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mt4z5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mt4z5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mt4z5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mt4z5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mt4z5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mt4z5\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mt4z5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://96ea399662528dfc8c4bac4c2b710685cb5897e67739704548ca01353d72672c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://96ea399662528dfc8c4bac4c2b710685cb5897e67739704548ca01353d72672c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T03:47:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T03:47:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mt4z5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T03:47:09Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-hj2zw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:47:14Z is after 2025-08-24T17:21:41Z" Jan 31 03:47:14 crc kubenswrapper[4827]: I0131 03:47:14.756764 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-cl9c5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"77746487-d08f-4da6-82a3-bc7d8845841a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5098b5f6a44c89953722806e420dc66eaad66f11faf1c5b526bfc8ebf87364d4\\\",\\\"image\\\":\\\"quay.io/openshift-rele
ase-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:47:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5xwfq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T03:47:11Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-cl9c5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:47:14Z is after 2025-08-24T17:21:41Z" Jan 31 03:47:14 crc kubenswrapper[4827]: I0131 03:47:14.767631 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:47:14 crc kubenswrapper[4827]: I0131 03:47:14.767662 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:47:14 crc kubenswrapper[4827]: I0131 03:47:14.767673 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:47:14 crc kubenswrapper[4827]: I0131 
03:47:14.767686 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:47:14 crc kubenswrapper[4827]: I0131 03:47:14.767695 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:47:14Z","lastTransitionTime":"2026-01-31T03:47:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 03:47:14 crc kubenswrapper[4827]: I0131 03:47:14.801472 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-q9q8q" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a696063c-4553-4032-8038-9900f09d4031\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3b899def12de10c70999d453a62d09ed0eec7e6f059a55a7e3f8e4b3ef248205\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afb
a93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:47:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ckwx4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\
":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T03:47:09Z\\\"}}\" for pod \"openshift-multus\"/\"multus-q9q8q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:47:14Z is after 2025-08-24T17:21:41Z" Jan 31 03:47:14 crc kubenswrapper[4827]: I0131 03:47:14.848378 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-gjc5t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b5dbff7a-4ed0-4c17-bd01-1888199225b3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:09Z\\\",\\\"message\\\":\\\"containers with incomplete status: [routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:09Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:09Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fs4bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c615ec47f98d56124c11d46f6a3875b5718974234f752615411e14b5d8913dcc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c615ec47f98d56124c11d46f6a3875b5718974234f752615411e14b5d8913dcc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T03:47:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T03:47:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"moun
tPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fs4bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c9d2a6506fd6b67003ff94a906cb0c74a7eefa9f389d5fe3929fcb743b877f8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c9d2a6506fd6b67003ff94a906cb0c74a7eefa9f389d5fe3929fcb743b877f8a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T03:47:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T03:47:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fs4bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f5b9fd1e8570701b512508eb10490fa6c4e5f
dc63b7fdb0c4810896a46be65a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f5b9fd1e8570701b512508eb10490fa6c4e5fdc63b7fdb0c4810896a46be65a7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T03:47:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T03:47:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fs4bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"
kube-api-access-fs4bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fs4bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fs4bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T03:47:09Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-gjc5t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": 
failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:47:14Z is after 2025-08-24T17:21:41Z" Jan 31 03:47:14 crc kubenswrapper[4827]: I0131 03:47:14.870210 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:47:14 crc kubenswrapper[4827]: I0131 03:47:14.870255 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:47:14 crc kubenswrapper[4827]: I0131 03:47:14.870270 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:47:14 crc kubenswrapper[4827]: I0131 03:47:14.870290 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:47:14 crc kubenswrapper[4827]: I0131 03:47:14.870304 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:47:14Z","lastTransitionTime":"2026-01-31T03:47:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 03:47:14 crc kubenswrapper[4827]: I0131 03:47:14.889334 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4273172d-ec24-4540-85cb-efc58aed3421\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:46:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:46:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:46:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4146d39a1a819389ddaee9e11df66783f6b1b98ee5fb2a244d8ba9ff2ecc88f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:46:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-d
ir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6b86ba80bd25344b29df1636aa3d8cc4cecb0e5f9c0005b4896d3ee86cd80626\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:46:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8ae08b52cf4a0a6cc4589bbf0f0fbc8527ebac3455b90185928f6fa0b6c52874\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:46:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://125b31c980495c37837eedbbbd38b795fd6e568a429da5c2914799306e7b3a46\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945
c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cfc1fe8805b7b20bc4e718ed74d4c2f265e69ed3fdc328c0f1da81450bb78bde\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-31T03:47:02Z\\\",\\\"message\\\":\\\"W0131 03:46:51.337568 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0131 03:46:51.338014 1 crypto.go:601] Generating new CA for check-endpoints-signer@1769831211 cert, and key in /tmp/serving-cert-4099090867/serving-signer.crt, /tmp/serving-cert-4099090867/serving-signer.key\\\\nI0131 03:46:51.617433 1 observer_polling.go:159] Starting file observer\\\\nW0131 03:46:51.621205 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0131 03:46:51.621653 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0131 03:46:51.625463 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-4099090867/tls.crt::/tmp/serving-cert-4099090867/tls.key\\\\\\\"\\\\nF0131 03:47:02.085590 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake 
timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T03:46:51Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:47:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e3b2757893ba910c36404f09734ea47eee3ac5a8cfcc4c06ee4ed2ada48937b1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:46:51Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f40eed75b4eac164f95a4e1a719fe3e1da60abf85d533da313ceeb6f4fb07efd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f40eed75b4eac164f95a4e1a719fe3e1da60abf85d533da313ceeb6f4fb07efd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T03:46:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"s
tartedAt\\\":\\\"2026-01-31T03:46:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T03:46:48Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:47:14Z is after 2025-08-24T17:21:41Z" Jan 31 03:47:14 crc kubenswrapper[4827]: I0131 03:47:14.921787 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-jxh94" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e63dbb73-e1a2-4796-83c5-2a88e55566b5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://65bf7844256848a99420ff2a52724a942bd9ee07e0e587b818429ac544865232\\\",\\\"image\\\
":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:47:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-94gkn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://00b8856a5712a7470f8bc58fd07644c58113fde5a273faa89586f36381942c50\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:47:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-94gkn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T03:47:09Z\\\"}}\" for pod 
\"openshift-machine-config-operator\"/\"machine-config-daemon-jxh94\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:47:14Z is after 2025-08-24T17:21:41Z" Jan 31 03:47:14 crc kubenswrapper[4827]: I0131 03:47:14.965408 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:08Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:47:14Z is after 2025-08-24T17:21:41Z" Jan 31 03:47:14 crc kubenswrapper[4827]: I0131 03:47:14.973559 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:47:14 crc kubenswrapper[4827]: I0131 03:47:14.973627 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:47:14 crc kubenswrapper[4827]: I0131 03:47:14.973649 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:47:14 crc kubenswrapper[4827]: I0131 03:47:14.973679 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:47:14 crc kubenswrapper[4827]: I0131 03:47:14.973703 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:47:14Z","lastTransitionTime":"2026-01-31T03:47:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 03:47:15 crc kubenswrapper[4827]: I0131 03:47:15.005602 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:08Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:47:15Z is after 2025-08-24T17:21:41Z" Jan 31 03:47:15 crc kubenswrapper[4827]: I0131 03:47:15.040035 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-w7v8l" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c10db775-d306-4f15-97dd-b1dfed7c89e5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cbc16c20edbd99491eea6e34116e6014fd587c40800c87bff9643e9c0cb6e185\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:47:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4ch9g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T03:47:09Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-w7v8l\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:47:15Z is after 2025-08-24T17:21:41Z" Jan 31 03:47:15 crc kubenswrapper[4827]: I0131 03:47:15.068482 4827 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-24 15:22:09.63999857 +0000 UTC Jan 31 03:47:15 crc kubenswrapper[4827]: I0131 03:47:15.076692 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:47:15 crc kubenswrapper[4827]: I0131 03:47:15.077077 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:47:15 crc kubenswrapper[4827]: I0131 03:47:15.077241 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:47:15 crc kubenswrapper[4827]: I0131 03:47:15.077395 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:47:15 crc kubenswrapper[4827]: I0131 03:47:15.077527 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:47:15Z","lastTransitionTime":"2026-01-31T03:47:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 03:47:15 crc kubenswrapper[4827]: I0131 03:47:15.109264 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 31 03:47:15 crc kubenswrapper[4827]: E0131 03:47:15.109436 4827 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 31 03:47:15 crc kubenswrapper[4827]: I0131 03:47:15.181472 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:47:15 crc kubenswrapper[4827]: I0131 03:47:15.181543 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:47:15 crc kubenswrapper[4827]: I0131 03:47:15.181561 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:47:15 crc kubenswrapper[4827]: I0131 03:47:15.181596 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:47:15 crc kubenswrapper[4827]: I0131 03:47:15.181617 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:47:15Z","lastTransitionTime":"2026-01-31T03:47:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 03:47:15 crc kubenswrapper[4827]: I0131 03:47:15.285048 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:47:15 crc kubenswrapper[4827]: I0131 03:47:15.285105 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:47:15 crc kubenswrapper[4827]: I0131 03:47:15.285122 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:47:15 crc kubenswrapper[4827]: I0131 03:47:15.285147 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:47:15 crc kubenswrapper[4827]: I0131 03:47:15.285166 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:47:15Z","lastTransitionTime":"2026-01-31T03:47:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 03:47:15 crc kubenswrapper[4827]: I0131 03:47:15.319300 4827 generic.go:334] "Generic (PLEG): container finished" podID="b5dbff7a-4ed0-4c17-bd01-1888199225b3" containerID="b944c8ab4e6079ae5a7a5c52f9b336fc71aa3273ed087499ffa1cc5396053660" exitCode=0 Jan 31 03:47:15 crc kubenswrapper[4827]: I0131 03:47:15.319389 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-gjc5t" event={"ID":"b5dbff7a-4ed0-4c17-bd01-1888199225b3","Type":"ContainerDied","Data":"b944c8ab4e6079ae5a7a5c52f9b336fc71aa3273ed087499ffa1cc5396053660"} Jan 31 03:47:15 crc kubenswrapper[4827]: I0131 03:47:15.327424 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hj2zw" event={"ID":"da9e7773-a24b-4e8d-b479-97e2594db0d4","Type":"ContainerStarted","Data":"2a0cc10519a103d1d49acd3e239636f7ac9b3a9daa533ce1f485b26ccfffad3d"} Jan 31 03:47:15 crc kubenswrapper[4827]: I0131 03:47:15.355634 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hj2zw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"da9e7773-a24b-4e8d-b479-97e2594db0d4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:09Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node 
kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:09Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mt4z5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\
":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mt4z5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mt4z5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mt4z5\\\",\\\"readOnly\\\":true,\
\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mt4z5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-mt4z5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\
"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mt4z5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mt4z5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://96ea399662528dfc8c4bac4c2b710685cb5897e67739704548ca01353d72672c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"starte
d\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://96ea399662528dfc8c4bac4c2b710685cb5897e67739704548ca01353d72672c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T03:47:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T03:47:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mt4z5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T03:47:09Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-hj2zw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:47:15Z is after 2025-08-24T17:21:41Z" Jan 31 03:47:15 crc kubenswrapper[4827]: I0131 03:47:15.376989 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-cl9c5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"77746487-d08f-4da6-82a3-bc7d8845841a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5098b5f6a44c89953722806e420dc66eaad66f11faf1c5b526bfc8ebf87364d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:47:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5xwfq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T03:47:11Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-cl9c5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:47:15Z is after 2025-08-24T17:21:41Z" Jan 31 03:47:15 crc kubenswrapper[4827]: I0131 03:47:15.387860 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:47:15 crc kubenswrapper[4827]: I0131 03:47:15.387959 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:47:15 crc kubenswrapper[4827]: I0131 03:47:15.387985 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:47:15 crc kubenswrapper[4827]: I0131 03:47:15.388017 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:47:15 crc kubenswrapper[4827]: I0131 03:47:15.388041 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:47:15Z","lastTransitionTime":"2026-01-31T03:47:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 03:47:15 crc kubenswrapper[4827]: I0131 03:47:15.403816 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://65119775a6ab2e8f81a0443d8093e0570c2c49a1b081b7b694780d07eba7ef4d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:47:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0fdf29d063e0940532d696d183771e56ce37b7b75f82548bf7ff5165ab20ccff\\\",\\\
"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:47:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:47:15Z is after 2025-08-24T17:21:41Z" Jan 31 03:47:15 crc kubenswrapper[4827]: I0131 03:47:15.425980 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4273172d-ec24-4540-85cb-efc58aed3421\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:46:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:46:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:46:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4146d39a1a819389ddaee9e11df66783f6b1b98ee5fb2a244d8ba9ff2ecc88f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:46:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6b86ba80bd25344b29df1636aa3d8cc4cecb0e5f9c0005b4896d3ee86cd80626\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:46:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8ae08b52cf4a0a6cc4589bbf0f0fbc8527ebac3455b90185928f6fa0b6c52874\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:46:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://125b31c980495c37837eedbbbd38b795fd6e568a429da5c2914799306e7b3a46\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cfc1fe8805b7b20bc4e718ed74d4c2f265e69ed3fdc328c0f1da81450bb78bde\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-31T03:47:02Z\\\"
,\\\"message\\\":\\\"W0131 03:46:51.337568 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0131 03:46:51.338014 1 crypto.go:601] Generating new CA for check-endpoints-signer@1769831211 cert, and key in /tmp/serving-cert-4099090867/serving-signer.crt, /tmp/serving-cert-4099090867/serving-signer.key\\\\nI0131 03:46:51.617433 1 observer_polling.go:159] Starting file observer\\\\nW0131 03:46:51.621205 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0131 03:46:51.621653 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0131 03:46:51.625463 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-4099090867/tls.crt::/tmp/serving-cert-4099090867/tls.key\\\\\\\"\\\\nF0131 03:47:02.085590 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake 
timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T03:46:51Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:47:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e3b2757893ba910c36404f09734ea47eee3ac5a8cfcc4c06ee4ed2ada48937b1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:46:51Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f40eed75b4eac164f95a4e1a719fe3e1da60abf85d533da313ceeb6f4fb07efd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f40eed75b4eac164f95a4e1a719fe3e1da60abf85d533da313ceeb6f4fb07efd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T03:46:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"s
tartedAt\\\":\\\"2026-01-31T03:46:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T03:46:48Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:47:15Z is after 2025-08-24T17:21:41Z" Jan 31 03:47:15 crc kubenswrapper[4827]: I0131 03:47:15.440074 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-jxh94" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e63dbb73-e1a2-4796-83c5-2a88e55566b5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://65bf7844256848a99420ff2a52724a942bd9ee07e0e587b818429ac544865232\\\",\\\"image\\\
":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:47:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-94gkn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://00b8856a5712a7470f8bc58fd07644c58113fde5a273faa89586f36381942c50\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:47:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-94gkn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T03:47:09Z\\\"}}\" for pod 
\"openshift-machine-config-operator\"/\"machine-config-daemon-jxh94\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:47:15Z is after 2025-08-24T17:21:41Z" Jan 31 03:47:15 crc kubenswrapper[4827]: I0131 03:47:15.458801 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-q9q8q" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a696063c-4553-4032-8038-9900f09d4031\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3b899def12de10c70999d453a62d09ed0eec7e6f059a55a7e3f8e4b3ef248205\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,
\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:47:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ckwx4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T0
3:47:09Z\\\"}}\" for pod \"openshift-multus\"/\"multus-q9q8q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:47:15Z is after 2025-08-24T17:21:41Z" Jan 31 03:47:15 crc kubenswrapper[4827]: I0131 03:47:15.476675 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-gjc5t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b5dbff7a-4ed0-4c17-bd01-1888199225b3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:09Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:09Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:09Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fs4bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c615ec47f98d56124c11d46f6a3875b5718974234f752615411e14b5d8913dcc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c615ec47f98d56124c11d46f6a3875b5718974234f752615411e14b5d8913dcc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T03:47:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T03:47:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fs4bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c9d2a6506fd6b67003ff94a906cb0c74a7eefa9f389d5fe3929fcb743b877f8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c9d2a6506fd6b67003ff94a906cb0c74a7eefa9f389d5fe3929fcb743b877f8a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T03:47:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T03:47:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fs4bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f5b9fd1e8570701b512508eb10490fa6c4e5fdc63b7fdb0c4810896a46be65a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f5b9fd1e8570701b512508eb10490fa6c4e5fdc63b7fdb0c4810896a46be65a7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T03:47:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T03:47:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fs4bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b944c8ab4e6079ae5a7a5c52f9b336fc71aa3273ed087499ffa1cc5396053660\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b944c8ab4e6079ae5a7a5c52f9b336fc71aa3273ed087499ffa1cc5396053660\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T03:47:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T03:47:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\
\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fs4bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fs4bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fs4bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T03:47:09Z\\\"}}\" for pod 
\"openshift-multus\"/\"multus-additional-cni-plugins-gjc5t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:47:15Z is after 2025-08-24T17:21:41Z" Jan 31 03:47:15 crc kubenswrapper[4827]: I0131 03:47:15.517721 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:47:15 crc kubenswrapper[4827]: I0131 03:47:15.517761 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:47:15 crc kubenswrapper[4827]: I0131 03:47:15.517773 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:47:15 crc kubenswrapper[4827]: I0131 03:47:15.517796 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:47:15 crc kubenswrapper[4827]: I0131 03:47:15.517807 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:47:15Z","lastTransitionTime":"2026-01-31T03:47:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 03:47:15 crc kubenswrapper[4827]: I0131 03:47:15.521043 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:08Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:47:15Z is after 2025-08-24T17:21:41Z" Jan 31 03:47:15 crc kubenswrapper[4827]: I0131 03:47:15.538334 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-w7v8l" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c10db775-d306-4f15-97dd-b1dfed7c89e5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cbc16c20edbd99491eea6e34116e6014fd587c40800c87bff9643e9c0cb6e185\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:47:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4ch9g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T03:47:09Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-w7v8l\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:47:15Z is after 2025-08-24T17:21:41Z" Jan 31 03:47:15 crc kubenswrapper[4827]: I0131 03:47:15.552989 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:08Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:47:15Z is after 2025-08-24T17:21:41Z" Jan 31 03:47:15 crc kubenswrapper[4827]: I0131 03:47:15.568705 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:08Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:47:15Z is after 2025-08-24T17:21:41Z" Jan 31 03:47:15 crc kubenswrapper[4827]: I0131 03:47:15.581918 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a0fbdd57c20e0d5bda95b44f22f117b07306ef48d75d372d958559048474af9b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:47:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-31T03:47:15Z is after 2025-08-24T17:21:41Z" Jan 31 03:47:15 crc kubenswrapper[4827]: I0131 03:47:15.605469 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:11Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:11Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://955a3f09f24522656e31cfabe08613a5129fe3d1d0693756697e3ef0469e062a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:47:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\
\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:47:15Z is after 2025-08-24T17:21:41Z" Jan 31 03:47:15 crc kubenswrapper[4827]: I0131 03:47:15.620276 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:47:15 crc kubenswrapper[4827]: I0131 03:47:15.620302 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:47:15 crc kubenswrapper[4827]: I0131 03:47:15.620313 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:47:15 crc kubenswrapper[4827]: I0131 03:47:15.620329 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:47:15 crc kubenswrapper[4827]: I0131 03:47:15.620339 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:47:15Z","lastTransitionTime":"2026-01-31T03:47:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 03:47:15 crc kubenswrapper[4827]: I0131 03:47:15.623457 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"64294dab-2843-4893-8b19-59f3ae404e02\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:46:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:46:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:46:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ff49646239dea28804978f1b3b721dcc9454572ff2301d0c92e146dacb2a3e5a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:46:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fb9a8fcde75
a532b1906ceea4704ddfecbffeb17456377272e5792f200ee67d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:46:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://73a7c7196fc77ac970ea3114b593db0c9bef943c760526721ab5492975884237\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:46:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ce197f7aa8a7bebdecdbd760e9029786331bd9497f724b0a57f57278a5b1b04c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:850
6ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:46:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T03:46:48Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:47:15Z is after 2025-08-24T17:21:41Z" Jan 31 03:47:15 crc kubenswrapper[4827]: I0131 03:47:15.723803 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:47:15 crc kubenswrapper[4827]: I0131 03:47:15.724216 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:47:15 crc kubenswrapper[4827]: I0131 03:47:15.724234 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:47:15 crc kubenswrapper[4827]: I0131 03:47:15.724256 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:47:15 crc kubenswrapper[4827]: I0131 03:47:15.724274 4827 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:47:15Z","lastTransitionTime":"2026-01-31T03:47:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 03:47:15 crc kubenswrapper[4827]: I0131 03:47:15.819872 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 31 03:47:15 crc kubenswrapper[4827]: I0131 03:47:15.820026 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 03:47:15 crc kubenswrapper[4827]: I0131 03:47:15.820073 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 31 03:47:15 crc kubenswrapper[4827]: I0131 03:47:15.820329 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: 
\"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 31 03:47:15 crc kubenswrapper[4827]: I0131 03:47:15.820385 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 03:47:15 crc kubenswrapper[4827]: E0131 03:47:15.820535 4827 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 31 03:47:15 crc kubenswrapper[4827]: E0131 03:47:15.820608 4827 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-31 03:47:23.820583557 +0000 UTC m=+36.507664046 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 31 03:47:15 crc kubenswrapper[4827]: E0131 03:47:15.821171 4827 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-31 03:47:23.821149985 +0000 UTC m=+36.508230474 (durationBeforeRetry 8s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 03:47:15 crc kubenswrapper[4827]: E0131 03:47:15.821253 4827 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 31 03:47:15 crc kubenswrapper[4827]: E0131 03:47:15.821298 4827 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-31 03:47:23.821285208 +0000 UTC m=+36.508365697 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 31 03:47:15 crc kubenswrapper[4827]: E0131 03:47:15.821395 4827 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 31 03:47:15 crc kubenswrapper[4827]: E0131 03:47:15.821415 4827 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 31 03:47:15 crc kubenswrapper[4827]: E0131 03:47:15.821434 4827 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 31 03:47:15 crc kubenswrapper[4827]: E0131 03:47:15.821479 4827 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-31 03:47:23.821465263 +0000 UTC m=+36.508545752 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 31 03:47:15 crc kubenswrapper[4827]: E0131 03:47:15.821555 4827 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 31 03:47:15 crc kubenswrapper[4827]: E0131 03:47:15.821575 4827 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 31 03:47:15 crc kubenswrapper[4827]: E0131 03:47:15.821590 4827 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 31 03:47:15 crc kubenswrapper[4827]: E0131 03:47:15.821635 4827 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-31 03:47:23.821621328 +0000 UTC m=+36.508701817 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 31 03:47:15 crc kubenswrapper[4827]: I0131 03:47:15.832849 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:47:15 crc kubenswrapper[4827]: I0131 03:47:15.832978 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:47:15 crc kubenswrapper[4827]: I0131 03:47:15.833019 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:47:15 crc kubenswrapper[4827]: I0131 03:47:15.833116 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:47:15 crc kubenswrapper[4827]: I0131 03:47:15.833137 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:47:15Z","lastTransitionTime":"2026-01-31T03:47:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 03:47:15 crc kubenswrapper[4827]: I0131 03:47:15.936766 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:47:15 crc kubenswrapper[4827]: I0131 03:47:15.936862 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:47:15 crc kubenswrapper[4827]: I0131 03:47:15.936944 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:47:15 crc kubenswrapper[4827]: I0131 03:47:15.936975 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:47:15 crc kubenswrapper[4827]: I0131 03:47:15.937051 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:47:15Z","lastTransitionTime":"2026-01-31T03:47:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 03:47:16 crc kubenswrapper[4827]: I0131 03:47:16.039567 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:47:16 crc kubenswrapper[4827]: I0131 03:47:16.039641 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:47:16 crc kubenswrapper[4827]: I0131 03:47:16.039660 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:47:16 crc kubenswrapper[4827]: I0131 03:47:16.039684 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:47:16 crc kubenswrapper[4827]: I0131 03:47:16.039702 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:47:16Z","lastTransitionTime":"2026-01-31T03:47:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 03:47:16 crc kubenswrapper[4827]: I0131 03:47:16.069590 4827 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-15 02:38:03.232678826 +0000 UTC Jan 31 03:47:16 crc kubenswrapper[4827]: I0131 03:47:16.109186 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 31 03:47:16 crc kubenswrapper[4827]: I0131 03:47:16.109201 4827 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 03:47:16 crc kubenswrapper[4827]: E0131 03:47:16.109412 4827 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 31 03:47:16 crc kubenswrapper[4827]: E0131 03:47:16.109503 4827 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 31 03:47:16 crc kubenswrapper[4827]: I0131 03:47:16.143283 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:47:16 crc kubenswrapper[4827]: I0131 03:47:16.143378 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:47:16 crc kubenswrapper[4827]: I0131 03:47:16.143409 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:47:16 crc kubenswrapper[4827]: I0131 03:47:16.143441 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:47:16 crc kubenswrapper[4827]: I0131 03:47:16.143460 4827 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:47:16Z","lastTransitionTime":"2026-01-31T03:47:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 03:47:16 crc kubenswrapper[4827]: I0131 03:47:16.246723 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:47:16 crc kubenswrapper[4827]: I0131 03:47:16.246772 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:47:16 crc kubenswrapper[4827]: I0131 03:47:16.246788 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:47:16 crc kubenswrapper[4827]: I0131 03:47:16.246807 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:47:16 crc kubenswrapper[4827]: I0131 03:47:16.246819 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:47:16Z","lastTransitionTime":"2026-01-31T03:47:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 03:47:16 crc kubenswrapper[4827]: I0131 03:47:16.334969 4827 generic.go:334] "Generic (PLEG): container finished" podID="b5dbff7a-4ed0-4c17-bd01-1888199225b3" containerID="aac07d8c3c396081f637968cd1c64525967cd198273719be0af7bee01d35fb39" exitCode=0 Jan 31 03:47:16 crc kubenswrapper[4827]: I0131 03:47:16.335018 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-gjc5t" event={"ID":"b5dbff7a-4ed0-4c17-bd01-1888199225b3","Type":"ContainerDied","Data":"aac07d8c3c396081f637968cd1c64525967cd198273719be0af7bee01d35fb39"} Jan 31 03:47:16 crc kubenswrapper[4827]: I0131 03:47:16.350093 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:47:16 crc kubenswrapper[4827]: I0131 03:47:16.350151 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:47:16 crc kubenswrapper[4827]: I0131 03:47:16.350171 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:47:16 crc kubenswrapper[4827]: I0131 03:47:16.350196 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:47:16 crc kubenswrapper[4827]: I0131 03:47:16.350215 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:47:16Z","lastTransitionTime":"2026-01-31T03:47:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 03:47:16 crc kubenswrapper[4827]: I0131 03:47:16.353146 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:08Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:47:16Z is after 2025-08-24T17:21:41Z" Jan 31 03:47:16 crc kubenswrapper[4827]: I0131 03:47:16.374752 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:08Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:47:16Z is after 2025-08-24T17:21:41Z" Jan 31 03:47:16 crc kubenswrapper[4827]: I0131 03:47:16.391642 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-w7v8l" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c10db775-d306-4f15-97dd-b1dfed7c89e5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cbc16c20edbd99491eea6e34116e6014fd587c40800c87bff9643e9c0cb6e185\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:47:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4ch9g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T03:47:09Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-w7v8l\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:47:16Z is after 2025-08-24T17:21:41Z" Jan 31 03:47:16 crc kubenswrapper[4827]: I0131 03:47:16.409446 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"64294dab-2843-4893-8b19-59f3ae404e02\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:46:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:46:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:46:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ff49646239dea28804978f1b3b721dcc9454572ff2301d0c92e146dacb2a3e5a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluste
r-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:46:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fb9a8fcde75a532b1906ceea4704ddfecbffeb17456377272e5792f200ee67d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:46:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://73a7c7196fc77ac970ea3114b593db0c9bef943c760526721ab5492975884237\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:46:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kuber
netes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ce197f7aa8a7bebdecdbd760e9029786331bd9497f724b0a57f57278a5b1b04c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:46:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T03:46:48Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:47:16Z is after 2025-08-24T17:21:41Z" Jan 31 03:47:16 crc kubenswrapper[4827]: I0131 03:47:16.431129 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:08Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:47:16Z is after 2025-08-24T17:21:41Z" Jan 31 03:47:16 crc kubenswrapper[4827]: I0131 03:47:16.455565 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:47:16 crc kubenswrapper[4827]: I0131 03:47:16.455639 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:47:16 crc kubenswrapper[4827]: I0131 03:47:16.455658 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:47:16 crc kubenswrapper[4827]: I0131 03:47:16.455684 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:47:16 crc kubenswrapper[4827]: I0131 03:47:16.455704 4827 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:47:16Z","lastTransitionTime":"2026-01-31T03:47:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 03:47:16 crc kubenswrapper[4827]: I0131 03:47:16.460233 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a0fbdd57c20e0d5bda95b44f22f117b07306ef48d75d372d958559048474af9b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:47:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\
\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:47:16Z is after 2025-08-24T17:21:41Z" Jan 31 03:47:16 crc kubenswrapper[4827]: I0131 03:47:16.477557 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:11Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:11Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://955a3f09f24522656e31cfabe08613a5129fe3d1d0693756697e3ef0469e062a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"las
tState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:47:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:47:16Z is after 2025-08-24T17:21:41Z" Jan 31 03:47:16 crc kubenswrapper[4827]: I0131 03:47:16.500142 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://65119775a6ab2e8f81a0443d8093e0570c2c49a1b081b7b694780d07eba7ef4d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:47:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0fdf29d063e0940532d696d183771e56ce37b7b75f82548bf7ff5165ab20ccff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:47:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:47:16Z is after 2025-08-24T17:21:41Z" Jan 31 03:47:16 crc kubenswrapper[4827]: I0131 03:47:16.531306 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hj2zw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"da9e7773-a24b-4e8d-b479-97e2594db0d4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:09Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:09Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mt4z5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mt4z5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mt4z5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mt4z5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mt4z5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mt4z5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mt4z5\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mt4z5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://96ea399662528dfc8c4bac4c2b710685cb5897e67739704548ca01353d72672c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://96ea399662528dfc8c4bac4c2b710685cb5897e67739704548ca01353d72672c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T03:47:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T03:47:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mt4z5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T03:47:09Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-hj2zw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:47:16Z is after 2025-08-24T17:21:41Z" Jan 31 03:47:16 crc kubenswrapper[4827]: I0131 03:47:16.543694 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-cl9c5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"77746487-d08f-4da6-82a3-bc7d8845841a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5098b5f6a44c89953722806e420dc66eaad66f11faf1c5b526bfc8ebf87364d4\\\",\\\"image\\\":\\\"quay.io/openshift-rele
ase-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:47:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5xwfq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T03:47:11Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-cl9c5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:47:16Z is after 2025-08-24T17:21:41Z" Jan 31 03:47:16 crc kubenswrapper[4827]: I0131 03:47:16.558641 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:47:16 crc kubenswrapper[4827]: I0131 03:47:16.558670 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:47:16 crc kubenswrapper[4827]: I0131 03:47:16.558678 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:47:16 crc kubenswrapper[4827]: I0131 
03:47:16.558690 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:47:16 crc kubenswrapper[4827]: I0131 03:47:16.558700 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:47:16Z","lastTransitionTime":"2026-01-31T03:47:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 03:47:16 crc kubenswrapper[4827]: I0131 03:47:16.564556 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4273172d-ec24-4540-85cb-efc58aed3421\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:46:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:46:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:46:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4146d39a1a819389ddaee9e11df66783f6b1b98ee5fb2a244d8ba9ff2ecc88f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-de
v/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:46:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6b86ba80bd25344b29df1636aa3d8cc4cecb0e5f9c0005b4896d3ee86cd80626\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:46:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8ae08b52cf4a0a6cc4589bbf0f0fbc8527ebac3455b90185928f6fa0b6c52874\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:46:50Z\\\"}},\\\"volumeMou
nts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://125b31c980495c37837eedbbbd38b795fd6e568a429da5c2914799306e7b3a46\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cfc1fe8805b7b20bc4e718ed74d4c2f265e69ed3fdc328c0f1da81450bb78bde\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-31T03:47:02Z\\\",\\\"message\\\":\\\"W0131 03:46:51.337568 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0131 03:46:51.338014 1 crypto.go:601] Generating new CA for check-endpoints-signer@1769831211 cert, and key in /tmp/serving-cert-4099090867/serving-signer.crt, /tmp/serving-cert-4099090867/serving-signer.key\\\\nI0131 03:46:51.617433 1 observer_polling.go:159] Starting file observer\\\\nW0131 03:46:51.621205 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0131 03:46:51.621653 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0131 03:46:51.625463 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-4099090867/tls.crt::/tmp/serving-cert-4099090867/tls.key\\\\\\\"\\\\nF0131 03:47:02.085590 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based 
request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T03:46:51Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:47:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e3b2757893ba910c36404f09734ea47eee3ac5a8cfcc4c06ee4ed2ada48937b1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:46:51Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f40eed75b4eac164f95a4e1a719fe3e1da60abf85d533da313ceeb6f4fb07efd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o:/
/f40eed75b4eac164f95a4e1a719fe3e1da60abf85d533da313ceeb6f4fb07efd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T03:46:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T03:46:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T03:46:48Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:47:16Z is after 2025-08-24T17:21:41Z" Jan 31 03:47:16 crc kubenswrapper[4827]: I0131 03:47:16.579642 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-jxh94" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e63dbb73-e1a2-4796-83c5-2a88e55566b5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://65bf7844256848a99420ff2a52724a942bd9ee07e0e587b818429ac544865232\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:47:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-94gkn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://00b8856a5712a7470f8bc58fd07644c58113fde5
a273faa89586f36381942c50\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:47:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-94gkn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T03:47:09Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-jxh94\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:47:16Z is after 2025-08-24T17:21:41Z" Jan 31 03:47:16 crc kubenswrapper[4827]: I0131 03:47:16.592446 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-q9q8q" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a696063c-4553-4032-8038-9900f09d4031\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3b899def12de10c70999d453a62d09ed0eec7e6f059a55a7e3f8e4b3ef248205\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:47:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ckwx4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T03:47:09Z\\\"}}\" for pod \"openshift-multus\"/\"multus-q9q8q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:47:16Z is after 2025-08-24T17:21:41Z" Jan 31 03:47:16 crc kubenswrapper[4827]: I0131 03:47:16.607222 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-gjc5t" err="failed 
to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b5dbff7a-4ed0-4c17-bd01-1888199225b3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:09Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:09Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:09Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fs4bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c615ec47f98d56124c11d46f6a3875b5718974234f752615411e14b5d8913dcc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c615ec47f98d56124c11d46f6a3875b5718974234f752615411e14b5d8913dcc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T03:47:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T03:47:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fs4bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c9d2a6506fd6b67003ff94a906cb0c74a7eefa9f389d5fe3929fcb743b877f8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c9d2a6506fd6b67003ff94a906cb0c74a7eefa9f389d5fe3929fcb743b877f8a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T03:47:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T03:47:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fs4bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f5b9fd1e8570701b512508eb10490fa6c4e5fdc63b7fdb0c4810896a46be65a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f5b9fd1e8570701b512508eb10490fa6c4e5fdc63b7fdb0c4810896a46be65a7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T03:47:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T03:47:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fs4bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b944c8ab4e6079ae5a7a5c52f9b336fc71aa3273ed087499ffa1cc5396053660\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b944c8ab4e6079ae5a7a5c52f9b336fc71aa3273ed087499ffa1cc5396053660\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T03:47:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T03:47:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\
\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fs4bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aac07d8c3c396081f637968cd1c64525967cd198273719be0af7bee01d35fb39\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aac07d8c3c396081f637968cd1c64525967cd198273719be0af7bee01d35fb39\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T03:47:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T03:47:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fs4bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"c
nibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fs4bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T03:47:09Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-gjc5t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:47:16Z is after 2025-08-24T17:21:41Z" Jan 31 03:47:16 crc kubenswrapper[4827]: I0131 03:47:16.661572 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:47:16 crc kubenswrapper[4827]: I0131 03:47:16.661613 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:47:16 crc kubenswrapper[4827]: I0131 03:47:16.661624 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:47:16 crc kubenswrapper[4827]: I0131 03:47:16.661638 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:47:16 crc kubenswrapper[4827]: I0131 03:47:16.661649 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:47:16Z","lastTransitionTime":"2026-01-31T03:47:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 03:47:16 crc kubenswrapper[4827]: I0131 03:47:16.764470 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:47:16 crc kubenswrapper[4827]: I0131 03:47:16.764513 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:47:16 crc kubenswrapper[4827]: I0131 03:47:16.764526 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:47:16 crc kubenswrapper[4827]: I0131 03:47:16.764545 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:47:16 crc kubenswrapper[4827]: I0131 03:47:16.764559 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:47:16Z","lastTransitionTime":"2026-01-31T03:47:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 03:47:16 crc kubenswrapper[4827]: I0131 03:47:16.867194 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:47:16 crc kubenswrapper[4827]: I0131 03:47:16.867226 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:47:16 crc kubenswrapper[4827]: I0131 03:47:16.867234 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:47:16 crc kubenswrapper[4827]: I0131 03:47:16.867247 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:47:16 crc kubenswrapper[4827]: I0131 03:47:16.867257 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:47:16Z","lastTransitionTime":"2026-01-31T03:47:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 03:47:16 crc kubenswrapper[4827]: I0131 03:47:16.970143 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:47:16 crc kubenswrapper[4827]: I0131 03:47:16.970175 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:47:16 crc kubenswrapper[4827]: I0131 03:47:16.970183 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:47:16 crc kubenswrapper[4827]: I0131 03:47:16.970199 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:47:16 crc kubenswrapper[4827]: I0131 03:47:16.970209 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:47:16Z","lastTransitionTime":"2026-01-31T03:47:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 03:47:17 crc kubenswrapper[4827]: I0131 03:47:17.069872 4827 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-20 18:18:30.466747469 +0000 UTC Jan 31 03:47:17 crc kubenswrapper[4827]: I0131 03:47:17.073698 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:47:17 crc kubenswrapper[4827]: I0131 03:47:17.073737 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:47:17 crc kubenswrapper[4827]: I0131 03:47:17.073744 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:47:17 crc kubenswrapper[4827]: I0131 03:47:17.073758 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:47:17 crc kubenswrapper[4827]: I0131 03:47:17.073772 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:47:17Z","lastTransitionTime":"2026-01-31T03:47:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 03:47:17 crc kubenswrapper[4827]: I0131 03:47:17.109505 4827 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 31 03:47:17 crc kubenswrapper[4827]: E0131 03:47:17.109672 4827 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 31 03:47:17 crc kubenswrapper[4827]: I0131 03:47:17.179244 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:47:17 crc kubenswrapper[4827]: I0131 03:47:17.179671 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:47:17 crc kubenswrapper[4827]: I0131 03:47:17.179699 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:47:17 crc kubenswrapper[4827]: I0131 03:47:17.179727 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:47:17 crc kubenswrapper[4827]: I0131 03:47:17.179744 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:47:17Z","lastTransitionTime":"2026-01-31T03:47:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 03:47:17 crc kubenswrapper[4827]: I0131 03:47:17.283300 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:47:17 crc kubenswrapper[4827]: I0131 03:47:17.283378 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:47:17 crc kubenswrapper[4827]: I0131 03:47:17.283400 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:47:17 crc kubenswrapper[4827]: I0131 03:47:17.283429 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:47:17 crc kubenswrapper[4827]: I0131 03:47:17.283450 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:47:17Z","lastTransitionTime":"2026-01-31T03:47:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 03:47:17 crc kubenswrapper[4827]: I0131 03:47:17.345346 4827 generic.go:334] "Generic (PLEG): container finished" podID="b5dbff7a-4ed0-4c17-bd01-1888199225b3" containerID="76e43eb316b4b459d9232b72620b1892bf11a896f007d5c879381d8349aa1178" exitCode=0 Jan 31 03:47:17 crc kubenswrapper[4827]: I0131 03:47:17.345396 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-gjc5t" event={"ID":"b5dbff7a-4ed0-4c17-bd01-1888199225b3","Type":"ContainerDied","Data":"76e43eb316b4b459d9232b72620b1892bf11a896f007d5c879381d8349aa1178"} Jan 31 03:47:17 crc kubenswrapper[4827]: I0131 03:47:17.368253 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"64294dab-2843-4893-8b19-59f3ae404e02\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:46:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:46:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:46:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ff49646239dea28804978f1b3b721dcc9454572ff2301d0c92e146dacb2a3e5a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/ope
nshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:46:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fb9a8fcde75a532b1906ceea4704ddfecbffeb17456377272e5792f200ee67d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:46:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://73a7c7196fc77ac970ea3114b593db0c9bef943c760526721ab5492975884237\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:46:49Z\\\"}
},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ce197f7aa8a7bebdecdbd760e9029786331bd9497f724b0a57f57278a5b1b04c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:46:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T03:46:48Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:47:17Z is after 2025-08-24T17:21:41Z" Jan 31 03:47:17 crc kubenswrapper[4827]: I0131 03:47:17.388577 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:08Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:47:17Z is after 2025-08-24T17:21:41Z" Jan 31 03:47:17 crc kubenswrapper[4827]: I0131 03:47:17.388173 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:47:17 crc kubenswrapper[4827]: I0131 03:47:17.388799 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:47:17 crc kubenswrapper[4827]: I0131 03:47:17.388812 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:47:17 crc kubenswrapper[4827]: I0131 03:47:17.388827 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:47:17 crc kubenswrapper[4827]: I0131 03:47:17.388855 4827 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:47:17Z","lastTransitionTime":"2026-01-31T03:47:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 03:47:17 crc kubenswrapper[4827]: I0131 03:47:17.408569 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a0fbdd57c20e0d5bda95b44f22f117b07306ef48d75d372d958559048474af9b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:47:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\
\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:47:17Z is after 2025-08-24T17:21:41Z" Jan 31 03:47:17 crc kubenswrapper[4827]: I0131 03:47:17.427630 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:11Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:11Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://955a3f09f24522656e31cfabe08613a5129fe3d1d0693756697e3ef0469e062a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"las
tState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:47:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:47:17Z is after 2025-08-24T17:21:41Z" Jan 31 03:47:17 crc kubenswrapper[4827]: I0131 03:47:17.446797 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://65119775a6ab2e8f81a0443d8093e0570c2c49a1b081b7b694780d07eba7ef4d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:47:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0fdf29d063e0940532d696d183771e56ce37b7b75f82548bf7ff5165ab20ccff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:47:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:47:17Z is after 2025-08-24T17:21:41Z" Jan 31 03:47:17 crc kubenswrapper[4827]: I0131 03:47:17.469747 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hj2zw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"da9e7773-a24b-4e8d-b479-97e2594db0d4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:09Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:09Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mt4z5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mt4z5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mt4z5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mt4z5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mt4z5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mt4z5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mt4z5\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mt4z5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://96ea399662528dfc8c4bac4c2b710685cb5897e67739704548ca01353d72672c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://96ea399662528dfc8c4bac4c2b710685cb5897e67739704548ca01353d72672c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T03:47:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T03:47:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mt4z5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T03:47:09Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-hj2zw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:47:17Z is after 2025-08-24T17:21:41Z" Jan 31 03:47:17 crc kubenswrapper[4827]: I0131 03:47:17.481844 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-cl9c5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"77746487-d08f-4da6-82a3-bc7d8845841a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5098b5f6a44c89953722806e420dc66eaad66f11faf1c5b526bfc8ebf87364d4\\\",\\\"image\\\":\\\"quay.io/openshift-rele
ase-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:47:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5xwfq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T03:47:11Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-cl9c5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:47:17Z is after 2025-08-24T17:21:41Z" Jan 31 03:47:17 crc kubenswrapper[4827]: I0131 03:47:17.491969 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:47:17 crc kubenswrapper[4827]: I0131 03:47:17.492015 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:47:17 crc kubenswrapper[4827]: I0131 03:47:17.492033 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:47:17 crc kubenswrapper[4827]: I0131 
03:47:17.492057 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:47:17 crc kubenswrapper[4827]: I0131 03:47:17.492074 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:47:17Z","lastTransitionTime":"2026-01-31T03:47:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 03:47:17 crc kubenswrapper[4827]: I0131 03:47:17.498375 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-gjc5t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b5dbff7a-4ed0-4c17-bd01-1888199225b3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:09Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:09Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fs4bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c615ec47f98d56124c11d46f6a3875b5718974234f752615411e14b5d8913dcc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c615ec47f98d56124c11d46f6a3875b5718974234f752615411e14b5d8913dcc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T03:47:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T03:47:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fs4bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c9d2a6506fd6b67003ff94a906cb0c74a7eefa9f389d5fe3929fcb743b877f8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c9d2a6506fd6b67003ff94a906cb0c74a7eefa9f389d5fe3929fcb743b877f8a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T03:47:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T03:47:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fs4bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f5b9fd1e8570701b512508eb10490fa6c4e5fdc63b7fdb0c4810896a46be65a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f5b9fd1e8570701b512508eb10490fa6c4e5fdc63b7fdb0c4810896a46be65a7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T03:47:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T03:47:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fs4bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b944c8ab4e6079ae5a7a5c52f9b336fc71aa3273ed087499ffa1cc5396053660\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b944c8ab4e6079ae5a7a5c52f9b336fc71aa3273ed087499ffa1cc5396053660\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T03:47:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T03:47:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\
\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fs4bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aac07d8c3c396081f637968cd1c64525967cd198273719be0af7bee01d35fb39\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aac07d8c3c396081f637968cd1c64525967cd198273719be0af7bee01d35fb39\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T03:47:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T03:47:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fs4bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://76e43eb316b4b459d9232b72620b1892bf11a896f007d5c879381d8349aa1178\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"
ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://76e43eb316b4b459d9232b72620b1892bf11a896f007d5c879381d8349aa1178\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T03:47:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T03:47:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fs4bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T03:47:09Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-gjc5t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:47:17Z is after 2025-08-24T17:21:41Z" Jan 31 03:47:17 crc kubenswrapper[4827]: I0131 03:47:17.516773 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4273172d-ec24-4540-85cb-efc58aed3421\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:46:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:46:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:46:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4146d39a1a819389ddaee9e11df66783f6b1b98ee5fb2a244d8ba9ff2ecc88f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:46:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6b86ba80bd25344b29df1636aa3d8cc4cecb0e5f9c0005b4896d3ee86cd80626\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:46:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8ae08b52cf4a0a6cc4589bbf0f0fbc8527ebac3455b90185928f6fa0b6c52874\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:46:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://125b31c980495c37837eedbbbd38b795fd6e568a429da5c2914799306e7b3a46\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cfc1fe8805b7b20bc4e718ed74d4c2f265e69ed3fdc328c0f1da81450bb78bde\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-31T03:47:02Z\\\"
,\\\"message\\\":\\\"W0131 03:46:51.337568 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0131 03:46:51.338014 1 crypto.go:601] Generating new CA for check-endpoints-signer@1769831211 cert, and key in /tmp/serving-cert-4099090867/serving-signer.crt, /tmp/serving-cert-4099090867/serving-signer.key\\\\nI0131 03:46:51.617433 1 observer_polling.go:159] Starting file observer\\\\nW0131 03:46:51.621205 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0131 03:46:51.621653 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0131 03:46:51.625463 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-4099090867/tls.crt::/tmp/serving-cert-4099090867/tls.key\\\\\\\"\\\\nF0131 03:47:02.085590 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake 
timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T03:46:51Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:47:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e3b2757893ba910c36404f09734ea47eee3ac5a8cfcc4c06ee4ed2ada48937b1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:46:51Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f40eed75b4eac164f95a4e1a719fe3e1da60abf85d533da313ceeb6f4fb07efd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f40eed75b4eac164f95a4e1a719fe3e1da60abf85d533da313ceeb6f4fb07efd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T03:46:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"s
tartedAt\\\":\\\"2026-01-31T03:46:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T03:46:48Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:47:17Z is after 2025-08-24T17:21:41Z" Jan 31 03:47:17 crc kubenswrapper[4827]: I0131 03:47:17.533663 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-jxh94" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e63dbb73-e1a2-4796-83c5-2a88e55566b5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://65bf7844256848a99420ff2a52724a942bd9ee07e0e587b818429ac544865232\\\",\\\"image\\\
":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:47:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-94gkn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://00b8856a5712a7470f8bc58fd07644c58113fde5a273faa89586f36381942c50\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:47:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-94gkn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T03:47:09Z\\\"}}\" for pod 
\"openshift-machine-config-operator\"/\"machine-config-daemon-jxh94\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:47:17Z is after 2025-08-24T17:21:41Z" Jan 31 03:47:17 crc kubenswrapper[4827]: I0131 03:47:17.550011 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-q9q8q" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a696063c-4553-4032-8038-9900f09d4031\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3b899def12de10c70999d453a62d09ed0eec7e6f059a55a7e3f8e4b3ef248205\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,
\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:47:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ckwx4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T0
3:47:09Z\\\"}}\" for pod \"openshift-multus\"/\"multus-q9q8q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:47:17Z is after 2025-08-24T17:21:41Z" Jan 31 03:47:17 crc kubenswrapper[4827]: I0131 03:47:17.566610 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:08Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:47:17Z is after 2025-08-24T17:21:41Z" Jan 31 03:47:17 crc kubenswrapper[4827]: I0131 03:47:17.582026 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:08Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:47:17Z is after 2025-08-24T17:21:41Z" Jan 31 03:47:17 crc kubenswrapper[4827]: I0131 03:47:17.594411 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:47:17 crc kubenswrapper[4827]: I0131 03:47:17.594457 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:47:17 crc kubenswrapper[4827]: I0131 03:47:17.594473 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:47:17 crc 
kubenswrapper[4827]: I0131 03:47:17.594495 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:47:17 crc kubenswrapper[4827]: I0131 03:47:17.594510 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:47:17Z","lastTransitionTime":"2026-01-31T03:47:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 03:47:17 crc kubenswrapper[4827]: I0131 03:47:17.594611 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-w7v8l" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c10db775-d306-4f15-97dd-b1dfed7c89e5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cbc16c20edbd99491eea6e34116e6014fd587c40800c87bff9643e9c0cb6e185\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7
f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:47:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4ch9g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T03:47:09Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-w7v8l\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:47:17Z is after 2025-08-24T17:21:41Z" Jan 31 03:47:17 crc kubenswrapper[4827]: I0131 03:47:17.697617 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:47:17 crc kubenswrapper[4827]: I0131 03:47:17.697698 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:47:17 crc kubenswrapper[4827]: I0131 03:47:17.697722 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:47:17 crc kubenswrapper[4827]: I0131 03:47:17.697753 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:47:17 
crc kubenswrapper[4827]: I0131 03:47:17.697778 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:47:17Z","lastTransitionTime":"2026-01-31T03:47:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 03:47:17 crc kubenswrapper[4827]: I0131 03:47:17.801572 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:47:17 crc kubenswrapper[4827]: I0131 03:47:17.801651 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:47:17 crc kubenswrapper[4827]: I0131 03:47:17.801671 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:47:17 crc kubenswrapper[4827]: I0131 03:47:17.801696 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:47:17 crc kubenswrapper[4827]: I0131 03:47:17.801714 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:47:17Z","lastTransitionTime":"2026-01-31T03:47:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 03:47:17 crc kubenswrapper[4827]: I0131 03:47:17.869496 4827 transport.go:147] "Certificate rotation detected, shutting down client connections to start using new credentials" Jan 31 03:47:17 crc kubenswrapper[4827]: I0131 03:47:17.922218 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:47:17 crc kubenswrapper[4827]: I0131 03:47:17.922310 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:47:17 crc kubenswrapper[4827]: I0131 03:47:17.922341 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:47:17 crc kubenswrapper[4827]: I0131 03:47:17.922370 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:47:17 crc kubenswrapper[4827]: I0131 03:47:17.922390 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:47:17Z","lastTransitionTime":"2026-01-31T03:47:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 03:47:18 crc kubenswrapper[4827]: I0131 03:47:18.042720 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:47:18 crc kubenswrapper[4827]: I0131 03:47:18.042763 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:47:18 crc kubenswrapper[4827]: I0131 03:47:18.042774 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:47:18 crc kubenswrapper[4827]: I0131 03:47:18.042789 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:47:18 crc kubenswrapper[4827]: I0131 03:47:18.042801 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:47:18Z","lastTransitionTime":"2026-01-31T03:47:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 03:47:18 crc kubenswrapper[4827]: I0131 03:47:18.071019 4827 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-30 11:27:35.911621691 +0000 UTC Jan 31 03:47:18 crc kubenswrapper[4827]: I0131 03:47:18.109496 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 03:47:18 crc kubenswrapper[4827]: I0131 03:47:18.109604 4827 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 31 03:47:18 crc kubenswrapper[4827]: E0131 03:47:18.110006 4827 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 31 03:47:18 crc kubenswrapper[4827]: E0131 03:47:18.110232 4827 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 31 03:47:18 crc kubenswrapper[4827]: I0131 03:47:18.133061 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-q9q8q" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a696063c-4553-4032-8038-9900f09d4031\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3b899def12de10c70999d453a62d09ed0eec7e6f059a55a7e3f8e4b3ef248205\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:47:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ckwx4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T03:47:09Z\\\"}}\" for pod \"openshift-multus\"/\"multus-q9q8q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:47:18Z is after 2025-08-24T17:21:41Z" Jan 31 03:47:18 crc kubenswrapper[4827]: I0131 03:47:18.146126 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:47:18 crc 
kubenswrapper[4827]: I0131 03:47:18.146183 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:47:18 crc kubenswrapper[4827]: I0131 03:47:18.146216 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:47:18 crc kubenswrapper[4827]: I0131 03:47:18.146234 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:47:18 crc kubenswrapper[4827]: I0131 03:47:18.146246 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:47:18Z","lastTransitionTime":"2026-01-31T03:47:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 03:47:18 crc kubenswrapper[4827]: I0131 03:47:18.157140 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-gjc5t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b5dbff7a-4ed0-4c17-bd01-1888199225b3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:09Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:09Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fs4bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c615ec47f98d56124c11d46f6a3875b5718974234f752615411e14b5d8913dcc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c615ec47f98d56124c11d46f6a3875b5718974234f752615411e14b5d8913dcc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T03:47:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T03:47:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"moun
tPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fs4bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c9d2a6506fd6b67003ff94a906cb0c74a7eefa9f389d5fe3929fcb743b877f8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c9d2a6506fd6b67003ff94a906cb0c74a7eefa9f389d5fe3929fcb743b877f8a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T03:47:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T03:47:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fs4bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f5b9fd1e8570701b512508eb10490fa6c4e5f
dc63b7fdb0c4810896a46be65a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f5b9fd1e8570701b512508eb10490fa6c4e5fdc63b7fdb0c4810896a46be65a7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T03:47:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T03:47:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fs4bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b944c8ab4e6079ae5a7a5c52f9b336fc71aa3273ed087499ffa1cc5396053660\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b944c8ab4e6079ae5a7a5c52f9b336fc71aa3273ed087499ffa1cc5396053660\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T03:47:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-0
1-31T03:47:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fs4bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aac07d8c3c396081f637968cd1c64525967cd198273719be0af7bee01d35fb39\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aac07d8c3c396081f637968cd1c64525967cd198273719be0af7bee01d35fb39\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T03:47:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T03:47:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fs4bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://76e43eb316b4b459d9232b72620b1892bf11a896f007d5c879381d8349aa1178\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha25
6:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://76e43eb316b4b459d9232b72620b1892bf11a896f007d5c879381d8349aa1178\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T03:47:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T03:47:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fs4bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T03:47:09Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-gjc5t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:47:18Z is after 2025-08-24T17:21:41Z" Jan 31 03:47:18 crc kubenswrapper[4827]: I0131 03:47:18.180401 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4273172d-ec24-4540-85cb-efc58aed3421\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:46:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:46:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:46:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4146d39a1a819389ddaee9e11df66783f6b1b98ee5fb2a244d8ba9ff2ecc88f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:46:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6b86ba80bd25344b29df1636aa3d8cc4cecb0e5f9c0005b4896d3ee86cd80626\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:46:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8ae08b52cf4a0a6cc4589bbf0f0fbc8527ebac3455b90185928f6fa0b6c52874\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:46:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://125b31c980495c37837eedbbbd38b795fd6e568a429da5c2914799306e7b3a46\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cfc1fe8805b7b20bc4e718ed74d4c2f265e69ed3fdc328c0f1da81450bb78bde\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-31T03:47:02Z\\\"
,\\\"message\\\":\\\"W0131 03:46:51.337568 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0131 03:46:51.338014 1 crypto.go:601] Generating new CA for check-endpoints-signer@1769831211 cert, and key in /tmp/serving-cert-4099090867/serving-signer.crt, /tmp/serving-cert-4099090867/serving-signer.key\\\\nI0131 03:46:51.617433 1 observer_polling.go:159] Starting file observer\\\\nW0131 03:46:51.621205 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0131 03:46:51.621653 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0131 03:46:51.625463 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-4099090867/tls.crt::/tmp/serving-cert-4099090867/tls.key\\\\\\\"\\\\nF0131 03:47:02.085590 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake 
timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T03:46:51Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:47:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e3b2757893ba910c36404f09734ea47eee3ac5a8cfcc4c06ee4ed2ada48937b1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:46:51Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f40eed75b4eac164f95a4e1a719fe3e1da60abf85d533da313ceeb6f4fb07efd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f40eed75b4eac164f95a4e1a719fe3e1da60abf85d533da313ceeb6f4fb07efd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T03:46:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"s
tartedAt\\\":\\\"2026-01-31T03:46:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T03:46:48Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:47:18Z is after 2025-08-24T17:21:41Z" Jan 31 03:47:18 crc kubenswrapper[4827]: I0131 03:47:18.202408 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-jxh94" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e63dbb73-e1a2-4796-83c5-2a88e55566b5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://65bf7844256848a99420ff2a52724a942bd9ee07e0e587b818429ac544865232\\\",\\\"image\\\
":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:47:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-94gkn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://00b8856a5712a7470f8bc58fd07644c58113fde5a273faa89586f36381942c50\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:47:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-94gkn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T03:47:09Z\\\"}}\" for pod 
\"openshift-machine-config-operator\"/\"machine-config-daemon-jxh94\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:47:18Z is after 2025-08-24T17:21:41Z" Jan 31 03:47:18 crc kubenswrapper[4827]: I0131 03:47:18.218765 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:08Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:47:18Z is after 2025-08-24T17:21:41Z" Jan 31 03:47:18 crc kubenswrapper[4827]: I0131 03:47:18.241301 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:08Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:47:18Z is after 2025-08-24T17:21:41Z" Jan 31 03:47:18 crc kubenswrapper[4827]: I0131 03:47:18.248066 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:47:18 crc kubenswrapper[4827]: I0131 03:47:18.248111 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:47:18 crc kubenswrapper[4827]: I0131 03:47:18.248124 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:47:18 crc 
kubenswrapper[4827]: I0131 03:47:18.248144 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:47:18 crc kubenswrapper[4827]: I0131 03:47:18.248160 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:47:18Z","lastTransitionTime":"2026-01-31T03:47:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 03:47:18 crc kubenswrapper[4827]: I0131 03:47:18.260708 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-w7v8l" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c10db775-d306-4f15-97dd-b1dfed7c89e5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cbc16c20edbd99491eea6e34116e6014fd587c40800c87bff9643e9c0cb6e185\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7
f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:47:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4ch9g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T03:47:09Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-w7v8l\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:47:18Z is after 2025-08-24T17:21:41Z" Jan 31 03:47:18 crc kubenswrapper[4827]: I0131 03:47:18.280187 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"64294dab-2843-4893-8b19-59f3ae404e02\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:46:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:46:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:46:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ff49646239dea28804978f1b3b721dcc9454572ff2301d0c92e146dacb2a3e5a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:46:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fb9a8fcde75a532b1906ceea4704ddfecbffeb17456377272e5792f200ee67d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:46:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://73a7c7196fc77ac970ea3114b593db0c9bef943c760526721ab5492975884237\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:46:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ce197f7aa8a7bebdecdbd760e9029786331bd9497f724b0a57f57278a5b1b04c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-31T03:46:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T03:46:48Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:47:18Z is after 2025-08-24T17:21:41Z" Jan 31 03:47:18 crc kubenswrapper[4827]: I0131 03:47:18.301517 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:08Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:47:18Z is after 2025-08-24T17:21:41Z" Jan 31 03:47:18 crc kubenswrapper[4827]: I0131 03:47:18.321763 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a0fbdd57c20e0d5bda95b44f22f117b07306ef48d75d372d958559048474af9b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:47:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-31T03:47:18Z is after 2025-08-24T17:21:41Z" Jan 31 03:47:18 crc kubenswrapper[4827]: I0131 03:47:18.343184 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:11Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:11Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://955a3f09f24522656e31cfabe08613a5129fe3d1d0693756697e3ef0469e062a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:47:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\
\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:47:18Z is after 2025-08-24T17:21:41Z" Jan 31 03:47:18 crc kubenswrapper[4827]: I0131 03:47:18.350573 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:47:18 crc kubenswrapper[4827]: I0131 03:47:18.350611 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:47:18 crc kubenswrapper[4827]: I0131 03:47:18.350622 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:47:18 crc kubenswrapper[4827]: I0131 03:47:18.350638 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:47:18 crc kubenswrapper[4827]: I0131 03:47:18.350652 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:47:18Z","lastTransitionTime":"2026-01-31T03:47:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 03:47:18 crc kubenswrapper[4827]: I0131 03:47:18.356076 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hj2zw" event={"ID":"da9e7773-a24b-4e8d-b479-97e2594db0d4","Type":"ContainerStarted","Data":"8b31ec94e62ff350ce9c7bd40cf6a0b0fc7a5d327d85a0d388736e5e27b3aeb4"} Jan 31 03:47:18 crc kubenswrapper[4827]: I0131 03:47:18.356633 4827 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-hj2zw" Jan 31 03:47:18 crc kubenswrapper[4827]: I0131 03:47:18.361523 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-gjc5t" event={"ID":"b5dbff7a-4ed0-4c17-bd01-1888199225b3","Type":"ContainerStarted","Data":"cf2e9dfa02cc1d2d5e655654c4d731363a32dec4d67423160996c24260e04a79"} Jan 31 03:47:18 crc kubenswrapper[4827]: I0131 03:47:18.365156 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://65119775a6ab2e8f81a0443d8093e0570c2c49a1b081b7b694780d07eba7ef4d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:47:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0fdf29d063e0940532d696d183771e56ce37b7b75f82548bf7ff5165ab20ccff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:47:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:47:18Z is after 2025-08-24T17:21:41Z" Jan 31 03:47:18 crc kubenswrapper[4827]: I0131 03:47:18.389959 4827 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-hj2zw" Jan 31 03:47:18 crc kubenswrapper[4827]: I0131 03:47:18.398092 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hj2zw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"da9e7773-a24b-4e8d-b479-97e2594db0d4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:09Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:09Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mt4z5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mt4z5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mt4z5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mt4z5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mt4z5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mt4z5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mt4z5\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mt4z5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://96ea399662528dfc8c4bac4c2b710685cb5897e67739704548ca01353d72672c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://96ea399662528dfc8c4bac4c2b710685cb5897e67739704548ca01353d72672c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T03:47:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T03:47:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mt4z5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T03:47:09Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-hj2zw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:47:18Z is after 2025-08-24T17:21:41Z" Jan 31 03:47:18 crc kubenswrapper[4827]: I0131 03:47:18.414550 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-cl9c5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"77746487-d08f-4da6-82a3-bc7d8845841a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5098b5f6a44c89953722806e420dc66eaad66f11faf1c5b526bfc8ebf87364d4\\\",\\\"image\\\":\\\"quay.io/openshift-rele
ase-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:47:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5xwfq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T03:47:11Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-cl9c5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:47:18Z is after 2025-08-24T17:21:41Z" Jan 31 03:47:18 crc kubenswrapper[4827]: I0131 03:47:18.432460 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"64294dab-2843-4893-8b19-59f3ae404e02\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:46:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:46:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:46:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ff49646239dea28804978f1b3b721dcc9454572ff2301d0c92e146dacb2a3e5a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:46:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fb9a8fcde75a532b1906ceea4704ddfecbffeb17456377272e5792f200ee67d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:46:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://73a7c7196fc77ac970ea3114b593db0c9bef943c760526721ab5492975884237\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:46:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ce197f7aa8a7bebdecdbd760e9029786331bd9497f724b0a57f57278a5b1b04c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-31T03:46:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T03:46:48Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:47:18Z is after 2025-08-24T17:21:41Z" Jan 31 03:47:18 crc kubenswrapper[4827]: I0131 03:47:18.450109 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:08Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:47:18Z is after 2025-08-24T17:21:41Z" Jan 31 03:47:18 crc kubenswrapper[4827]: I0131 03:47:18.453180 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:47:18 crc kubenswrapper[4827]: I0131 03:47:18.453252 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:47:18 crc kubenswrapper[4827]: I0131 03:47:18.453276 4827 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:47:18 crc kubenswrapper[4827]: I0131 03:47:18.453310 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:47:18 crc kubenswrapper[4827]: I0131 03:47:18.453330 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:47:18Z","lastTransitionTime":"2026-01-31T03:47:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 03:47:18 crc kubenswrapper[4827]: I0131 03:47:18.468042 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a0fbdd57c20e0d5bda95b44f22f117b07306ef48d75d372d958559048474af9b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c
04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:47:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:47:18Z is after 2025-08-24T17:21:41Z" Jan 31 03:47:18 crc kubenswrapper[4827]: I0131 03:47:18.489270 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:11Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:11Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://955a3f09f24522656e31cfabe08613a5129fe3d1d0693756697e3ef0469e062a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:47:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-31T03:47:18Z is after 2025-08-24T17:21:41Z" Jan 31 03:47:18 crc kubenswrapper[4827]: I0131 03:47:18.508354 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://65119775a6ab2e8f81a0443d8093e0570c2c49a1b081b7b694780d07eba7ef4d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:47:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0fdf29d063e0940
532d696d183771e56ce37b7b75f82548bf7ff5165ab20ccff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:47:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:47:18Z is after 2025-08-24T17:21:41Z" Jan 31 03:47:18 crc kubenswrapper[4827]: I0131 03:47:18.543309 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hj2zw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"da9e7773-a24b-4e8d-b479-97e2594db0d4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:09Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:09Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bb9ced0bc1beedf1c5779410e90fa61227f5df3de7fa950ab69e1ef44b4a4918\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:47:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mt4z5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2485bd5d17713b9b9184c6b56b1dfda19a229476eabd904d3eda350c30892753\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:47:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\
",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mt4z5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://70200918240e4bc9d8f7fcb1d95be3721faa9394058a1a9671da44d9b7a915e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:47:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mt4z5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://32e69930c4476bd3a2f767f012fb4a2e00de7d46da936c6f0d31029994108e48\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:47:12Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mt4z5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ae8d3d61d2579c0afddd6c9728935c463fec8760a9fc830473c905de5f40c44\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:47:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mt4z5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ed1f4882b03bd25a6073ed3ddeb73d5a0da61c8a8af3ebfab5c90a83ce6af80f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:47:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mt4z5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8b31ec94e62ff350ce9c7bd40cf6a0b0fc7a5d327d85a0d388736e5e27b3aeb4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:47:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnl
y\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mt4z5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2a0cc10519a103d1d49acd3e239636f7ac9b3a9daa533ce1f485b26ccfffad3d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b
17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:47:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mt4z5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://96ea399662528dfc8c4bac4c2b710685cb5897e67739704548ca01353d72672c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://96ea399662528dfc8c4bac4c2b710685cb5897e67739704548ca01353d72672c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T03:47:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T03:47:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mt4z5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Run
ning\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T03:47:09Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-hj2zw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:47:18Z is after 2025-08-24T17:21:41Z" Jan 31 03:47:18 crc kubenswrapper[4827]: I0131 03:47:18.556242 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:47:18 crc kubenswrapper[4827]: I0131 03:47:18.556301 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:47:18 crc kubenswrapper[4827]: I0131 03:47:18.556319 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:47:18 crc kubenswrapper[4827]: I0131 03:47:18.556344 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:47:18 crc kubenswrapper[4827]: I0131 03:47:18.556364 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:47:18Z","lastTransitionTime":"2026-01-31T03:47:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 03:47:18 crc kubenswrapper[4827]: I0131 03:47:18.557698 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-cl9c5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"77746487-d08f-4da6-82a3-bc7d8845841a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5098b5f6a44c89953722806e420dc66eaad66f11faf1c5b526bfc8ebf87364d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:47:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes
.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5xwfq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T03:47:11Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-cl9c5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:47:18Z is after 2025-08-24T17:21:41Z" Jan 31 03:47:18 crc kubenswrapper[4827]: I0131 03:47:18.574617 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4273172d-ec24-4540-85cb-efc58aed3421\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:46:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:46:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:46:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4146d39a1a819389ddaee9e11df66783f6b1b98ee5fb2a244d8ba9ff2ecc88f9\\\",\\\"image\\\":\\\"quay.io/openshift-rel
ease-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:46:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6b86ba80bd25344b29df1636aa3d8cc4cecb0e5f9c0005b4896d3ee86cd80626\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:46:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8ae08b52cf4a0a6cc4589bbf0f0fbc8527ebac3455b90185928f6fa0b6c52874\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"rea
dy\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:46:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://125b31c980495c37837eedbbbd38b795fd6e568a429da5c2914799306e7b3a46\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cfc1fe8805b7b20bc4e718ed74d4c2f265e69ed3fdc328c0f1da81450bb78bde\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-31T03:47:02Z\\\",\\\"message\\\":\\\"W0131 03:46:51.337568 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0131 03:46:51.338014 1 crypto.go:601] Generating new CA for check-endpoints-signer@1769831211 cert, and key in /tmp/serving-cert-4099090867/serving-signer.crt, /tmp/serving-cert-4099090867/serving-signer.key\\\\nI0131 03:46:51.617433 1 observer_polling.go:159] Starting file observer\\\\nW0131 03:46:51.621205 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0131 03:46:51.621653 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0131 03:46:51.625463 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-4099090867/tls.crt::/tmp/serving-cert-4099090867/tls.key\\\\\\\"\\\\nF0131 03:47:02.085590 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T03:46:51Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:47:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e3b2757893ba910c36404f09734ea47eee3ac5a8cfcc4c06ee4ed2ada48937b1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:46:51Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f40eed75b4eac164f95a4e1a719fe3e1da60abf85d533da313ceeb6f4fb07efd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d
34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f40eed75b4eac164f95a4e1a719fe3e1da60abf85d533da313ceeb6f4fb07efd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T03:46:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T03:46:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T03:46:48Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:47:18Z is after 2025-08-24T17:21:41Z" Jan 31 03:47:18 crc kubenswrapper[4827]: I0131 03:47:18.592756 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-jxh94" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e63dbb73-e1a2-4796-83c5-2a88e55566b5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://65bf7844256848a99420ff2a52724a942bd9ee07e0e587b818429ac544865232\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:47:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-94gkn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://00b8856a5712a7470f8bc58fd07644c58113fde5
a273faa89586f36381942c50\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:47:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-94gkn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T03:47:09Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-jxh94\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:47:18Z is after 2025-08-24T17:21:41Z" Jan 31 03:47:18 crc kubenswrapper[4827]: I0131 03:47:18.613073 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-q9q8q" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a696063c-4553-4032-8038-9900f09d4031\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3b899def12de10c70999d453a62d09ed0eec7e6f059a55a7e3f8e4b3ef248205\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:47:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ckwx4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T03:47:09Z\\\"}}\" for pod \"openshift-multus\"/\"multus-q9q8q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:47:18Z is after 2025-08-24T17:21:41Z" Jan 31 03:47:18 crc kubenswrapper[4827]: I0131 03:47:18.640300 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-gjc5t" err="failed 
to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b5dbff7a-4ed0-4c17-bd01-1888199225b3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cf2e9dfa02cc1d2d5e655654c4d731363a32dec4d67423160996c24260e04a79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:47:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fs4bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c615ec47f98d56124c11d46f6a3875b5718974234f7526154
11e14b5d8913dcc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c615ec47f98d56124c11d46f6a3875b5718974234f752615411e14b5d8913dcc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T03:47:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T03:47:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fs4bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c9d2a6506fd6b67003ff94a906cb0c74a7eefa9f389d5fe3929fcb743b877f8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c9d2a6506fd6b67003ff94a906cb0c74a7eefa9f389d5fe3929fcb743b877f8a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T03:47:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T03:
47:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fs4bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f5b9fd1e8570701b512508eb10490fa6c4e5fdc63b7fdb0c4810896a46be65a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f5b9fd1e8570701b512508eb10490fa6c4e5fdc63b7fdb0c4810896a46be65a7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T03:47:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T03:47:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fs4bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\
\\"cri-o://b944c8ab4e6079ae5a7a5c52f9b336fc71aa3273ed087499ffa1cc5396053660\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b944c8ab4e6079ae5a7a5c52f9b336fc71aa3273ed087499ffa1cc5396053660\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T03:47:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T03:47:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fs4bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aac07d8c3c396081f637968cd1c64525967cd198273719be0af7bee01d35fb39\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aac07d8c3c396081f637968cd1c64525967cd198273719be0af7bee01d35fb39\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T03:47:16Z\\\",\\\"r
eason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T03:47:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fs4bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://76e43eb316b4b459d9232b72620b1892bf11a896f007d5c879381d8349aa1178\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://76e43eb316b4b459d9232b72620b1892bf11a896f007d5c879381d8349aa1178\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T03:47:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T03:47:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fs4bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T03:47:09Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-gjc5t\": Internal error 
occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:47:18Z is after 2025-08-24T17:21:41Z" Jan 31 03:47:18 crc kubenswrapper[4827]: I0131 03:47:18.659395 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:47:18 crc kubenswrapper[4827]: I0131 03:47:18.659460 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:47:18 crc kubenswrapper[4827]: I0131 03:47:18.659476 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:47:18 crc kubenswrapper[4827]: I0131 03:47:18.659498 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:47:18 crc kubenswrapper[4827]: I0131 03:47:18.659510 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:47:18Z","lastTransitionTime":"2026-01-31T03:47:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 03:47:18 crc kubenswrapper[4827]: I0131 03:47:18.664141 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:08Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:47:18Z is after 2025-08-24T17:21:41Z" Jan 31 03:47:18 crc kubenswrapper[4827]: I0131 03:47:18.680503 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:08Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:47:18Z is after 2025-08-24T17:21:41Z" Jan 31 03:47:18 crc kubenswrapper[4827]: I0131 03:47:18.697721 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-w7v8l" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c10db775-d306-4f15-97dd-b1dfed7c89e5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cbc16c20edbd99491eea6e34116e6014fd587c40800c87bff9643e9c0cb6e185\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:47:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4ch9g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T03:47:09Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-w7v8l\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:47:18Z is after 2025-08-24T17:21:41Z" Jan 31 03:47:18 crc kubenswrapper[4827]: I0131 03:47:18.763243 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:47:18 crc kubenswrapper[4827]: I0131 03:47:18.763314 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:47:18 crc kubenswrapper[4827]: I0131 03:47:18.763338 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:47:18 crc kubenswrapper[4827]: I0131 03:47:18.763366 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:47:18 crc kubenswrapper[4827]: I0131 03:47:18.763386 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:47:18Z","lastTransitionTime":"2026-01-31T03:47:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 03:47:18 crc kubenswrapper[4827]: I0131 03:47:18.866459 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:47:18 crc kubenswrapper[4827]: I0131 03:47:18.866508 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:47:18 crc kubenswrapper[4827]: I0131 03:47:18.866526 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:47:18 crc kubenswrapper[4827]: I0131 03:47:18.866549 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:47:18 crc kubenswrapper[4827]: I0131 03:47:18.866565 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:47:18Z","lastTransitionTime":"2026-01-31T03:47:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 03:47:18 crc kubenswrapper[4827]: I0131 03:47:18.969439 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:47:18 crc kubenswrapper[4827]: I0131 03:47:18.969488 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:47:18 crc kubenswrapper[4827]: I0131 03:47:18.969504 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:47:18 crc kubenswrapper[4827]: I0131 03:47:18.969527 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:47:18 crc kubenswrapper[4827]: I0131 03:47:18.969544 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:47:18Z","lastTransitionTime":"2026-01-31T03:47:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 03:47:19 crc kubenswrapper[4827]: I0131 03:47:19.071188 4827 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-01 13:29:00.272662252 +0000 UTC Jan 31 03:47:19 crc kubenswrapper[4827]: I0131 03:47:19.072590 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:47:19 crc kubenswrapper[4827]: I0131 03:47:19.072652 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:47:19 crc kubenswrapper[4827]: I0131 03:47:19.072672 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:47:19 crc kubenswrapper[4827]: I0131 03:47:19.072700 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:47:19 crc kubenswrapper[4827]: I0131 03:47:19.072719 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:47:19Z","lastTransitionTime":"2026-01-31T03:47:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 03:47:19 crc kubenswrapper[4827]: I0131 03:47:19.109006 4827 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 31 03:47:19 crc kubenswrapper[4827]: E0131 03:47:19.109196 4827 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 31 03:47:19 crc kubenswrapper[4827]: I0131 03:47:19.176560 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:47:19 crc kubenswrapper[4827]: I0131 03:47:19.176628 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:47:19 crc kubenswrapper[4827]: I0131 03:47:19.176652 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:47:19 crc kubenswrapper[4827]: I0131 03:47:19.176681 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:47:19 crc kubenswrapper[4827]: I0131 03:47:19.176700 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:47:19Z","lastTransitionTime":"2026-01-31T03:47:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 03:47:19 crc kubenswrapper[4827]: I0131 03:47:19.279761 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:47:19 crc kubenswrapper[4827]: I0131 03:47:19.279862 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:47:19 crc kubenswrapper[4827]: I0131 03:47:19.279902 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:47:19 crc kubenswrapper[4827]: I0131 03:47:19.279928 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:47:19 crc kubenswrapper[4827]: I0131 03:47:19.279950 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:47:19Z","lastTransitionTime":"2026-01-31T03:47:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 03:47:19 crc kubenswrapper[4827]: I0131 03:47:19.364638 4827 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 31 03:47:19 crc kubenswrapper[4827]: I0131 03:47:19.364925 4827 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-hj2zw" Jan 31 03:47:19 crc kubenswrapper[4827]: I0131 03:47:19.382720 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:47:19 crc kubenswrapper[4827]: I0131 03:47:19.382781 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:47:19 crc kubenswrapper[4827]: I0131 03:47:19.382807 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:47:19 crc kubenswrapper[4827]: I0131 03:47:19.382835 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:47:19 crc kubenswrapper[4827]: I0131 03:47:19.382860 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:47:19Z","lastTransitionTime":"2026-01-31T03:47:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 03:47:19 crc kubenswrapper[4827]: I0131 03:47:19.425747 4827 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-hj2zw" Jan 31 03:47:19 crc kubenswrapper[4827]: I0131 03:47:19.450852 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://65119775a6ab2e8f81a0443d8093e0570c2c49a1b081b7b694780d07eba7ef4d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:47:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-acc
ess-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0fdf29d063e0940532d696d183771e56ce37b7b75f82548bf7ff5165ab20ccff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:47:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:47:19Z is after 2025-08-24T17:21:41Z" Jan 31 03:47:19 crc kubenswrapper[4827]: I0131 03:47:19.474795 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hj2zw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"da9e7773-a24b-4e8d-b479-97e2594db0d4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:09Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:09Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bb9ced0bc1beedf1c5779410e90fa61227f5df3de7fa950ab69e1ef44b4a4918\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:47:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mt4z5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2485bd5d17713b9b9184c6b56b1dfda19a229476eabd904d3eda350c30892753\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:47:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mt4z5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://70200918240e4bc9d8f7fcb1d95be3721faa9394058a1a9671da44d9b7a915e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:47:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mt4z5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://32e69930c4476bd3a2f767f012fb4a2e00de7d46da936c6f0d31029994108e48\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:47:12Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mt4z5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ae8d3d61d2579c0afddd6c9728935c463fec8760a9fc830473c905de5f40c44\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:47:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mt4z5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ed1f4882b03bd25a6073ed3ddeb73d5a0da61c8a8af3ebfab5c90a83ce6af80f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:47:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mt4z5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8b31ec94e62ff350ce9c7bd40cf6a0b0fc7a5d327d85a0d388736e5e27b3aeb4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:47:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnl
y\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mt4z5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2a0cc10519a103d1d49acd3e239636f7ac9b3a9daa533ce1f485b26ccfffad3d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b
17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:47:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mt4z5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://96ea399662528dfc8c4bac4c2b710685cb5897e67739704548ca01353d72672c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://96ea399662528dfc8c4bac4c2b710685cb5897e67739704548ca01353d72672c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T03:47:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T03:47:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mt4z5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Run
ning\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T03:47:09Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-hj2zw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:47:19Z is after 2025-08-24T17:21:41Z" Jan 31 03:47:19 crc kubenswrapper[4827]: I0131 03:47:19.485768 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:47:19 crc kubenswrapper[4827]: I0131 03:47:19.486089 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:47:19 crc kubenswrapper[4827]: I0131 03:47:19.486247 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:47:19 crc kubenswrapper[4827]: I0131 03:47:19.486359 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:47:19 crc kubenswrapper[4827]: I0131 03:47:19.486507 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:47:19Z","lastTransitionTime":"2026-01-31T03:47:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 03:47:19 crc kubenswrapper[4827]: I0131 03:47:19.492642 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-cl9c5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"77746487-d08f-4da6-82a3-bc7d8845841a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5098b5f6a44c89953722806e420dc66eaad66f11faf1c5b526bfc8ebf87364d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:47:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes
.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5xwfq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T03:47:11Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-cl9c5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:47:19Z is after 2025-08-24T17:21:41Z" Jan 31 03:47:19 crc kubenswrapper[4827]: I0131 03:47:19.514348 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-jxh94" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e63dbb73-e1a2-4796-83c5-2a88e55566b5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://65bf7844256
848a99420ff2a52724a942bd9ee07e0e587b818429ac544865232\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:47:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-94gkn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://00b8856a5712a7470f8bc58fd07644c58113fde5a273faa89586f36381942c50\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:47:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-94gkn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"s
tartTime\\\":\\\"2026-01-31T03:47:09Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-jxh94\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:47:19Z is after 2025-08-24T17:21:41Z" Jan 31 03:47:19 crc kubenswrapper[4827]: I0131 03:47:19.532710 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-q9q8q" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a696063c-4553-4032-8038-9900f09d4031\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3b899def12de10c70999d453a62d09ed0eec7e6f059a55a7e3f8e4b3ef248205\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{
},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:47:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ckwx4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"1
92.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T03:47:09Z\\\"}}\" for pod \"openshift-multus\"/\"multus-q9q8q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:47:19Z is after 2025-08-24T17:21:41Z" Jan 31 03:47:19 crc kubenswrapper[4827]: I0131 03:47:19.551975 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-gjc5t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b5dbff7a-4ed0-4c17-bd01-1888199225b3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cf2e9dfa02cc1d2d5e655654c4d731363a32dec4d67423160996c24260e04a79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\
"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:47:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fs4bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c615ec47f98d56124c11d46f6a3875b5718974234f752615411e14b5d8913dcc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c615ec47f98d56124c11d46f6a3875b5718974234f752615411e14b5d8913dcc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T03:47:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T03:47:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fs4bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c9d2a6506fd6b67003ff94a906cb0c74a7eefa9f389d5fe3929fcb743b877f8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c9d2a6506fd6b67003ff94a906cb0c74a7eefa9f389d5fe3929fcb743b877f8a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T03:47:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T03:47:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fs4bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f5b9fd1e8570701b512508eb10490fa6c4e5fdc63b7fdb0c4810896a46be65a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f5b9fd1e8570701b512508eb10490fa6c4e5fdc63b7fdb0c4810896a46be65a7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"20
26-01-31T03:47:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T03:47:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fs4bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b944c8ab4e6079ae5a7a5c52f9b336fc71aa3273ed087499ffa1cc5396053660\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b944c8ab4e6079ae5a7a5c52f9b336fc71aa3273ed087499ffa1cc5396053660\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T03:47:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T03:47:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fs4bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aac07d8c3c396081f637968cd1c64525967cd198273719be0af7bee01d35f
b39\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aac07d8c3c396081f637968cd1c64525967cd198273719be0af7bee01d35fb39\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T03:47:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T03:47:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fs4bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://76e43eb316b4b459d9232b72620b1892bf11a896f007d5c879381d8349aa1178\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://76e43eb316b4b459d9232b72620b1892bf11a896f007d5c879381d8349aa1178\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T03:47:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T03:47:16Z\\\"
}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fs4bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T03:47:09Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-gjc5t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:47:19Z is after 2025-08-24T17:21:41Z" Jan 31 03:47:19 crc kubenswrapper[4827]: I0131 03:47:19.577150 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4273172d-ec24-4540-85cb-efc58aed3421\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:46:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:46:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:46:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4146d39a1a819389ddaee9e11df66783f6b1b98ee5fb2a244d8ba9ff2ecc88f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:46:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6b86ba80bd25344b29df1636aa3d8cc4cecb0e5f9c0005b4896d3ee86cd80626\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:46:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8ae08b52cf4a0a6cc4589bbf0f0fbc8527ebac3455b90185928f6fa0b6c52874\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:46:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://125b31c980495c37837eedbbbd38b795fd6e568a429da5c2914799306e7b3a46\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cfc1fe8805b7b20bc4e718ed74d4c2f265e69ed3fdc328c0f1da81450bb78bde\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-31T03:47:02Z\\\"
,\\\"message\\\":\\\"W0131 03:46:51.337568 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0131 03:46:51.338014 1 crypto.go:601] Generating new CA for check-endpoints-signer@1769831211 cert, and key in /tmp/serving-cert-4099090867/serving-signer.crt, /tmp/serving-cert-4099090867/serving-signer.key\\\\nI0131 03:46:51.617433 1 observer_polling.go:159] Starting file observer\\\\nW0131 03:46:51.621205 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0131 03:46:51.621653 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0131 03:46:51.625463 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-4099090867/tls.crt::/tmp/serving-cert-4099090867/tls.key\\\\\\\"\\\\nF0131 03:47:02.085590 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake 
timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T03:46:51Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:47:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e3b2757893ba910c36404f09734ea47eee3ac5a8cfcc4c06ee4ed2ada48937b1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:46:51Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f40eed75b4eac164f95a4e1a719fe3e1da60abf85d533da313ceeb6f4fb07efd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f40eed75b4eac164f95a4e1a719fe3e1da60abf85d533da313ceeb6f4fb07efd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T03:46:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"s
tartedAt\\\":\\\"2026-01-31T03:46:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T03:46:48Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:47:19Z is after 2025-08-24T17:21:41Z" Jan 31 03:47:19 crc kubenswrapper[4827]: I0131 03:47:19.589861 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:47:19 crc kubenswrapper[4827]: I0131 03:47:19.589918 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:47:19 crc kubenswrapper[4827]: I0131 03:47:19.589930 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:47:19 crc kubenswrapper[4827]: I0131 03:47:19.589947 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:47:19 crc kubenswrapper[4827]: I0131 03:47:19.589972 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:47:19Z","lastTransitionTime":"2026-01-31T03:47:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 03:47:19 crc kubenswrapper[4827]: I0131 03:47:19.598587 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:08Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:47:19Z is after 2025-08-24T17:21:41Z" Jan 31 03:47:19 crc kubenswrapper[4827]: I0131 03:47:19.611927 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:08Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:47:19Z is after 2025-08-24T17:21:41Z" Jan 31 03:47:19 crc kubenswrapper[4827]: I0131 03:47:19.620835 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-w7v8l" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c10db775-d306-4f15-97dd-b1dfed7c89e5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cbc16c20edbd99491eea6e34116e6014fd587c40800c87bff9643e9c0cb6e185\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:47:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4ch9g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T03:47:09Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-w7v8l\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:47:19Z is after 2025-08-24T17:21:41Z" Jan 31 03:47:19 crc kubenswrapper[4827]: I0131 03:47:19.633265 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:11Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:11Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://955a3f09f24522656e31cfabe08613a5129fe3d1d0693756697e3ef0469e062a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:47:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-al
erter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:47:19Z is after 2025-08-24T17:21:41Z" Jan 31 03:47:19 crc kubenswrapper[4827]: I0131 03:47:19.645750 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"64294dab-2843-4893-8b19-59f3ae404e02\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:46:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:46:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:46:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ff49646239dea28804978f1b3b721dcc9454572ff2301d0c92e146dacb2a3e5a\\\",\\\"image\\\":\\\"quay.io/opensh
ift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:46:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fb9a8fcde75a532b1906ceea4704ddfecbffeb17456377272e5792f200ee67d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:46:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://73a7c7196fc77ac970ea3114b593db0c9bef943c760526721ab5492975884237\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\
\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:46:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ce197f7aa8a7bebdecdbd760e9029786331bd9497f724b0a57f57278a5b1b04c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:46:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T03:46:48Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:47:19Z is after 2025-08-24T17:21:41Z" Jan 31 03:47:19 crc kubenswrapper[4827]: I0131 03:47:19.657228 4827 status_manager.go:875] "Failed to update 
status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:08Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:47:19Z is after 2025-08-24T17:21:41Z" Jan 31 03:47:19 crc kubenswrapper[4827]: I0131 03:47:19.674335 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a0fbdd57c20e0d5bda95b44f22f117b07306ef48d75d372d958559048474af9b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:47:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-31T03:47:19Z is after 2025-08-24T17:21:41Z" Jan 31 03:47:19 crc kubenswrapper[4827]: I0131 03:47:19.692175 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:47:19 crc kubenswrapper[4827]: I0131 03:47:19.692208 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:47:19 crc kubenswrapper[4827]: I0131 03:47:19.692221 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:47:19 crc kubenswrapper[4827]: I0131 03:47:19.692238 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:47:19 crc kubenswrapper[4827]: I0131 03:47:19.692251 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:47:19Z","lastTransitionTime":"2026-01-31T03:47:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 03:47:19 crc kubenswrapper[4827]: I0131 03:47:19.797866 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:47:19 crc kubenswrapper[4827]: I0131 03:47:19.798125 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:47:19 crc kubenswrapper[4827]: I0131 03:47:19.798183 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:47:19 crc kubenswrapper[4827]: I0131 03:47:19.798318 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:47:19 crc kubenswrapper[4827]: I0131 03:47:19.798374 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:47:19Z","lastTransitionTime":"2026-01-31T03:47:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 03:47:19 crc kubenswrapper[4827]: I0131 03:47:19.901009 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:47:19 crc kubenswrapper[4827]: I0131 03:47:19.901234 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:47:19 crc kubenswrapper[4827]: I0131 03:47:19.901341 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:47:19 crc kubenswrapper[4827]: I0131 03:47:19.901419 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:47:19 crc kubenswrapper[4827]: I0131 03:47:19.901499 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:47:19Z","lastTransitionTime":"2026-01-31T03:47:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 03:47:20 crc kubenswrapper[4827]: I0131 03:47:20.004282 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:47:20 crc kubenswrapper[4827]: I0131 03:47:20.004312 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:47:20 crc kubenswrapper[4827]: I0131 03:47:20.004320 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:47:20 crc kubenswrapper[4827]: I0131 03:47:20.004332 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:47:20 crc kubenswrapper[4827]: I0131 03:47:20.004340 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:47:20Z","lastTransitionTime":"2026-01-31T03:47:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 03:47:20 crc kubenswrapper[4827]: I0131 03:47:20.071721 4827 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-09 17:52:00.178147657 +0000 UTC Jan 31 03:47:20 crc kubenswrapper[4827]: I0131 03:47:20.106447 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:47:20 crc kubenswrapper[4827]: I0131 03:47:20.106497 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:47:20 crc kubenswrapper[4827]: I0131 03:47:20.106512 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:47:20 crc kubenswrapper[4827]: I0131 03:47:20.106533 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:47:20 crc kubenswrapper[4827]: I0131 03:47:20.106547 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:47:20Z","lastTransitionTime":"2026-01-31T03:47:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 03:47:20 crc kubenswrapper[4827]: I0131 03:47:20.109840 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 31 03:47:20 crc kubenswrapper[4827]: I0131 03:47:20.109866 4827 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 03:47:20 crc kubenswrapper[4827]: E0131 03:47:20.110053 4827 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 31 03:47:20 crc kubenswrapper[4827]: E0131 03:47:20.110257 4827 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 31 03:47:20 crc kubenswrapper[4827]: I0131 03:47:20.209321 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:47:20 crc kubenswrapper[4827]: I0131 03:47:20.209363 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:47:20 crc kubenswrapper[4827]: I0131 03:47:20.209373 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:47:20 crc kubenswrapper[4827]: I0131 03:47:20.209389 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:47:20 crc kubenswrapper[4827]: I0131 03:47:20.209399 4827 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:47:20Z","lastTransitionTime":"2026-01-31T03:47:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 03:47:20 crc kubenswrapper[4827]: I0131 03:47:20.301038 4827 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-hj2zw" Jan 31 03:47:20 crc kubenswrapper[4827]: I0131 03:47:20.312282 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:47:20 crc kubenswrapper[4827]: I0131 03:47:20.312311 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:47:20 crc kubenswrapper[4827]: I0131 03:47:20.312321 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:47:20 crc kubenswrapper[4827]: I0131 03:47:20.312337 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:47:20 crc kubenswrapper[4827]: I0131 03:47:20.312351 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:47:20Z","lastTransitionTime":"2026-01-31T03:47:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 03:47:20 crc kubenswrapper[4827]: I0131 03:47:20.374566 4827 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-hj2zw_da9e7773-a24b-4e8d-b479-97e2594db0d4/ovnkube-controller/0.log" Jan 31 03:47:20 crc kubenswrapper[4827]: I0131 03:47:20.377990 4827 generic.go:334] "Generic (PLEG): container finished" podID="da9e7773-a24b-4e8d-b479-97e2594db0d4" containerID="8b31ec94e62ff350ce9c7bd40cf6a0b0fc7a5d327d85a0d388736e5e27b3aeb4" exitCode=1 Jan 31 03:47:20 crc kubenswrapper[4827]: I0131 03:47:20.378078 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hj2zw" event={"ID":"da9e7773-a24b-4e8d-b479-97e2594db0d4","Type":"ContainerDied","Data":"8b31ec94e62ff350ce9c7bd40cf6a0b0fc7a5d327d85a0d388736e5e27b3aeb4"} Jan 31 03:47:20 crc kubenswrapper[4827]: I0131 03:47:20.378596 4827 scope.go:117] "RemoveContainer" containerID="8b31ec94e62ff350ce9c7bd40cf6a0b0fc7a5d327d85a0d388736e5e27b3aeb4" Jan 31 03:47:20 crc kubenswrapper[4827]: I0131 03:47:20.402484 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-jxh94" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e63dbb73-e1a2-4796-83c5-2a88e55566b5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://65bf7844256848a99420ff2a52724a942bd9ee07e0e587b818429ac544865232\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:47:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-94gkn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://00b8856a5712a7470f8bc58fd07644c58113fde5
a273faa89586f36381942c50\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:47:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-94gkn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T03:47:09Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-jxh94\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:47:20Z is after 2025-08-24T17:21:41Z" Jan 31 03:47:20 crc kubenswrapper[4827]: I0131 03:47:20.416363 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:47:20 crc kubenswrapper[4827]: I0131 03:47:20.416426 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:47:20 crc kubenswrapper[4827]: I0131 03:47:20.416451 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:47:20 crc 
kubenswrapper[4827]: I0131 03:47:20.416481 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:47:20 crc kubenswrapper[4827]: I0131 03:47:20.416505 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:47:20Z","lastTransitionTime":"2026-01-31T03:47:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 03:47:20 crc kubenswrapper[4827]: I0131 03:47:20.425178 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-q9q8q" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a696063c-4553-4032-8038-9900f09d4031\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3b899def12de10c70999d453a62d09ed0eec7e6f059a55a7e3f8e4b3ef248205\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413b
dcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:47:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ckwx4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Dis
abled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T03:47:09Z\\\"}}\" for pod \"openshift-multus\"/\"multus-q9q8q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:47:20Z is after 2025-08-24T17:21:41Z" Jan 31 03:47:20 crc kubenswrapper[4827]: I0131 03:47:20.454613 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-gjc5t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b5dbff7a-4ed0-4c17-bd01-1888199225b3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cf2e9dfa02cc1d2d5e655654c4d731363a32dec4d67423160996c24260e04a79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f
2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:47:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fs4bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c615ec47f98d56124c11d46f6a3875b5718974234f752615411e14b5d8913dcc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c615ec47f98d56124c11d46f6a3875b5718974234f752615411e14b5d8913dcc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T03:47:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T03:47:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fs4bx\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c9d2a6506fd6b67003ff94a906cb0c74a7eefa9f389d5fe3929fcb743b877f8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c9d2a6506fd6b67003ff94a906cb0c74a7eefa9f389d5fe3929fcb743b877f8a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T03:47:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T03:47:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fs4bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f5b9fd1e8570701b512508eb10490fa6c4e5fdc63b7fdb0c4810896a46be65a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\
":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f5b9fd1e8570701b512508eb10490fa6c4e5fdc63b7fdb0c4810896a46be65a7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T03:47:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T03:47:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fs4bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b944c8ab4e6079ae5a7a5c52f9b336fc71aa3273ed087499ffa1cc5396053660\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b944c8ab4e6079ae5a7a5c52f9b336fc71aa3273ed087499ffa1cc5396053660\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T03:47:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T03:47:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\"
,\\\"name\\\":\\\"kube-api-access-fs4bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aac07d8c3c396081f637968cd1c64525967cd198273719be0af7bee01d35fb39\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aac07d8c3c396081f637968cd1c64525967cd198273719be0af7bee01d35fb39\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T03:47:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T03:47:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fs4bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://76e43eb316b4b459d9232b72620b1892bf11a896f007d5c879381d8349aa1178\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://76e43eb31
6b4b459d9232b72620b1892bf11a896f007d5c879381d8349aa1178\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T03:47:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T03:47:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fs4bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T03:47:09Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-gjc5t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:47:20Z is after 2025-08-24T17:21:41Z" Jan 31 03:47:20 crc kubenswrapper[4827]: I0131 03:47:20.481751 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4273172d-ec24-4540-85cb-efc58aed3421\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:46:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:46:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:46:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4146d39a1a819389ddaee9e11df66783f6b1b98ee5fb2a244d8ba9ff2ecc88f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:46:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6b86ba80bd25344b29df1636aa3d8cc4cecb0e5f9c0005b4896d3ee86cd80626\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:46:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8ae08b52cf4a0a6cc4589bbf0f0fbc8527ebac3455b90185928f6fa0b6c52874\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:46:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://125b31c980495c37837eedbbbd38b795fd6e568a429da5c2914799306e7b3a46\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cfc1fe8805b7b20bc4e718ed74d4c2f265e69ed3fdc328c0f1da81450bb78bde\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-31T03:47:02Z\\\"
,\\\"message\\\":\\\"W0131 03:46:51.337568 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0131 03:46:51.338014 1 crypto.go:601] Generating new CA for check-endpoints-signer@1769831211 cert, and key in /tmp/serving-cert-4099090867/serving-signer.crt, /tmp/serving-cert-4099090867/serving-signer.key\\\\nI0131 03:46:51.617433 1 observer_polling.go:159] Starting file observer\\\\nW0131 03:46:51.621205 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0131 03:46:51.621653 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0131 03:46:51.625463 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-4099090867/tls.crt::/tmp/serving-cert-4099090867/tls.key\\\\\\\"\\\\nF0131 03:47:02.085590 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake 
timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T03:46:51Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:47:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e3b2757893ba910c36404f09734ea47eee3ac5a8cfcc4c06ee4ed2ada48937b1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:46:51Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f40eed75b4eac164f95a4e1a719fe3e1da60abf85d533da313ceeb6f4fb07efd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f40eed75b4eac164f95a4e1a719fe3e1da60abf85d533da313ceeb6f4fb07efd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T03:46:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"s
tartedAt\\\":\\\"2026-01-31T03:46:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T03:46:48Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:47:20Z is after 2025-08-24T17:21:41Z" Jan 31 03:47:20 crc kubenswrapper[4827]: I0131 03:47:20.505069 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:08Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:47:20Z is after 2025-08-24T17:21:41Z" Jan 31 03:47:20 crc kubenswrapper[4827]: I0131 03:47:20.519134 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:47:20 crc kubenswrapper[4827]: I0131 03:47:20.519422 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:47:20 crc kubenswrapper[4827]: I0131 03:47:20.519591 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:47:20 crc kubenswrapper[4827]: I0131 
03:47:20.519775 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:47:20 crc kubenswrapper[4827]: I0131 03:47:20.519941 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:47:20Z","lastTransitionTime":"2026-01-31T03:47:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 03:47:20 crc kubenswrapper[4827]: I0131 03:47:20.526543 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:08Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:47:20Z is after 2025-08-24T17:21:41Z" Jan 31 03:47:20 crc kubenswrapper[4827]: I0131 03:47:20.545419 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-w7v8l" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c10db775-d306-4f15-97dd-b1dfed7c89e5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cbc16c20edbd99491eea6e34116e6014fd587c40800c87bff9643e9c0cb6e185\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:47:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4ch9g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T03:47:09Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-w7v8l\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:47:20Z is after 2025-08-24T17:21:41Z" Jan 31 03:47:20 crc kubenswrapper[4827]: I0131 03:47:20.563785 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:11Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:11Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://955a3f09f24522656e31cfabe08613a5129fe3d1d0693756697e3ef0469e062a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:47:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-al
erter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:47:20Z is after 2025-08-24T17:21:41Z" Jan 31 03:47:20 crc kubenswrapper[4827]: I0131 03:47:20.583416 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"64294dab-2843-4893-8b19-59f3ae404e02\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:46:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:46:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:46:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ff49646239dea28804978f1b3b721dcc9454572ff2301d0c92e146dacb2a3e5a\\\",\\\"image\\\":\\\"quay.io/opensh
ift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:46:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fb9a8fcde75a532b1906ceea4704ddfecbffeb17456377272e5792f200ee67d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:46:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://73a7c7196fc77ac970ea3114b593db0c9bef943c760526721ab5492975884237\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\
\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:46:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ce197f7aa8a7bebdecdbd760e9029786331bd9497f724b0a57f57278a5b1b04c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:46:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T03:46:48Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:47:20Z is after 2025-08-24T17:21:41Z" Jan 31 03:47:20 crc kubenswrapper[4827]: I0131 03:47:20.606952 4827 status_manager.go:875] "Failed to update 
status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:08Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:47:20Z is after 2025-08-24T17:21:41Z" Jan 31 03:47:20 crc kubenswrapper[4827]: I0131 03:47:20.623419 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:47:20 crc kubenswrapper[4827]: I0131 03:47:20.623466 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:47:20 crc kubenswrapper[4827]: I0131 03:47:20.623479 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:47:20 crc kubenswrapper[4827]: I0131 03:47:20.623496 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:47:20 crc kubenswrapper[4827]: I0131 03:47:20.623507 4827 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:47:20Z","lastTransitionTime":"2026-01-31T03:47:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 03:47:20 crc kubenswrapper[4827]: I0131 03:47:20.630759 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a0fbdd57c20e0d5bda95b44f22f117b07306ef48d75d372d958559048474af9b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:47:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\
\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:47:20Z is after 2025-08-24T17:21:41Z" Jan 31 03:47:20 crc kubenswrapper[4827]: I0131 03:47:20.651403 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://65119775a6ab2e8f81a0443d8093e0570c2c49a1b081b7b694780d07eba7ef4d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\
\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:47:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0fdf29d063e0940532d696d183771e56ce37b7b75f82548bf7ff5165ab20ccff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:47:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:47:20Z is after 
2025-08-24T17:21:41Z" Jan 31 03:47:20 crc kubenswrapper[4827]: I0131 03:47:20.676298 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hj2zw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"da9e7773-a24b-4e8d-b479-97e2594db0d4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:09Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:09Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bb9ced0bc1beedf1c5779410e90fa61227f5df3de7fa950ab69e1ef44b4a4918\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:47:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mt4z5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2485bd5d17713b9b9184c6b56b1dfda19a229476eabd904d3eda350c30892753\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:47:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mt4z5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://70200918240e4bc9d8f7fcb1d95be3721faa9394058a1a9671da44d9b7a915e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:47:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mt4z5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://32e69930c4476bd3a2f767f012fb4a2e00de7d46da936c6f0d31029994108e48\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:47:12Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mt4z5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ae8d3d61d2579c0afddd6c9728935c463fec8760a9fc830473c905de5f40c44\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:47:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mt4z5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ed1f4882b03bd25a6073ed3ddeb73d5a0da61c8a8af3ebfab5c90a83ce6af80f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:47:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mt4z5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8b31ec94e62ff350ce9c7bd40cf6a0b0fc7a5d327d85a0d388736e5e27b3aeb4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8b31ec94e62ff350ce9c7bd40cf6a0b0fc7a5d327d85a0d388736e5e27b3aeb4\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-31T03:47:20Z\\\",\\\"message\\\":\\\"t handler 6 for removal\\\\nI0131 03:47:20.176021 6151 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0131 03:47:20.176099 6151 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0131 03:47:20.176118 6151 
handler.go:208] Removed *v1.Pod event handler 3\\\\nI0131 03:47:20.176122 6151 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0131 03:47:20.176146 6151 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0131 03:47:20.176216 6151 reflector.go:311] Stopping reflector *v1.ClusterUserDefinedNetwork (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/userdefinednetwork/v1/apis/informers/externalversions/factory.go:140\\\\nI0131 03:47:20.176227 6151 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0131 03:47:20.176266 6151 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0131 03:47:20.176290 6151 factory.go:656] Stopping watch factory\\\\nI0131 03:47:20.176311 6151 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0131 03:47:20.176399 6151 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0131 03:47:20.176726 6151 reflector.go:311] Stopping reflector *v1alpha1.AdminNetworkPolicy (0s) from 
sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/f\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T03:47:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"
/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mt4z5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2a0cc10519a103d1d49acd3e239636f7ac9b3a9daa533ce1f485b26ccfffad3d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:47:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mt4z5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://96ea399662528dfc8c4bac4c2b710685cb5897e67739704548ca01353d72672c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o
://96ea399662528dfc8c4bac4c2b710685cb5897e67739704548ca01353d72672c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T03:47:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T03:47:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mt4z5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T03:47:09Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-hj2zw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:47:20Z is after 2025-08-24T17:21:41Z" Jan 31 03:47:20 crc kubenswrapper[4827]: I0131 03:47:20.690694 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-cl9c5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"77746487-d08f-4da6-82a3-bc7d8845841a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5098b5f6a44c89953722806e420dc66eaad66f11faf1c5b526bfc8ebf87364d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:47:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5xwfq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T03:47:11Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-cl9c5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:47:20Z is after 2025-08-24T17:21:41Z" Jan 31 03:47:20 crc kubenswrapper[4827]: I0131 03:47:20.726836 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:47:20 crc kubenswrapper[4827]: I0131 03:47:20.726905 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:47:20 crc kubenswrapper[4827]: I0131 03:47:20.726919 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:47:20 crc kubenswrapper[4827]: I0131 03:47:20.726941 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:47:20 crc kubenswrapper[4827]: I0131 03:47:20.726956 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:47:20Z","lastTransitionTime":"2026-01-31T03:47:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 03:47:20 crc kubenswrapper[4827]: I0131 03:47:20.829493 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:47:20 crc kubenswrapper[4827]: I0131 03:47:20.829529 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:47:20 crc kubenswrapper[4827]: I0131 03:47:20.829540 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:47:20 crc kubenswrapper[4827]: I0131 03:47:20.829556 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:47:20 crc kubenswrapper[4827]: I0131 03:47:20.829568 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:47:20Z","lastTransitionTime":"2026-01-31T03:47:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 03:47:20 crc kubenswrapper[4827]: I0131 03:47:20.932371 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:47:20 crc kubenswrapper[4827]: I0131 03:47:20.932424 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:47:20 crc kubenswrapper[4827]: I0131 03:47:20.932440 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:47:20 crc kubenswrapper[4827]: I0131 03:47:20.932495 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:47:20 crc kubenswrapper[4827]: I0131 03:47:20.932520 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:47:20Z","lastTransitionTime":"2026-01-31T03:47:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 03:47:21 crc kubenswrapper[4827]: I0131 03:47:21.035637 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:47:21 crc kubenswrapper[4827]: I0131 03:47:21.035684 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:47:21 crc kubenswrapper[4827]: I0131 03:47:21.035696 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:47:21 crc kubenswrapper[4827]: I0131 03:47:21.035715 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:47:21 crc kubenswrapper[4827]: I0131 03:47:21.035733 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:47:21Z","lastTransitionTime":"2026-01-31T03:47:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 03:47:21 crc kubenswrapper[4827]: I0131 03:47:21.072032 4827 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-14 09:28:58.025739207 +0000 UTC Jan 31 03:47:21 crc kubenswrapper[4827]: I0131 03:47:21.109672 4827 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 31 03:47:21 crc kubenswrapper[4827]: E0131 03:47:21.109783 4827 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 31 03:47:21 crc kubenswrapper[4827]: I0131 03:47:21.138244 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:47:21 crc kubenswrapper[4827]: I0131 03:47:21.138301 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:47:21 crc kubenswrapper[4827]: I0131 03:47:21.138314 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:47:21 crc kubenswrapper[4827]: I0131 03:47:21.138333 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:47:21 crc kubenswrapper[4827]: I0131 03:47:21.138343 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:47:21Z","lastTransitionTime":"2026-01-31T03:47:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 03:47:21 crc kubenswrapper[4827]: I0131 03:47:21.240984 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:47:21 crc kubenswrapper[4827]: I0131 03:47:21.241014 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:47:21 crc kubenswrapper[4827]: I0131 03:47:21.241021 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:47:21 crc kubenswrapper[4827]: I0131 03:47:21.241033 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:47:21 crc kubenswrapper[4827]: I0131 03:47:21.241041 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:47:21Z","lastTransitionTime":"2026-01-31T03:47:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 03:47:21 crc kubenswrapper[4827]: I0131 03:47:21.343107 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:47:21 crc kubenswrapper[4827]: I0131 03:47:21.343135 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:47:21 crc kubenswrapper[4827]: I0131 03:47:21.343144 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:47:21 crc kubenswrapper[4827]: I0131 03:47:21.343156 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:47:21 crc kubenswrapper[4827]: I0131 03:47:21.343164 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:47:21Z","lastTransitionTime":"2026-01-31T03:47:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 03:47:21 crc kubenswrapper[4827]: I0131 03:47:21.384252 4827 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-hj2zw_da9e7773-a24b-4e8d-b479-97e2594db0d4/ovnkube-controller/0.log" Jan 31 03:47:21 crc kubenswrapper[4827]: I0131 03:47:21.387347 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hj2zw" event={"ID":"da9e7773-a24b-4e8d-b479-97e2594db0d4","Type":"ContainerStarted","Data":"df3bf53360747f9123455873f63ec6c0dedaa6e7743b25e0ebd796b4a5d0094b"} Jan 31 03:47:21 crc kubenswrapper[4827]: I0131 03:47:21.387842 4827 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-hj2zw" Jan 31 03:47:21 crc kubenswrapper[4827]: I0131 03:47:21.404401 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"64294dab-2843-4893-8b19-59f3ae404e02\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:46:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:46:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:46:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ff49646239dea28804978f1b3b721dcc9454572ff2301d0c92e146dacb2a3e5a\\\",\\\"image\\\":\\\"
quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:46:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fb9a8fcde75a532b1906ceea4704ddfecbffeb17456377272e5792f200ee67d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:46:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://73a7c7196fc77ac970ea3114b593db0c9bef943c760526721ab5492975884237\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert
-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:46:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ce197f7aa8a7bebdecdbd760e9029786331bd9497f724b0a57f57278a5b1b04c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:46:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T03:46:48Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:47:21Z is after 2025-08-24T17:21:41Z" Jan 31 03:47:21 crc kubenswrapper[4827]: I0131 03:47:21.422753 4827 status_manager.go:875] 
"Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:08Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:47:21Z is after 2025-08-24T17:21:41Z" Jan 31 03:47:21 crc kubenswrapper[4827]: I0131 03:47:21.438400 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a0fbdd57c20e0d5bda95b44f22f117b07306ef48d75d372d958559048474af9b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:47:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-31T03:47:21Z is after 2025-08-24T17:21:41Z" Jan 31 03:47:21 crc kubenswrapper[4827]: I0131 03:47:21.446382 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:47:21 crc kubenswrapper[4827]: I0131 03:47:21.446476 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:47:21 crc kubenswrapper[4827]: I0131 03:47:21.446495 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:47:21 crc kubenswrapper[4827]: I0131 03:47:21.446522 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:47:21 crc kubenswrapper[4827]: I0131 03:47:21.446541 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:47:21Z","lastTransitionTime":"2026-01-31T03:47:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 03:47:21 crc kubenswrapper[4827]: I0131 03:47:21.452786 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:11Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:11Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://955a3f09f24522656e31cfabe08613a5129fe3d1d0693756697e3ef0469e062a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:47:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:47:21Z is after 2025-08-24T17:21:41Z" Jan 31 03:47:21 crc kubenswrapper[4827]: I0131 03:47:21.470800 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://65119775a6ab2e8f81a0443d8093e0570c2c49a1b081b7b694780d07eba7ef4d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:47:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-conf
ig\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0fdf29d063e0940532d696d183771e56ce37b7b75f82548bf7ff5165ab20ccff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:47:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:47:21Z is after 2025-08-24T17:21:41Z" Jan 31 03:47:21 crc kubenswrapper[4827]: I0131 03:47:21.501566 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hj2zw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"da9e7773-a24b-4e8d-b479-97e2594db0d4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:09Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:09Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bb9ced0bc1beedf1c5779410e90fa61227f5df3de7fa950ab69e1ef44b4a4918\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:47:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mt4z5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2485bd5d17713b9b9184c6b56b1dfda19a229476eabd904d3eda350c30892753\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:47:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mt4z5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://70200918240e4bc9d8f7fcb1d95be3721faa9394058a1a9671da44d9b7a915e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:47:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mt4z5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://32e69930c4476bd3a2f767f012fb4a2e00de7d46da936c6f0d31029994108e48\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:47:12Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mt4z5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ae8d3d61d2579c0afddd6c9728935c463fec8760a9fc830473c905de5f40c44\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:47:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mt4z5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ed1f4882b03bd25a6073ed3ddeb73d5a0da61c8a8af3ebfab5c90a83ce6af80f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:47:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mt4z5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://df3bf53360747f9123455873f63ec6c0dedaa6e7743b25e0ebd796b4a5d0094b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8b31ec94e62ff350ce9c7bd40cf6a0b0fc7a5d327d85a0d388736e5e27b3aeb4\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-31T03:47:20Z\\\",\\\"message\\\":\\\"t handler 6 for removal\\\\nI0131 03:47:20.176021 6151 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0131 03:47:20.176099 6151 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0131 03:47:20.176118 6151 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0131 03:47:20.176122 6151 handler.go:208] Removed *v1.EgressIP event handler 
8\\\\nI0131 03:47:20.176146 6151 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0131 03:47:20.176216 6151 reflector.go:311] Stopping reflector *v1.ClusterUserDefinedNetwork (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/userdefinednetwork/v1/apis/informers/externalversions/factory.go:140\\\\nI0131 03:47:20.176227 6151 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0131 03:47:20.176266 6151 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0131 03:47:20.176290 6151 factory.go:656] Stopping watch factory\\\\nI0131 03:47:20.176311 6151 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0131 03:47:20.176399 6151 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0131 03:47:20.176726 6151 reflector.go:311] Stopping reflector *v1alpha1.AdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/f\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T03:47:17Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:47:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOn
ly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mt4z5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2a0cc10519a103d1d49acd3e239636f7ac9b3a9daa533ce1f485b26ccfffad3d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:47:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\
\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mt4z5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://96ea399662528dfc8c4bac4c2b710685cb5897e67739704548ca01353d72672c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://96ea399662528dfc8c4bac4c2b710685cb5897e67739704548ca01353d72672c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T03:47:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T03:47:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mt4z5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T03:47:09Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-hj2zw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:47:21Z is after 2025-08-24T17:21:41Z" Jan 31 03:47:21 crc kubenswrapper[4827]: I0131 03:47:21.519238 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-cl9c5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"77746487-d08f-4da6-82a3-bc7d8845841a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5098b5f6a44c89953722806e420dc66eaad66f11faf1c5b526bfc8ebf87364d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:47:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\
",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5xwfq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T03:47:11Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-cl9c5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:47:21Z is after 2025-08-24T17:21:41Z" Jan 31 03:47:21 crc kubenswrapper[4827]: I0131 03:47:21.534352 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-jxh94" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e63dbb73-e1a2-4796-83c5-2a88e55566b5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://65bf7844256848a99420ff2a52724a942bd9ee07e0e587b818429ac544865232\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:47:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-94gkn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://00b8856a5712a7470f8bc58fd07644c58113fde5
a273faa89586f36381942c50\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:47:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-94gkn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T03:47:09Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-jxh94\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:47:21Z is after 2025-08-24T17:21:41Z" Jan 31 03:47:21 crc kubenswrapper[4827]: I0131 03:47:21.548956 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:47:21 crc kubenswrapper[4827]: I0131 03:47:21.549032 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:47:21 crc kubenswrapper[4827]: I0131 03:47:21.549050 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:47:21 crc 
kubenswrapper[4827]: I0131 03:47:21.549074 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:47:21 crc kubenswrapper[4827]: I0131 03:47:21.549092 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:47:21Z","lastTransitionTime":"2026-01-31T03:47:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 03:47:21 crc kubenswrapper[4827]: I0131 03:47:21.552477 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-q9q8q" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a696063c-4553-4032-8038-9900f09d4031\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3b899def12de10c70999d453a62d09ed0eec7e6f059a55a7e3f8e4b3ef248205\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413b
dcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:47:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ckwx4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Dis
abled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T03:47:09Z\\\"}}\" for pod \"openshift-multus\"/\"multus-q9q8q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:47:21Z is after 2025-08-24T17:21:41Z" Jan 31 03:47:21 crc kubenswrapper[4827]: I0131 03:47:21.573488 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-gjc5t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b5dbff7a-4ed0-4c17-bd01-1888199225b3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cf2e9dfa02cc1d2d5e655654c4d731363a32dec4d67423160996c24260e04a79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f
2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:47:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fs4bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c615ec47f98d56124c11d46f6a3875b5718974234f752615411e14b5d8913dcc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c615ec47f98d56124c11d46f6a3875b5718974234f752615411e14b5d8913dcc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T03:47:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T03:47:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fs4bx\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c9d2a6506fd6b67003ff94a906cb0c74a7eefa9f389d5fe3929fcb743b877f8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c9d2a6506fd6b67003ff94a906cb0c74a7eefa9f389d5fe3929fcb743b877f8a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T03:47:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T03:47:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fs4bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f5b9fd1e8570701b512508eb10490fa6c4e5fdc63b7fdb0c4810896a46be65a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\
":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f5b9fd1e8570701b512508eb10490fa6c4e5fdc63b7fdb0c4810896a46be65a7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T03:47:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T03:47:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fs4bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b944c8ab4e6079ae5a7a5c52f9b336fc71aa3273ed087499ffa1cc5396053660\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b944c8ab4e6079ae5a7a5c52f9b336fc71aa3273ed087499ffa1cc5396053660\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T03:47:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T03:47:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\"
,\\\"name\\\":\\\"kube-api-access-fs4bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aac07d8c3c396081f637968cd1c64525967cd198273719be0af7bee01d35fb39\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aac07d8c3c396081f637968cd1c64525967cd198273719be0af7bee01d35fb39\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T03:47:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T03:47:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fs4bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://76e43eb316b4b459d9232b72620b1892bf11a896f007d5c879381d8349aa1178\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://76e43eb31
6b4b459d9232b72620b1892bf11a896f007d5c879381d8349aa1178\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T03:47:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T03:47:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fs4bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T03:47:09Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-gjc5t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:47:21Z is after 2025-08-24T17:21:41Z" Jan 31 03:47:21 crc kubenswrapper[4827]: I0131 03:47:21.588873 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4273172d-ec24-4540-85cb-efc58aed3421\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:46:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:46:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:46:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4146d39a1a819389ddaee9e11df66783f6b1b98ee5fb2a244d8ba9ff2ecc88f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:46:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6b86ba80bd25344b29df1636aa3d8cc4cecb0e5f9c0005b4896d3ee86cd80626\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:46:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8ae08b52cf4a0a6cc4589bbf0f0fbc8527ebac3455b90185928f6fa0b6c52874\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:46:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://125b31c980495c37837eedbbbd38b795fd6e568a429da5c2914799306e7b3a46\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cfc1fe8805b7b20bc4e718ed74d4c2f265e69ed3fdc328c0f1da81450bb78bde\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-31T03:47:02Z\\\"
,\\\"message\\\":\\\"W0131 03:46:51.337568 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0131 03:46:51.338014 1 crypto.go:601] Generating new CA for check-endpoints-signer@1769831211 cert, and key in /tmp/serving-cert-4099090867/serving-signer.crt, /tmp/serving-cert-4099090867/serving-signer.key\\\\nI0131 03:46:51.617433 1 observer_polling.go:159] Starting file observer\\\\nW0131 03:46:51.621205 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0131 03:46:51.621653 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0131 03:46:51.625463 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-4099090867/tls.crt::/tmp/serving-cert-4099090867/tls.key\\\\\\\"\\\\nF0131 03:47:02.085590 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake 
timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T03:46:51Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:47:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e3b2757893ba910c36404f09734ea47eee3ac5a8cfcc4c06ee4ed2ada48937b1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:46:51Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f40eed75b4eac164f95a4e1a719fe3e1da60abf85d533da313ceeb6f4fb07efd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f40eed75b4eac164f95a4e1a719fe3e1da60abf85d533da313ceeb6f4fb07efd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T03:46:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"s
tartedAt\\\":\\\"2026-01-31T03:46:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T03:46:48Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:47:21Z is after 2025-08-24T17:21:41Z" Jan 31 03:47:21 crc kubenswrapper[4827]: I0131 03:47:21.604522 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:08Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:47:21Z is after 2025-08-24T17:21:41Z" Jan 31 03:47:21 crc kubenswrapper[4827]: I0131 03:47:21.617752 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:08Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:47:21Z is after 2025-08-24T17:21:41Z" Jan 31 03:47:21 crc kubenswrapper[4827]: I0131 03:47:21.629274 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-w7v8l" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c10db775-d306-4f15-97dd-b1dfed7c89e5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cbc16c20edbd99491eea6e34116e6014fd587c40800c87bff9643e9c0cb6e185\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:47:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4ch9g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T03:47:09Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-w7v8l\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:47:21Z is after 2025-08-24T17:21:41Z" Jan 31 03:47:21 crc kubenswrapper[4827]: I0131 03:47:21.651595 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:47:21 crc kubenswrapper[4827]: I0131 03:47:21.651651 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:47:21 crc kubenswrapper[4827]: I0131 03:47:21.651669 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:47:21 crc kubenswrapper[4827]: I0131 03:47:21.651694 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:47:21 crc kubenswrapper[4827]: I0131 03:47:21.651712 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:47:21Z","lastTransitionTime":"2026-01-31T03:47:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 03:47:21 crc kubenswrapper[4827]: I0131 03:47:21.755006 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:47:21 crc kubenswrapper[4827]: I0131 03:47:21.755115 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:47:21 crc kubenswrapper[4827]: I0131 03:47:21.755141 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:47:21 crc kubenswrapper[4827]: I0131 03:47:21.755320 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:47:21 crc kubenswrapper[4827]: I0131 03:47:21.755347 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:47:21Z","lastTransitionTime":"2026-01-31T03:47:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 03:47:21 crc kubenswrapper[4827]: I0131 03:47:21.858651 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:47:21 crc kubenswrapper[4827]: I0131 03:47:21.859036 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:47:21 crc kubenswrapper[4827]: I0131 03:47:21.859237 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:47:21 crc kubenswrapper[4827]: I0131 03:47:21.859387 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:47:21 crc kubenswrapper[4827]: I0131 03:47:21.859524 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:47:21Z","lastTransitionTime":"2026-01-31T03:47:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 03:47:21 crc kubenswrapper[4827]: I0131 03:47:21.963645 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:47:21 crc kubenswrapper[4827]: I0131 03:47:21.964013 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:47:21 crc kubenswrapper[4827]: I0131 03:47:21.964150 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:47:21 crc kubenswrapper[4827]: I0131 03:47:21.964273 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:47:21 crc kubenswrapper[4827]: I0131 03:47:21.964397 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:47:21Z","lastTransitionTime":"2026-01-31T03:47:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 03:47:22 crc kubenswrapper[4827]: I0131 03:47:22.067295 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:47:22 crc kubenswrapper[4827]: I0131 03:47:22.067364 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:47:22 crc kubenswrapper[4827]: I0131 03:47:22.067389 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:47:22 crc kubenswrapper[4827]: I0131 03:47:22.067422 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:47:22 crc kubenswrapper[4827]: I0131 03:47:22.067447 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:47:22Z","lastTransitionTime":"2026-01-31T03:47:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 03:47:22 crc kubenswrapper[4827]: I0131 03:47:22.072813 4827 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-08 20:01:00.159690573 +0000 UTC Jan 31 03:47:22 crc kubenswrapper[4827]: I0131 03:47:22.109575 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 31 03:47:22 crc kubenswrapper[4827]: I0131 03:47:22.109613 4827 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 03:47:22 crc kubenswrapper[4827]: E0131 03:47:22.109784 4827 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 31 03:47:22 crc kubenswrapper[4827]: E0131 03:47:22.109950 4827 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 31 03:47:22 crc kubenswrapper[4827]: I0131 03:47:22.171250 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:47:22 crc kubenswrapper[4827]: I0131 03:47:22.171313 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:47:22 crc kubenswrapper[4827]: I0131 03:47:22.171330 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:47:22 crc kubenswrapper[4827]: I0131 03:47:22.171750 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:47:22 crc kubenswrapper[4827]: I0131 03:47:22.171805 4827 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:47:22Z","lastTransitionTime":"2026-01-31T03:47:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 03:47:22 crc kubenswrapper[4827]: I0131 03:47:22.275673 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:47:22 crc kubenswrapper[4827]: I0131 03:47:22.276051 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:47:22 crc kubenswrapper[4827]: I0131 03:47:22.276245 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:47:22 crc kubenswrapper[4827]: I0131 03:47:22.276417 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:47:22 crc kubenswrapper[4827]: I0131 03:47:22.276587 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:47:22Z","lastTransitionTime":"2026-01-31T03:47:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 03:47:22 crc kubenswrapper[4827]: I0131 03:47:22.277873 4827 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-l5njv"] Jan 31 03:47:22 crc kubenswrapper[4827]: I0131 03:47:22.278648 4827 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-l5njv" Jan 31 03:47:22 crc kubenswrapper[4827]: I0131 03:47:22.281791 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-control-plane-dockercfg-gs7dd" Jan 31 03:47:22 crc kubenswrapper[4827]: I0131 03:47:22.282164 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert" Jan 31 03:47:22 crc kubenswrapper[4827]: I0131 03:47:22.299568 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rblt7\" (UniqueName: \"kubernetes.io/projected/f81e67c9-6345-48e1-91e3-794421cb3fdd-kube-api-access-rblt7\") pod \"ovnkube-control-plane-749d76644c-l5njv\" (UID: \"f81e67c9-6345-48e1-91e3-794421cb3fdd\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-l5njv" Jan 31 03:47:22 crc kubenswrapper[4827]: I0131 03:47:22.299693 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/f81e67c9-6345-48e1-91e3-794421cb3fdd-env-overrides\") pod \"ovnkube-control-plane-749d76644c-l5njv\" (UID: \"f81e67c9-6345-48e1-91e3-794421cb3fdd\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-l5njv" Jan 31 03:47:22 crc kubenswrapper[4827]: I0131 03:47:22.299755 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/f81e67c9-6345-48e1-91e3-794421cb3fdd-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-l5njv\" (UID: \"f81e67c9-6345-48e1-91e3-794421cb3fdd\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-l5njv" Jan 31 03:47:22 crc kubenswrapper[4827]: I0131 03:47:22.299803 4827 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/f81e67c9-6345-48e1-91e3-794421cb3fdd-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-l5njv\" (UID: \"f81e67c9-6345-48e1-91e3-794421cb3fdd\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-l5njv" Jan 31 03:47:22 crc kubenswrapper[4827]: I0131 03:47:22.309876 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:08Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:47:22Z is after 2025-08-24T17:21:41Z" Jan 31 03:47:22 crc kubenswrapper[4827]: I0131 03:47:22.332453 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a0fbdd57c20e0d5bda95b44f22f117b07306ef48d75d372d958559048474af9b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:47:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-31T03:47:22Z is after 2025-08-24T17:21:41Z" Jan 31 03:47:22 crc kubenswrapper[4827]: I0131 03:47:22.348167 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:11Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:11Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://955a3f09f24522656e31cfabe08613a5129fe3d1d0693756697e3ef0469e062a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:47:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\
\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:47:22Z is after 2025-08-24T17:21:41Z" Jan 31 03:47:22 crc kubenswrapper[4827]: I0131 03:47:22.367388 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"64294dab-2843-4893-8b19-59f3ae404e02\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:46:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:46:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:46:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ff49646239dea28804978f1b3b721dcc9454572ff2301d0c92e146dacb2a3e5a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"rest
artCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:46:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fb9a8fcde75a532b1906ceea4704ddfecbffeb17456377272e5792f200ee67d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:46:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://73a7c7196fc77ac970ea3114b593db0c9bef943c760526721ab5492975884237\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:46:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-di
r\\\"}]},{\\\"containerID\\\":\\\"cri-o://ce197f7aa8a7bebdecdbd760e9029786331bd9497f724b0a57f57278a5b1b04c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:46:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T03:46:48Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:47:22Z is after 2025-08-24T17:21:41Z" Jan 31 03:47:22 crc kubenswrapper[4827]: I0131 03:47:22.379820 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:47:22 crc kubenswrapper[4827]: I0131 03:47:22.379923 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:47:22 crc kubenswrapper[4827]: I0131 03:47:22.379943 4827 kubelet_node_status.go:724] "Recording 
event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:47:22 crc kubenswrapper[4827]: I0131 03:47:22.379965 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:47:22 crc kubenswrapper[4827]: I0131 03:47:22.379981 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:47:22Z","lastTransitionTime":"2026-01-31T03:47:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 03:47:22 crc kubenswrapper[4827]: I0131 03:47:22.393167 4827 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-hj2zw_da9e7773-a24b-4e8d-b479-97e2594db0d4/ovnkube-controller/1.log" Jan 31 03:47:22 crc kubenswrapper[4827]: I0131 03:47:22.394100 4827 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-hj2zw_da9e7773-a24b-4e8d-b479-97e2594db0d4/ovnkube-controller/0.log" Jan 31 03:47:22 crc kubenswrapper[4827]: I0131 03:47:22.396964 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hj2zw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"da9e7773-a24b-4e8d-b479-97e2594db0d4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:09Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:09Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bb9ced0bc1beedf1c5779410e90fa61227f5df3de7fa950ab69e1ef44b4a4918\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:47:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mt4z5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2485bd5d17713b9b9184c6b56b1dfda19a229476eabd904d3eda350c30892753\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:47:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mt4z5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://70200918240e4bc9d8f7fcb1d95be3721faa9394058a1a9671da44d9b7a915e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:47:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mt4z5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://32e69930c4476bd3a2f767f012fb4a2e00de7d46da936c6f0d31029994108e48\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:47:12Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mt4z5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ae8d3d61d2579c0afddd6c9728935c463fec8760a9fc830473c905de5f40c44\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:47:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mt4z5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ed1f4882b03bd25a6073ed3ddeb73d5a0da61c8a8af3ebfab5c90a83ce6af80f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:47:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mt4z5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://df3bf53360747f9123455873f63ec6c0dedaa6e7743b25e0ebd796b4a5d0094b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8b31ec94e62ff350ce9c7bd40cf6a0b0fc7a5d327d85a0d388736e5e27b3aeb4\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-31T03:47:20Z\\\",\\\"message\\\":\\\"t handler 6 for removal\\\\nI0131 03:47:20.176021 6151 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0131 03:47:20.176099 6151 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0131 03:47:20.176118 6151 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0131 03:47:20.176122 6151 handler.go:208] Removed *v1.EgressIP event handler 
8\\\\nI0131 03:47:20.176146 6151 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0131 03:47:20.176216 6151 reflector.go:311] Stopping reflector *v1.ClusterUserDefinedNetwork (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/userdefinednetwork/v1/apis/informers/externalversions/factory.go:140\\\\nI0131 03:47:20.176227 6151 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0131 03:47:20.176266 6151 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0131 03:47:20.176290 6151 factory.go:656] Stopping watch factory\\\\nI0131 03:47:20.176311 6151 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0131 03:47:20.176399 6151 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0131 03:47:20.176726 6151 reflector.go:311] Stopping reflector *v1alpha1.AdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/f\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T03:47:17Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:47:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOn
ly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mt4z5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2a0cc10519a103d1d49acd3e239636f7ac9b3a9daa533ce1f485b26ccfffad3d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:47:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\
\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mt4z5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://96ea399662528dfc8c4bac4c2b710685cb5897e67739704548ca01353d72672c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://96ea399662528dfc8c4bac4c2b710685cb5897e67739704548ca01353d72672c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T03:47:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T03:47:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mt4z5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T03:47:09Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-hj2zw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:47:22Z is after 2025-08-24T17:21:41Z" Jan 31 03:47:22 crc kubenswrapper[4827]: I0131 03:47:22.398219 4827 generic.go:334] "Generic (PLEG): container finished" podID="da9e7773-a24b-4e8d-b479-97e2594db0d4" containerID="df3bf53360747f9123455873f63ec6c0dedaa6e7743b25e0ebd796b4a5d0094b" exitCode=1 Jan 31 03:47:22 crc kubenswrapper[4827]: I0131 03:47:22.398276 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hj2zw" event={"ID":"da9e7773-a24b-4e8d-b479-97e2594db0d4","Type":"ContainerDied","Data":"df3bf53360747f9123455873f63ec6c0dedaa6e7743b25e0ebd796b4a5d0094b"} Jan 31 03:47:22 crc kubenswrapper[4827]: I0131 03:47:22.398321 4827 scope.go:117] "RemoveContainer" containerID="8b31ec94e62ff350ce9c7bd40cf6a0b0fc7a5d327d85a0d388736e5e27b3aeb4" Jan 31 03:47:22 crc kubenswrapper[4827]: I0131 03:47:22.399336 4827 scope.go:117] "RemoveContainer" containerID="df3bf53360747f9123455873f63ec6c0dedaa6e7743b25e0ebd796b4a5d0094b" Jan 31 03:47:22 crc kubenswrapper[4827]: E0131 03:47:22.399602 4827 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 10s restarting failed container=ovnkube-controller pod=ovnkube-node-hj2zw_openshift-ovn-kubernetes(da9e7773-a24b-4e8d-b479-97e2594db0d4)\"" pod="openshift-ovn-kubernetes/ovnkube-node-hj2zw" podUID="da9e7773-a24b-4e8d-b479-97e2594db0d4" Jan 31 03:47:22 crc kubenswrapper[4827]: I0131 03:47:22.400844 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/f81e67c9-6345-48e1-91e3-794421cb3fdd-env-overrides\") pod \"ovnkube-control-plane-749d76644c-l5njv\" (UID: \"f81e67c9-6345-48e1-91e3-794421cb3fdd\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-l5njv" 
Jan 31 03:47:22 crc kubenswrapper[4827]: I0131 03:47:22.400953 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/f81e67c9-6345-48e1-91e3-794421cb3fdd-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-l5njv\" (UID: \"f81e67c9-6345-48e1-91e3-794421cb3fdd\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-l5njv" Jan 31 03:47:22 crc kubenswrapper[4827]: I0131 03:47:22.401017 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/f81e67c9-6345-48e1-91e3-794421cb3fdd-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-l5njv\" (UID: \"f81e67c9-6345-48e1-91e3-794421cb3fdd\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-l5njv" Jan 31 03:47:22 crc kubenswrapper[4827]: I0131 03:47:22.401129 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rblt7\" (UniqueName: \"kubernetes.io/projected/f81e67c9-6345-48e1-91e3-794421cb3fdd-kube-api-access-rblt7\") pod \"ovnkube-control-plane-749d76644c-l5njv\" (UID: \"f81e67c9-6345-48e1-91e3-794421cb3fdd\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-l5njv" Jan 31 03:47:22 crc kubenswrapper[4827]: I0131 03:47:22.402381 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/f81e67c9-6345-48e1-91e3-794421cb3fdd-env-overrides\") pod \"ovnkube-control-plane-749d76644c-l5njv\" (UID: \"f81e67c9-6345-48e1-91e3-794421cb3fdd\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-l5njv" Jan 31 03:47:22 crc kubenswrapper[4827]: I0131 03:47:22.403066 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/f81e67c9-6345-48e1-91e3-794421cb3fdd-ovnkube-config\") pod 
\"ovnkube-control-plane-749d76644c-l5njv\" (UID: \"f81e67c9-6345-48e1-91e3-794421cb3fdd\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-l5njv" Jan 31 03:47:22 crc kubenswrapper[4827]: I0131 03:47:22.409615 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/f81e67c9-6345-48e1-91e3-794421cb3fdd-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-l5njv\" (UID: \"f81e67c9-6345-48e1-91e3-794421cb3fdd\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-l5njv" Jan 31 03:47:22 crc kubenswrapper[4827]: I0131 03:47:22.418268 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-cl9c5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"77746487-d08f-4da6-82a3-bc7d8845841a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5098b5f6a44c89953722806e420dc66eaad66f11faf1c5b526bfc8ebf87364d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a6
9520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:47:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5xwfq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T03:47:11Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-cl9c5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:47:22Z is after 2025-08-24T17:21:41Z" Jan 31 03:47:22 crc kubenswrapper[4827]: I0131 03:47:22.437140 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rblt7\" (UniqueName: \"kubernetes.io/projected/f81e67c9-6345-48e1-91e3-794421cb3fdd-kube-api-access-rblt7\") pod \"ovnkube-control-plane-749d76644c-l5njv\" (UID: \"f81e67c9-6345-48e1-91e3-794421cb3fdd\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-l5njv" Jan 31 03:47:22 crc kubenswrapper[4827]: I0131 03:47:22.441958 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://65119775a6ab2e8f81a0443d8093e0570c2c49a1b081b7b694780d07eba7ef4d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:47:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0fdf29d063e0940532d696d183771e56ce37b7b75f82548bf7ff5165ab20ccff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:47:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:47:22Z is after 2025-08-24T17:21:41Z" Jan 31 03:47:22 crc kubenswrapper[4827]: I0131 03:47:22.461230 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4273172d-ec24-4540-85cb-efc58aed3421\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:46:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:46:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:46:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4146d39a1a819389ddaee9e11df66783f6b1b98ee5fb2a244d8ba9ff2ecc88f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:46:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6b86ba80bd25344b29df1636aa3d8cc4cecb0e5f9c0005b4896d3ee86cd80626\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:46:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8ae08b52cf4a0a6cc4589bbf0f0fbc8527ebac3455b90185928f6fa0b6c52874\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:46:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://125b31c980495c37837eedbbbd38b795fd6e568a429da5c2914799306e7b3a46\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cfc1fe8805b7b20bc4e718ed74d4c2f265e69ed3fdc328c0f1da81450bb78bde\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-31T03:47:02Z\\\"
,\\\"message\\\":\\\"W0131 03:46:51.337568 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0131 03:46:51.338014 1 crypto.go:601] Generating new CA for check-endpoints-signer@1769831211 cert, and key in /tmp/serving-cert-4099090867/serving-signer.crt, /tmp/serving-cert-4099090867/serving-signer.key\\\\nI0131 03:46:51.617433 1 observer_polling.go:159] Starting file observer\\\\nW0131 03:46:51.621205 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0131 03:46:51.621653 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0131 03:46:51.625463 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-4099090867/tls.crt::/tmp/serving-cert-4099090867/tls.key\\\\\\\"\\\\nF0131 03:47:02.085590 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake 
timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T03:46:51Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:47:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e3b2757893ba910c36404f09734ea47eee3ac5a8cfcc4c06ee4ed2ada48937b1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:46:51Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f40eed75b4eac164f95a4e1a719fe3e1da60abf85d533da313ceeb6f4fb07efd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f40eed75b4eac164f95a4e1a719fe3e1da60abf85d533da313ceeb6f4fb07efd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T03:46:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"s
tartedAt\\\":\\\"2026-01-31T03:46:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T03:46:48Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:47:22Z is after 2025-08-24T17:21:41Z" Jan 31 03:47:22 crc kubenswrapper[4827]: I0131 03:47:22.479686 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-jxh94" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e63dbb73-e1a2-4796-83c5-2a88e55566b5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://65bf7844256848a99420ff2a52724a942bd9ee07e0e587b818429ac544865232\\\",\\\"image\\\
":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:47:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-94gkn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://00b8856a5712a7470f8bc58fd07644c58113fde5a273faa89586f36381942c50\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:47:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-94gkn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T03:47:09Z\\\"}}\" for pod 
\"openshift-machine-config-operator\"/\"machine-config-daemon-jxh94\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:47:22Z is after 2025-08-24T17:21:41Z" Jan 31 03:47:22 crc kubenswrapper[4827]: I0131 03:47:22.485281 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:47:22 crc kubenswrapper[4827]: I0131 03:47:22.485342 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:47:22 crc kubenswrapper[4827]: I0131 03:47:22.485353 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:47:22 crc kubenswrapper[4827]: I0131 03:47:22.485376 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:47:22 crc kubenswrapper[4827]: I0131 03:47:22.485387 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:47:22Z","lastTransitionTime":"2026-01-31T03:47:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 03:47:22 crc kubenswrapper[4827]: I0131 03:47:22.499020 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-q9q8q" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a696063c-4553-4032-8038-9900f09d4031\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3b899def12de10c70999d453a62d09ed0eec7e6f059a55a7e3f8e4b3ef248205\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:47:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\
",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ckwx4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T03:47:09Z\\\"}}\" for pod \"openshift-multus\"/\"multus-q9q8q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:47:22Z 
is after 2025-08-24T17:21:41Z" Jan 31 03:47:22 crc kubenswrapper[4827]: I0131 03:47:22.515524 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-gjc5t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b5dbff7a-4ed0-4c17-bd01-1888199225b3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cf2e9dfa02cc1d2d5e655654c4d731363a32dec4d67423160996c24260e04a79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:47:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fs4bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\"
:\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c615ec47f98d56124c11d46f6a3875b5718974234f752615411e14b5d8913dcc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c615ec47f98d56124c11d46f6a3875b5718974234f752615411e14b5d8913dcc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T03:47:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T03:47:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fs4bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c9d2a6506fd6b67003ff94a906cb0c74a7eefa9f389d5fe3929fcb743b877f8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"contai
nerID\\\":\\\"cri-o://c9d2a6506fd6b67003ff94a906cb0c74a7eefa9f389d5fe3929fcb743b877f8a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T03:47:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T03:47:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fs4bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f5b9fd1e8570701b512508eb10490fa6c4e5fdc63b7fdb0c4810896a46be65a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f5b9fd1e8570701b512508eb10490fa6c4e5fdc63b7fdb0c4810896a46be65a7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T03:47:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T03:47:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\
\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fs4bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b944c8ab4e6079ae5a7a5c52f9b336fc71aa3273ed087499ffa1cc5396053660\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b944c8ab4e6079ae5a7a5c52f9b336fc71aa3273ed087499ffa1cc5396053660\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T03:47:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T03:47:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fs4bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aac07d8c3c396081f637968cd1c64525967cd198273719be0af7bee01d35fb39\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\
"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aac07d8c3c396081f637968cd1c64525967cd198273719be0af7bee01d35fb39\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T03:47:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T03:47:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fs4bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://76e43eb316b4b459d9232b72620b1892bf11a896f007d5c879381d8349aa1178\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://76e43eb316b4b459d9232b72620b1892bf11a896f007d5c879381d8349aa1178\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T03:47:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T03:47:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fs4bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"p
odIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T03:47:09Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-gjc5t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:47:22Z is after 2025-08-24T17:21:41Z" Jan 31 03:47:22 crc kubenswrapper[4827]: I0131 03:47:22.530319 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:08Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:47:22Z is after 2025-08-24T17:21:41Z" Jan 31 03:47:22 crc kubenswrapper[4827]: I0131 03:47:22.544868 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-w7v8l" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c10db775-d306-4f15-97dd-b1dfed7c89e5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cbc16c20edbd99491eea6e34116e6014fd587c40800c87bff9643e9c0cb6e185\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:47:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4ch9g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T03:47:09Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-w7v8l\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:47:22Z is after 2025-08-24T17:21:41Z" Jan 31 03:47:22 crc kubenswrapper[4827]: I0131 03:47:22.561253 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-l5njv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f81e67c9-6345-48e1-91e3-794421cb3fdd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:22Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:22Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rblt7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rblt7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T03:47:22Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-l5njv\": Internal error occurred: failed calling 
webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:47:22Z is after 2025-08-24T17:21:41Z" Jan 31 03:47:22 crc kubenswrapper[4827]: I0131 03:47:22.580817 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:08Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:47:22Z is after 2025-08-24T17:21:41Z" Jan 31 03:47:22 crc kubenswrapper[4827]: I0131 03:47:22.588408 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:47:22 crc kubenswrapper[4827]: I0131 03:47:22.588481 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:47:22 crc kubenswrapper[4827]: I0131 03:47:22.588507 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:47:22 crc kubenswrapper[4827]: I0131 03:47:22.588540 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:47:22 crc kubenswrapper[4827]: I0131 03:47:22.588563 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:47:22Z","lastTransitionTime":"2026-01-31T03:47:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 03:47:22 crc kubenswrapper[4827]: I0131 03:47:22.602727 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-l5njv" Jan 31 03:47:22 crc kubenswrapper[4827]: I0131 03:47:22.604009 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4273172d-ec24-4540-85cb-efc58aed3421\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:46:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:46:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:46:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4146d39a1a819389ddaee9e11df66783f6b1b98ee5fb2a244d8ba9ff2ecc88f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31
T03:46:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6b86ba80bd25344b29df1636aa3d8cc4cecb0e5f9c0005b4896d3ee86cd80626\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:46:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8ae08b52cf4a0a6cc4589bbf0f0fbc8527ebac3455b90185928f6fa0b6c52874\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:46:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://125b31c980495c37837eedbbbd38b795fd6e568a429da
5c2914799306e7b3a46\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cfc1fe8805b7b20bc4e718ed74d4c2f265e69ed3fdc328c0f1da81450bb78bde\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-31T03:47:02Z\\\",\\\"message\\\":\\\"W0131 03:46:51.337568 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0131 03:46:51.338014 1 crypto.go:601] Generating new CA for check-endpoints-signer@1769831211 cert, and key in /tmp/serving-cert-4099090867/serving-signer.crt, /tmp/serving-cert-4099090867/serving-signer.key\\\\nI0131 03:46:51.617433 1 observer_polling.go:159] Starting file observer\\\\nW0131 03:46:51.621205 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0131 03:46:51.621653 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0131 03:46:51.625463 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-4099090867/tls.crt::/tmp/serving-cert-4099090867/tls.key\\\\\\\"\\\\nF0131 03:47:02.085590 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake 
timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T03:46:51Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:47:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e3b2757893ba910c36404f09734ea47eee3ac5a8cfcc4c06ee4ed2ada48937b1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:46:51Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f40eed75b4eac164f95a4e1a719fe3e1da60abf85d533da313ceeb6f4fb07efd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f40eed75b4eac164f95a4e1a719fe3e1da60abf85d533da313ceeb6f4fb07efd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T03:46:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"s
tartedAt\\\":\\\"2026-01-31T03:46:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T03:46:48Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:47:22Z is after 2025-08-24T17:21:41Z" Jan 31 03:47:22 crc kubenswrapper[4827]: I0131 03:47:22.620620 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-jxh94" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e63dbb73-e1a2-4796-83c5-2a88e55566b5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://65bf7844256848a99420ff2a52724a942bd9ee07e0e587b818429ac544865232\\\",\\\"image\\\
":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:47:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-94gkn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://00b8856a5712a7470f8bc58fd07644c58113fde5a273faa89586f36381942c50\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:47:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-94gkn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T03:47:09Z\\\"}}\" for pod 
\"openshift-machine-config-operator\"/\"machine-config-daemon-jxh94\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:47:22Z is after 2025-08-24T17:21:41Z" Jan 31 03:47:22 crc kubenswrapper[4827]: W0131 03:47:22.621143 4827 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf81e67c9_6345_48e1_91e3_794421cb3fdd.slice/crio-178143daad5cf66e1f950aa2a67a6ce3418df269764a14aaa4fd318ff70d1b36 WatchSource:0}: Error finding container 178143daad5cf66e1f950aa2a67a6ce3418df269764a14aaa4fd318ff70d1b36: Status 404 returned error can't find the container with id 178143daad5cf66e1f950aa2a67a6ce3418df269764a14aaa4fd318ff70d1b36 Jan 31 03:47:22 crc kubenswrapper[4827]: I0131 03:47:22.640807 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-q9q8q" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a696063c-4553-4032-8038-9900f09d4031\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3b899def12de10c70999d453a62d09ed0eec7e6f059a55a7e3f8e4b3ef248205\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:47:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ckwx4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T03:47:09Z\\\"}}\" for pod \"openshift-multus\"/\"multus-q9q8q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:47:22Z is after 2025-08-24T17:21:41Z" Jan 31 03:47:22 crc kubenswrapper[4827]: I0131 03:47:22.663598 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-gjc5t" err="failed 
to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b5dbff7a-4ed0-4c17-bd01-1888199225b3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cf2e9dfa02cc1d2d5e655654c4d731363a32dec4d67423160996c24260e04a79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:47:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fs4bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c615ec47f98d56124c11d46f6a3875b5718974234f7526154
11e14b5d8913dcc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c615ec47f98d56124c11d46f6a3875b5718974234f752615411e14b5d8913dcc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T03:47:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T03:47:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fs4bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c9d2a6506fd6b67003ff94a906cb0c74a7eefa9f389d5fe3929fcb743b877f8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c9d2a6506fd6b67003ff94a906cb0c74a7eefa9f389d5fe3929fcb743b877f8a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T03:47:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T03:
47:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fs4bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f5b9fd1e8570701b512508eb10490fa6c4e5fdc63b7fdb0c4810896a46be65a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f5b9fd1e8570701b512508eb10490fa6c4e5fdc63b7fdb0c4810896a46be65a7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T03:47:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T03:47:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fs4bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\
\\"cri-o://b944c8ab4e6079ae5a7a5c52f9b336fc71aa3273ed087499ffa1cc5396053660\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b944c8ab4e6079ae5a7a5c52f9b336fc71aa3273ed087499ffa1cc5396053660\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T03:47:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T03:47:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fs4bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aac07d8c3c396081f637968cd1c64525967cd198273719be0af7bee01d35fb39\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aac07d8c3c396081f637968cd1c64525967cd198273719be0af7bee01d35fb39\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T03:47:16Z\\\",\\\"r
eason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T03:47:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fs4bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://76e43eb316b4b459d9232b72620b1892bf11a896f007d5c879381d8349aa1178\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://76e43eb316b4b459d9232b72620b1892bf11a896f007d5c879381d8349aa1178\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T03:47:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T03:47:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fs4bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T03:47:09Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-gjc5t\": Internal error 
occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:47:22Z is after 2025-08-24T17:21:41Z" Jan 31 03:47:22 crc kubenswrapper[4827]: I0131 03:47:22.680106 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:08Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:47:22Z is after 2025-08-24T17:21:41Z" Jan 31 03:47:22 crc kubenswrapper[4827]: I0131 03:47:22.691437 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:47:22 crc kubenswrapper[4827]: I0131 03:47:22.691471 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:47:22 crc kubenswrapper[4827]: I0131 03:47:22.691482 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:47:22 crc kubenswrapper[4827]: I0131 03:47:22.691499 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:47:22 crc kubenswrapper[4827]: I0131 03:47:22.691513 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:47:22Z","lastTransitionTime":"2026-01-31T03:47:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 03:47:22 crc kubenswrapper[4827]: I0131 03:47:22.694813 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-w7v8l" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c10db775-d306-4f15-97dd-b1dfed7c89e5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cbc16c20edbd99491eea6e34116e6014fd587c40800c87bff9643e9c0cb6e185\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:47:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var
/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4ch9g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T03:47:09Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-w7v8l\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:47:22Z is after 2025-08-24T17:21:41Z" Jan 31 03:47:22 crc kubenswrapper[4827]: I0131 03:47:22.710290 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-l5njv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f81e67c9-6345-48e1-91e3-794421cb3fdd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:22Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:22Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rblt7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rblt7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168
.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T03:47:22Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-l5njv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:47:22Z is after 2025-08-24T17:21:41Z" Jan 31 03:47:22 crc kubenswrapper[4827]: I0131 03:47:22.731087 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:08Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:47:22Z is after 2025-08-24T17:21:41Z" Jan 31 03:47:22 crc kubenswrapper[4827]: I0131 03:47:22.751764 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:08Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:47:22Z is after 2025-08-24T17:21:41Z" Jan 31 03:47:22 crc kubenswrapper[4827]: I0131 03:47:22.766842 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a0fbdd57c20e0d5bda95b44f22f117b07306ef48d75d372d958559048474af9b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:47:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-31T03:47:22Z is after 2025-08-24T17:21:41Z" Jan 31 03:47:22 crc kubenswrapper[4827]: I0131 03:47:22.780744 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:11Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:11Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://955a3f09f24522656e31cfabe08613a5129fe3d1d0693756697e3ef0469e062a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:47:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\
\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:47:22Z is after 2025-08-24T17:21:41Z" Jan 31 03:47:22 crc kubenswrapper[4827]: I0131 03:47:22.793679 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:47:22 crc kubenswrapper[4827]: I0131 03:47:22.793748 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:47:22 crc kubenswrapper[4827]: I0131 03:47:22.793766 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:47:22 crc kubenswrapper[4827]: I0131 03:47:22.793793 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:47:22 crc kubenswrapper[4827]: I0131 03:47:22.793815 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:47:22Z","lastTransitionTime":"2026-01-31T03:47:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 03:47:22 crc kubenswrapper[4827]: I0131 03:47:22.796816 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"64294dab-2843-4893-8b19-59f3ae404e02\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:46:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:46:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:46:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ff49646239dea28804978f1b3b721dcc9454572ff2301d0c92e146dacb2a3e5a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:46:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fb9a8fcde75
a532b1906ceea4704ddfecbffeb17456377272e5792f200ee67d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:46:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://73a7c7196fc77ac970ea3114b593db0c9bef943c760526721ab5492975884237\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:46:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ce197f7aa8a7bebdecdbd760e9029786331bd9497f724b0a57f57278a5b1b04c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:850
6ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:46:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T03:46:48Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:47:22Z is after 2025-08-24T17:21:41Z" Jan 31 03:47:22 crc kubenswrapper[4827]: I0131 03:47:22.820984 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hj2zw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"da9e7773-a24b-4e8d-b479-97e2594db0d4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:09Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:09Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bb9ced0bc1beedf1c5779410e90fa61227f5df3de7fa950ab69e1ef44b4a4918\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:47:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mt4z5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2485bd5d17713b9b9184c6b56b1dfda19a229476eabd904d3eda350c30892753\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:47:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mt4z5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://70200918240e4bc9d8f7fcb1d95be3721faa9394058a1a9671da44d9b7a915e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:47:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mt4z5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://32e69930c4476bd3a2f767f012fb4a2e00de7d46da936c6f0d31029994108e48\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:47:12Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mt4z5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ae8d3d61d2579c0afddd6c9728935c463fec8760a9fc830473c905de5f40c44\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:47:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mt4z5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ed1f4882b03bd25a6073ed3ddeb73d5a0da61c8a8af3ebfab5c90a83ce6af80f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:47:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mt4z5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://df3bf53360747f9123455873f63ec6c0dedaa6e7743b25e0ebd796b4a5d0094b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8b31ec94e62ff350ce9c7bd40cf6a0b0fc7a5d327d85a0d388736e5e27b3aeb4\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-31T03:47:20Z\\\",\\\"message\\\":\\\"t handler 6 for removal\\\\nI0131 03:47:20.176021 6151 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0131 03:47:20.176099 6151 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0131 03:47:20.176118 6151 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0131 03:47:20.176122 6151 handler.go:208] Removed *v1.EgressIP event handler 
8\\\\nI0131 03:47:20.176146 6151 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0131 03:47:20.176216 6151 reflector.go:311] Stopping reflector *v1.ClusterUserDefinedNetwork (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/userdefinednetwork/v1/apis/informers/externalversions/factory.go:140\\\\nI0131 03:47:20.176227 6151 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0131 03:47:20.176266 6151 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0131 03:47:20.176290 6151 factory.go:656] Stopping watch factory\\\\nI0131 03:47:20.176311 6151 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0131 03:47:20.176399 6151 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0131 03:47:20.176726 6151 reflector.go:311] Stopping reflector *v1alpha1.AdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/f\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T03:47:17Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://df3bf53360747f9123455873f63ec6c0dedaa6e7743b25e0ebd796b4a5d0094b\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-31T03:47:21Z\\\",\\\"message\\\":\\\"/informers/factory.go:160\\\\nI0131 03:47:21.368788 6278 reflector.go:311] Stopping reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0131 03:47:21.368815 6278 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0131 03:47:21.368968 6278 reflector.go:311] Stopping reflector *v1.Pod (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0131 03:47:21.369298 6278 reflector.go:311] Stopping reflector *v1alpha1.AdminNetworkPolicy (0s) from 
sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0131 03:47:21.369588 6278 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0131 03:47:21.369687 6278 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0131 03:47:21.369768 6278 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0131 03:47:21.369822 6278 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0131 03:47:21.369871 6278 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0131 03:47:21.369983 6278 factory.go:656] Stopping watch factory\\\\nI0131 03:47:21.370079 6278 ovnkube.go:599] Stopped ovnkube\\\\nI0131 03:47:21.369737 6278 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0131 03:47:21.370042 6278 handler.go:208] Removed *v1.Node event handler 7\\\\nI01\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T03:47:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"}
,{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mt4z5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2a0cc10519a103d1d49acd3e239636f7ac9b3a9daa533ce1f485b26ccfffad3d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:47:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"
kube-api-access-mt4z5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://96ea399662528dfc8c4bac4c2b710685cb5897e67739704548ca01353d72672c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://96ea399662528dfc8c4bac4c2b710685cb5897e67739704548ca01353d72672c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T03:47:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T03:47:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mt4z5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T03:47:09Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-hj2zw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:47:22Z is after 2025-08-24T17:21:41Z" Jan 31 03:47:22 crc kubenswrapper[4827]: I0131 03:47:22.834151 4827 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-image-registry/node-ca-cl9c5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"77746487-d08f-4da6-82a3-bc7d8845841a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5098b5f6a44c89953722806e420dc66eaad66f11faf1c5b526bfc8ebf87364d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:47:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5xwfq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.
126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T03:47:11Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-cl9c5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:47:22Z is after 2025-08-24T17:21:41Z" Jan 31 03:47:22 crc kubenswrapper[4827]: I0131 03:47:22.847370 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://65119775a6ab2e8f81a0443d8093e0570c2c49a1b081b7b694780d07eba7ef4d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\
\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:47:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0fdf29d063e0940532d696d183771e56ce37b7b75f82548bf7ff5165ab20ccff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:47:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:47:22Z is after 2025-08-24T17:21:41Z" Jan 31 03:47:22 crc kubenswrapper[4827]: I0131 03:47:22.897091 4827 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:47:22 crc kubenswrapper[4827]: I0131 03:47:22.897158 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:47:22 crc kubenswrapper[4827]: I0131 03:47:22.897178 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:47:22 crc kubenswrapper[4827]: I0131 03:47:22.897218 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:47:22 crc kubenswrapper[4827]: I0131 03:47:22.897237 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:47:22Z","lastTransitionTime":"2026-01-31T03:47:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 03:47:22 crc kubenswrapper[4827]: I0131 03:47:22.999636 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:47:22 crc kubenswrapper[4827]: I0131 03:47:22.999684 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:47:22 crc kubenswrapper[4827]: I0131 03:47:22.999697 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:47:22 crc kubenswrapper[4827]: I0131 03:47:22.999713 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:47:22 crc kubenswrapper[4827]: I0131 03:47:22.999726 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:47:22Z","lastTransitionTime":"2026-01-31T03:47:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 03:47:23 crc kubenswrapper[4827]: I0131 03:47:23.073480 4827 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-31 07:31:27.046275481 +0000 UTC Jan 31 03:47:23 crc kubenswrapper[4827]: I0131 03:47:23.127171 4827 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 31 03:47:23 crc kubenswrapper[4827]: E0131 03:47:23.127514 4827 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 31 03:47:23 crc kubenswrapper[4827]: I0131 03:47:23.129177 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:47:23 crc kubenswrapper[4827]: I0131 03:47:23.129200 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:47:23 crc kubenswrapper[4827]: I0131 03:47:23.129212 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:47:23 crc kubenswrapper[4827]: I0131 03:47:23.129227 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:47:23 crc kubenswrapper[4827]: I0131 03:47:23.129239 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:47:23Z","lastTransitionTime":"2026-01-31T03:47:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 03:47:23 crc kubenswrapper[4827]: I0131 03:47:23.231987 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:47:23 crc kubenswrapper[4827]: I0131 03:47:23.232322 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:47:23 crc kubenswrapper[4827]: I0131 03:47:23.232336 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:47:23 crc kubenswrapper[4827]: I0131 03:47:23.232355 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:47:23 crc kubenswrapper[4827]: I0131 03:47:23.232367 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:47:23Z","lastTransitionTime":"2026-01-31T03:47:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 03:47:23 crc kubenswrapper[4827]: I0131 03:47:23.335165 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:47:23 crc kubenswrapper[4827]: I0131 03:47:23.335229 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:47:23 crc kubenswrapper[4827]: I0131 03:47:23.335249 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:47:23 crc kubenswrapper[4827]: I0131 03:47:23.335276 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:47:23 crc kubenswrapper[4827]: I0131 03:47:23.335294 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:47:23Z","lastTransitionTime":"2026-01-31T03:47:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 03:47:23 crc kubenswrapper[4827]: I0131 03:47:23.411607 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-l5njv" event={"ID":"f81e67c9-6345-48e1-91e3-794421cb3fdd","Type":"ContainerStarted","Data":"b967e5b38d5efa02b8cda05cf383b6aa9f24b819d055b86d12afe5290d6d5847"} Jan 31 03:47:23 crc kubenswrapper[4827]: I0131 03:47:23.411707 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-l5njv" event={"ID":"f81e67c9-6345-48e1-91e3-794421cb3fdd","Type":"ContainerStarted","Data":"e761c28f2d53c1a50d6ce7467012664d9d9d717222b6fc648c642ad97057113f"} Jan 31 03:47:23 crc kubenswrapper[4827]: I0131 03:47:23.411738 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-l5njv" event={"ID":"f81e67c9-6345-48e1-91e3-794421cb3fdd","Type":"ContainerStarted","Data":"178143daad5cf66e1f950aa2a67a6ce3418df269764a14aaa4fd318ff70d1b36"} Jan 31 03:47:23 crc kubenswrapper[4827]: I0131 03:47:23.414790 4827 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-hj2zw_da9e7773-a24b-4e8d-b479-97e2594db0d4/ovnkube-controller/1.log" Jan 31 03:47:23 crc kubenswrapper[4827]: I0131 03:47:23.421467 4827 scope.go:117] "RemoveContainer" containerID="df3bf53360747f9123455873f63ec6c0dedaa6e7743b25e0ebd796b4a5d0094b" Jan 31 03:47:23 crc kubenswrapper[4827]: E0131 03:47:23.421722 4827 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 10s restarting failed container=ovnkube-controller pod=ovnkube-node-hj2zw_openshift-ovn-kubernetes(da9e7773-a24b-4e8d-b479-97e2594db0d4)\"" pod="openshift-ovn-kubernetes/ovnkube-node-hj2zw" podUID="da9e7773-a24b-4e8d-b479-97e2594db0d4" Jan 31 03:47:23 crc kubenswrapper[4827]: I0131 03:47:23.438083 4827 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:47:23 crc kubenswrapper[4827]: I0131 03:47:23.438129 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:47:23 crc kubenswrapper[4827]: I0131 03:47:23.438144 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:47:23 crc kubenswrapper[4827]: I0131 03:47:23.438165 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:47:23 crc kubenswrapper[4827]: I0131 03:47:23.438179 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a0fbdd57c20e0d5bda95b44f22f117b07306ef48d75d372d958559048474af9b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0
,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:47:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:47:23Z is after 2025-08-24T17:21:41Z" Jan 31 03:47:23 crc kubenswrapper[4827]: I0131 03:47:23.438224 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:47:23Z","lastTransitionTime":"2026-01-31T03:47:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 03:47:23 crc kubenswrapper[4827]: I0131 03:47:23.456670 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:11Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:11Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://955a3f09f24522656e31cfabe08613a5129fe3d1d0693756697e3ef0469e062a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:47:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:47:23Z is after 2025-08-24T17:21:41Z" Jan 31 03:47:23 crc kubenswrapper[4827]: I0131 03:47:23.479244 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"64294dab-2843-4893-8b19-59f3ae404e02\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:46:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:46:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:46:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ff49646239dea28804978f1b3b721dcc9454572ff2301d0c92e146dacb2a3e5a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\
\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:46:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fb9a8fcde75a532b1906ceea4704ddfecbffeb17456377272e5792f200ee67d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:46:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://73a7c7196fc77ac970ea3114b593db0c9bef943c760526721ab5492975884237\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:46:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://ce197f7aa8a7bebdecdbd760e9029786331bd9497f724b0a57f57278a5b1b04c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:46:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T03:46:48Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:47:23Z is after 2025-08-24T17:21:41Z" Jan 31 03:47:23 crc kubenswrapper[4827]: I0131 03:47:23.501730 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:08Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:47:23Z is after 2025-08-24T17:21:41Z" Jan 31 03:47:23 crc kubenswrapper[4827]: I0131 03:47:23.518698 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-cl9c5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"77746487-d08f-4da6-82a3-bc7d8845841a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5098b5f6a44c89953722806e420dc66eaad66f11faf1c5b526bfc8ebf87364d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:47:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5xwfq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T03:47:11Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-cl9c5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:47:23Z is after 2025-08-24T17:21:41Z" Jan 31 03:47:23 crc kubenswrapper[4827]: I0131 03:47:23.539487 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://65119775a6ab2e8f81a0443d8093e0570c2c49a1b081b7b694780d07eba7ef4d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31
T03:47:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0fdf29d063e0940532d696d183771e56ce37b7b75f82548bf7ff5165ab20ccff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:47:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:47:23Z is after 2025-08-24T17:21:41Z" Jan 31 03:47:23 crc kubenswrapper[4827]: I0131 03:47:23.541220 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Jan 31 03:47:23 crc kubenswrapper[4827]: I0131 03:47:23.541423 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:47:23 crc kubenswrapper[4827]: I0131 03:47:23.541592 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:47:23 crc kubenswrapper[4827]: I0131 03:47:23.541747 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:47:23 crc kubenswrapper[4827]: I0131 03:47:23.541970 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:47:23Z","lastTransitionTime":"2026-01-31T03:47:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 03:47:23 crc kubenswrapper[4827]: I0131 03:47:23.571157 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hj2zw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"da9e7773-a24b-4e8d-b479-97e2594db0d4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:09Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:09Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bb9ced0bc1beedf1c5779410e90fa61227f5df3de7fa950ab69e1ef44b4a4918\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:47:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mt4z5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2485bd5d17713b9b9184c6b56b1dfda19a229476eabd904d3eda350c30892753\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:47:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mt4z5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://70200918240e4bc9d8f7fcb1d95be3721faa9394058a1a9671da44d9b7a915e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:47:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mt4z5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://32e69930c4476bd3a2f767f012fb4a2e00de7d46da936c6f0d31029994108e48\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:47:12Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mt4z5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ae8d3d61d2579c0afddd6c9728935c463fec8760a9fc830473c905de5f40c44\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:47:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mt4z5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ed1f4882b03bd25a6073ed3ddeb73d5a0da61c8a8af3ebfab5c90a83ce6af80f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:47:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mt4z5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://df3bf53360747f9123455873f63ec6c0dedaa6e7743b25e0ebd796b4a5d0094b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8b31ec94e62ff350ce9c7bd40cf6a0b0fc7a5d327d85a0d388736e5e27b3aeb4\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-31T03:47:20Z\\\",\\\"message\\\":\\\"t handler 6 for removal\\\\nI0131 03:47:20.176021 6151 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0131 03:47:20.176099 6151 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0131 03:47:20.176118 6151 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0131 03:47:20.176122 6151 handler.go:208] Removed *v1.EgressIP event handler 
8\\\\nI0131 03:47:20.176146 6151 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0131 03:47:20.176216 6151 reflector.go:311] Stopping reflector *v1.ClusterUserDefinedNetwork (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/userdefinednetwork/v1/apis/informers/externalversions/factory.go:140\\\\nI0131 03:47:20.176227 6151 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0131 03:47:20.176266 6151 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0131 03:47:20.176290 6151 factory.go:656] Stopping watch factory\\\\nI0131 03:47:20.176311 6151 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0131 03:47:20.176399 6151 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0131 03:47:20.176726 6151 reflector.go:311] Stopping reflector *v1alpha1.AdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/f\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T03:47:17Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://df3bf53360747f9123455873f63ec6c0dedaa6e7743b25e0ebd796b4a5d0094b\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-31T03:47:21Z\\\",\\\"message\\\":\\\"/informers/factory.go:160\\\\nI0131 03:47:21.368788 6278 reflector.go:311] Stopping reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0131 03:47:21.368815 6278 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0131 03:47:21.368968 6278 reflector.go:311] Stopping reflector *v1.Pod (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0131 03:47:21.369298 6278 reflector.go:311] Stopping reflector *v1alpha1.AdminNetworkPolicy (0s) from 
sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0131 03:47:21.369588 6278 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0131 03:47:21.369687 6278 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0131 03:47:21.369768 6278 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0131 03:47:21.369822 6278 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0131 03:47:21.369871 6278 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0131 03:47:21.369983 6278 factory.go:656] Stopping watch factory\\\\nI0131 03:47:21.370079 6278 ovnkube.go:599] Stopped ovnkube\\\\nI0131 03:47:21.369737 6278 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0131 03:47:21.370042 6278 handler.go:208] Removed *v1.Node event handler 7\\\\nI01\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T03:47:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"}
,{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mt4z5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2a0cc10519a103d1d49acd3e239636f7ac9b3a9daa533ce1f485b26ccfffad3d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:47:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"
kube-api-access-mt4z5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://96ea399662528dfc8c4bac4c2b710685cb5897e67739704548ca01353d72672c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://96ea399662528dfc8c4bac4c2b710685cb5897e67739704548ca01353d72672c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T03:47:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T03:47:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mt4z5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T03:47:09Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-hj2zw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:47:23Z is after 2025-08-24T17:21:41Z" Jan 31 03:47:23 crc kubenswrapper[4827]: I0131 03:47:23.590632 4827 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-machine-config-operator/machine-config-daemon-jxh94" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e63dbb73-e1a2-4796-83c5-2a88e55566b5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://65bf7844256848a99420ff2a52724a942bd9ee07e0e587b818429ac544865232\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:47:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-94gkn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\
\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://00b8856a5712a7470f8bc58fd07644c58113fde5a273faa89586f36381942c50\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:47:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-94gkn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T03:47:09Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-jxh94\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:47:23Z is after 2025-08-24T17:21:41Z" Jan 31 03:47:23 crc kubenswrapper[4827]: I0131 03:47:23.611176 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-q9q8q" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a696063c-4553-4032-8038-9900f09d4031\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3b899def12de10c70999d453a62d09ed0eec7e6f059a55a7e3f8e4b3ef248205\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:47:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ckwx4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T03:47:09Z\\\"}}\" for pod \"openshift-multus\"/\"multus-q9q8q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:47:23Z is after 2025-08-24T17:21:41Z" Jan 31 03:47:23 crc kubenswrapper[4827]: I0131 03:47:23.636626 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-gjc5t" err="failed 
to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b5dbff7a-4ed0-4c17-bd01-1888199225b3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cf2e9dfa02cc1d2d5e655654c4d731363a32dec4d67423160996c24260e04a79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:47:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fs4bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c615ec47f98d56124c11d46f6a3875b5718974234f7526154
11e14b5d8913dcc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c615ec47f98d56124c11d46f6a3875b5718974234f752615411e14b5d8913dcc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T03:47:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T03:47:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fs4bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c9d2a6506fd6b67003ff94a906cb0c74a7eefa9f389d5fe3929fcb743b877f8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c9d2a6506fd6b67003ff94a906cb0c74a7eefa9f389d5fe3929fcb743b877f8a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T03:47:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T03:
47:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fs4bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f5b9fd1e8570701b512508eb10490fa6c4e5fdc63b7fdb0c4810896a46be65a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f5b9fd1e8570701b512508eb10490fa6c4e5fdc63b7fdb0c4810896a46be65a7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T03:47:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T03:47:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fs4bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\
\\"cri-o://b944c8ab4e6079ae5a7a5c52f9b336fc71aa3273ed087499ffa1cc5396053660\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b944c8ab4e6079ae5a7a5c52f9b336fc71aa3273ed087499ffa1cc5396053660\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T03:47:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T03:47:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fs4bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aac07d8c3c396081f637968cd1c64525967cd198273719be0af7bee01d35fb39\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aac07d8c3c396081f637968cd1c64525967cd198273719be0af7bee01d35fb39\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T03:47:16Z\\\",\\\"r
eason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T03:47:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fs4bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://76e43eb316b4b459d9232b72620b1892bf11a896f007d5c879381d8349aa1178\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://76e43eb316b4b459d9232b72620b1892bf11a896f007d5c879381d8349aa1178\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T03:47:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T03:47:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fs4bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T03:47:09Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-gjc5t\": Internal error 
occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:47:23Z is after 2025-08-24T17:21:41Z" Jan 31 03:47:23 crc kubenswrapper[4827]: I0131 03:47:23.644527 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:47:23 crc kubenswrapper[4827]: I0131 03:47:23.644763 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:47:23 crc kubenswrapper[4827]: I0131 03:47:23.644995 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:47:23 crc kubenswrapper[4827]: I0131 03:47:23.645142 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:47:23 crc kubenswrapper[4827]: I0131 03:47:23.645320 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:47:23Z","lastTransitionTime":"2026-01-31T03:47:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 03:47:23 crc kubenswrapper[4827]: I0131 03:47:23.665723 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4273172d-ec24-4540-85cb-efc58aed3421\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:46:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:46:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:46:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4146d39a1a819389ddaee9e11df66783f6b1b98ee5fb2a244d8ba9ff2ecc88f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:46:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-d
ir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6b86ba80bd25344b29df1636aa3d8cc4cecb0e5f9c0005b4896d3ee86cd80626\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:46:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8ae08b52cf4a0a6cc4589bbf0f0fbc8527ebac3455b90185928f6fa0b6c52874\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:46:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://125b31c980495c37837eedbbbd38b795fd6e568a429da5c2914799306e7b3a46\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945
c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cfc1fe8805b7b20bc4e718ed74d4c2f265e69ed3fdc328c0f1da81450bb78bde\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-31T03:47:02Z\\\",\\\"message\\\":\\\"W0131 03:46:51.337568 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0131 03:46:51.338014 1 crypto.go:601] Generating new CA for check-endpoints-signer@1769831211 cert, and key in /tmp/serving-cert-4099090867/serving-signer.crt, /tmp/serving-cert-4099090867/serving-signer.key\\\\nI0131 03:46:51.617433 1 observer_polling.go:159] Starting file observer\\\\nW0131 03:46:51.621205 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0131 03:46:51.621653 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0131 03:46:51.625463 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-4099090867/tls.crt::/tmp/serving-cert-4099090867/tls.key\\\\\\\"\\\\nF0131 03:47:02.085590 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake 
timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T03:46:51Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:47:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e3b2757893ba910c36404f09734ea47eee3ac5a8cfcc4c06ee4ed2ada48937b1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:46:51Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f40eed75b4eac164f95a4e1a719fe3e1da60abf85d533da313ceeb6f4fb07efd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f40eed75b4eac164f95a4e1a719fe3e1da60abf85d533da313ceeb6f4fb07efd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T03:46:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"s
tartedAt\\\":\\\"2026-01-31T03:46:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T03:46:48Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:47:23Z is after 2025-08-24T17:21:41Z" Jan 31 03:47:23 crc kubenswrapper[4827]: I0131 03:47:23.690784 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-w7v8l" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c10db775-d306-4f15-97dd-b1dfed7c89e5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cbc16c20edbd99491eea6e34116e6014fd587c40800c87bff9643e9c0cb6e185\\\",\\\"image\\\":\\\"quay.io/openshift-rele
ase-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:47:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4ch9g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T03:47:09Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-w7v8l\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:47:23Z is after 2025-08-24T17:21:41Z" Jan 31 03:47:23 crc kubenswrapper[4827]: I0131 03:47:23.718993 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-l5njv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f81e67c9-6345-48e1-91e3-794421cb3fdd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e761c28f2d53c1a50d6ce7467012664d9d9d717222b6fc648c642ad97057113f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:47:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rblt7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b967e5b38d5efa02b8cda05cf383b6aa9f24b
819d055b86d12afe5290d6d5847\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:47:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rblt7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T03:47:22Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-l5njv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:47:23Z is after 2025-08-24T17:21:41Z" Jan 31 03:47:23 crc kubenswrapper[4827]: I0131 03:47:23.734362 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:08Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:47:23Z is after 2025-08-24T17:21:41Z" Jan 31 03:47:23 crc kubenswrapper[4827]: I0131 03:47:23.747628 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:47:23 crc kubenswrapper[4827]: I0131 03:47:23.747909 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:47:23 crc kubenswrapper[4827]: I0131 03:47:23.748074 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:47:23 crc kubenswrapper[4827]: I0131 03:47:23.748213 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:47:23 crc kubenswrapper[4827]: I0131 03:47:23.748331 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:47:23Z","lastTransitionTime":"2026-01-31T03:47:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 03:47:23 crc kubenswrapper[4827]: I0131 03:47:23.750294 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:08Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:47:23Z is after 2025-08-24T17:21:41Z" Jan 31 03:47:23 crc kubenswrapper[4827]: I0131 03:47:23.770379 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://65119775a6ab2e8f81a0443d8093e0570c2c49a1b081b7b694780d07eba7ef4d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:47:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0fdf29d063e0940532d696d183771e56ce37b7b75f82548bf7ff5165ab20ccff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:47:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:47:23Z is after 2025-08-24T17:21:41Z" Jan 31 03:47:23 crc kubenswrapper[4827]: I0131 03:47:23.793959 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hj2zw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"da9e7773-a24b-4e8d-b479-97e2594db0d4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:09Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:09Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bb9ced0bc1beedf1c5779410e90fa61227f5df3de7fa950ab69e1ef44b4a4918\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:47:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mt4z5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2485bd5d17713b9b9184c6b56b1dfda19a229476eabd904d3eda350c30892753\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:47:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mt4z5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://70200918240e4bc9d8f7fcb1d95be3721faa9394058a1a9671da44d9b7a915e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:47:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mt4z5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://32e69930c4476bd3a2f767f012fb4a2e00de7d46da936c6f0d31029994108e48\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:47:12Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mt4z5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ae8d3d61d2579c0afddd6c9728935c463fec8760a9fc830473c905de5f40c44\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:47:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mt4z5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ed1f4882b03bd25a6073ed3ddeb73d5a0da61c8a8af3ebfab5c90a83ce6af80f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:47:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mt4z5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://df3bf53360747f9123455873f63ec6c0dedaa6e7743b25e0ebd796b4a5d0094b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://df3bf53360747f9123455873f63ec6c0dedaa6e7743b25e0ebd796b4a5d0094b\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-31T03:47:21Z\\\",\\\"message\\\":\\\"/informers/factory.go:160\\\\nI0131 03:47:21.368788 6278 reflector.go:311] Stopping reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0131 03:47:21.368815 6278 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0131 03:47:21.368968 6278 reflector.go:311] Stopping 
reflector *v1.Pod (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0131 03:47:21.369298 6278 reflector.go:311] Stopping reflector *v1alpha1.AdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0131 03:47:21.369588 6278 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0131 03:47:21.369687 6278 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0131 03:47:21.369768 6278 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0131 03:47:21.369822 6278 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0131 03:47:21.369871 6278 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0131 03:47:21.369983 6278 factory.go:656] Stopping watch factory\\\\nI0131 03:47:21.370079 6278 ovnkube.go:599] Stopped ovnkube\\\\nI0131 03:47:21.369737 6278 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0131 03:47:21.370042 6278 handler.go:208] Removed *v1.Node event handler 7\\\\nI01\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T03:47:20Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-hj2zw_openshift-ovn-kubernetes(da9e7773-a24b-4e8d-b479-97e2594db0d4)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mt4z5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2a0cc10519a103d1d49acd3e239636f7ac9b3a9daa533ce1f485b26ccfffad3d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:47:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mt4z5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://96ea399662528dfc8c4bac4c2b710685cb5897e67739704548ca01353d72672c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://96ea399662528dfc8c
4bac4c2b710685cb5897e67739704548ca01353d72672c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T03:47:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T03:47:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mt4z5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T03:47:09Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-hj2zw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:47:23Z is after 2025-08-24T17:21:41Z" Jan 31 03:47:23 crc kubenswrapper[4827]: I0131 03:47:23.805150 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-cl9c5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"77746487-d08f-4da6-82a3-bc7d8845841a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5098b5f6a44c89953722806e420dc66eaad66f11faf1c5b526bfc8ebf87364d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:47:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5xwfq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T03:47:11Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-cl9c5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:47:23Z is after 2025-08-24T17:21:41Z" Jan 31 03:47:23 crc kubenswrapper[4827]: I0131 03:47:23.825482 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-jxh94" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e63dbb73-e1a2-4796-83c5-2a88e55566b5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://65bf7844256848a99420ff2a52724a942bd9ee07e0e587b818429ac544865232\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshif
t-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:47:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-94gkn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://00b8856a5712a7470f8bc58fd07644c58113fde5a273faa89586f36381942c50\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:47:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-94gkn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T03:47:09Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-jxh94\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call 
webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:47:23Z is after 2025-08-24T17:21:41Z" Jan 31 03:47:23 crc kubenswrapper[4827]: I0131 03:47:23.834089 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 31 03:47:23 crc kubenswrapper[4827]: E0131 03:47:23.834282 4827 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-31 03:47:39.834260937 +0000 UTC m=+52.521341396 (durationBeforeRetry 16s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 03:47:23 crc kubenswrapper[4827]: I0131 03:47:23.834594 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 03:47:23 crc kubenswrapper[4827]: I0131 03:47:23.834804 4827 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 03:47:23 crc kubenswrapper[4827]: I0131 03:47:23.835094 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 31 03:47:23 crc kubenswrapper[4827]: I0131 03:47:23.835250 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 31 03:47:23 crc kubenswrapper[4827]: E0131 03:47:23.834807 4827 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 31 03:47:23 crc kubenswrapper[4827]: E0131 03:47:23.835481 4827 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-31 03:47:39.835464693 +0000 UTC m=+52.522545152 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 31 03:47:23 crc kubenswrapper[4827]: E0131 03:47:23.834910 4827 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 31 03:47:23 crc kubenswrapper[4827]: E0131 03:47:23.835659 4827 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-31 03:47:39.835647719 +0000 UTC m=+52.522728188 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 31 03:47:23 crc kubenswrapper[4827]: E0131 03:47:23.835190 4827 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 31 03:47:23 crc kubenswrapper[4827]: E0131 03:47:23.835843 4827 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 31 03:47:23 crc kubenswrapper[4827]: E0131 03:47:23.835976 4827 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: 
[object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 31 03:47:23 crc kubenswrapper[4827]: E0131 03:47:23.835347 4827 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 31 03:47:23 crc kubenswrapper[4827]: E0131 03:47:23.836139 4827 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 31 03:47:23 crc kubenswrapper[4827]: E0131 03:47:23.836161 4827 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 31 03:47:23 crc kubenswrapper[4827]: E0131 03:47:23.836096 4827 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-31 03:47:39.836082332 +0000 UTC m=+52.523162801 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 31 03:47:23 crc kubenswrapper[4827]: E0131 03:47:23.836244 4827 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-31 03:47:39.836223756 +0000 UTC m=+52.523304235 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 31 03:47:23 crc kubenswrapper[4827]: I0131 03:47:23.841864 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-q9q8q" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a696063c-4553-4032-8038-9900f09d4031\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3b899def12de10c70999d453a62d09ed0eec7e6f059a55a7e3f8e4b3ef248205\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:47:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ckwx4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T03:47:09Z\\\"}}\" for pod \"openshift-multus\"/\"multus-q9q8q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:47:23Z is after 2025-08-24T17:21:41Z" Jan 31 03:47:23 crc kubenswrapper[4827]: I0131 03:47:23.850944 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:47:23 crc 
kubenswrapper[4827]: I0131 03:47:23.851140 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:47:23 crc kubenswrapper[4827]: I0131 03:47:23.851257 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:47:23 crc kubenswrapper[4827]: I0131 03:47:23.851375 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:47:23 crc kubenswrapper[4827]: I0131 03:47:23.851472 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:47:23Z","lastTransitionTime":"2026-01-31T03:47:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 03:47:23 crc kubenswrapper[4827]: I0131 03:47:23.858768 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-gjc5t" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b5dbff7a-4ed0-4c17-bd01-1888199225b3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cf2e9dfa02cc1d2d5e655654c4d731363a32dec4d67423160996c24260e04a79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:47:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fs4bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c615ec47f98d56124c11d46f6a3875b5718974234f752615411e14b5d8913dcc\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c615ec47f98d56124c11d46f6a3875b5718974234f752615411e14b5d8913dcc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T03:47:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T03:47:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fs4bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c9d2a6506fd6b67003ff94a906cb0c74a7eefa9f389d5fe3929fcb743b877f8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c9d2a6506fd6b67003ff94a906cb0c74a7eefa9f389d5fe3929fcb743b877f8a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T03:47:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T03:47:11Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fs4bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f5b9fd1e8570701b512508eb10490fa6c4e5fdc63b7fdb0c4810896a46be65a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f5b9fd1e8570701b512508eb10490fa6c4e5fdc63b7fdb0c4810896a46be65a7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T03:47:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T03:47:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fs4bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b944c
8ab4e6079ae5a7a5c52f9b336fc71aa3273ed087499ffa1cc5396053660\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b944c8ab4e6079ae5a7a5c52f9b336fc71aa3273ed087499ffa1cc5396053660\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T03:47:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T03:47:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fs4bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aac07d8c3c396081f637968cd1c64525967cd198273719be0af7bee01d35fb39\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aac07d8c3c396081f637968cd1c64525967cd198273719be0af7bee01d35fb39\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T03:47:16Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-01-31T03:47:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fs4bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://76e43eb316b4b459d9232b72620b1892bf11a896f007d5c879381d8349aa1178\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://76e43eb316b4b459d9232b72620b1892bf11a896f007d5c879381d8349aa1178\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T03:47:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T03:47:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fs4bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T03:47:09Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-gjc5t\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:47:23Z is after 2025-08-24T17:21:41Z" Jan 31 03:47:23 crc kubenswrapper[4827]: I0131 03:47:23.886251 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4273172d-ec24-4540-85cb-efc58aed3421\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:46:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:46:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:46:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4146d39a1a819389ddaee9e11df66783f6b1b98ee5fb2a244d8ba9ff2ecc88f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:46:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\"
:\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6b86ba80bd25344b29df1636aa3d8cc4cecb0e5f9c0005b4896d3ee86cd80626\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:46:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8ae08b52cf4a0a6cc4589bbf0f0fbc8527ebac3455b90185928f6fa0b6c52874\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:46:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://125b31c980495c37837eedbbbd38b795fd6e568a429da5c2914799306e7b3a46\\\",\\\"image\\\":\\\"quay.io/crcont/
openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cfc1fe8805b7b20bc4e718ed74d4c2f265e69ed3fdc328c0f1da81450bb78bde\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-31T03:47:02Z\\\",\\\"message\\\":\\\"W0131 03:46:51.337568 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0131 03:46:51.338014 1 crypto.go:601] Generating new CA for check-endpoints-signer@1769831211 cert, and key in /tmp/serving-cert-4099090867/serving-signer.crt, /tmp/serving-cert-4099090867/serving-signer.key\\\\nI0131 03:46:51.617433 1 observer_polling.go:159] Starting file observer\\\\nW0131 03:46:51.621205 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0131 03:46:51.621653 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0131 03:46:51.625463 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-4099090867/tls.crt::/tmp/serving-cert-4099090867/tls.key\\\\\\\"\\\\nF0131 03:47:02.085590 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake 
timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T03:46:51Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:47:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e3b2757893ba910c36404f09734ea47eee3ac5a8cfcc4c06ee4ed2ada48937b1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:46:51Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f40eed75b4eac164f95a4e1a719fe3e1da60abf85d533da313ceeb6f4fb07efd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f40eed75b4eac164f95a4e1a719fe3e1da60abf85d533da313ceeb6f4fb07efd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T03:46:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"s
tartedAt\\\":\\\"2026-01-31T03:46:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T03:46:48Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:47:23Z is after 2025-08-24T17:21:41Z" Jan 31 03:47:23 crc kubenswrapper[4827]: I0131 03:47:23.904199 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-l5njv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f81e67c9-6345-48e1-91e3-794421cb3fdd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e761c28f2d53c1a50d6ce7467012664d9d9d717222b6fc648c642ad97057113f\\\",\\\"image\
\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:47:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rblt7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b967e5b38d5efa02b8cda05cf383b6aa9f24b819d055b86d12afe5290d6d5847\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:47:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rblt7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"
192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T03:47:22Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-l5njv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:47:23Z is after 2025-08-24T17:21:41Z" Jan 31 03:47:23 crc kubenswrapper[4827]: I0131 03:47:23.921637 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:08Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:47:23Z is after 2025-08-24T17:21:41Z" Jan 31 03:47:23 crc kubenswrapper[4827]: I0131 03:47:23.941851 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:08Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:47:23Z is after 2025-08-24T17:21:41Z" Jan 31 03:47:23 crc kubenswrapper[4827]: I0131 03:47:23.954411 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:47:23 crc kubenswrapper[4827]: I0131 03:47:23.954444 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:47:23 crc kubenswrapper[4827]: I0131 03:47:23.954453 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:47:23 crc 
kubenswrapper[4827]: I0131 03:47:23.954467 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:47:23 crc kubenswrapper[4827]: I0131 03:47:23.954475 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:47:23Z","lastTransitionTime":"2026-01-31T03:47:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 03:47:23 crc kubenswrapper[4827]: I0131 03:47:23.957294 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-w7v8l" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c10db775-d306-4f15-97dd-b1dfed7c89e5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cbc16c20edbd99491eea6e34116e6014fd587c40800c87bff9643e9c0cb6e185\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7
f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:47:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4ch9g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T03:47:09Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-w7v8l\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:47:23Z is after 2025-08-24T17:21:41Z" Jan 31 03:47:23 crc kubenswrapper[4827]: I0131 03:47:23.973576 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:11Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:11Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://955a3f09f24522656e31cfabe08613a5129fe3d1d0693756697e3ef0469e062a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:47:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-31T03:47:23Z is after 2025-08-24T17:21:41Z" Jan 31 03:47:23 crc kubenswrapper[4827]: I0131 03:47:23.992990 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"64294dab-2843-4893-8b19-59f3ae404e02\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:46:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:46:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:46:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ff49646239dea28804978f1b3b721dcc9454572ff2301d0c92e146dacb2a3e5a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:46:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cer
t-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fb9a8fcde75a532b1906ceea4704ddfecbffeb17456377272e5792f200ee67d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:46:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://73a7c7196fc77ac970ea3114b593db0c9bef943c760526721ab5492975884237\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:46:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ce197f7aa8a7bebdecdbd760e9029786331bd9497f724b0a57f57278a5b1b04c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshif
t-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:46:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T03:46:48Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:47:23Z is after 2025-08-24T17:21:41Z" Jan 31 03:47:24 crc kubenswrapper[4827]: I0131 03:47:24.007114 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:08Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:47:24Z is after 2025-08-24T17:21:41Z" Jan 31 03:47:24 crc kubenswrapper[4827]: I0131 03:47:24.021664 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a0fbdd57c20e0d5bda95b44f22f117b07306ef48d75d372d958559048474af9b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:47:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-31T03:47:24Z is after 2025-08-24T17:21:41Z" Jan 31 03:47:24 crc kubenswrapper[4827]: I0131 03:47:24.057025 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:47:24 crc kubenswrapper[4827]: I0131 03:47:24.057063 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:47:24 crc kubenswrapper[4827]: I0131 03:47:24.057076 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:47:24 crc kubenswrapper[4827]: I0131 03:47:24.057093 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:47:24 crc kubenswrapper[4827]: I0131 03:47:24.057105 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:47:24Z","lastTransitionTime":"2026-01-31T03:47:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 03:47:24 crc kubenswrapper[4827]: I0131 03:47:24.074392 4827 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-10 03:41:49.174254151 +0000 UTC Jan 31 03:47:24 crc kubenswrapper[4827]: I0131 03:47:24.109785 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 03:47:24 crc kubenswrapper[4827]: I0131 03:47:24.109800 4827 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 31 03:47:24 crc kubenswrapper[4827]: E0131 03:47:24.109955 4827 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 31 03:47:24 crc kubenswrapper[4827]: E0131 03:47:24.109992 4827 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 31 03:47:24 crc kubenswrapper[4827]: I0131 03:47:24.159328 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:47:24 crc kubenswrapper[4827]: I0131 03:47:24.159370 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:47:24 crc kubenswrapper[4827]: I0131 03:47:24.159379 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:47:24 crc kubenswrapper[4827]: I0131 03:47:24.159394 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:47:24 crc kubenswrapper[4827]: I0131 03:47:24.159403 4827 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:47:24Z","lastTransitionTime":"2026-01-31T03:47:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 03:47:24 crc kubenswrapper[4827]: I0131 03:47:24.191706 4827 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/network-metrics-daemon-2shng"] Jan 31 03:47:24 crc kubenswrapper[4827]: I0131 03:47:24.192158 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-2shng" Jan 31 03:47:24 crc kubenswrapper[4827]: E0131 03:47:24.192233 4827 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-2shng" podUID="cf80ec31-1f83-4ed6-84e3-055cf9c88bff" Jan 31 03:47:24 crc kubenswrapper[4827]: I0131 03:47:24.208504 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-q9q8q" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a696063c-4553-4032-8038-9900f09d4031\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3b899def12de10c70999d453a62d09ed0eec7e6f059a55a7e3f8e4b3ef248205\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:47:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"
os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ckwx4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T03:47:09Z\\\"}}\" for pod \"openshift-multus\"/\"multus-q9q8q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-31T03:47:24Z is after 2025-08-24T17:21:41Z" Jan 31 03:47:24 crc kubenswrapper[4827]: I0131 03:47:24.226633 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-gjc5t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b5dbff7a-4ed0-4c17-bd01-1888199225b3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cf2e9dfa02cc1d2d5e655654c4d731363a32dec4d67423160996c24260e04a79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:47:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-a
ccess-fs4bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c615ec47f98d56124c11d46f6a3875b5718974234f752615411e14b5d8913dcc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c615ec47f98d56124c11d46f6a3875b5718974234f752615411e14b5d8913dcc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T03:47:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T03:47:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fs4bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c9d2a6506fd6b67003ff94a906cb0c74a7eefa9f389d5fe3929fcb743b877f8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"
started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c9d2a6506fd6b67003ff94a906cb0c74a7eefa9f389d5fe3929fcb743b877f8a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T03:47:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T03:47:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fs4bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f5b9fd1e8570701b512508eb10490fa6c4e5fdc63b7fdb0c4810896a46be65a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f5b9fd1e8570701b512508eb10490fa6c4e5fdc63b7fdb0c4810896a46be65a7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T03:47:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T03:47:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\
\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fs4bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b944c8ab4e6079ae5a7a5c52f9b336fc71aa3273ed087499ffa1cc5396053660\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b944c8ab4e6079ae5a7a5c52f9b336fc71aa3273ed087499ffa1cc5396053660\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T03:47:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T03:47:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fs4bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aac07d8c3c396081f637968cd1c64525967cd198273719be0af7bee01d35fb39\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabout
s-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aac07d8c3c396081f637968cd1c64525967cd198273719be0af7bee01d35fb39\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T03:47:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T03:47:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fs4bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://76e43eb316b4b459d9232b72620b1892bf11a896f007d5c879381d8349aa1178\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://76e43eb316b4b459d9232b72620b1892bf11a896f007d5c879381d8349aa1178\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T03:47:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T03:47:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fs4bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOn
ly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T03:47:09Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-gjc5t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:47:24Z is after 2025-08-24T17:21:41Z" Jan 31 03:47:24 crc kubenswrapper[4827]: I0131 03:47:24.239261 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/cf80ec31-1f83-4ed6-84e3-055cf9c88bff-metrics-certs\") pod \"network-metrics-daemon-2shng\" (UID: \"cf80ec31-1f83-4ed6-84e3-055cf9c88bff\") " pod="openshift-multus/network-metrics-daemon-2shng" Jan 31 03:47:24 crc kubenswrapper[4827]: I0131 03:47:24.239350 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hjlkt\" (UniqueName: \"kubernetes.io/projected/cf80ec31-1f83-4ed6-84e3-055cf9c88bff-kube-api-access-hjlkt\") pod \"network-metrics-daemon-2shng\" (UID: \"cf80ec31-1f83-4ed6-84e3-055cf9c88bff\") " pod="openshift-multus/network-metrics-daemon-2shng" Jan 31 03:47:24 crc kubenswrapper[4827]: I0131 03:47:24.242155 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4273172d-ec24-4540-85cb-efc58aed3421\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:46:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:46:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:46:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4146d39a1a819389ddaee9e11df66783f6b1b98ee5fb2a244d8ba9ff2ecc88f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:46:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6b86ba80bd25344b29df1636aa3d8cc4cecb0e5f9c0005b4896d3ee86cd80626\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:46:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8ae08b52cf4a0a6cc4589bbf0f0fbc8527ebac3455b90185928f6fa0b6c52874\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:46:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://125b31c980495c37837eedbbbd38b795fd6e568a429da5c2914799306e7b3a46\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cfc1fe8805b7b20bc4e718ed74d4c2f265e69ed3fdc328c0f1da81450bb78bde\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-31T03:47:02Z\\\"
,\\\"message\\\":\\\"W0131 03:46:51.337568 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0131 03:46:51.338014 1 crypto.go:601] Generating new CA for check-endpoints-signer@1769831211 cert, and key in /tmp/serving-cert-4099090867/serving-signer.crt, /tmp/serving-cert-4099090867/serving-signer.key\\\\nI0131 03:46:51.617433 1 observer_polling.go:159] Starting file observer\\\\nW0131 03:46:51.621205 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0131 03:46:51.621653 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0131 03:46:51.625463 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-4099090867/tls.crt::/tmp/serving-cert-4099090867/tls.key\\\\\\\"\\\\nF0131 03:47:02.085590 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake 
timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T03:46:51Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:47:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e3b2757893ba910c36404f09734ea47eee3ac5a8cfcc4c06ee4ed2ada48937b1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:46:51Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f40eed75b4eac164f95a4e1a719fe3e1da60abf85d533da313ceeb6f4fb07efd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f40eed75b4eac164f95a4e1a719fe3e1da60abf85d533da313ceeb6f4fb07efd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T03:46:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"s
tartedAt\\\":\\\"2026-01-31T03:46:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T03:46:48Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:47:24Z is after 2025-08-24T17:21:41Z" Jan 31 03:47:24 crc kubenswrapper[4827]: I0131 03:47:24.257729 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-jxh94" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e63dbb73-e1a2-4796-83c5-2a88e55566b5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://65bf7844256848a99420ff2a52724a942bd9ee07e0e587b818429ac544865232\\\",\\\"image\\\
":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:47:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-94gkn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://00b8856a5712a7470f8bc58fd07644c58113fde5a273faa89586f36381942c50\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:47:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-94gkn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T03:47:09Z\\\"}}\" for pod 
\"openshift-machine-config-operator\"/\"machine-config-daemon-jxh94\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:47:24Z is after 2025-08-24T17:21:41Z" Jan 31 03:47:24 crc kubenswrapper[4827]: I0131 03:47:24.262040 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:47:24 crc kubenswrapper[4827]: I0131 03:47:24.262077 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:47:24 crc kubenswrapper[4827]: I0131 03:47:24.262086 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:47:24 crc kubenswrapper[4827]: I0131 03:47:24.262101 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:47:24 crc kubenswrapper[4827]: I0131 03:47:24.262112 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:47:24Z","lastTransitionTime":"2026-01-31T03:47:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 03:47:24 crc kubenswrapper[4827]: I0131 03:47:24.277614 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:08Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:47:24Z is after 2025-08-24T17:21:41Z" Jan 31 03:47:24 crc kubenswrapper[4827]: I0131 03:47:24.295951 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:08Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:47:24Z is after 2025-08-24T17:21:41Z" Jan 31 03:47:24 crc kubenswrapper[4827]: I0131 03:47:24.308917 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-w7v8l" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c10db775-d306-4f15-97dd-b1dfed7c89e5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cbc16c20edbd99491eea6e34116e6014fd587c40800c87bff9643e9c0cb6e185\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:47:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4ch9g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T03:47:09Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-w7v8l\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:47:24Z is after 2025-08-24T17:21:41Z" Jan 31 03:47:24 crc kubenswrapper[4827]: I0131 03:47:24.323331 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-l5njv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f81e67c9-6345-48e1-91e3-794421cb3fdd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e761c28f2d53c1a50d6ce7467012664d9d9d717222b6fc648c642ad97057113f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d7
93426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:47:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rblt7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b967e5b38d5efa02b8cda05cf383b6aa9f24b819d055b86d12afe5290d6d5847\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:47:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rblt7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T03:47:22Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-l5njv\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:47:24Z is after 2025-08-24T17:21:41Z" Jan 31 03:47:24 crc kubenswrapper[4827]: I0131 03:47:24.336696 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-2shng" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cf80ec31-1f83-4ed6-84e3-055cf9c88bff\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:24Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:24Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hjlkt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hjlkt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T03:47:24Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-2shng\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:47:24Z is after 2025-08-24T17:21:41Z" Jan 31 03:47:24 crc 
kubenswrapper[4827]: I0131 03:47:24.340273 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hjlkt\" (UniqueName: \"kubernetes.io/projected/cf80ec31-1f83-4ed6-84e3-055cf9c88bff-kube-api-access-hjlkt\") pod \"network-metrics-daemon-2shng\" (UID: \"cf80ec31-1f83-4ed6-84e3-055cf9c88bff\") " pod="openshift-multus/network-metrics-daemon-2shng" Jan 31 03:47:24 crc kubenswrapper[4827]: I0131 03:47:24.340392 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/cf80ec31-1f83-4ed6-84e3-055cf9c88bff-metrics-certs\") pod \"network-metrics-daemon-2shng\" (UID: \"cf80ec31-1f83-4ed6-84e3-055cf9c88bff\") " pod="openshift-multus/network-metrics-daemon-2shng" Jan 31 03:47:24 crc kubenswrapper[4827]: E0131 03:47:24.340542 4827 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 31 03:47:24 crc kubenswrapper[4827]: E0131 03:47:24.340618 4827 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/cf80ec31-1f83-4ed6-84e3-055cf9c88bff-metrics-certs podName:cf80ec31-1f83-4ed6-84e3-055cf9c88bff nodeName:}" failed. No retries permitted until 2026-01-31 03:47:24.84059691 +0000 UTC m=+37.527677379 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/cf80ec31-1f83-4ed6-84e3-055cf9c88bff-metrics-certs") pod "network-metrics-daemon-2shng" (UID: "cf80ec31-1f83-4ed6-84e3-055cf9c88bff") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 31 03:47:24 crc kubenswrapper[4827]: I0131 03:47:24.354448 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"64294dab-2843-4893-8b19-59f3ae404e02\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:46:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:46:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:46:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ff49646239dea28804978f1b3b721dcc9454572ff2301d0c92e146dacb2a3e5a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:46:49Z\\\"}},
\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fb9a8fcde75a532b1906ceea4704ddfecbffeb17456377272e5792f200ee67d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:46:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://73a7c7196fc77ac970ea3114b593db0c9bef943c760526721ab5492975884237\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:46:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ce197f7aa8a7bebdecdbd760e9029786331bd9497f724b0a57f57278a5b1b04c\\\",\\\"im
age\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:46:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T03:46:48Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:47:24Z is after 2025-08-24T17:21:41Z" Jan 31 03:47:24 crc kubenswrapper[4827]: I0131 03:47:24.365799 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:47:24 crc kubenswrapper[4827]: I0131 03:47:24.365835 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:47:24 crc kubenswrapper[4827]: I0131 03:47:24.365844 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:47:24 crc kubenswrapper[4827]: I0131 
03:47:24.365860 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:47:24 crc kubenswrapper[4827]: I0131 03:47:24.365870 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:47:24Z","lastTransitionTime":"2026-01-31T03:47:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 03:47:24 crc kubenswrapper[4827]: I0131 03:47:24.368760 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:08Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:47:24Z is after 2025-08-24T17:21:41Z" Jan 31 03:47:24 crc kubenswrapper[4827]: I0131 03:47:24.370865 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hjlkt\" (UniqueName: \"kubernetes.io/projected/cf80ec31-1f83-4ed6-84e3-055cf9c88bff-kube-api-access-hjlkt\") pod \"network-metrics-daemon-2shng\" (UID: \"cf80ec31-1f83-4ed6-84e3-055cf9c88bff\") " pod="openshift-multus/network-metrics-daemon-2shng" Jan 31 03:47:24 crc 
kubenswrapper[4827]: I0131 03:47:24.383447 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a0fbdd57c20e0d5bda95b44f22f117b07306ef48d75d372d958559048474af9b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:47:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": 
Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:47:24Z is after 2025-08-24T17:21:41Z" Jan 31 03:47:24 crc kubenswrapper[4827]: I0131 03:47:24.397451 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:11Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:11Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://955a3f09f24522656e31cfabe08613a5129fe3d1d0693756697e3ef0469e062a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:47:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"
recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:47:24Z is after 2025-08-24T17:21:41Z" Jan 31 03:47:24 crc kubenswrapper[4827]: I0131 03:47:24.412024 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://65119775a6ab2e8f81a0443d8093e0570c2c49a1b081b7b694780d07eba7ef4d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"star
ted\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:47:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0fdf29d063e0940532d696d183771e56ce37b7b75f82548bf7ff5165ab20ccff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:47:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:47:24Z is after 2025-08-24T17:21:41Z" Jan 31 03:47:24 crc kubenswrapper[4827]: I0131 03:47:24.434907 4827 status_manager.go:875] 
"Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hj2zw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"da9e7773-a24b-4e8d-b479-97e2594db0d4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:09Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:09Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bb9ced0bc1beedf1c5779410e90fa61227f5df3de7fa950ab69e1ef44b4a4918\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:47:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mt4z5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2485bd5d17713b9b9184c6b56b1dfda19a229476eabd904d3eda350c30892753\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:47:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mt4z5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://70200918240e4bc9d8f7fcb1d95be3721faa9394058a1a9671da44d9b7a915e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:47:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mt4z5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://32e69930c4476bd3a2f767f012fb4a2e00de7d46da936c6f0d31029994108e48\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:47:12Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mt4z5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ae8d3d61d2579c0afddd6c9728935c463fec8760a9fc830473c905de5f40c44\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:47:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mt4z5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ed1f4882b03bd25a6073ed3ddeb73d5a0da61c8a8af3ebfab5c90a83ce6af80f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:47:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mt4z5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://df3bf53360747f9123455873f63ec6c0dedaa6e7743b25e0ebd796b4a5d0094b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://df3bf53360747f9123455873f63ec6c0dedaa6e7743b25e0ebd796b4a5d0094b\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-31T03:47:21Z\\\",\\\"message\\\":\\\"/informers/factory.go:160\\\\nI0131 03:47:21.368788 6278 reflector.go:311] Stopping reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0131 03:47:21.368815 6278 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0131 03:47:21.368968 6278 reflector.go:311] Stopping 
reflector *v1.Pod (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0131 03:47:21.369298 6278 reflector.go:311] Stopping reflector *v1alpha1.AdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0131 03:47:21.369588 6278 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0131 03:47:21.369687 6278 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0131 03:47:21.369768 6278 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0131 03:47:21.369822 6278 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0131 03:47:21.369871 6278 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0131 03:47:21.369983 6278 factory.go:656] Stopping watch factory\\\\nI0131 03:47:21.370079 6278 ovnkube.go:599] Stopped ovnkube\\\\nI0131 03:47:21.369737 6278 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0131 03:47:21.370042 6278 handler.go:208] Removed *v1.Node event handler 7\\\\nI01\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T03:47:20Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-hj2zw_openshift-ovn-kubernetes(da9e7773-a24b-4e8d-b479-97e2594db0d4)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mt4z5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2a0cc10519a103d1d49acd3e239636f7ac9b3a9daa533ce1f485b26ccfffad3d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:47:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mt4z5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://96ea399662528dfc8c4bac4c2b710685cb5897e67739704548ca01353d72672c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://96ea399662528dfc8c
4bac4c2b710685cb5897e67739704548ca01353d72672c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T03:47:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T03:47:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mt4z5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T03:47:09Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-hj2zw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:47:24Z is after 2025-08-24T17:21:41Z" Jan 31 03:47:24 crc kubenswrapper[4827]: I0131 03:47:24.448856 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-cl9c5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"77746487-d08f-4da6-82a3-bc7d8845841a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5098b5f6a44c89953722806e420dc66eaad66f11faf1c5b526bfc8ebf87364d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:47:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5xwfq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T03:47:11Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-cl9c5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:47:24Z is after 2025-08-24T17:21:41Z" Jan 31 03:47:24 crc kubenswrapper[4827]: I0131 03:47:24.468565 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:47:24 crc kubenswrapper[4827]: I0131 03:47:24.468614 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:47:24 crc kubenswrapper[4827]: I0131 03:47:24.468626 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:47:24 crc kubenswrapper[4827]: I0131 03:47:24.468643 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:47:24 crc kubenswrapper[4827]: I0131 03:47:24.468657 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:47:24Z","lastTransitionTime":"2026-01-31T03:47:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 03:47:24 crc kubenswrapper[4827]: I0131 03:47:24.571024 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:47:24 crc kubenswrapper[4827]: I0131 03:47:24.571091 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:47:24 crc kubenswrapper[4827]: I0131 03:47:24.571110 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:47:24 crc kubenswrapper[4827]: I0131 03:47:24.571137 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:47:24 crc kubenswrapper[4827]: I0131 03:47:24.571157 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:47:24Z","lastTransitionTime":"2026-01-31T03:47:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 03:47:24 crc kubenswrapper[4827]: I0131 03:47:24.670307 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:47:24 crc kubenswrapper[4827]: I0131 03:47:24.670374 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:47:24 crc kubenswrapper[4827]: I0131 03:47:24.670391 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:47:24 crc kubenswrapper[4827]: I0131 03:47:24.670418 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:47:24 crc kubenswrapper[4827]: I0131 03:47:24.670436 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:47:24Z","lastTransitionTime":"2026-01-31T03:47:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 03:47:24 crc kubenswrapper[4827]: E0131 03:47:24.690261 4827 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404564Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865364Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T03:47:24Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:24Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T03:47:24Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:24Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T03:47:24Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:24Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T03:47:24Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:24Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"0f4c9cc6-4be7-45e8-ab30-471f307c1c16\\\",\\\"systemUUID\\\":\\\"9b087aa6-4510-46ee-bc39-2317e4ea4d1d\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:47:24Z is after 2025-08-24T17:21:41Z" Jan 31 03:47:24 crc kubenswrapper[4827]: I0131 03:47:24.695043 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:47:24 crc kubenswrapper[4827]: I0131 03:47:24.695107 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:47:24 crc kubenswrapper[4827]: I0131 03:47:24.695125 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:47:24 crc kubenswrapper[4827]: I0131 03:47:24.695148 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:47:24 crc kubenswrapper[4827]: I0131 03:47:24.695166 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:47:24Z","lastTransitionTime":"2026-01-31T03:47:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 03:47:24 crc kubenswrapper[4827]: E0131 03:47:24.710318 4827 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404564Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865364Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T03:47:24Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:24Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T03:47:24Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:24Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T03:47:24Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:24Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T03:47:24Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:24Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"0f4c9cc6-4be7-45e8-ab30-471f307c1c16\\\",\\\"systemUUID\\\":\\\"9b087aa6-4510-46ee-bc39-2317e4ea4d1d\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:47:24Z is after 2025-08-24T17:21:41Z" Jan 31 03:47:24 crc kubenswrapper[4827]: I0131 03:47:24.719551 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:47:24 crc kubenswrapper[4827]: I0131 03:47:24.719619 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:47:24 crc kubenswrapper[4827]: I0131 03:47:24.719639 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:47:24 crc kubenswrapper[4827]: I0131 03:47:24.719667 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:47:24 crc kubenswrapper[4827]: I0131 03:47:24.719686 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:47:24Z","lastTransitionTime":"2026-01-31T03:47:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 03:47:24 crc kubenswrapper[4827]: E0131 03:47:24.739771 4827 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404564Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865364Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T03:47:24Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:24Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T03:47:24Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:24Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T03:47:24Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:24Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T03:47:24Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:24Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"0f4c9cc6-4be7-45e8-ab30-471f307c1c16\\\",\\\"systemUUID\\\":\\\"9b087aa6-4510-46ee-bc39-2317e4ea4d1d\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:47:24Z is after 2025-08-24T17:21:41Z" Jan 31 03:47:24 crc kubenswrapper[4827]: I0131 03:47:24.744451 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:47:24 crc kubenswrapper[4827]: I0131 03:47:24.744493 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:47:24 crc kubenswrapper[4827]: I0131 03:47:24.744502 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:47:24 crc kubenswrapper[4827]: I0131 03:47:24.744518 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:47:24 crc kubenswrapper[4827]: I0131 03:47:24.744528 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:47:24Z","lastTransitionTime":"2026-01-31T03:47:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 03:47:24 crc kubenswrapper[4827]: E0131 03:47:24.757628 4827 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404564Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865364Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T03:47:24Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:24Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T03:47:24Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:24Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T03:47:24Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:24Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T03:47:24Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:24Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"0f4c9cc6-4be7-45e8-ab30-471f307c1c16\\\",\\\"systemUUID\\\":\\\"9b087aa6-4510-46ee-bc39-2317e4ea4d1d\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:47:24Z is after 2025-08-24T17:21:41Z" Jan 31 03:47:24 crc kubenswrapper[4827]: I0131 03:47:24.761336 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:47:24 crc kubenswrapper[4827]: I0131 03:47:24.761381 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:47:24 crc kubenswrapper[4827]: I0131 03:47:24.761396 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:47:24 crc kubenswrapper[4827]: I0131 03:47:24.761418 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:47:24 crc kubenswrapper[4827]: I0131 03:47:24.761434 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:47:24Z","lastTransitionTime":"2026-01-31T03:47:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 03:47:24 crc kubenswrapper[4827]: E0131 03:47:24.774274 4827 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404564Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865364Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T03:47:24Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:24Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T03:47:24Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:24Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T03:47:24Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:24Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T03:47:24Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:24Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"0f4c9cc6-4be7-45e8-ab30-471f307c1c16\\\",\\\"systemUUID\\\":\\\"9b087aa6-4510-46ee-bc39-2317e4ea4d1d\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:47:24Z is after 2025-08-24T17:21:41Z" Jan 31 03:47:24 crc kubenswrapper[4827]: E0131 03:47:24.774416 4827 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 31 03:47:24 crc kubenswrapper[4827]: I0131 03:47:24.776309 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:47:24 crc kubenswrapper[4827]: I0131 03:47:24.776347 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:47:24 crc kubenswrapper[4827]: I0131 03:47:24.776362 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:47:24 crc kubenswrapper[4827]: I0131 03:47:24.776382 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:47:24 crc kubenswrapper[4827]: I0131 03:47:24.776396 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:47:24Z","lastTransitionTime":"2026-01-31T03:47:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 03:47:24 crc kubenswrapper[4827]: I0131 03:47:24.845328 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/cf80ec31-1f83-4ed6-84e3-055cf9c88bff-metrics-certs\") pod \"network-metrics-daemon-2shng\" (UID: \"cf80ec31-1f83-4ed6-84e3-055cf9c88bff\") " pod="openshift-multus/network-metrics-daemon-2shng" Jan 31 03:47:24 crc kubenswrapper[4827]: E0131 03:47:24.845547 4827 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 31 03:47:24 crc kubenswrapper[4827]: E0131 03:47:24.845667 4827 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/cf80ec31-1f83-4ed6-84e3-055cf9c88bff-metrics-certs podName:cf80ec31-1f83-4ed6-84e3-055cf9c88bff nodeName:}" failed. No retries permitted until 2026-01-31 03:47:25.845637756 +0000 UTC m=+38.532718265 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/cf80ec31-1f83-4ed6-84e3-055cf9c88bff-metrics-certs") pod "network-metrics-daemon-2shng" (UID: "cf80ec31-1f83-4ed6-84e3-055cf9c88bff") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 31 03:47:24 crc kubenswrapper[4827]: I0131 03:47:24.879604 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:47:24 crc kubenswrapper[4827]: I0131 03:47:24.879677 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:47:24 crc kubenswrapper[4827]: I0131 03:47:24.879696 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:47:24 crc kubenswrapper[4827]: I0131 03:47:24.879716 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:47:24 crc kubenswrapper[4827]: I0131 03:47:24.879729 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:47:24Z","lastTransitionTime":"2026-01-31T03:47:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 03:47:24 crc kubenswrapper[4827]: I0131 03:47:24.982482 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:47:24 crc kubenswrapper[4827]: I0131 03:47:24.982539 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:47:24 crc kubenswrapper[4827]: I0131 03:47:24.982556 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:47:24 crc kubenswrapper[4827]: I0131 03:47:24.982582 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:47:24 crc kubenswrapper[4827]: I0131 03:47:24.982599 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:47:24Z","lastTransitionTime":"2026-01-31T03:47:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 03:47:25 crc kubenswrapper[4827]: I0131 03:47:25.075129 4827 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-05 06:33:29.667858107 +0000 UTC Jan 31 03:47:25 crc kubenswrapper[4827]: I0131 03:47:25.086048 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:47:25 crc kubenswrapper[4827]: I0131 03:47:25.086115 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:47:25 crc kubenswrapper[4827]: I0131 03:47:25.086138 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:47:25 crc kubenswrapper[4827]: I0131 03:47:25.086169 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:47:25 crc kubenswrapper[4827]: I0131 03:47:25.086195 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:47:25Z","lastTransitionTime":"2026-01-31T03:47:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 03:47:25 crc kubenswrapper[4827]: I0131 03:47:25.109671 4827 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 31 03:47:25 crc kubenswrapper[4827]: E0131 03:47:25.109862 4827 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 31 03:47:25 crc kubenswrapper[4827]: I0131 03:47:25.188862 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:47:25 crc kubenswrapper[4827]: I0131 03:47:25.189049 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:47:25 crc kubenswrapper[4827]: I0131 03:47:25.189081 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:47:25 crc kubenswrapper[4827]: I0131 03:47:25.189111 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:47:25 crc kubenswrapper[4827]: I0131 03:47:25.189145 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:47:25Z","lastTransitionTime":"2026-01-31T03:47:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 03:47:25 crc kubenswrapper[4827]: I0131 03:47:25.292427 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:47:25 crc kubenswrapper[4827]: I0131 03:47:25.292529 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:47:25 crc kubenswrapper[4827]: I0131 03:47:25.292548 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:47:25 crc kubenswrapper[4827]: I0131 03:47:25.292572 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:47:25 crc kubenswrapper[4827]: I0131 03:47:25.292590 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:47:25Z","lastTransitionTime":"2026-01-31T03:47:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 03:47:25 crc kubenswrapper[4827]: I0131 03:47:25.394841 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:47:25 crc kubenswrapper[4827]: I0131 03:47:25.394951 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:47:25 crc kubenswrapper[4827]: I0131 03:47:25.394970 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:47:25 crc kubenswrapper[4827]: I0131 03:47:25.394994 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:47:25 crc kubenswrapper[4827]: I0131 03:47:25.395012 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:47:25Z","lastTransitionTime":"2026-01-31T03:47:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 03:47:25 crc kubenswrapper[4827]: I0131 03:47:25.497379 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:47:25 crc kubenswrapper[4827]: I0131 03:47:25.497467 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:47:25 crc kubenswrapper[4827]: I0131 03:47:25.497485 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:47:25 crc kubenswrapper[4827]: I0131 03:47:25.497511 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:47:25 crc kubenswrapper[4827]: I0131 03:47:25.497529 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:47:25Z","lastTransitionTime":"2026-01-31T03:47:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 03:47:25 crc kubenswrapper[4827]: I0131 03:47:25.600196 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:47:25 crc kubenswrapper[4827]: I0131 03:47:25.600247 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:47:25 crc kubenswrapper[4827]: I0131 03:47:25.600263 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:47:25 crc kubenswrapper[4827]: I0131 03:47:25.600286 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:47:25 crc kubenswrapper[4827]: I0131 03:47:25.600306 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:47:25Z","lastTransitionTime":"2026-01-31T03:47:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 03:47:25 crc kubenswrapper[4827]: I0131 03:47:25.703335 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:47:25 crc kubenswrapper[4827]: I0131 03:47:25.703390 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:47:25 crc kubenswrapper[4827]: I0131 03:47:25.703411 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:47:25 crc kubenswrapper[4827]: I0131 03:47:25.703444 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:47:25 crc kubenswrapper[4827]: I0131 03:47:25.703461 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:47:25Z","lastTransitionTime":"2026-01-31T03:47:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 03:47:25 crc kubenswrapper[4827]: I0131 03:47:25.806027 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:47:25 crc kubenswrapper[4827]: I0131 03:47:25.806077 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:47:25 crc kubenswrapper[4827]: I0131 03:47:25.806094 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:47:25 crc kubenswrapper[4827]: I0131 03:47:25.806116 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:47:25 crc kubenswrapper[4827]: I0131 03:47:25.806133 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:47:25Z","lastTransitionTime":"2026-01-31T03:47:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 03:47:25 crc kubenswrapper[4827]: I0131 03:47:25.854651 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/cf80ec31-1f83-4ed6-84e3-055cf9c88bff-metrics-certs\") pod \"network-metrics-daemon-2shng\" (UID: \"cf80ec31-1f83-4ed6-84e3-055cf9c88bff\") " pod="openshift-multus/network-metrics-daemon-2shng" Jan 31 03:47:25 crc kubenswrapper[4827]: E0131 03:47:25.854841 4827 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 31 03:47:25 crc kubenswrapper[4827]: E0131 03:47:25.854945 4827 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/cf80ec31-1f83-4ed6-84e3-055cf9c88bff-metrics-certs podName:cf80ec31-1f83-4ed6-84e3-055cf9c88bff nodeName:}" failed. No retries permitted until 2026-01-31 03:47:27.854922272 +0000 UTC m=+40.542002751 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/cf80ec31-1f83-4ed6-84e3-055cf9c88bff-metrics-certs") pod "network-metrics-daemon-2shng" (UID: "cf80ec31-1f83-4ed6-84e3-055cf9c88bff") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 31 03:47:25 crc kubenswrapper[4827]: I0131 03:47:25.909296 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:47:25 crc kubenswrapper[4827]: I0131 03:47:25.909364 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:47:25 crc kubenswrapper[4827]: I0131 03:47:25.909386 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:47:25 crc kubenswrapper[4827]: I0131 03:47:25.909415 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:47:25 crc kubenswrapper[4827]: I0131 03:47:25.909436 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:47:25Z","lastTransitionTime":"2026-01-31T03:47:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 03:47:26 crc kubenswrapper[4827]: I0131 03:47:26.012272 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:47:26 crc kubenswrapper[4827]: I0131 03:47:26.012344 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:47:26 crc kubenswrapper[4827]: I0131 03:47:26.012366 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:47:26 crc kubenswrapper[4827]: I0131 03:47:26.012396 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:47:26 crc kubenswrapper[4827]: I0131 03:47:26.012420 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:47:26Z","lastTransitionTime":"2026-01-31T03:47:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 03:47:26 crc kubenswrapper[4827]: I0131 03:47:26.076105 4827 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-27 19:02:52.024347466 +0000 UTC Jan 31 03:47:26 crc kubenswrapper[4827]: I0131 03:47:26.109788 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 31 03:47:26 crc kubenswrapper[4827]: I0131 03:47:26.109816 4827 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-2shng" Jan 31 03:47:26 crc kubenswrapper[4827]: E0131 03:47:26.109959 4827 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 31 03:47:26 crc kubenswrapper[4827]: I0131 03:47:26.110002 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 03:47:26 crc kubenswrapper[4827]: E0131 03:47:26.110190 4827 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 31 03:47:26 crc kubenswrapper[4827]: E0131 03:47:26.110313 4827 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-2shng" podUID="cf80ec31-1f83-4ed6-84e3-055cf9c88bff" Jan 31 03:47:26 crc kubenswrapper[4827]: I0131 03:47:26.115203 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:47:26 crc kubenswrapper[4827]: I0131 03:47:26.115258 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:47:26 crc kubenswrapper[4827]: I0131 03:47:26.115280 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:47:26 crc kubenswrapper[4827]: I0131 03:47:26.115303 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:47:26 crc kubenswrapper[4827]: I0131 03:47:26.115321 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:47:26Z","lastTransitionTime":"2026-01-31T03:47:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 03:47:26 crc kubenswrapper[4827]: I0131 03:47:26.218301 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:47:26 crc kubenswrapper[4827]: I0131 03:47:26.218375 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:47:26 crc kubenswrapper[4827]: I0131 03:47:26.218388 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:47:26 crc kubenswrapper[4827]: I0131 03:47:26.218410 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:47:26 crc kubenswrapper[4827]: I0131 03:47:26.218423 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:47:26Z","lastTransitionTime":"2026-01-31T03:47:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 03:47:26 crc kubenswrapper[4827]: I0131 03:47:26.322351 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:47:26 crc kubenswrapper[4827]: I0131 03:47:26.322433 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:47:26 crc kubenswrapper[4827]: I0131 03:47:26.322456 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:47:26 crc kubenswrapper[4827]: I0131 03:47:26.322486 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:47:26 crc kubenswrapper[4827]: I0131 03:47:26.322508 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:47:26Z","lastTransitionTime":"2026-01-31T03:47:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 03:47:26 crc kubenswrapper[4827]: I0131 03:47:26.425945 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:47:26 crc kubenswrapper[4827]: I0131 03:47:26.426010 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:47:26 crc kubenswrapper[4827]: I0131 03:47:26.426030 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:47:26 crc kubenswrapper[4827]: I0131 03:47:26.426057 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:47:26 crc kubenswrapper[4827]: I0131 03:47:26.426079 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:47:26Z","lastTransitionTime":"2026-01-31T03:47:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 03:47:26 crc kubenswrapper[4827]: I0131 03:47:26.530161 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:47:26 crc kubenswrapper[4827]: I0131 03:47:26.530233 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:47:26 crc kubenswrapper[4827]: I0131 03:47:26.530251 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:47:26 crc kubenswrapper[4827]: I0131 03:47:26.530276 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:47:26 crc kubenswrapper[4827]: I0131 03:47:26.530294 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:47:26Z","lastTransitionTime":"2026-01-31T03:47:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 03:47:26 crc kubenswrapper[4827]: I0131 03:47:26.633475 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:47:26 crc kubenswrapper[4827]: I0131 03:47:26.633538 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:47:26 crc kubenswrapper[4827]: I0131 03:47:26.633556 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:47:26 crc kubenswrapper[4827]: I0131 03:47:26.633581 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:47:26 crc kubenswrapper[4827]: I0131 03:47:26.633600 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:47:26Z","lastTransitionTime":"2026-01-31T03:47:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 03:47:26 crc kubenswrapper[4827]: I0131 03:47:26.736107 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:47:26 crc kubenswrapper[4827]: I0131 03:47:26.736162 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:47:26 crc kubenswrapper[4827]: I0131 03:47:26.736178 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:47:26 crc kubenswrapper[4827]: I0131 03:47:26.736201 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:47:26 crc kubenswrapper[4827]: I0131 03:47:26.736217 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:47:26Z","lastTransitionTime":"2026-01-31T03:47:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 03:47:26 crc kubenswrapper[4827]: I0131 03:47:26.839758 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:47:26 crc kubenswrapper[4827]: I0131 03:47:26.839807 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:47:26 crc kubenswrapper[4827]: I0131 03:47:26.839823 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:47:26 crc kubenswrapper[4827]: I0131 03:47:26.839846 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:47:26 crc kubenswrapper[4827]: I0131 03:47:26.839863 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:47:26Z","lastTransitionTime":"2026-01-31T03:47:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 03:47:26 crc kubenswrapper[4827]: I0131 03:47:26.942990 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:47:26 crc kubenswrapper[4827]: I0131 03:47:26.943043 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:47:26 crc kubenswrapper[4827]: I0131 03:47:26.943060 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:47:26 crc kubenswrapper[4827]: I0131 03:47:26.943083 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:47:26 crc kubenswrapper[4827]: I0131 03:47:26.943099 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:47:26Z","lastTransitionTime":"2026-01-31T03:47:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 03:47:27 crc kubenswrapper[4827]: I0131 03:47:27.049245 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:47:27 crc kubenswrapper[4827]: I0131 03:47:27.049319 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:47:27 crc kubenswrapper[4827]: I0131 03:47:27.049338 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:47:27 crc kubenswrapper[4827]: I0131 03:47:27.049364 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:47:27 crc kubenswrapper[4827]: I0131 03:47:27.049383 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:47:27Z","lastTransitionTime":"2026-01-31T03:47:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 03:47:27 crc kubenswrapper[4827]: I0131 03:47:27.076476 4827 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-16 22:09:57.186207712 +0000 UTC Jan 31 03:47:27 crc kubenswrapper[4827]: I0131 03:47:27.109823 4827 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 31 03:47:27 crc kubenswrapper[4827]: E0131 03:47:27.110026 4827 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 31 03:47:27 crc kubenswrapper[4827]: I0131 03:47:27.153148 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:47:27 crc kubenswrapper[4827]: I0131 03:47:27.153437 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:47:27 crc kubenswrapper[4827]: I0131 03:47:27.153581 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:47:27 crc kubenswrapper[4827]: I0131 03:47:27.153729 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:47:27 crc kubenswrapper[4827]: I0131 03:47:27.153873 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:47:27Z","lastTransitionTime":"2026-01-31T03:47:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 03:47:27 crc kubenswrapper[4827]: I0131 03:47:27.258580 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:47:27 crc kubenswrapper[4827]: I0131 03:47:27.258655 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:47:27 crc kubenswrapper[4827]: I0131 03:47:27.258680 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:47:27 crc kubenswrapper[4827]: I0131 03:47:27.258717 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:47:27 crc kubenswrapper[4827]: I0131 03:47:27.258743 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:47:27Z","lastTransitionTime":"2026-01-31T03:47:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 03:47:27 crc kubenswrapper[4827]: I0131 03:47:27.362501 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:47:27 crc kubenswrapper[4827]: I0131 03:47:27.362563 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:47:27 crc kubenswrapper[4827]: I0131 03:47:27.362586 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:47:27 crc kubenswrapper[4827]: I0131 03:47:27.362616 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:47:27 crc kubenswrapper[4827]: I0131 03:47:27.362638 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:47:27Z","lastTransitionTime":"2026-01-31T03:47:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 03:47:27 crc kubenswrapper[4827]: I0131 03:47:27.466715 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:47:27 crc kubenswrapper[4827]: I0131 03:47:27.466796 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:47:27 crc kubenswrapper[4827]: I0131 03:47:27.466816 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:47:27 crc kubenswrapper[4827]: I0131 03:47:27.466847 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:47:27 crc kubenswrapper[4827]: I0131 03:47:27.466870 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:47:27Z","lastTransitionTime":"2026-01-31T03:47:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 03:47:27 crc kubenswrapper[4827]: I0131 03:47:27.571088 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:47:27 crc kubenswrapper[4827]: I0131 03:47:27.571149 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:47:27 crc kubenswrapper[4827]: I0131 03:47:27.571168 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:47:27 crc kubenswrapper[4827]: I0131 03:47:27.571193 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:47:27 crc kubenswrapper[4827]: I0131 03:47:27.571211 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:47:27Z","lastTransitionTime":"2026-01-31T03:47:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 03:47:27 crc kubenswrapper[4827]: I0131 03:47:27.675952 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:47:27 crc kubenswrapper[4827]: I0131 03:47:27.676029 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:47:27 crc kubenswrapper[4827]: I0131 03:47:27.676047 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:47:27 crc kubenswrapper[4827]: I0131 03:47:27.676080 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:47:27 crc kubenswrapper[4827]: I0131 03:47:27.676102 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:47:27Z","lastTransitionTime":"2026-01-31T03:47:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 03:47:27 crc kubenswrapper[4827]: I0131 03:47:27.779594 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:47:27 crc kubenswrapper[4827]: I0131 03:47:27.779961 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:47:27 crc kubenswrapper[4827]: I0131 03:47:27.780173 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:47:27 crc kubenswrapper[4827]: I0131 03:47:27.780405 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:47:27 crc kubenswrapper[4827]: I0131 03:47:27.780637 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:47:27Z","lastTransitionTime":"2026-01-31T03:47:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 03:47:27 crc kubenswrapper[4827]: I0131 03:47:27.877226 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/cf80ec31-1f83-4ed6-84e3-055cf9c88bff-metrics-certs\") pod \"network-metrics-daemon-2shng\" (UID: \"cf80ec31-1f83-4ed6-84e3-055cf9c88bff\") " pod="openshift-multus/network-metrics-daemon-2shng" Jan 31 03:47:27 crc kubenswrapper[4827]: E0131 03:47:27.877461 4827 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 31 03:47:27 crc kubenswrapper[4827]: E0131 03:47:27.877741 4827 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/cf80ec31-1f83-4ed6-84e3-055cf9c88bff-metrics-certs podName:cf80ec31-1f83-4ed6-84e3-055cf9c88bff nodeName:}" failed. No retries permitted until 2026-01-31 03:47:31.877713542 +0000 UTC m=+44.564794031 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/cf80ec31-1f83-4ed6-84e3-055cf9c88bff-metrics-certs") pod "network-metrics-daemon-2shng" (UID: "cf80ec31-1f83-4ed6-84e3-055cf9c88bff") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 31 03:47:27 crc kubenswrapper[4827]: I0131 03:47:27.884396 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:47:27 crc kubenswrapper[4827]: I0131 03:47:27.884439 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:47:27 crc kubenswrapper[4827]: I0131 03:47:27.884455 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:47:27 crc kubenswrapper[4827]: I0131 03:47:27.884478 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:47:27 crc kubenswrapper[4827]: I0131 03:47:27.884495 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:47:27Z","lastTransitionTime":"2026-01-31T03:47:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 03:47:27 crc kubenswrapper[4827]: I0131 03:47:27.987976 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:47:27 crc kubenswrapper[4827]: I0131 03:47:27.988368 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:47:27 crc kubenswrapper[4827]: I0131 03:47:27.988523 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:47:27 crc kubenswrapper[4827]: I0131 03:47:27.988662 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:47:27 crc kubenswrapper[4827]: I0131 03:47:27.988810 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:47:27Z","lastTransitionTime":"2026-01-31T03:47:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 03:47:28 crc kubenswrapper[4827]: I0131 03:47:28.077276 4827 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-22 15:27:56.498594547 +0000 UTC Jan 31 03:47:28 crc kubenswrapper[4827]: I0131 03:47:28.092001 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:47:28 crc kubenswrapper[4827]: I0131 03:47:28.092070 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:47:28 crc kubenswrapper[4827]: I0131 03:47:28.092093 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:47:28 crc kubenswrapper[4827]: I0131 03:47:28.092123 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:47:28 crc kubenswrapper[4827]: I0131 03:47:28.092144 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:47:28Z","lastTransitionTime":"2026-01-31T03:47:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 03:47:28 crc kubenswrapper[4827]: I0131 03:47:28.110056 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 31 03:47:28 crc kubenswrapper[4827]: I0131 03:47:28.110075 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-2shng" Jan 31 03:47:28 crc kubenswrapper[4827]: I0131 03:47:28.110221 4827 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 03:47:28 crc kubenswrapper[4827]: E0131 03:47:28.110723 4827 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 31 03:47:28 crc kubenswrapper[4827]: E0131 03:47:28.110944 4827 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-2shng" podUID="cf80ec31-1f83-4ed6-84e3-055cf9c88bff" Jan 31 03:47:28 crc kubenswrapper[4827]: E0131 03:47:28.111096 4827 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 31 03:47:28 crc kubenswrapper[4827]: I0131 03:47:28.132620 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://65119775a6ab2e8f81a0443d8093e0570c2c49a1b081b7b694780d07eba7ef4d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:47:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":
\\\"cri-o://0fdf29d063e0940532d696d183771e56ce37b7b75f82548bf7ff5165ab20ccff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:47:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:47:28Z is after 2025-08-24T17:21:41Z" Jan 31 03:47:28 crc kubenswrapper[4827]: I0131 03:47:28.165805 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hj2zw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"da9e7773-a24b-4e8d-b479-97e2594db0d4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:09Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:09Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bb9ced0bc1beedf1c5779410e90fa61227f5df3de7fa950ab69e1ef44b4a4918\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:47:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mt4z5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2485bd5d17713b9b9184c6b56b1dfda19a229476eabd904d3eda350c30892753\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:47:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mt4z5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://70200918240e4bc9d8f7fcb1d95be3721faa9394058a1a9671da44d9b7a915e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:47:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mt4z5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://32e69930c4476bd3a2f767f012fb4a2e00de7d46da936c6f0d31029994108e48\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:47:12Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mt4z5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ae8d3d61d2579c0afddd6c9728935c463fec8760a9fc830473c905de5f40c44\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:47:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mt4z5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ed1f4882b03bd25a6073ed3ddeb73d5a0da61c8a8af3ebfab5c90a83ce6af80f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:47:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mt4z5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://df3bf53360747f9123455873f63ec6c0dedaa6e7743b25e0ebd796b4a5d0094b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://df3bf53360747f9123455873f63ec6c0dedaa6e7743b25e0ebd796b4a5d0094b\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-31T03:47:21Z\\\",\\\"message\\\":\\\"/informers/factory.go:160\\\\nI0131 03:47:21.368788 6278 reflector.go:311] Stopping reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0131 03:47:21.368815 6278 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0131 03:47:21.368968 6278 reflector.go:311] Stopping 
reflector *v1.Pod (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0131 03:47:21.369298 6278 reflector.go:311] Stopping reflector *v1alpha1.AdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0131 03:47:21.369588 6278 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0131 03:47:21.369687 6278 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0131 03:47:21.369768 6278 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0131 03:47:21.369822 6278 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0131 03:47:21.369871 6278 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0131 03:47:21.369983 6278 factory.go:656] Stopping watch factory\\\\nI0131 03:47:21.370079 6278 ovnkube.go:599] Stopped ovnkube\\\\nI0131 03:47:21.369737 6278 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0131 03:47:21.370042 6278 handler.go:208] Removed *v1.Node event handler 7\\\\nI01\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T03:47:20Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-hj2zw_openshift-ovn-kubernetes(da9e7773-a24b-4e8d-b479-97e2594db0d4)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mt4z5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2a0cc10519a103d1d49acd3e239636f7ac9b3a9daa533ce1f485b26ccfffad3d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:47:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mt4z5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://96ea399662528dfc8c4bac4c2b710685cb5897e67739704548ca01353d72672c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://96ea399662528dfc8c
4bac4c2b710685cb5897e67739704548ca01353d72672c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T03:47:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T03:47:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mt4z5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T03:47:09Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-hj2zw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:47:28Z is after 2025-08-24T17:21:41Z" Jan 31 03:47:28 crc kubenswrapper[4827]: I0131 03:47:28.183913 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-cl9c5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"77746487-d08f-4da6-82a3-bc7d8845841a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5098b5f6a44c89953722806e420dc66eaad66f11faf1c5b526bfc8ebf87364d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:47:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5xwfq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T03:47:11Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-cl9c5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:47:28Z is after 2025-08-24T17:21:41Z" Jan 31 03:47:28 crc kubenswrapper[4827]: I0131 03:47:28.194531 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:47:28 crc kubenswrapper[4827]: I0131 03:47:28.194589 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:47:28 crc kubenswrapper[4827]: I0131 03:47:28.194608 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:47:28 crc kubenswrapper[4827]: I0131 03:47:28.194636 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:47:28 crc kubenswrapper[4827]: I0131 03:47:28.194654 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:47:28Z","lastTransitionTime":"2026-01-31T03:47:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 03:47:28 crc kubenswrapper[4827]: I0131 03:47:28.201711 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-q9q8q" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a696063c-4553-4032-8038-9900f09d4031\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3b899def12de10c70999d453a62d09ed0eec7e6f059a55a7e3f8e4b3ef248205\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:47:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\
",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ckwx4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T03:47:09Z\\\"}}\" for pod \"openshift-multus\"/\"multus-q9q8q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:47:28Z 
is after 2025-08-24T17:21:41Z" Jan 31 03:47:28 crc kubenswrapper[4827]: I0131 03:47:28.229093 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-gjc5t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b5dbff7a-4ed0-4c17-bd01-1888199225b3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cf2e9dfa02cc1d2d5e655654c4d731363a32dec4d67423160996c24260e04a79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:47:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fs4bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\"
:\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c615ec47f98d56124c11d46f6a3875b5718974234f752615411e14b5d8913dcc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c615ec47f98d56124c11d46f6a3875b5718974234f752615411e14b5d8913dcc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T03:47:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T03:47:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fs4bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c9d2a6506fd6b67003ff94a906cb0c74a7eefa9f389d5fe3929fcb743b877f8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"contai
nerID\\\":\\\"cri-o://c9d2a6506fd6b67003ff94a906cb0c74a7eefa9f389d5fe3929fcb743b877f8a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T03:47:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T03:47:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fs4bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f5b9fd1e8570701b512508eb10490fa6c4e5fdc63b7fdb0c4810896a46be65a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f5b9fd1e8570701b512508eb10490fa6c4e5fdc63b7fdb0c4810896a46be65a7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T03:47:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T03:47:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\
\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fs4bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b944c8ab4e6079ae5a7a5c52f9b336fc71aa3273ed087499ffa1cc5396053660\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b944c8ab4e6079ae5a7a5c52f9b336fc71aa3273ed087499ffa1cc5396053660\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T03:47:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T03:47:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fs4bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aac07d8c3c396081f637968cd1c64525967cd198273719be0af7bee01d35fb39\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\
"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aac07d8c3c396081f637968cd1c64525967cd198273719be0af7bee01d35fb39\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T03:47:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T03:47:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fs4bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://76e43eb316b4b459d9232b72620b1892bf11a896f007d5c879381d8349aa1178\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://76e43eb316b4b459d9232b72620b1892bf11a896f007d5c879381d8349aa1178\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T03:47:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T03:47:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fs4bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"p
odIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T03:47:09Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-gjc5t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:47:28Z is after 2025-08-24T17:21:41Z" Jan 31 03:47:28 crc kubenswrapper[4827]: I0131 03:47:28.248986 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4273172d-ec24-4540-85cb-efc58aed3421\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:46:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:46:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:46:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4146d39a1a819389ddaee9e11df66783f6b1b98ee5fb2a244d8ba9ff2ecc88f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\
"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:46:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6b86ba80bd25344b29df1636aa3d8cc4cecb0e5f9c0005b4896d3ee86cd80626\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:46:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8ae08b52cf4a0a6cc4589bbf0f0fbc8527ebac3455b90185928f6fa0b6c52874\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:46:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource
-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://125b31c980495c37837eedbbbd38b795fd6e568a429da5c2914799306e7b3a46\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cfc1fe8805b7b20bc4e718ed74d4c2f265e69ed3fdc328c0f1da81450bb78bde\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-31T03:47:02Z\\\",\\\"message\\\":\\\"W0131 03:46:51.337568 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0131 03:46:51.338014 1 crypto.go:601] Generating new CA for check-endpoints-signer@1769831211 cert, and key in /tmp/serving-cert-4099090867/serving-signer.crt, /tmp/serving-cert-4099090867/serving-signer.key\\\\nI0131 03:46:51.617433 1 observer_polling.go:159] Starting file observer\\\\nW0131 03:46:51.621205 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0131 03:46:51.621653 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0131 03:46:51.625463 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-4099090867/tls.crt::/tmp/serving-cert-4099090867/tls.key\\\\\\\"\\\\nF0131 03:47:02.085590 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T03:46:51Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:47:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e3b2757893ba910c36404f09734ea47eee3ac5a8cfcc4c06ee4ed2ada48937b1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:46:51Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f40eed75b4eac164f95a4e1a719fe3e1da60abf85d533da313ceeb6f4fb07efd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f40eed75b4eac164f95a4e1a719fe3e1da
60abf85d533da313ceeb6f4fb07efd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T03:46:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T03:46:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T03:46:48Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:47:28Z is after 2025-08-24T17:21:41Z" Jan 31 03:47:28 crc kubenswrapper[4827]: I0131 03:47:28.265519 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-jxh94" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e63dbb73-e1a2-4796-83c5-2a88e55566b5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://65bf7844256848a99420ff2a52724a942bd9ee07e0e587b818429ac544865232\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:47:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-94gkn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://00b8856a5712a7470f8bc58fd07644c58113fde5
a273faa89586f36381942c50\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:47:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-94gkn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T03:47:09Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-jxh94\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:47:28Z is after 2025-08-24T17:21:41Z" Jan 31 03:47:28 crc kubenswrapper[4827]: I0131 03:47:28.287435 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:08Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:47:28Z is after 2025-08-24T17:21:41Z" Jan 31 03:47:28 crc kubenswrapper[4827]: I0131 03:47:28.296868 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:47:28 crc kubenswrapper[4827]: I0131 03:47:28.296984 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:47:28 crc kubenswrapper[4827]: I0131 03:47:28.297009 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:47:28 crc kubenswrapper[4827]: I0131 03:47:28.297042 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:47:28 crc kubenswrapper[4827]: I0131 03:47:28.297061 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:47:28Z","lastTransitionTime":"2026-01-31T03:47:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 03:47:28 crc kubenswrapper[4827]: I0131 03:47:28.309007 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:08Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:47:28Z is after 2025-08-24T17:21:41Z" Jan 31 03:47:28 crc kubenswrapper[4827]: I0131 03:47:28.324372 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-w7v8l" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c10db775-d306-4f15-97dd-b1dfed7c89e5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cbc16c20edbd99491eea6e34116e6014fd587c40800c87bff9643e9c0cb6e185\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:47:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4ch9g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T03:47:09Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-w7v8l\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:47:28Z is after 2025-08-24T17:21:41Z" Jan 31 03:47:28 crc kubenswrapper[4827]: I0131 03:47:28.342552 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-l5njv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f81e67c9-6345-48e1-91e3-794421cb3fdd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e761c28f2d53c1a50d6ce7467012664d9d9d717222b6fc648c642ad97057113f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d7
93426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:47:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rblt7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b967e5b38d5efa02b8cda05cf383b6aa9f24b819d055b86d12afe5290d6d5847\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:47:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rblt7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T03:47:22Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-l5njv\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:47:28Z is after 2025-08-24T17:21:41Z" Jan 31 03:47:28 crc kubenswrapper[4827]: I0131 03:47:28.360696 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-2shng" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cf80ec31-1f83-4ed6-84e3-055cf9c88bff\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:24Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:24Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hjlkt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hjlkt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T03:47:24Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-2shng\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:47:28Z is after 2025-08-24T17:21:41Z" Jan 31 03:47:28 crc 
kubenswrapper[4827]: I0131 03:47:28.381873 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"64294dab-2843-4893-8b19-59f3ae404e02\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:46:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:46:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:46:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ff49646239dea28804978f1b3b721dcc9454572ff2301d0c92e146dacb2a3e5a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:46:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fb9a8fcde75a532b1906ceea4704ddfecbffeb17456377272e5792f200ee67d5\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:46:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://73a7c7196fc77ac970ea3114b593db0c9bef943c760526721ab5492975884237\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:46:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ce197f7aa8a7bebdecdbd760e9029786331bd9497f724b0a57f57278a5b1b04c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17
ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:46:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T03:46:48Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:47:28Z is after 2025-08-24T17:21:41Z" Jan 31 03:47:28 crc kubenswrapper[4827]: I0131 03:47:28.400519 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:47:28 crc kubenswrapper[4827]: I0131 03:47:28.400569 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:47:28 crc kubenswrapper[4827]: I0131 03:47:28.400589 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:47:28 crc kubenswrapper[4827]: I0131 03:47:28.400616 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:47:28 crc kubenswrapper[4827]: I0131 03:47:28.400638 4827 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:47:28Z","lastTransitionTime":"2026-01-31T03:47:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 03:47:28 crc kubenswrapper[4827]: I0131 03:47:28.401477 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:08Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:47:28Z is after 2025-08-24T17:21:41Z" Jan 31 03:47:28 crc kubenswrapper[4827]: I0131 03:47:28.420603 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a0fbdd57c20e0d5bda95b44f22f117b07306ef48d75d372d958559048474af9b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:47:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-31T03:47:28Z is after 2025-08-24T17:21:41Z" Jan 31 03:47:28 crc kubenswrapper[4827]: I0131 03:47:28.438417 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:11Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:11Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://955a3f09f24522656e31cfabe08613a5129fe3d1d0693756697e3ef0469e062a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:47:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\
\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:47:28Z is after 2025-08-24T17:21:41Z" Jan 31 03:47:28 crc kubenswrapper[4827]: I0131 03:47:28.504330 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:47:28 crc kubenswrapper[4827]: I0131 03:47:28.504586 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:47:28 crc kubenswrapper[4827]: I0131 03:47:28.504607 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:47:28 crc kubenswrapper[4827]: I0131 03:47:28.504631 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:47:28 crc kubenswrapper[4827]: I0131 03:47:28.504652 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:47:28Z","lastTransitionTime":"2026-01-31T03:47:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 03:47:28 crc kubenswrapper[4827]: I0131 03:47:28.608213 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:47:28 crc kubenswrapper[4827]: I0131 03:47:28.608304 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:47:28 crc kubenswrapper[4827]: I0131 03:47:28.608328 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:47:28 crc kubenswrapper[4827]: I0131 03:47:28.608360 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:47:28 crc kubenswrapper[4827]: I0131 03:47:28.608383 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:47:28Z","lastTransitionTime":"2026-01-31T03:47:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 03:47:28 crc kubenswrapper[4827]: I0131 03:47:28.711802 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:47:28 crc kubenswrapper[4827]: I0131 03:47:28.711966 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:47:28 crc kubenswrapper[4827]: I0131 03:47:28.711990 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:47:28 crc kubenswrapper[4827]: I0131 03:47:28.712015 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:47:28 crc kubenswrapper[4827]: I0131 03:47:28.712032 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:47:28Z","lastTransitionTime":"2026-01-31T03:47:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 03:47:28 crc kubenswrapper[4827]: I0131 03:47:28.815530 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:47:28 crc kubenswrapper[4827]: I0131 03:47:28.815594 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:47:28 crc kubenswrapper[4827]: I0131 03:47:28.815621 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:47:28 crc kubenswrapper[4827]: I0131 03:47:28.815647 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:47:28 crc kubenswrapper[4827]: I0131 03:47:28.815667 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:47:28Z","lastTransitionTime":"2026-01-31T03:47:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 03:47:28 crc kubenswrapper[4827]: I0131 03:47:28.918755 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:47:28 crc kubenswrapper[4827]: I0131 03:47:28.918836 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:47:28 crc kubenswrapper[4827]: I0131 03:47:28.918855 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:47:28 crc kubenswrapper[4827]: I0131 03:47:28.918914 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:47:28 crc kubenswrapper[4827]: I0131 03:47:28.918936 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:47:28Z","lastTransitionTime":"2026-01-31T03:47:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 03:47:29 crc kubenswrapper[4827]: I0131 03:47:29.022529 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:47:29 crc kubenswrapper[4827]: I0131 03:47:29.022603 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:47:29 crc kubenswrapper[4827]: I0131 03:47:29.022621 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:47:29 crc kubenswrapper[4827]: I0131 03:47:29.022647 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:47:29 crc kubenswrapper[4827]: I0131 03:47:29.022667 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:47:29Z","lastTransitionTime":"2026-01-31T03:47:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 03:47:29 crc kubenswrapper[4827]: I0131 03:47:29.078240 4827 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-15 21:03:28.811544709 +0000 UTC Jan 31 03:47:29 crc kubenswrapper[4827]: I0131 03:47:29.109360 4827 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 31 03:47:29 crc kubenswrapper[4827]: E0131 03:47:29.109644 4827 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 31 03:47:29 crc kubenswrapper[4827]: I0131 03:47:29.126162 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:47:29 crc kubenswrapper[4827]: I0131 03:47:29.126221 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:47:29 crc kubenswrapper[4827]: I0131 03:47:29.126251 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:47:29 crc kubenswrapper[4827]: I0131 03:47:29.126283 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:47:29 crc kubenswrapper[4827]: I0131 03:47:29.126309 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:47:29Z","lastTransitionTime":"2026-01-31T03:47:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 03:47:29 crc kubenswrapper[4827]: I0131 03:47:29.229644 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:47:29 crc kubenswrapper[4827]: I0131 03:47:29.229715 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:47:29 crc kubenswrapper[4827]: I0131 03:47:29.229735 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:47:29 crc kubenswrapper[4827]: I0131 03:47:29.229761 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:47:29 crc kubenswrapper[4827]: I0131 03:47:29.229780 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:47:29Z","lastTransitionTime":"2026-01-31T03:47:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 03:47:29 crc kubenswrapper[4827]: I0131 03:47:29.333839 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:47:29 crc kubenswrapper[4827]: I0131 03:47:29.333968 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:47:29 crc kubenswrapper[4827]: I0131 03:47:29.333996 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:47:29 crc kubenswrapper[4827]: I0131 03:47:29.334031 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:47:29 crc kubenswrapper[4827]: I0131 03:47:29.334056 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:47:29Z","lastTransitionTime":"2026-01-31T03:47:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 03:47:29 crc kubenswrapper[4827]: I0131 03:47:29.437784 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:47:29 crc kubenswrapper[4827]: I0131 03:47:29.437869 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:47:29 crc kubenswrapper[4827]: I0131 03:47:29.437951 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:47:29 crc kubenswrapper[4827]: I0131 03:47:29.437983 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:47:29 crc kubenswrapper[4827]: I0131 03:47:29.438007 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:47:29Z","lastTransitionTime":"2026-01-31T03:47:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 03:47:29 crc kubenswrapper[4827]: I0131 03:47:29.541033 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:47:29 crc kubenswrapper[4827]: I0131 03:47:29.541176 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:47:29 crc kubenswrapper[4827]: I0131 03:47:29.541250 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:47:29 crc kubenswrapper[4827]: I0131 03:47:29.541284 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:47:29 crc kubenswrapper[4827]: I0131 03:47:29.541305 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:47:29Z","lastTransitionTime":"2026-01-31T03:47:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 03:47:29 crc kubenswrapper[4827]: I0131 03:47:29.643507 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:47:29 crc kubenswrapper[4827]: I0131 03:47:29.643586 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:47:29 crc kubenswrapper[4827]: I0131 03:47:29.643608 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:47:29 crc kubenswrapper[4827]: I0131 03:47:29.643641 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:47:29 crc kubenswrapper[4827]: I0131 03:47:29.643663 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:47:29Z","lastTransitionTime":"2026-01-31T03:47:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 03:47:29 crc kubenswrapper[4827]: I0131 03:47:29.746742 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:47:29 crc kubenswrapper[4827]: I0131 03:47:29.746783 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:47:29 crc kubenswrapper[4827]: I0131 03:47:29.746808 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:47:29 crc kubenswrapper[4827]: I0131 03:47:29.746823 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:47:29 crc kubenswrapper[4827]: I0131 03:47:29.746834 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:47:29Z","lastTransitionTime":"2026-01-31T03:47:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 03:47:29 crc kubenswrapper[4827]: I0131 03:47:29.850171 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:47:29 crc kubenswrapper[4827]: I0131 03:47:29.850237 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:47:29 crc kubenswrapper[4827]: I0131 03:47:29.850254 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:47:29 crc kubenswrapper[4827]: I0131 03:47:29.850279 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:47:29 crc kubenswrapper[4827]: I0131 03:47:29.850297 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:47:29Z","lastTransitionTime":"2026-01-31T03:47:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 03:47:29 crc kubenswrapper[4827]: I0131 03:47:29.953051 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:47:29 crc kubenswrapper[4827]: I0131 03:47:29.953115 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:47:29 crc kubenswrapper[4827]: I0131 03:47:29.953133 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:47:29 crc kubenswrapper[4827]: I0131 03:47:29.953159 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:47:29 crc kubenswrapper[4827]: I0131 03:47:29.953177 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:47:29Z","lastTransitionTime":"2026-01-31T03:47:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 03:47:30 crc kubenswrapper[4827]: I0131 03:47:30.056183 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:47:30 crc kubenswrapper[4827]: I0131 03:47:30.056243 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:47:30 crc kubenswrapper[4827]: I0131 03:47:30.056260 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:47:30 crc kubenswrapper[4827]: I0131 03:47:30.056282 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:47:30 crc kubenswrapper[4827]: I0131 03:47:30.056300 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:47:30Z","lastTransitionTime":"2026-01-31T03:47:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 03:47:30 crc kubenswrapper[4827]: I0131 03:47:30.078499 4827 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-14 19:45:45.722758771 +0000 UTC Jan 31 03:47:30 crc kubenswrapper[4827]: I0131 03:47:30.109371 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 31 03:47:30 crc kubenswrapper[4827]: I0131 03:47:30.109485 4827 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 03:47:30 crc kubenswrapper[4827]: E0131 03:47:30.109575 4827 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 31 03:47:30 crc kubenswrapper[4827]: I0131 03:47:30.109642 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-2shng" Jan 31 03:47:30 crc kubenswrapper[4827]: E0131 03:47:30.109819 4827 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 31 03:47:30 crc kubenswrapper[4827]: E0131 03:47:30.110004 4827 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-2shng" podUID="cf80ec31-1f83-4ed6-84e3-055cf9c88bff" Jan 31 03:47:30 crc kubenswrapper[4827]: I0131 03:47:30.158720 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:47:30 crc kubenswrapper[4827]: I0131 03:47:30.158778 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:47:30 crc kubenswrapper[4827]: I0131 03:47:30.158795 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:47:30 crc kubenswrapper[4827]: I0131 03:47:30.158822 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:47:30 crc kubenswrapper[4827]: I0131 03:47:30.158842 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:47:30Z","lastTransitionTime":"2026-01-31T03:47:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 03:47:30 crc kubenswrapper[4827]: I0131 03:47:30.261991 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:47:30 crc kubenswrapper[4827]: I0131 03:47:30.262055 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:47:30 crc kubenswrapper[4827]: I0131 03:47:30.262079 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:47:30 crc kubenswrapper[4827]: I0131 03:47:30.262106 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:47:30 crc kubenswrapper[4827]: I0131 03:47:30.262125 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:47:30Z","lastTransitionTime":"2026-01-31T03:47:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 03:47:30 crc kubenswrapper[4827]: I0131 03:47:30.365379 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:47:30 crc kubenswrapper[4827]: I0131 03:47:30.365428 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:47:30 crc kubenswrapper[4827]: I0131 03:47:30.365444 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:47:30 crc kubenswrapper[4827]: I0131 03:47:30.365466 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:47:30 crc kubenswrapper[4827]: I0131 03:47:30.365482 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:47:30Z","lastTransitionTime":"2026-01-31T03:47:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 03:47:30 crc kubenswrapper[4827]: I0131 03:47:30.468746 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:47:30 crc kubenswrapper[4827]: I0131 03:47:30.468822 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:47:30 crc kubenswrapper[4827]: I0131 03:47:30.468846 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:47:30 crc kubenswrapper[4827]: I0131 03:47:30.468908 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:47:30 crc kubenswrapper[4827]: I0131 03:47:30.468932 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:47:30Z","lastTransitionTime":"2026-01-31T03:47:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 03:47:30 crc kubenswrapper[4827]: I0131 03:47:30.572053 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:47:30 crc kubenswrapper[4827]: I0131 03:47:30.572115 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:47:30 crc kubenswrapper[4827]: I0131 03:47:30.572134 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:47:30 crc kubenswrapper[4827]: I0131 03:47:30.572160 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:47:30 crc kubenswrapper[4827]: I0131 03:47:30.572181 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:47:30Z","lastTransitionTime":"2026-01-31T03:47:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 03:47:30 crc kubenswrapper[4827]: I0131 03:47:30.675820 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:47:30 crc kubenswrapper[4827]: I0131 03:47:30.675941 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:47:30 crc kubenswrapper[4827]: I0131 03:47:30.675969 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:47:30 crc kubenswrapper[4827]: I0131 03:47:30.676008 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:47:30 crc kubenswrapper[4827]: I0131 03:47:30.676027 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:47:30Z","lastTransitionTime":"2026-01-31T03:47:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 03:47:30 crc kubenswrapper[4827]: I0131 03:47:30.779399 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:47:30 crc kubenswrapper[4827]: I0131 03:47:30.779436 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:47:30 crc kubenswrapper[4827]: I0131 03:47:30.779448 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:47:30 crc kubenswrapper[4827]: I0131 03:47:30.779464 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:47:30 crc kubenswrapper[4827]: I0131 03:47:30.779474 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:47:30Z","lastTransitionTime":"2026-01-31T03:47:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 03:47:30 crc kubenswrapper[4827]: I0131 03:47:30.882798 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:47:30 crc kubenswrapper[4827]: I0131 03:47:30.882858 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:47:30 crc kubenswrapper[4827]: I0131 03:47:30.882912 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:47:30 crc kubenswrapper[4827]: I0131 03:47:30.882946 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:47:30 crc kubenswrapper[4827]: I0131 03:47:30.882967 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:47:30Z","lastTransitionTime":"2026-01-31T03:47:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 03:47:30 crc kubenswrapper[4827]: I0131 03:47:30.986107 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:47:30 crc kubenswrapper[4827]: I0131 03:47:30.986179 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:47:30 crc kubenswrapper[4827]: I0131 03:47:30.986204 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:47:30 crc kubenswrapper[4827]: I0131 03:47:30.986231 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:47:30 crc kubenswrapper[4827]: I0131 03:47:30.986249 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:47:30Z","lastTransitionTime":"2026-01-31T03:47:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 03:47:31 crc kubenswrapper[4827]: I0131 03:47:31.079187 4827 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-12 04:50:30.005475966 +0000 UTC Jan 31 03:47:31 crc kubenswrapper[4827]: I0131 03:47:31.090183 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:47:31 crc kubenswrapper[4827]: I0131 03:47:31.090240 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:47:31 crc kubenswrapper[4827]: I0131 03:47:31.090261 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:47:31 crc kubenswrapper[4827]: I0131 03:47:31.090286 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:47:31 crc kubenswrapper[4827]: I0131 03:47:31.090303 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:47:31Z","lastTransitionTime":"2026-01-31T03:47:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 03:47:31 crc kubenswrapper[4827]: I0131 03:47:31.109822 4827 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 31 03:47:31 crc kubenswrapper[4827]: E0131 03:47:31.110085 4827 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 31 03:47:31 crc kubenswrapper[4827]: I0131 03:47:31.192966 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:47:31 crc kubenswrapper[4827]: I0131 03:47:31.193002 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:47:31 crc kubenswrapper[4827]: I0131 03:47:31.193019 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:47:31 crc kubenswrapper[4827]: I0131 03:47:31.193036 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:47:31 crc kubenswrapper[4827]: I0131 03:47:31.193047 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:47:31Z","lastTransitionTime":"2026-01-31T03:47:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 03:47:31 crc kubenswrapper[4827]: I0131 03:47:31.295825 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:47:31 crc kubenswrapper[4827]: I0131 03:47:31.295907 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:47:31 crc kubenswrapper[4827]: I0131 03:47:31.295928 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:47:31 crc kubenswrapper[4827]: I0131 03:47:31.295949 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:47:31 crc kubenswrapper[4827]: I0131 03:47:31.295963 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:47:31Z","lastTransitionTime":"2026-01-31T03:47:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 03:47:31 crc kubenswrapper[4827]: I0131 03:47:31.399098 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:47:31 crc kubenswrapper[4827]: I0131 03:47:31.399167 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:47:31 crc kubenswrapper[4827]: I0131 03:47:31.399184 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:47:31 crc kubenswrapper[4827]: I0131 03:47:31.399265 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:47:31 crc kubenswrapper[4827]: I0131 03:47:31.399289 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:47:31Z","lastTransitionTime":"2026-01-31T03:47:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 03:47:31 crc kubenswrapper[4827]: I0131 03:47:31.502137 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:47:31 crc kubenswrapper[4827]: I0131 03:47:31.502191 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:47:31 crc kubenswrapper[4827]: I0131 03:47:31.502205 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:47:31 crc kubenswrapper[4827]: I0131 03:47:31.502222 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:47:31 crc kubenswrapper[4827]: I0131 03:47:31.502234 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:47:31Z","lastTransitionTime":"2026-01-31T03:47:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 03:47:31 crc kubenswrapper[4827]: I0131 03:47:31.605269 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:47:31 crc kubenswrapper[4827]: I0131 03:47:31.605672 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:47:31 crc kubenswrapper[4827]: I0131 03:47:31.605690 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:47:31 crc kubenswrapper[4827]: I0131 03:47:31.605716 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:47:31 crc kubenswrapper[4827]: I0131 03:47:31.605731 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:47:31Z","lastTransitionTime":"2026-01-31T03:47:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 03:47:31 crc kubenswrapper[4827]: I0131 03:47:31.708278 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:47:31 crc kubenswrapper[4827]: I0131 03:47:31.708365 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:47:31 crc kubenswrapper[4827]: I0131 03:47:31.708382 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:47:31 crc kubenswrapper[4827]: I0131 03:47:31.708413 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:47:31 crc kubenswrapper[4827]: I0131 03:47:31.708431 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:47:31Z","lastTransitionTime":"2026-01-31T03:47:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 03:47:31 crc kubenswrapper[4827]: I0131 03:47:31.810923 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:47:31 crc kubenswrapper[4827]: I0131 03:47:31.810996 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:47:31 crc kubenswrapper[4827]: I0131 03:47:31.811021 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:47:31 crc kubenswrapper[4827]: I0131 03:47:31.811055 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:47:31 crc kubenswrapper[4827]: I0131 03:47:31.811076 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:47:31Z","lastTransitionTime":"2026-01-31T03:47:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 03:47:31 crc kubenswrapper[4827]: I0131 03:47:31.914394 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:47:31 crc kubenswrapper[4827]: I0131 03:47:31.914456 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:47:31 crc kubenswrapper[4827]: I0131 03:47:31.914481 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:47:31 crc kubenswrapper[4827]: I0131 03:47:31.914512 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:47:31 crc kubenswrapper[4827]: I0131 03:47:31.914534 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:47:31Z","lastTransitionTime":"2026-01-31T03:47:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 03:47:31 crc kubenswrapper[4827]: I0131 03:47:31.929288 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/cf80ec31-1f83-4ed6-84e3-055cf9c88bff-metrics-certs\") pod \"network-metrics-daemon-2shng\" (UID: \"cf80ec31-1f83-4ed6-84e3-055cf9c88bff\") " pod="openshift-multus/network-metrics-daemon-2shng" Jan 31 03:47:31 crc kubenswrapper[4827]: E0131 03:47:31.929473 4827 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 31 03:47:31 crc kubenswrapper[4827]: E0131 03:47:31.929561 4827 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/cf80ec31-1f83-4ed6-84e3-055cf9c88bff-metrics-certs podName:cf80ec31-1f83-4ed6-84e3-055cf9c88bff nodeName:}" failed. No retries permitted until 2026-01-31 03:47:39.929534176 +0000 UTC m=+52.616614655 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/cf80ec31-1f83-4ed6-84e3-055cf9c88bff-metrics-certs") pod "network-metrics-daemon-2shng" (UID: "cf80ec31-1f83-4ed6-84e3-055cf9c88bff") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 31 03:47:32 crc kubenswrapper[4827]: I0131 03:47:32.017852 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:47:32 crc kubenswrapper[4827]: I0131 03:47:32.017950 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:47:32 crc kubenswrapper[4827]: I0131 03:47:32.017972 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:47:32 crc kubenswrapper[4827]: I0131 03:47:32.017998 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:47:32 crc kubenswrapper[4827]: I0131 03:47:32.018016 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:47:32Z","lastTransitionTime":"2026-01-31T03:47:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 03:47:32 crc kubenswrapper[4827]: I0131 03:47:32.079662 4827 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-13 20:13:26.546518412 +0000 UTC Jan 31 03:47:32 crc kubenswrapper[4827]: I0131 03:47:32.109280 4827 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 31 03:47:32 crc kubenswrapper[4827]: I0131 03:47:32.109407 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 03:47:32 crc kubenswrapper[4827]: I0131 03:47:32.109293 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-2shng" Jan 31 03:47:32 crc kubenswrapper[4827]: E0131 03:47:32.109487 4827 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 31 03:47:32 crc kubenswrapper[4827]: E0131 03:47:32.109568 4827 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 31 03:47:32 crc kubenswrapper[4827]: E0131 03:47:32.109715 4827 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-2shng" podUID="cf80ec31-1f83-4ed6-84e3-055cf9c88bff" Jan 31 03:47:32 crc kubenswrapper[4827]: I0131 03:47:32.121526 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:47:32 crc kubenswrapper[4827]: I0131 03:47:32.121579 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:47:32 crc kubenswrapper[4827]: I0131 03:47:32.121596 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:47:32 crc kubenswrapper[4827]: I0131 03:47:32.121618 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:47:32 crc kubenswrapper[4827]: I0131 03:47:32.121635 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:47:32Z","lastTransitionTime":"2026-01-31T03:47:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 03:47:32 crc kubenswrapper[4827]: I0131 03:47:32.225037 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:47:32 crc kubenswrapper[4827]: I0131 03:47:32.225099 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:47:32 crc kubenswrapper[4827]: I0131 03:47:32.225115 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:47:32 crc kubenswrapper[4827]: I0131 03:47:32.225139 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:47:32 crc kubenswrapper[4827]: I0131 03:47:32.225157 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:47:32Z","lastTransitionTime":"2026-01-31T03:47:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 03:47:32 crc kubenswrapper[4827]: I0131 03:47:32.328811 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:47:32 crc kubenswrapper[4827]: I0131 03:47:32.328945 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:47:32 crc kubenswrapper[4827]: I0131 03:47:32.328977 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:47:32 crc kubenswrapper[4827]: I0131 03:47:32.329006 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:47:32 crc kubenswrapper[4827]: I0131 03:47:32.329024 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:47:32Z","lastTransitionTime":"2026-01-31T03:47:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 03:47:32 crc kubenswrapper[4827]: I0131 03:47:32.431942 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:47:32 crc kubenswrapper[4827]: I0131 03:47:32.432001 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:47:32 crc kubenswrapper[4827]: I0131 03:47:32.432021 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:47:32 crc kubenswrapper[4827]: I0131 03:47:32.432044 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:47:32 crc kubenswrapper[4827]: I0131 03:47:32.432062 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:47:32Z","lastTransitionTime":"2026-01-31T03:47:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 03:47:32 crc kubenswrapper[4827]: I0131 03:47:32.535240 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:47:32 crc kubenswrapper[4827]: I0131 03:47:32.535321 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:47:32 crc kubenswrapper[4827]: I0131 03:47:32.535343 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:47:32 crc kubenswrapper[4827]: I0131 03:47:32.535374 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:47:32 crc kubenswrapper[4827]: I0131 03:47:32.535395 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:47:32Z","lastTransitionTime":"2026-01-31T03:47:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 03:47:32 crc kubenswrapper[4827]: I0131 03:47:32.638828 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:47:32 crc kubenswrapper[4827]: I0131 03:47:32.638919 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:47:32 crc kubenswrapper[4827]: I0131 03:47:32.638938 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:47:32 crc kubenswrapper[4827]: I0131 03:47:32.638962 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:47:32 crc kubenswrapper[4827]: I0131 03:47:32.638979 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:47:32Z","lastTransitionTime":"2026-01-31T03:47:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 03:47:32 crc kubenswrapper[4827]: I0131 03:47:32.743471 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:47:32 crc kubenswrapper[4827]: I0131 03:47:32.743536 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:47:32 crc kubenswrapper[4827]: I0131 03:47:32.743553 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:47:32 crc kubenswrapper[4827]: I0131 03:47:32.743579 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:47:32 crc kubenswrapper[4827]: I0131 03:47:32.743602 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:47:32Z","lastTransitionTime":"2026-01-31T03:47:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 03:47:32 crc kubenswrapper[4827]: I0131 03:47:32.851055 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:47:32 crc kubenswrapper[4827]: I0131 03:47:32.851197 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:47:32 crc kubenswrapper[4827]: I0131 03:47:32.851220 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:47:32 crc kubenswrapper[4827]: I0131 03:47:32.851248 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:47:32 crc kubenswrapper[4827]: I0131 03:47:32.851269 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:47:32Z","lastTransitionTime":"2026-01-31T03:47:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 03:47:32 crc kubenswrapper[4827]: I0131 03:47:32.954583 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:47:32 crc kubenswrapper[4827]: I0131 03:47:32.954653 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:47:32 crc kubenswrapper[4827]: I0131 03:47:32.954674 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:47:32 crc kubenswrapper[4827]: I0131 03:47:32.954699 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:47:32 crc kubenswrapper[4827]: I0131 03:47:32.954717 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:47:32Z","lastTransitionTime":"2026-01-31T03:47:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 03:47:33 crc kubenswrapper[4827]: I0131 03:47:33.057919 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:47:33 crc kubenswrapper[4827]: I0131 03:47:33.057983 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:47:33 crc kubenswrapper[4827]: I0131 03:47:33.058528 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:47:33 crc kubenswrapper[4827]: I0131 03:47:33.058953 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:47:33 crc kubenswrapper[4827]: I0131 03:47:33.059013 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:47:33Z","lastTransitionTime":"2026-01-31T03:47:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 03:47:33 crc kubenswrapper[4827]: I0131 03:47:33.080433 4827 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-22 12:40:32.212449184 +0000 UTC Jan 31 03:47:33 crc kubenswrapper[4827]: I0131 03:47:33.109977 4827 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 31 03:47:33 crc kubenswrapper[4827]: E0131 03:47:33.110174 4827 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 31 03:47:33 crc kubenswrapper[4827]: I0131 03:47:33.162069 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:47:33 crc kubenswrapper[4827]: I0131 03:47:33.162134 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:47:33 crc kubenswrapper[4827]: I0131 03:47:33.162156 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:47:33 crc kubenswrapper[4827]: I0131 03:47:33.162187 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:47:33 crc kubenswrapper[4827]: I0131 03:47:33.162212 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:47:33Z","lastTransitionTime":"2026-01-31T03:47:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 03:47:33 crc kubenswrapper[4827]: I0131 03:47:33.265200 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:47:33 crc kubenswrapper[4827]: I0131 03:47:33.265267 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:47:33 crc kubenswrapper[4827]: I0131 03:47:33.265288 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:47:33 crc kubenswrapper[4827]: I0131 03:47:33.265313 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:47:33 crc kubenswrapper[4827]: I0131 03:47:33.265331 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:47:33Z","lastTransitionTime":"2026-01-31T03:47:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 03:47:33 crc kubenswrapper[4827]: I0131 03:47:33.368976 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:47:33 crc kubenswrapper[4827]: I0131 03:47:33.369109 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:47:33 crc kubenswrapper[4827]: I0131 03:47:33.369126 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:47:33 crc kubenswrapper[4827]: I0131 03:47:33.369155 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:47:33 crc kubenswrapper[4827]: I0131 03:47:33.369175 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:47:33Z","lastTransitionTime":"2026-01-31T03:47:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 03:47:33 crc kubenswrapper[4827]: I0131 03:47:33.472452 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:47:33 crc kubenswrapper[4827]: I0131 03:47:33.472552 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:47:33 crc kubenswrapper[4827]: I0131 03:47:33.472571 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:47:33 crc kubenswrapper[4827]: I0131 03:47:33.472635 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:47:33 crc kubenswrapper[4827]: I0131 03:47:33.472659 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:47:33Z","lastTransitionTime":"2026-01-31T03:47:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 03:47:33 crc kubenswrapper[4827]: I0131 03:47:33.576602 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:47:33 crc kubenswrapper[4827]: I0131 03:47:33.576664 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:47:33 crc kubenswrapper[4827]: I0131 03:47:33.576681 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:47:33 crc kubenswrapper[4827]: I0131 03:47:33.576704 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:47:33 crc kubenswrapper[4827]: I0131 03:47:33.576723 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:47:33Z","lastTransitionTime":"2026-01-31T03:47:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 03:47:33 crc kubenswrapper[4827]: I0131 03:47:33.680266 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:47:33 crc kubenswrapper[4827]: I0131 03:47:33.680325 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:47:33 crc kubenswrapper[4827]: I0131 03:47:33.680345 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:47:33 crc kubenswrapper[4827]: I0131 03:47:33.680370 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:47:33 crc kubenswrapper[4827]: I0131 03:47:33.680392 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:47:33Z","lastTransitionTime":"2026-01-31T03:47:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 03:47:33 crc kubenswrapper[4827]: I0131 03:47:33.784035 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:47:33 crc kubenswrapper[4827]: I0131 03:47:33.784092 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:47:33 crc kubenswrapper[4827]: I0131 03:47:33.784109 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:47:33 crc kubenswrapper[4827]: I0131 03:47:33.784133 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:47:33 crc kubenswrapper[4827]: I0131 03:47:33.784150 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:47:33Z","lastTransitionTime":"2026-01-31T03:47:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 03:47:33 crc kubenswrapper[4827]: I0131 03:47:33.887742 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:47:33 crc kubenswrapper[4827]: I0131 03:47:33.887837 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:47:33 crc kubenswrapper[4827]: I0131 03:47:33.887857 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:47:33 crc kubenswrapper[4827]: I0131 03:47:33.887947 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:47:33 crc kubenswrapper[4827]: I0131 03:47:33.887970 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:47:33Z","lastTransitionTime":"2026-01-31T03:47:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 03:47:33 crc kubenswrapper[4827]: I0131 03:47:33.991096 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:47:33 crc kubenswrapper[4827]: I0131 03:47:33.991168 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:47:33 crc kubenswrapper[4827]: I0131 03:47:33.991190 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:47:33 crc kubenswrapper[4827]: I0131 03:47:33.991222 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:47:33 crc kubenswrapper[4827]: I0131 03:47:33.991244 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:47:33Z","lastTransitionTime":"2026-01-31T03:47:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 03:47:34 crc kubenswrapper[4827]: I0131 03:47:34.081117 4827 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-02 12:00:29.924809071 +0000 UTC Jan 31 03:47:34 crc kubenswrapper[4827]: I0131 03:47:34.095046 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:47:34 crc kubenswrapper[4827]: I0131 03:47:34.095106 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:47:34 crc kubenswrapper[4827]: I0131 03:47:34.095125 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:47:34 crc kubenswrapper[4827]: I0131 03:47:34.095153 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:47:34 crc kubenswrapper[4827]: I0131 03:47:34.095173 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:47:34Z","lastTransitionTime":"2026-01-31T03:47:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 03:47:34 crc kubenswrapper[4827]: I0131 03:47:34.109704 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 31 03:47:34 crc kubenswrapper[4827]: I0131 03:47:34.109779 4827 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-2shng" Jan 31 03:47:34 crc kubenswrapper[4827]: E0131 03:47:34.109854 4827 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 31 03:47:34 crc kubenswrapper[4827]: I0131 03:47:34.109869 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 03:47:34 crc kubenswrapper[4827]: E0131 03:47:34.110079 4827 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-2shng" podUID="cf80ec31-1f83-4ed6-84e3-055cf9c88bff" Jan 31 03:47:34 crc kubenswrapper[4827]: E0131 03:47:34.110332 4827 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 31 03:47:34 crc kubenswrapper[4827]: I0131 03:47:34.197806 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:47:34 crc kubenswrapper[4827]: I0131 03:47:34.198053 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:47:34 crc kubenswrapper[4827]: I0131 03:47:34.198169 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:47:34 crc kubenswrapper[4827]: I0131 03:47:34.198230 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:47:34 crc kubenswrapper[4827]: I0131 03:47:34.198256 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:47:34Z","lastTransitionTime":"2026-01-31T03:47:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 03:47:34 crc kubenswrapper[4827]: I0131 03:47:34.301838 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:47:34 crc kubenswrapper[4827]: I0131 03:47:34.301958 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:47:34 crc kubenswrapper[4827]: I0131 03:47:34.301979 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:47:34 crc kubenswrapper[4827]: I0131 03:47:34.302003 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:47:34 crc kubenswrapper[4827]: I0131 03:47:34.302020 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:47:34Z","lastTransitionTime":"2026-01-31T03:47:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 03:47:34 crc kubenswrapper[4827]: I0131 03:47:34.406263 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:47:34 crc kubenswrapper[4827]: I0131 03:47:34.406326 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:47:34 crc kubenswrapper[4827]: I0131 03:47:34.406343 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:47:34 crc kubenswrapper[4827]: I0131 03:47:34.406369 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:47:34 crc kubenswrapper[4827]: I0131 03:47:34.406387 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:47:34Z","lastTransitionTime":"2026-01-31T03:47:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 03:47:34 crc kubenswrapper[4827]: I0131 03:47:34.509619 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:47:34 crc kubenswrapper[4827]: I0131 03:47:34.509685 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:47:34 crc kubenswrapper[4827]: I0131 03:47:34.509705 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:47:34 crc kubenswrapper[4827]: I0131 03:47:34.509728 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:47:34 crc kubenswrapper[4827]: I0131 03:47:34.509746 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:47:34Z","lastTransitionTime":"2026-01-31T03:47:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 03:47:34 crc kubenswrapper[4827]: I0131 03:47:34.613329 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:47:34 crc kubenswrapper[4827]: I0131 03:47:34.613427 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:47:34 crc kubenswrapper[4827]: I0131 03:47:34.613454 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:47:34 crc kubenswrapper[4827]: I0131 03:47:34.613485 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:47:34 crc kubenswrapper[4827]: I0131 03:47:34.613509 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:47:34Z","lastTransitionTime":"2026-01-31T03:47:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 03:47:34 crc kubenswrapper[4827]: I0131 03:47:34.717643 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:47:34 crc kubenswrapper[4827]: I0131 03:47:34.717718 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:47:34 crc kubenswrapper[4827]: I0131 03:47:34.717746 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:47:34 crc kubenswrapper[4827]: I0131 03:47:34.717773 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:47:34 crc kubenswrapper[4827]: I0131 03:47:34.717796 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:47:34Z","lastTransitionTime":"2026-01-31T03:47:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 03:47:34 crc kubenswrapper[4827]: I0131 03:47:34.792072 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:47:34 crc kubenswrapper[4827]: I0131 03:47:34.792262 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:47:34 crc kubenswrapper[4827]: I0131 03:47:34.792288 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:47:34 crc kubenswrapper[4827]: I0131 03:47:34.792370 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:47:34 crc kubenswrapper[4827]: I0131 03:47:34.792441 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:47:34Z","lastTransitionTime":"2026-01-31T03:47:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 03:47:34 crc kubenswrapper[4827]: E0131 03:47:34.814557 4827 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404564Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865364Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T03:47:34Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:34Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T03:47:34Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:34Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T03:47:34Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:34Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T03:47:34Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:34Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"0f4c9cc6-4be7-45e8-ab30-471f307c1c16\\\",\\\"systemUUID\\\":\\\"9b087aa6-4510-46ee-bc39-2317e4ea4d1d\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:47:34Z is after 2025-08-24T17:21:41Z" Jan 31 03:47:34 crc kubenswrapper[4827]: I0131 03:47:34.820752 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:47:34 crc kubenswrapper[4827]: I0131 03:47:34.820913 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:47:34 crc kubenswrapper[4827]: I0131 03:47:34.820943 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:47:34 crc kubenswrapper[4827]: I0131 03:47:34.820973 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:47:34 crc kubenswrapper[4827]: I0131 03:47:34.820994 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:47:34Z","lastTransitionTime":"2026-01-31T03:47:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 03:47:34 crc kubenswrapper[4827]: E0131 03:47:34.841114 4827 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404564Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865364Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T03:47:34Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:34Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T03:47:34Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:34Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T03:47:34Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:34Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T03:47:34Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:34Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"0f4c9cc6-4be7-45e8-ab30-471f307c1c16\\\",\\\"systemUUID\\\":\\\"9b087aa6-4510-46ee-bc39-2317e4ea4d1d\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:47:34Z is after 2025-08-24T17:21:41Z" Jan 31 03:47:34 crc kubenswrapper[4827]: I0131 03:47:34.846643 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:47:34 crc kubenswrapper[4827]: I0131 03:47:34.846727 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:47:34 crc kubenswrapper[4827]: I0131 03:47:34.846748 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:47:34 crc kubenswrapper[4827]: I0131 03:47:34.846770 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:47:34 crc kubenswrapper[4827]: I0131 03:47:34.846817 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:47:34Z","lastTransitionTime":"2026-01-31T03:47:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 03:47:34 crc kubenswrapper[4827]: E0131 03:47:34.868401 4827 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404564Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865364Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T03:47:34Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:34Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T03:47:34Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:34Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T03:47:34Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:34Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T03:47:34Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:34Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"0f4c9cc6-4be7-45e8-ab30-471f307c1c16\\\",\\\"systemUUID\\\":\\\"9b087aa6-4510-46ee-bc39-2317e4ea4d1d\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:47:34Z is after 2025-08-24T17:21:41Z" Jan 31 03:47:34 crc kubenswrapper[4827]: I0131 03:47:34.874427 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:47:34 crc kubenswrapper[4827]: I0131 03:47:34.874493 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:47:34 crc kubenswrapper[4827]: I0131 03:47:34.874510 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:47:34 crc kubenswrapper[4827]: I0131 03:47:34.874539 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:47:34 crc kubenswrapper[4827]: I0131 03:47:34.874560 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:47:34Z","lastTransitionTime":"2026-01-31T03:47:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 03:47:34 crc kubenswrapper[4827]: E0131 03:47:34.894990 4827 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404564Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865364Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T03:47:34Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:34Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T03:47:34Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:34Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T03:47:34Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:34Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T03:47:34Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:34Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"0f4c9cc6-4be7-45e8-ab30-471f307c1c16\\\",\\\"systemUUID\\\":\\\"9b087aa6-4510-46ee-bc39-2317e4ea4d1d\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:47:34Z is after 2025-08-24T17:21:41Z" Jan 31 03:47:34 crc kubenswrapper[4827]: I0131 03:47:34.899989 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:47:34 crc kubenswrapper[4827]: I0131 03:47:34.900036 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:47:34 crc kubenswrapper[4827]: I0131 03:47:34.900087 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:47:34 crc kubenswrapper[4827]: I0131 03:47:34.900113 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:47:34 crc kubenswrapper[4827]: I0131 03:47:34.900131 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:47:34Z","lastTransitionTime":"2026-01-31T03:47:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 03:47:34 crc kubenswrapper[4827]: E0131 03:47:34.919840 4827 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404564Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865364Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T03:47:34Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:34Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T03:47:34Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:34Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T03:47:34Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:34Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T03:47:34Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:34Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"0f4c9cc6-4be7-45e8-ab30-471f307c1c16\\\",\\\"systemUUID\\\":\\\"9b087aa6-4510-46ee-bc39-2317e4ea4d1d\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:47:34Z is after 2025-08-24T17:21:41Z" Jan 31 03:47:34 crc kubenswrapper[4827]: E0131 03:47:34.920216 4827 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 31 03:47:34 crc kubenswrapper[4827]: I0131 03:47:34.922488 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:47:34 crc kubenswrapper[4827]: I0131 03:47:34.922539 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:47:34 crc kubenswrapper[4827]: I0131 03:47:34.922556 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:47:34 crc kubenswrapper[4827]: I0131 03:47:34.922580 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:47:34 crc kubenswrapper[4827]: I0131 03:47:34.922599 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:47:34Z","lastTransitionTime":"2026-01-31T03:47:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 03:47:35 crc kubenswrapper[4827]: I0131 03:47:35.025340 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:47:35 crc kubenswrapper[4827]: I0131 03:47:35.025451 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:47:35 crc kubenswrapper[4827]: I0131 03:47:35.025478 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:47:35 crc kubenswrapper[4827]: I0131 03:47:35.025503 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:47:35 crc kubenswrapper[4827]: I0131 03:47:35.025521 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:47:35Z","lastTransitionTime":"2026-01-31T03:47:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 03:47:35 crc kubenswrapper[4827]: I0131 03:47:35.081960 4827 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-15 22:12:55.0144686 +0000 UTC Jan 31 03:47:35 crc kubenswrapper[4827]: I0131 03:47:35.109741 4827 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 31 03:47:35 crc kubenswrapper[4827]: E0131 03:47:35.110007 4827 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 31 03:47:35 crc kubenswrapper[4827]: I0131 03:47:35.129433 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:47:35 crc kubenswrapper[4827]: I0131 03:47:35.129520 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:47:35 crc kubenswrapper[4827]: I0131 03:47:35.129539 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:47:35 crc kubenswrapper[4827]: I0131 03:47:35.129600 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:47:35 crc kubenswrapper[4827]: I0131 03:47:35.129627 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:47:35Z","lastTransitionTime":"2026-01-31T03:47:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 03:47:35 crc kubenswrapper[4827]: I0131 03:47:35.233322 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:47:35 crc kubenswrapper[4827]: I0131 03:47:35.233430 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:47:35 crc kubenswrapper[4827]: I0131 03:47:35.233497 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:47:35 crc kubenswrapper[4827]: I0131 03:47:35.233535 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:47:35 crc kubenswrapper[4827]: I0131 03:47:35.233648 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:47:35Z","lastTransitionTime":"2026-01-31T03:47:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 03:47:35 crc kubenswrapper[4827]: I0131 03:47:35.336328 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:47:35 crc kubenswrapper[4827]: I0131 03:47:35.336406 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:47:35 crc kubenswrapper[4827]: I0131 03:47:35.336430 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:47:35 crc kubenswrapper[4827]: I0131 03:47:35.336459 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:47:35 crc kubenswrapper[4827]: I0131 03:47:35.336481 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:47:35Z","lastTransitionTime":"2026-01-31T03:47:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 03:47:35 crc kubenswrapper[4827]: I0131 03:47:35.440165 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:47:35 crc kubenswrapper[4827]: I0131 03:47:35.440218 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:47:35 crc kubenswrapper[4827]: I0131 03:47:35.440235 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:47:35 crc kubenswrapper[4827]: I0131 03:47:35.440260 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:47:35 crc kubenswrapper[4827]: I0131 03:47:35.440278 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:47:35Z","lastTransitionTime":"2026-01-31T03:47:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 03:47:35 crc kubenswrapper[4827]: I0131 03:47:35.542848 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:47:35 crc kubenswrapper[4827]: I0131 03:47:35.542991 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:47:35 crc kubenswrapper[4827]: I0131 03:47:35.543016 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:47:35 crc kubenswrapper[4827]: I0131 03:47:35.543049 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:47:35 crc kubenswrapper[4827]: I0131 03:47:35.543072 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:47:35Z","lastTransitionTime":"2026-01-31T03:47:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 03:47:35 crc kubenswrapper[4827]: I0131 03:47:35.647632 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:47:35 crc kubenswrapper[4827]: I0131 03:47:35.647710 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:47:35 crc kubenswrapper[4827]: I0131 03:47:35.647745 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:47:35 crc kubenswrapper[4827]: I0131 03:47:35.647777 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:47:35 crc kubenswrapper[4827]: I0131 03:47:35.647797 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:47:35Z","lastTransitionTime":"2026-01-31T03:47:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 03:47:35 crc kubenswrapper[4827]: I0131 03:47:35.751063 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:47:35 crc kubenswrapper[4827]: I0131 03:47:35.751116 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:47:35 crc kubenswrapper[4827]: I0131 03:47:35.751128 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:47:35 crc kubenswrapper[4827]: I0131 03:47:35.751145 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:47:35 crc kubenswrapper[4827]: I0131 03:47:35.751160 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:47:35Z","lastTransitionTime":"2026-01-31T03:47:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 03:47:35 crc kubenswrapper[4827]: I0131 03:47:35.854583 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:47:35 crc kubenswrapper[4827]: I0131 03:47:35.854987 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:47:35 crc kubenswrapper[4827]: I0131 03:47:35.855158 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:47:35 crc kubenswrapper[4827]: I0131 03:47:35.855314 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:47:35 crc kubenswrapper[4827]: I0131 03:47:35.855451 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:47:35Z","lastTransitionTime":"2026-01-31T03:47:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 03:47:35 crc kubenswrapper[4827]: I0131 03:47:35.958134 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:47:35 crc kubenswrapper[4827]: I0131 03:47:35.958184 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:47:35 crc kubenswrapper[4827]: I0131 03:47:35.958195 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:47:35 crc kubenswrapper[4827]: I0131 03:47:35.958213 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:47:35 crc kubenswrapper[4827]: I0131 03:47:35.958225 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:47:35Z","lastTransitionTime":"2026-01-31T03:47:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 03:47:36 crc kubenswrapper[4827]: I0131 03:47:36.061059 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:47:36 crc kubenswrapper[4827]: I0131 03:47:36.061129 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:47:36 crc kubenswrapper[4827]: I0131 03:47:36.061150 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:47:36 crc kubenswrapper[4827]: I0131 03:47:36.061175 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:47:36 crc kubenswrapper[4827]: I0131 03:47:36.061192 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:47:36Z","lastTransitionTime":"2026-01-31T03:47:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 03:47:36 crc kubenswrapper[4827]: I0131 03:47:36.082739 4827 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-02 07:10:37.042201311 +0000 UTC Jan 31 03:47:36 crc kubenswrapper[4827]: I0131 03:47:36.109511 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 31 03:47:36 crc kubenswrapper[4827]: I0131 03:47:36.109940 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-2shng" Jan 31 03:47:36 crc kubenswrapper[4827]: I0131 03:47:36.109975 4827 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 03:47:36 crc kubenswrapper[4827]: E0131 03:47:36.110216 4827 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 31 03:47:36 crc kubenswrapper[4827]: E0131 03:47:36.110441 4827 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 31 03:47:36 crc kubenswrapper[4827]: I0131 03:47:36.110464 4827 scope.go:117] "RemoveContainer" containerID="df3bf53360747f9123455873f63ec6c0dedaa6e7743b25e0ebd796b4a5d0094b" Jan 31 03:47:36 crc kubenswrapper[4827]: E0131 03:47:36.110568 4827 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-2shng" podUID="cf80ec31-1f83-4ed6-84e3-055cf9c88bff" Jan 31 03:47:36 crc kubenswrapper[4827]: I0131 03:47:36.164634 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:47:36 crc kubenswrapper[4827]: I0131 03:47:36.164805 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:47:36 crc kubenswrapper[4827]: I0131 03:47:36.164940 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:47:36 crc kubenswrapper[4827]: I0131 03:47:36.165056 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:47:36 crc kubenswrapper[4827]: I0131 03:47:36.165291 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:47:36Z","lastTransitionTime":"2026-01-31T03:47:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 03:47:36 crc kubenswrapper[4827]: I0131 03:47:36.270342 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:47:36 crc kubenswrapper[4827]: I0131 03:47:36.270396 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:47:36 crc kubenswrapper[4827]: I0131 03:47:36.270417 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:47:36 crc kubenswrapper[4827]: I0131 03:47:36.270441 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:47:36 crc kubenswrapper[4827]: I0131 03:47:36.270458 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:47:36Z","lastTransitionTime":"2026-01-31T03:47:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 03:47:36 crc kubenswrapper[4827]: I0131 03:47:36.373573 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:47:36 crc kubenswrapper[4827]: I0131 03:47:36.373628 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:47:36 crc kubenswrapper[4827]: I0131 03:47:36.373645 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:47:36 crc kubenswrapper[4827]: I0131 03:47:36.373671 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:47:36 crc kubenswrapper[4827]: I0131 03:47:36.373688 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:47:36Z","lastTransitionTime":"2026-01-31T03:47:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 03:47:36 crc kubenswrapper[4827]: I0131 03:47:36.468628 4827 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-hj2zw_da9e7773-a24b-4e8d-b479-97e2594db0d4/ovnkube-controller/1.log" Jan 31 03:47:36 crc kubenswrapper[4827]: I0131 03:47:36.475441 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hj2zw" event={"ID":"da9e7773-a24b-4e8d-b479-97e2594db0d4","Type":"ContainerStarted","Data":"d34c4275238ceb97d664f71e8315fbfdff91be63b7babf108440f8f9e387173a"} Jan 31 03:47:36 crc kubenswrapper[4827]: I0131 03:47:36.476107 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:47:36 crc kubenswrapper[4827]: I0131 03:47:36.476136 4827 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-hj2zw" Jan 31 03:47:36 crc kubenswrapper[4827]: I0131 03:47:36.476205 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:47:36 crc kubenswrapper[4827]: I0131 03:47:36.476227 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:47:36 crc kubenswrapper[4827]: I0131 03:47:36.476408 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:47:36 crc kubenswrapper[4827]: I0131 03:47:36.476428 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:47:36Z","lastTransitionTime":"2026-01-31T03:47:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 03:47:36 crc kubenswrapper[4827]: I0131 03:47:36.497007 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:08Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:47:36Z is after 2025-08-24T17:21:41Z" Jan 31 03:47:36 crc kubenswrapper[4827]: I0131 03:47:36.517077 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:08Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:47:36Z is after 2025-08-24T17:21:41Z" Jan 31 03:47:36 crc kubenswrapper[4827]: I0131 03:47:36.535988 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-w7v8l" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c10db775-d306-4f15-97dd-b1dfed7c89e5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cbc16c20edbd99491eea6e34116e6014fd587c40800c87bff9643e9c0cb6e185\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:47:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4ch9g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T03:47:09Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-w7v8l\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:47:36Z is after 2025-08-24T17:21:41Z" Jan 31 03:47:36 crc kubenswrapper[4827]: I0131 03:47:36.559236 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-l5njv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f81e67c9-6345-48e1-91e3-794421cb3fdd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e761c28f2d53c1a50d6ce7467012664d9d9d717222b6fc648c642ad97057113f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d7
93426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:47:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rblt7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b967e5b38d5efa02b8cda05cf383b6aa9f24b819d055b86d12afe5290d6d5847\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:47:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rblt7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T03:47:22Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-l5njv\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:47:36Z is after 2025-08-24T17:21:41Z" Jan 31 03:47:36 crc kubenswrapper[4827]: I0131 03:47:36.576427 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-2shng" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cf80ec31-1f83-4ed6-84e3-055cf9c88bff\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:24Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:24Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hjlkt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hjlkt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T03:47:24Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-2shng\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:47:36Z is after 2025-08-24T17:21:41Z" Jan 31 03:47:36 crc 
kubenswrapper[4827]: I0131 03:47:36.578748 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:47:36 crc kubenswrapper[4827]: I0131 03:47:36.578786 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:47:36 crc kubenswrapper[4827]: I0131 03:47:36.578801 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:47:36 crc kubenswrapper[4827]: I0131 03:47:36.578819 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:47:36 crc kubenswrapper[4827]: I0131 03:47:36.578832 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:47:36Z","lastTransitionTime":"2026-01-31T03:47:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 03:47:36 crc kubenswrapper[4827]: I0131 03:47:36.596166 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"64294dab-2843-4893-8b19-59f3ae404e02\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:46:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:46:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:46:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ff49646239dea28804978f1b3b721dcc9454572ff2301d0c92e146dacb2a3e5a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:46:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fb9a8fcde75
a532b1906ceea4704ddfecbffeb17456377272e5792f200ee67d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:46:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://73a7c7196fc77ac970ea3114b593db0c9bef943c760526721ab5492975884237\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:46:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ce197f7aa8a7bebdecdbd760e9029786331bd9497f724b0a57f57278a5b1b04c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:850
6ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:46:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T03:46:48Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:47:36Z is after 2025-08-24T17:21:41Z" Jan 31 03:47:36 crc kubenswrapper[4827]: I0131 03:47:36.615302 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:08Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:47:36Z is after 2025-08-24T17:21:41Z" Jan 31 03:47:36 crc kubenswrapper[4827]: I0131 03:47:36.645702 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a0fbdd57c20e0d5bda95b44f22f117b07306ef48d75d372d958559048474af9b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:47:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-31T03:47:36Z is after 2025-08-24T17:21:41Z" Jan 31 03:47:36 crc kubenswrapper[4827]: I0131 03:47:36.664969 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:11Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:11Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://955a3f09f24522656e31cfabe08613a5129fe3d1d0693756697e3ef0469e062a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:47:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\
\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:47:36Z is after 2025-08-24T17:21:41Z" Jan 31 03:47:36 crc kubenswrapper[4827]: I0131 03:47:36.677981 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://65119775a6ab2e8f81a0443d8093e0570c2c49a1b081b7b694780d07eba7ef4d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:47:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\
\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0fdf29d063e0940532d696d183771e56ce37b7b75f82548bf7ff5165ab20ccff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:47:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:47:36Z is after 2025-08-24T17:21:41Z" Jan 31 03:47:36 crc kubenswrapper[4827]: I0131 03:47:36.681764 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:47:36 crc kubenswrapper[4827]: I0131 03:47:36.681796 4827 kubelet_node_status.go:724] "Recording event message 
for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:47:36 crc kubenswrapper[4827]: I0131 03:47:36.681806 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:47:36 crc kubenswrapper[4827]: I0131 03:47:36.681820 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:47:36 crc kubenswrapper[4827]: I0131 03:47:36.681831 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:47:36Z","lastTransitionTime":"2026-01-31T03:47:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 03:47:36 crc kubenswrapper[4827]: I0131 03:47:36.699184 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hj2zw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"da9e7773-a24b-4e8d-b479-97e2594db0d4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:09Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:09Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bb9ced0bc1beedf1c5779410e90fa61227f5df3de7fa950ab69e1ef44b4a4918\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:47:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mt4z5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2485bd5d17713b9b9184c6b56b1dfda19a229476eabd904d3eda350c30892753\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:47:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mt4z5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://70200918240e4bc9d8f7fcb1d95be3721faa9394058a1a9671da44d9b7a915e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:47:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mt4z5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://32e69930c4476bd3a2f767f012fb4a2e00de7d46da936c6f0d31029994108e48\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d20994829
19d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:47:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mt4z5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ae8d3d61d2579c0afddd6c9728935c463fec8760a9fc830473c905de5f40c44\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:47:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mt4z5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ed1f4882b03bd25a6073ed3ddeb73d5a0da61c8a8af3ebfab5c90a83ce6af80f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cd
d47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:47:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mt4z5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d34c4275238ceb97d664f71e8315fbfdff91be63b7babf108440f8f9e387173a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://df3bf53360747f9123455873f63ec6c0dedaa6e7743b25e0ebd796b4a5d0094b\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-31T03:47:21Z\\\",\\\"message\\\":\\\"/informers/factory.go:160\\\\nI0131 03:47:21.368788 6278 reflector.go:311] Stopping 
reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0131 03:47:21.368815 6278 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0131 03:47:21.368968 6278 reflector.go:311] Stopping reflector *v1.Pod (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0131 03:47:21.369298 6278 reflector.go:311] Stopping reflector *v1alpha1.AdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0131 03:47:21.369588 6278 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0131 03:47:21.369687 6278 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0131 03:47:21.369768 6278 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0131 03:47:21.369822 6278 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0131 03:47:21.369871 6278 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0131 03:47:21.369983 6278 factory.go:656] Stopping watch factory\\\\nI0131 03:47:21.370079 6278 ovnkube.go:599] Stopped ovnkube\\\\nI0131 03:47:21.369737 6278 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0131 03:47:21.370042 6278 handler.go:208] Removed *v1.Node event handler 
7\\\\nI01\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T03:47:20Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:47:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"
name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mt4z5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2a0cc10519a103d1d49acd3e239636f7ac9b3a9daa533ce1f485b26ccfffad3d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:47:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mt4z5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://96ea399662528dfc8c4bac4c2b710685cb5897e67739704548ca01353d72672c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"
ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://96ea399662528dfc8c4bac4c2b710685cb5897e67739704548ca01353d72672c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T03:47:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T03:47:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mt4z5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T03:47:09Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-hj2zw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:47:36Z is after 2025-08-24T17:21:41Z" Jan 31 03:47:36 crc kubenswrapper[4827]: I0131 03:47:36.709576 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-cl9c5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"77746487-d08f-4da6-82a3-bc7d8845841a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5098b5f6a44c89953722806e420dc66eaad66f11faf1c5b526bfc8ebf87364d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:47:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5xwfq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T03:47:11Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-cl9c5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:47:36Z is after 2025-08-24T17:21:41Z" Jan 31 03:47:36 crc kubenswrapper[4827]: I0131 03:47:36.723251 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-gjc5t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b5dbff7a-4ed0-4c17-bd01-1888199225b3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cf2e9dfa02cc1d2d5e655654c4d731363a32dec4d67423160996c24260e04a79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release
-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:47:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fs4bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c615ec47f98d56124c11d46f6a3875b5718974234f752615411e14b5d8913dcc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c615ec47f98d56124c11d46f6a3875b5718974234f752615411e14b5d8913dcc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T03:47:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T03:47:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fs4bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c9d2a6506fd6
b67003ff94a906cb0c74a7eefa9f389d5fe3929fcb743b877f8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c9d2a6506fd6b67003ff94a906cb0c74a7eefa9f389d5fe3929fcb743b877f8a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T03:47:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T03:47:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fs4bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f5b9fd1e8570701b512508eb10490fa6c4e5fdc63b7fdb0c4810896a46be65a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f5b9fd1e8570
701b512508eb10490fa6c4e5fdc63b7fdb0c4810896a46be65a7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T03:47:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T03:47:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fs4bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b944c8ab4e6079ae5a7a5c52f9b336fc71aa3273ed087499ffa1cc5396053660\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b944c8ab4e6079ae5a7a5c52f9b336fc71aa3273ed087499ffa1cc5396053660\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T03:47:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T03:47:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fs4bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\
\\"}]},{\\\"containerID\\\":\\\"cri-o://aac07d8c3c396081f637968cd1c64525967cd198273719be0af7bee01d35fb39\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aac07d8c3c396081f637968cd1c64525967cd198273719be0af7bee01d35fb39\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T03:47:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T03:47:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fs4bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://76e43eb316b4b459d9232b72620b1892bf11a896f007d5c879381d8349aa1178\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://76e43eb316b4b459d9232b72620b1892bf11a896f007d5c879381d8349aa1178\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"202
6-01-31T03:47:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T03:47:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fs4bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T03:47:09Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-gjc5t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:47:36Z is after 2025-08-24T17:21:41Z" Jan 31 03:47:36 crc kubenswrapper[4827]: I0131 03:47:36.737690 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4273172d-ec24-4540-85cb-efc58aed3421\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:46:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:46:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:46:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4146d39a1a819389ddaee9e11df66783f6b1b98ee5fb2a244d8ba9ff2ecc88f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:46:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6b86ba80bd25344b29df1636aa3d8cc4cecb0e5f9c0005b4896d3ee86cd80626\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:46:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8ae08b52cf4a0a6cc4589bbf0f0fbc8527ebac3455b90185928f6fa0b6c52874\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:46:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://125b31c980495c37837eedbbbd38b795fd6e568a429da5c2914799306e7b3a46\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cfc1fe8805b7b20bc4e718ed74d4c2f265e69ed3fdc328c0f1da81450bb78bde\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-31T03:47:02Z\\\"
,\\\"message\\\":\\\"W0131 03:46:51.337568 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0131 03:46:51.338014 1 crypto.go:601] Generating new CA for check-endpoints-signer@1769831211 cert, and key in /tmp/serving-cert-4099090867/serving-signer.crt, /tmp/serving-cert-4099090867/serving-signer.key\\\\nI0131 03:46:51.617433 1 observer_polling.go:159] Starting file observer\\\\nW0131 03:46:51.621205 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0131 03:46:51.621653 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0131 03:46:51.625463 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-4099090867/tls.crt::/tmp/serving-cert-4099090867/tls.key\\\\\\\"\\\\nF0131 03:47:02.085590 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake 
timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T03:46:51Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:47:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e3b2757893ba910c36404f09734ea47eee3ac5a8cfcc4c06ee4ed2ada48937b1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:46:51Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f40eed75b4eac164f95a4e1a719fe3e1da60abf85d533da313ceeb6f4fb07efd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f40eed75b4eac164f95a4e1a719fe3e1da60abf85d533da313ceeb6f4fb07efd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T03:46:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"s
tartedAt\\\":\\\"2026-01-31T03:46:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T03:46:48Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:47:36Z is after 2025-08-24T17:21:41Z" Jan 31 03:47:36 crc kubenswrapper[4827]: I0131 03:47:36.751497 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-jxh94" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e63dbb73-e1a2-4796-83c5-2a88e55566b5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://65bf7844256848a99420ff2a52724a942bd9ee07e0e587b818429ac544865232\\\",\\\"image\\\
":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:47:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-94gkn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://00b8856a5712a7470f8bc58fd07644c58113fde5a273faa89586f36381942c50\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:47:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-94gkn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T03:47:09Z\\\"}}\" for pod 
\"openshift-machine-config-operator\"/\"machine-config-daemon-jxh94\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:47:36Z is after 2025-08-24T17:21:41Z" Jan 31 03:47:36 crc kubenswrapper[4827]: I0131 03:47:36.764874 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-q9q8q" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a696063c-4553-4032-8038-9900f09d4031\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3b899def12de10c70999d453a62d09ed0eec7e6f059a55a7e3f8e4b3ef248205\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,
\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:47:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ckwx4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T0
3:47:09Z\\\"}}\" for pod \"openshift-multus\"/\"multus-q9q8q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:47:36Z is after 2025-08-24T17:21:41Z" Jan 31 03:47:36 crc kubenswrapper[4827]: I0131 03:47:36.784589 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:47:36 crc kubenswrapper[4827]: I0131 03:47:36.784633 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:47:36 crc kubenswrapper[4827]: I0131 03:47:36.784645 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:47:36 crc kubenswrapper[4827]: I0131 03:47:36.784662 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:47:36 crc kubenswrapper[4827]: I0131 03:47:36.784675 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:47:36Z","lastTransitionTime":"2026-01-31T03:47:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 03:47:36 crc kubenswrapper[4827]: I0131 03:47:36.888087 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:47:36 crc kubenswrapper[4827]: I0131 03:47:36.888153 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:47:36 crc kubenswrapper[4827]: I0131 03:47:36.888175 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:47:36 crc kubenswrapper[4827]: I0131 03:47:36.888200 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:47:36 crc kubenswrapper[4827]: I0131 03:47:36.888218 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:47:36Z","lastTransitionTime":"2026-01-31T03:47:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 03:47:36 crc kubenswrapper[4827]: I0131 03:47:36.990374 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:47:36 crc kubenswrapper[4827]: I0131 03:47:36.990417 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:47:36 crc kubenswrapper[4827]: I0131 03:47:36.990434 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:47:36 crc kubenswrapper[4827]: I0131 03:47:36.990450 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:47:36 crc kubenswrapper[4827]: I0131 03:47:36.990462 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:47:36Z","lastTransitionTime":"2026-01-31T03:47:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 03:47:37 crc kubenswrapper[4827]: I0131 03:47:37.083435 4827 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-26 02:45:51.755704714 +0000 UTC Jan 31 03:47:37 crc kubenswrapper[4827]: I0131 03:47:37.097517 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:47:37 crc kubenswrapper[4827]: I0131 03:47:37.097592 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:47:37 crc kubenswrapper[4827]: I0131 03:47:37.097619 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:47:37 crc kubenswrapper[4827]: I0131 03:47:37.097652 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:47:37 crc kubenswrapper[4827]: I0131 03:47:37.097676 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:47:37Z","lastTransitionTime":"2026-01-31T03:47:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 03:47:37 crc kubenswrapper[4827]: I0131 03:47:37.109715 4827 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 31 03:47:37 crc kubenswrapper[4827]: E0131 03:47:37.109909 4827 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 31 03:47:37 crc kubenswrapper[4827]: I0131 03:47:37.199926 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:47:37 crc kubenswrapper[4827]: I0131 03:47:37.199961 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:47:37 crc kubenswrapper[4827]: I0131 03:47:37.199973 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:47:37 crc kubenswrapper[4827]: I0131 03:47:37.199989 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:47:37 crc kubenswrapper[4827]: I0131 03:47:37.200001 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:47:37Z","lastTransitionTime":"2026-01-31T03:47:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 03:47:37 crc kubenswrapper[4827]: I0131 03:47:37.302329 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:47:37 crc kubenswrapper[4827]: I0131 03:47:37.302373 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:47:37 crc kubenswrapper[4827]: I0131 03:47:37.302386 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:47:37 crc kubenswrapper[4827]: I0131 03:47:37.302422 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:47:37 crc kubenswrapper[4827]: I0131 03:47:37.302436 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:47:37Z","lastTransitionTime":"2026-01-31T03:47:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 03:47:37 crc kubenswrapper[4827]: I0131 03:47:37.405499 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:47:37 crc kubenswrapper[4827]: I0131 03:47:37.405540 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:47:37 crc kubenswrapper[4827]: I0131 03:47:37.405552 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:47:37 crc kubenswrapper[4827]: I0131 03:47:37.405567 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:47:37 crc kubenswrapper[4827]: I0131 03:47:37.405580 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:47:37Z","lastTransitionTime":"2026-01-31T03:47:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 03:47:37 crc kubenswrapper[4827]: I0131 03:47:37.481408 4827 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-hj2zw_da9e7773-a24b-4e8d-b479-97e2594db0d4/ovnkube-controller/2.log" Jan 31 03:47:37 crc kubenswrapper[4827]: I0131 03:47:37.482349 4827 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-hj2zw_da9e7773-a24b-4e8d-b479-97e2594db0d4/ovnkube-controller/1.log" Jan 31 03:47:37 crc kubenswrapper[4827]: I0131 03:47:37.486228 4827 generic.go:334] "Generic (PLEG): container finished" podID="da9e7773-a24b-4e8d-b479-97e2594db0d4" containerID="d34c4275238ceb97d664f71e8315fbfdff91be63b7babf108440f8f9e387173a" exitCode=1 Jan 31 03:47:37 crc kubenswrapper[4827]: I0131 03:47:37.486281 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hj2zw" event={"ID":"da9e7773-a24b-4e8d-b479-97e2594db0d4","Type":"ContainerDied","Data":"d34c4275238ceb97d664f71e8315fbfdff91be63b7babf108440f8f9e387173a"} Jan 31 03:47:37 crc kubenswrapper[4827]: I0131 03:47:37.486327 4827 scope.go:117] "RemoveContainer" containerID="df3bf53360747f9123455873f63ec6c0dedaa6e7743b25e0ebd796b4a5d0094b" Jan 31 03:47:37 crc kubenswrapper[4827]: I0131 03:47:37.487375 4827 scope.go:117] "RemoveContainer" containerID="d34c4275238ceb97d664f71e8315fbfdff91be63b7babf108440f8f9e387173a" Jan 31 03:47:37 crc kubenswrapper[4827]: E0131 03:47:37.487701 4827 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-hj2zw_openshift-ovn-kubernetes(da9e7773-a24b-4e8d-b479-97e2594db0d4)\"" pod="openshift-ovn-kubernetes/ovnkube-node-hj2zw" podUID="da9e7773-a24b-4e8d-b479-97e2594db0d4" Jan 31 03:47:37 crc kubenswrapper[4827]: I0131 03:47:37.505605 4827 status_manager.go:875] "Failed to update 
status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"64294dab-2843-4893-8b19-59f3ae404e02\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:46:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:46:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:46:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ff49646239dea28804978f1b3b721dcc9454572ff2301d0c92e146dacb2a3e5a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:46:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fb9a8fcde75a532b1906ceea4704ddfecbffeb17456377272e5792f200ee67d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1
220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:46:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://73a7c7196fc77ac970ea3114b593db0c9bef943c760526721ab5492975884237\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:46:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ce197f7aa8a7bebdecdbd760e9029786331bd9497f724b0a57f57278a5b1b04c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controlle
r\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:46:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T03:46:48Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:47:37Z is after 2025-08-24T17:21:41Z" Jan 31 03:47:37 crc kubenswrapper[4827]: I0131 03:47:37.507104 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:47:37 crc kubenswrapper[4827]: I0131 03:47:37.507135 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:47:37 crc kubenswrapper[4827]: I0131 03:47:37.507143 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:47:37 crc kubenswrapper[4827]: I0131 03:47:37.507158 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:47:37 crc kubenswrapper[4827]: I0131 03:47:37.507168 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:47:37Z","lastTransitionTime":"2026-01-31T03:47:37Z","reason":"KubeletNotReady","message":"container 
runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 03:47:37 crc kubenswrapper[4827]: I0131 03:47:37.536415 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:08Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:47:37Z is after 2025-08-24T17:21:41Z" Jan 31 03:47:37 crc kubenswrapper[4827]: I0131 03:47:37.575003 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a0fbdd57c20e0d5bda95b44f22f117b07306ef48d75d372d958559048474af9b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:47:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-31T03:47:37Z is after 2025-08-24T17:21:41Z" Jan 31 03:47:37 crc kubenswrapper[4827]: I0131 03:47:37.586675 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:11Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:11Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://955a3f09f24522656e31cfabe08613a5129fe3d1d0693756697e3ef0469e062a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:47:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\
\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:47:37Z is after 2025-08-24T17:21:41Z" Jan 31 03:47:37 crc kubenswrapper[4827]: I0131 03:47:37.600163 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://65119775a6ab2e8f81a0443d8093e0570c2c49a1b081b7b694780d07eba7ef4d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:47:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\
\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0fdf29d063e0940532d696d183771e56ce37b7b75f82548bf7ff5165ab20ccff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:47:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:47:37Z is after 2025-08-24T17:21:41Z" Jan 31 03:47:37 crc kubenswrapper[4827]: I0131 03:47:37.609661 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:47:37 crc kubenswrapper[4827]: I0131 03:47:37.609699 4827 kubelet_node_status.go:724] "Recording event message 
for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:47:37 crc kubenswrapper[4827]: I0131 03:47:37.609708 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:47:37 crc kubenswrapper[4827]: I0131 03:47:37.609723 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:47:37 crc kubenswrapper[4827]: I0131 03:47:37.609734 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:47:37Z","lastTransitionTime":"2026-01-31T03:47:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 03:47:37 crc kubenswrapper[4827]: I0131 03:47:37.618326 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hj2zw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"da9e7773-a24b-4e8d-b479-97e2594db0d4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:09Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:09Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bb9ced0bc1beedf1c5779410e90fa61227f5df3de7fa950ab69e1ef44b4a4918\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:47:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mt4z5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2485bd5d17713b9b9184c6b56b1dfda19a229476eabd904d3eda350c30892753\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:47:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mt4z5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://70200918240e4bc9d8f7fcb1d95be3721faa9394058a1a9671da44d9b7a915e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:47:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mt4z5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://32e69930c4476bd3a2f767f012fb4a2e00de7d46da936c6f0d31029994108e48\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d20994829
19d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:47:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mt4z5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ae8d3d61d2579c0afddd6c9728935c463fec8760a9fc830473c905de5f40c44\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:47:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mt4z5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ed1f4882b03bd25a6073ed3ddeb73d5a0da61c8a8af3ebfab5c90a83ce6af80f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cd
d47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:47:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mt4z5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d34c4275238ceb97d664f71e8315fbfdff91be63b7babf108440f8f9e387173a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://df3bf53360747f9123455873f63ec6c0dedaa6e7743b25e0ebd796b4a5d0094b\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-31T03:47:21Z\\\",\\\"message\\\":\\\"/informers/factory.go:160\\\\nI0131 03:47:21.368788 6278 reflector.go:311] Stopping 
reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0131 03:47:21.368815 6278 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0131 03:47:21.368968 6278 reflector.go:311] Stopping reflector *v1.Pod (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0131 03:47:21.369298 6278 reflector.go:311] Stopping reflector *v1alpha1.AdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0131 03:47:21.369588 6278 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0131 03:47:21.369687 6278 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0131 03:47:21.369768 6278 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0131 03:47:21.369822 6278 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0131 03:47:21.369871 6278 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0131 03:47:21.369983 6278 factory.go:656] Stopping watch factory\\\\nI0131 03:47:21.370079 6278 ovnkube.go:599] Stopped ovnkube\\\\nI0131 03:47:21.369737 6278 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0131 03:47:21.370042 6278 handler.go:208] Removed *v1.Node event handler 7\\\\nI01\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T03:47:20Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d34c4275238ceb97d664f71e8315fbfdff91be63b7babf108440f8f9e387173a\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-31T03:47:37Z\\\",\\\"message\\\":\\\"o:140\\\\nI0131 03:47:37.117967 6495 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0131 03:47:37.117998 6495 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0131 03:47:37.118006 6495 handler.go:190] Sending *v1.Namespace event handler 1 
for removal\\\\nI0131 03:47:37.118020 6495 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0131 03:47:37.118032 6495 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0131 03:47:37.118044 6495 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0131 03:47:37.118054 6495 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0131 03:47:37.118258 6495 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0131 03:47:37.118439 6495 reflector.go:311] Stopping reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0131 03:47:37.118523 6495 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0131 03:47:37.118541 6495 reflector.go:311] Stopping reflector *v1.EgressFirewall (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140\\\\nI0131 03:47:37.118959 6495 factory.go:656] Stopping 
\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T03:47:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kub
e-api-access-mt4z5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2a0cc10519a103d1d49acd3e239636f7ac9b3a9daa533ce1f485b26ccfffad3d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:47:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mt4z5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://96ea399662528dfc8c4bac4c2b710685cb5897e67739704548ca01353d72672c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://96ea399662528dfc8c4bac4c2b710685cb5897e67739704548ca01353d72672c\\\
",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T03:47:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T03:47:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mt4z5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T03:47:09Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-hj2zw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:47:37Z is after 2025-08-24T17:21:41Z" Jan 31 03:47:37 crc kubenswrapper[4827]: I0131 03:47:37.626618 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-cl9c5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"77746487-d08f-4da6-82a3-bc7d8845841a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5098b5f6a44c89953722806e420dc66eaad66f11faf1c5b526bfc8ebf87364d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:47:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5xwfq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T03:47:11Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-cl9c5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:47:37Z is after 2025-08-24T17:21:41Z" Jan 31 03:47:37 crc kubenswrapper[4827]: I0131 03:47:37.636661 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-jxh94" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e63dbb73-e1a2-4796-83c5-2a88e55566b5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://65bf7844256848a99420ff2a52724a942bd9ee07e0e587b818429ac544865232\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshif
t-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:47:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-94gkn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://00b8856a5712a7470f8bc58fd07644c58113fde5a273faa89586f36381942c50\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:47:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-94gkn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T03:47:09Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-jxh94\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call 
webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:47:37Z is after 2025-08-24T17:21:41Z" Jan 31 03:47:37 crc kubenswrapper[4827]: I0131 03:47:37.648973 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-q9q8q" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a696063c-4553-4032-8038-9900f09d4031\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3b899def12de10c70999d453a62d09ed0eec7e6f059a55a7e3f8e4b3ef248205\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:47:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypo
int\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ckwx4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T03:47:09Z\\\"}}\" for pod \"openshift-multus\"/\"multus-q9q8q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call 
webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:47:37Z is after 2025-08-24T17:21:41Z" Jan 31 03:47:37 crc kubenswrapper[4827]: I0131 03:47:37.661447 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-gjc5t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b5dbff7a-4ed0-4c17-bd01-1888199225b3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cf2e9dfa02cc1d2d5e655654c4d731363a32dec4d67423160996c24260e04a79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:47:17Z\\\"}},\\\"vol
umeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fs4bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c615ec47f98d56124c11d46f6a3875b5718974234f752615411e14b5d8913dcc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c615ec47f98d56124c11d46f6a3875b5718974234f752615411e14b5d8913dcc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T03:47:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T03:47:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fs4bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c9d2a6506fd6b67003ff94a906cb0c74a7eefa9f389d5fe3929fcb743b877f8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a
1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c9d2a6506fd6b67003ff94a906cb0c74a7eefa9f389d5fe3929fcb743b877f8a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T03:47:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T03:47:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fs4bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f5b9fd1e8570701b512508eb10490fa6c4e5fdc63b7fdb0c4810896a46be65a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f5b9fd1e8570701b512508eb10490fa6c4e5fdc63b7fdb0c4810896a46be65a7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T03:47:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T03:47:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\
\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fs4bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b944c8ab4e6079ae5a7a5c52f9b336fc71aa3273ed087499ffa1cc5396053660\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b944c8ab4e6079ae5a7a5c52f9b336fc71aa3273ed087499ffa1cc5396053660\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T03:47:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T03:47:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fs4bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aac07d8c3c396081f637968cd1c64525967cd198273719be0af7bee01d35fb39\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98
100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aac07d8c3c396081f637968cd1c64525967cd198273719be0af7bee01d35fb39\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T03:47:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T03:47:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fs4bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://76e43eb316b4b459d9232b72620b1892bf11a896f007d5c879381d8349aa1178\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://76e43eb316b4b459d9232b72620b1892bf11a896f007d5c879381d8349aa1178\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T03:47:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T03:47:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/ku
bernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fs4bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T03:47:09Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-gjc5t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:47:37Z is after 2025-08-24T17:21:41Z" Jan 31 03:47:37 crc kubenswrapper[4827]: I0131 03:47:37.673797 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4273172d-ec24-4540-85cb-efc58aed3421\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:46:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:46:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:46:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4146d39a1a819389ddaee9e11df66783f6b1b98ee5fb2a244d8ba9ff2ecc88f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a
8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:46:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6b86ba80bd25344b29df1636aa3d8cc4cecb0e5f9c0005b4896d3ee86cd80626\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:46:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8ae08b52cf4a0a6cc4589bbf0f0fbc8527ebac3455b90185928f6fa0b6c52874\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":
{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:46:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://125b31c980495c37837eedbbbd38b795fd6e568a429da5c2914799306e7b3a46\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cfc1fe8805b7b20bc4e718ed74d4c2f265e69ed3fdc328c0f1da81450bb78bde\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-31T03:47:02Z\\\",\\\"message\\\":\\\"W0131 03:46:51.337568 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0131 03:46:51.338014 1 crypto.go:601] Generating new CA for check-endpoints-signer@1769831211 cert, and key in /tmp/serving-cert-4099090867/serving-signer.crt, /tmp/serving-cert-4099090867/serving-signer.key\\\\nI0131 03:46:51.617433 1 observer_polling.go:159] Starting file observer\\\\nW0131 03:46:51.621205 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0131 03:46:51.621653 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0131 03:46:51.625463 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-4099090867/tls.crt::/tmp/serving-cert-4099090867/tls.key\\\\\\\"\\\\nF0131 03:47:02.085590 1 cmd.go:182] error initializing 
delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T03:46:51Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:47:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e3b2757893ba910c36404f09734ea47eee3ac5a8cfcc4c06ee4ed2ada48937b1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:46:51Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f40eed75b4eac164f95a4e1a719fe3e1da60abf85d533da313ceeb6f4fb07efd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"stat
e\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f40eed75b4eac164f95a4e1a719fe3e1da60abf85d533da313ceeb6f4fb07efd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T03:46:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T03:46:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T03:46:48Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:47:37Z is after 2025-08-24T17:21:41Z" Jan 31 03:47:37 crc kubenswrapper[4827]: I0131 03:47:37.683629 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-2shng" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cf80ec31-1f83-4ed6-84e3-055cf9c88bff\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:24Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:24Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hjlkt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hjlkt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T03:47:24Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-2shng\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:47:37Z is after 2025-08-24T17:21:41Z" Jan 31 03:47:37 crc kubenswrapper[4827]: I0131 03:47:37.695157 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:08Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:47:37Z is after 2025-08-24T17:21:41Z" Jan 31 03:47:37 crc kubenswrapper[4827]: I0131 03:47:37.710387 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:08Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:47:37Z is after 2025-08-24T17:21:41Z" Jan 31 03:47:37 crc kubenswrapper[4827]: I0131 03:47:37.712507 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:47:37 crc kubenswrapper[4827]: I0131 03:47:37.712581 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:47:37 crc kubenswrapper[4827]: I0131 03:47:37.712602 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:47:37 crc 
kubenswrapper[4827]: I0131 03:47:37.713009 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:47:37 crc kubenswrapper[4827]: I0131 03:47:37.713042 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:47:37Z","lastTransitionTime":"2026-01-31T03:47:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 03:47:37 crc kubenswrapper[4827]: I0131 03:47:37.721356 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-w7v8l" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c10db775-d306-4f15-97dd-b1dfed7c89e5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cbc16c20edbd99491eea6e34116e6014fd587c40800c87bff9643e9c0cb6e185\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7
f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:47:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4ch9g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T03:47:09Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-w7v8l\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:47:37Z is after 2025-08-24T17:21:41Z" Jan 31 03:47:37 crc kubenswrapper[4827]: I0131 03:47:37.732616 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-l5njv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f81e67c9-6345-48e1-91e3-794421cb3fdd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e761c28f2d53c1a50d6ce7467012664d9d9d717222b6fc648c642ad97057113f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:47:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rblt7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b967e5b38d5efa02b8cda05cf383b6aa9f24b
819d055b86d12afe5290d6d5847\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:47:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rblt7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T03:47:22Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-l5njv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:47:37Z is after 2025-08-24T17:21:41Z" Jan 31 03:47:37 crc kubenswrapper[4827]: I0131 03:47:37.816114 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:47:37 crc kubenswrapper[4827]: I0131 03:47:37.816183 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:47:37 crc kubenswrapper[4827]: I0131 03:47:37.816201 4827 kubelet_node_status.go:724] "Recording 
event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:47:37 crc kubenswrapper[4827]: I0131 03:47:37.816226 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:47:37 crc kubenswrapper[4827]: I0131 03:47:37.816246 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:47:37Z","lastTransitionTime":"2026-01-31T03:47:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 03:47:37 crc kubenswrapper[4827]: I0131 03:47:37.919217 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:47:37 crc kubenswrapper[4827]: I0131 03:47:37.919297 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:47:37 crc kubenswrapper[4827]: I0131 03:47:37.919322 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:47:37 crc kubenswrapper[4827]: I0131 03:47:37.919346 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:47:37 crc kubenswrapper[4827]: I0131 03:47:37.919362 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:47:37Z","lastTransitionTime":"2026-01-31T03:47:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 03:47:38 crc kubenswrapper[4827]: I0131 03:47:38.022758 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:47:38 crc kubenswrapper[4827]: I0131 03:47:38.022821 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:47:38 crc kubenswrapper[4827]: I0131 03:47:38.022840 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:47:38 crc kubenswrapper[4827]: I0131 03:47:38.022866 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:47:38 crc kubenswrapper[4827]: I0131 03:47:38.022912 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:47:38Z","lastTransitionTime":"2026-01-31T03:47:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 03:47:38 crc kubenswrapper[4827]: I0131 03:47:38.084173 4827 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-15 14:13:03.621455087 +0000 UTC Jan 31 03:47:38 crc kubenswrapper[4827]: I0131 03:47:38.109685 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 03:47:38 crc kubenswrapper[4827]: I0131 03:47:38.109789 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-2shng" Jan 31 03:47:38 crc kubenswrapper[4827]: I0131 03:47:38.109946 4827 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 31 03:47:38 crc kubenswrapper[4827]: E0131 03:47:38.109965 4827 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 31 03:47:38 crc kubenswrapper[4827]: E0131 03:47:38.110139 4827 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 31 03:47:38 crc kubenswrapper[4827]: E0131 03:47:38.110297 4827 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-2shng" podUID="cf80ec31-1f83-4ed6-84e3-055cf9c88bff" Jan 31 03:47:38 crc kubenswrapper[4827]: I0131 03:47:38.126549 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:47:38 crc kubenswrapper[4827]: I0131 03:47:38.126615 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:47:38 crc kubenswrapper[4827]: I0131 03:47:38.126633 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:47:38 crc kubenswrapper[4827]: I0131 03:47:38.126660 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:47:38 crc kubenswrapper[4827]: I0131 03:47:38.126679 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:47:38Z","lastTransitionTime":"2026-01-31T03:47:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 03:47:38 crc kubenswrapper[4827]: I0131 03:47:38.136941 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://65119775a6ab2e8f81a0443d8093e0570c2c49a1b081b7b694780d07eba7ef4d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:47:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0fdf29d063e0940532d696d183771e56ce37b7b75f82548bf7ff5165ab20ccff\\\",\\\
"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:47:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:47:38Z is after 2025-08-24T17:21:41Z" Jan 31 03:47:38 crc kubenswrapper[4827]: I0131 03:47:38.169542 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hj2zw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"da9e7773-a24b-4e8d-b479-97e2594db0d4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:09Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:09Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bb9ced0bc1beedf1c5779410e90fa61227f5df3de7fa950ab69e1ef44b4a4918\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:47:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mt4z5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2485bd5d17713b9b9184c6b56b1dfda19a229476eabd904d3eda350c30892753\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:47:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mt4z5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://70200918240e4bc9d8f7fcb1d95be3721faa9394058a1a9671da44d9b7a915e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:47:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mt4z5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://32e69930c4476bd3a2f767f012fb4a2e00de7d46da936c6f0d31029994108e48\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:47:12Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mt4z5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ae8d3d61d2579c0afddd6c9728935c463fec8760a9fc830473c905de5f40c44\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:47:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mt4z5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ed1f4882b03bd25a6073ed3ddeb73d5a0da61c8a8af3ebfab5c90a83ce6af80f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:47:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mt4z5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d34c4275238ceb97d664f71e8315fbfdff91be63b7babf108440f8f9e387173a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://df3bf53360747f9123455873f63ec6c0dedaa6e7743b25e0ebd796b4a5d0094b\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-31T03:47:21Z\\\",\\\"message\\\":\\\"/informers/factory.go:160\\\\nI0131 03:47:21.368788 6278 reflector.go:311] Stopping reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0131 03:47:21.368815 6278 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0131 03:47:21.368968 6278 reflector.go:311] Stopping 
reflector *v1.Pod (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0131 03:47:21.369298 6278 reflector.go:311] Stopping reflector *v1alpha1.AdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0131 03:47:21.369588 6278 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0131 03:47:21.369687 6278 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0131 03:47:21.369768 6278 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0131 03:47:21.369822 6278 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0131 03:47:21.369871 6278 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0131 03:47:21.369983 6278 factory.go:656] Stopping watch factory\\\\nI0131 03:47:21.370079 6278 ovnkube.go:599] Stopped ovnkube\\\\nI0131 03:47:21.369737 6278 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0131 03:47:21.370042 6278 handler.go:208] Removed *v1.Node event handler 7\\\\nI01\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T03:47:20Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d34c4275238ceb97d664f71e8315fbfdff91be63b7babf108440f8f9e387173a\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-31T03:47:37Z\\\",\\\"message\\\":\\\"o:140\\\\nI0131 03:47:37.117967 6495 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0131 03:47:37.117998 6495 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0131 03:47:37.118006 6495 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0131 03:47:37.118020 6495 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0131 03:47:37.118032 6495 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0131 03:47:37.118044 6495 
handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0131 03:47:37.118054 6495 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0131 03:47:37.118258 6495 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0131 03:47:37.118439 6495 reflector.go:311] Stopping reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0131 03:47:37.118523 6495 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0131 03:47:37.118541 6495 reflector.go:311] Stopping reflector *v1.EgressFirewall (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140\\\\nI0131 03:47:37.118959 6495 factory.go:656] Stopping \\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T03:47:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var
/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mt4z5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2a0cc10519a103d1d49acd3e239636f7ac9b3a9daa533ce1f485b26ccfffad3d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:47:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mt4z5\\\",\\
\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://96ea399662528dfc8c4bac4c2b710685cb5897e67739704548ca01353d72672c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://96ea399662528dfc8c4bac4c2b710685cb5897e67739704548ca01353d72672c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T03:47:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T03:47:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mt4z5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T03:47:09Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-hj2zw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:47:38Z is after 2025-08-24T17:21:41Z" Jan 31 03:47:38 crc kubenswrapper[4827]: I0131 03:47:38.186724 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-cl9c5" err="failed to 
patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"77746487-d08f-4da6-82a3-bc7d8845841a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5098b5f6a44c89953722806e420dc66eaad66f11faf1c5b526bfc8ebf87364d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:47:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5xwfq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\
\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T03:47:11Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-cl9c5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:47:38Z is after 2025-08-24T17:21:41Z" Jan 31 03:47:38 crc kubenswrapper[4827]: I0131 03:47:38.207537 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-jxh94" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e63dbb73-e1a2-4796-83c5-2a88e55566b5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://65bf7844256848a99420ff2a52724a942bd9ee07e0e587b818429ac544865232\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"qua
y.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:47:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-94gkn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://00b8856a5712a7470f8bc58fd07644c58113fde5a273faa89586f36381942c50\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:47:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-94gkn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T03:47:09Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-jxh94\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": 
failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:47:38Z is after 2025-08-24T17:21:41Z" Jan 31 03:47:38 crc kubenswrapper[4827]: I0131 03:47:38.230139 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:47:38 crc kubenswrapper[4827]: I0131 03:47:38.230203 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:47:38 crc kubenswrapper[4827]: I0131 03:47:38.230226 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:47:38 crc kubenswrapper[4827]: I0131 03:47:38.230255 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:47:38 crc kubenswrapper[4827]: I0131 03:47:38.230279 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:47:38Z","lastTransitionTime":"2026-01-31T03:47:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 03:47:38 crc kubenswrapper[4827]: I0131 03:47:38.231419 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-q9q8q" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a696063c-4553-4032-8038-9900f09d4031\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3b899def12de10c70999d453a62d09ed0eec7e6f059a55a7e3f8e4b3ef248205\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:47:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\
",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ckwx4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T03:47:09Z\\\"}}\" for pod \"openshift-multus\"/\"multus-q9q8q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:47:38Z 
is after 2025-08-24T17:21:41Z" Jan 31 03:47:38 crc kubenswrapper[4827]: I0131 03:47:38.236384 4827 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 31 03:47:38 crc kubenswrapper[4827]: I0131 03:47:38.247533 4827 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler/openshift-kube-scheduler-crc"] Jan 31 03:47:38 crc kubenswrapper[4827]: I0131 03:47:38.256995 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-gjc5t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b5dbff7a-4ed0-4c17-bd01-1888199225b3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cf2e9dfa02cc1d2d5e655654c4d731363a32dec4d67423160996c24260e04a79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\"
:\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:47:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fs4bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c615ec47f98d56124c11d46f6a3875b5718974234f752615411e14b5d8913dcc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c615ec47f98d56124c11d46f6a3875b5718974234f752615411e14b5d8913dcc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T03:47:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T03:47:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fs4bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c9d2a6506fd6b67003ff94a906cb0c74a7eefa9f389d5fe3929fcb743b877f8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:68
7fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c9d2a6506fd6b67003ff94a906cb0c74a7eefa9f389d5fe3929fcb743b877f8a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T03:47:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T03:47:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fs4bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f5b9fd1e8570701b512508eb10490fa6c4e5fdc63b7fdb0c4810896a46be65a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f5b9fd1e8570701b512508eb10490fa6c4e5fdc63b7fdb0c4810896a46be65a7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T03:47:13Z\\\",\\\"rea
son\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T03:47:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fs4bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b944c8ab4e6079ae5a7a5c52f9b336fc71aa3273ed087499ffa1cc5396053660\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b944c8ab4e6079ae5a7a5c52f9b336fc71aa3273ed087499ffa1cc5396053660\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T03:47:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T03:47:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fs4bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aac07d8c3c396081f637968cd1c64525967cd198273719be0af7bee01d35fb39\\\",\\\"image\\\":\\\"quay
.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aac07d8c3c396081f637968cd1c64525967cd198273719be0af7bee01d35fb39\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T03:47:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T03:47:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fs4bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://76e43eb316b4b459d9232b72620b1892bf11a896f007d5c879381d8349aa1178\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://76e43eb316b4b459d9232b72620b1892bf11a896f007d5c879381d8349aa1178\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T03:47:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T03:47:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"
mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fs4bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T03:47:09Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-gjc5t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:47:38Z is after 2025-08-24T17:21:41Z" Jan 31 03:47:38 crc kubenswrapper[4827]: I0131 03:47:38.275437 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4273172d-ec24-4540-85cb-efc58aed3421\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:46:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:46:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:46:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4146d39a1a819389ddaee9e11df66783f6b1b98ee5fb2a244d8ba9ff2ecc88f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:46:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6b86ba80bd25344b29df1636aa3d8cc4cecb0e5f9c0005b4896d3ee86cd80626\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:46:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8ae08b52cf4a0a6cc4589bbf0f0fbc8527ebac3455b90185928f6fa0b6c52874\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:46:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://125b31c980495c37837eedbbbd38b795fd6e568a429da5c2914799306e7b3a46\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cfc1fe8805b7b20bc4e718ed74d4c2f265e69ed3fdc328c0f1da81450bb78bde\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-31T03:47:02Z\\\"
,\\\"message\\\":\\\"W0131 03:46:51.337568 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0131 03:46:51.338014 1 crypto.go:601] Generating new CA for check-endpoints-signer@1769831211 cert, and key in /tmp/serving-cert-4099090867/serving-signer.crt, /tmp/serving-cert-4099090867/serving-signer.key\\\\nI0131 03:46:51.617433 1 observer_polling.go:159] Starting file observer\\\\nW0131 03:46:51.621205 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0131 03:46:51.621653 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0131 03:46:51.625463 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-4099090867/tls.crt::/tmp/serving-cert-4099090867/tls.key\\\\\\\"\\\\nF0131 03:47:02.085590 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake 
timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T03:46:51Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:47:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e3b2757893ba910c36404f09734ea47eee3ac5a8cfcc4c06ee4ed2ada48937b1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:46:51Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f40eed75b4eac164f95a4e1a719fe3e1da60abf85d533da313ceeb6f4fb07efd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f40eed75b4eac164f95a4e1a719fe3e1da60abf85d533da313ceeb6f4fb07efd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T03:46:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"s
tartedAt\\\":\\\"2026-01-31T03:46:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T03:46:48Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:47:38Z is after 2025-08-24T17:21:41Z" Jan 31 03:47:38 crc kubenswrapper[4827]: I0131 03:47:38.292720 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-l5njv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f81e67c9-6345-48e1-91e3-794421cb3fdd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e761c28f2d53c1a50d6ce7467012664d9d9d717222b6fc648c642ad97057113f\\\",\\\"image\
\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:47:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rblt7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b967e5b38d5efa02b8cda05cf383b6aa9f24b819d055b86d12afe5290d6d5847\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:47:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rblt7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"
192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T03:47:22Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-l5njv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:47:38Z is after 2025-08-24T17:21:41Z" Jan 31 03:47:38 crc kubenswrapper[4827]: I0131 03:47:38.308798 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-2shng" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cf80ec31-1f83-4ed6-84e3-055cf9c88bff\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:24Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:24Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hjlkt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hjlkt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T03:47:24Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-2shng\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:47:38Z is after 2025-08-24T17:21:41Z" Jan 31 03:47:38 crc 
kubenswrapper[4827]: I0131 03:47:38.330534 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:08Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:47:38Z is after 2025-08-24T17:21:41Z" Jan 31 03:47:38 crc kubenswrapper[4827]: I0131 03:47:38.334110 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:47:38 crc kubenswrapper[4827]: I0131 03:47:38.334160 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:47:38 crc kubenswrapper[4827]: I0131 03:47:38.334179 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:47:38 crc kubenswrapper[4827]: I0131 03:47:38.334204 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:47:38 crc kubenswrapper[4827]: I0131 03:47:38.334222 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:47:38Z","lastTransitionTime":"2026-01-31T03:47:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 03:47:38 crc kubenswrapper[4827]: I0131 03:47:38.352874 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:08Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:47:38Z is after 2025-08-24T17:21:41Z" Jan 31 03:47:38 crc kubenswrapper[4827]: I0131 03:47:38.372002 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-w7v8l" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c10db775-d306-4f15-97dd-b1dfed7c89e5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cbc16c20edbd99491eea6e34116e6014fd587c40800c87bff9643e9c0cb6e185\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:47:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4ch9g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T03:47:09Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-w7v8l\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:47:38Z is after 2025-08-24T17:21:41Z" Jan 31 03:47:38 crc kubenswrapper[4827]: I0131 03:47:38.392937 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:11Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:11Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://955a3f09f24522656e31cfabe08613a5129fe3d1d0693756697e3ef0469e062a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:47:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-al
erter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:47:38Z is after 2025-08-24T17:21:41Z" Jan 31 03:47:38 crc kubenswrapper[4827]: I0131 03:47:38.413728 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"64294dab-2843-4893-8b19-59f3ae404e02\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:46:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:46:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:46:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ff49646239dea28804978f1b3b721dcc9454572ff2301d0c92e146dacb2a3e5a\\\",\\\"image\\\":\\\"quay.io/opensh
ift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:46:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fb9a8fcde75a532b1906ceea4704ddfecbffeb17456377272e5792f200ee67d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:46:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://73a7c7196fc77ac970ea3114b593db0c9bef943c760526721ab5492975884237\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\
\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:46:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ce197f7aa8a7bebdecdbd760e9029786331bd9497f724b0a57f57278a5b1b04c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:46:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T03:46:48Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:47:38Z is after 2025-08-24T17:21:41Z" Jan 31 03:47:38 crc kubenswrapper[4827]: I0131 03:47:38.433454 4827 status_manager.go:875] "Failed to update 
status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:08Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:47:38Z is after 2025-08-24T17:21:41Z" Jan 31 03:47:38 crc kubenswrapper[4827]: I0131 03:47:38.437873 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:47:38 crc kubenswrapper[4827]: I0131 03:47:38.437940 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:47:38 crc kubenswrapper[4827]: I0131 03:47:38.437954 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:47:38 crc kubenswrapper[4827]: I0131 03:47:38.437976 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:47:38 crc kubenswrapper[4827]: I0131 03:47:38.437992 4827 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:47:38Z","lastTransitionTime":"2026-01-31T03:47:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 03:47:38 crc kubenswrapper[4827]: I0131 03:47:38.456613 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a0fbdd57c20e0d5bda95b44f22f117b07306ef48d75d372d958559048474af9b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:47:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\
\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:47:38Z is after 2025-08-24T17:21:41Z" Jan 31 03:47:38 crc kubenswrapper[4827]: I0131 03:47:38.479441 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4273172d-ec24-4540-85cb-efc58aed3421\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:46:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:46:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:46:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4146d39a1a819389ddaee9e11df66783f6b1b98ee5fb2a244d8ba9ff2ecc88f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d3
4720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:46:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6b86ba80bd25344b29df1636aa3d8cc4cecb0e5f9c0005b4896d3ee86cd80626\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:46:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8ae08b52cf4a0a6cc4589bbf0f0fbc8527ebac3455b90185928f6fa0b6c52874\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":tru
e,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:46:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://125b31c980495c37837eedbbbd38b795fd6e568a429da5c2914799306e7b3a46\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cfc1fe8805b7b20bc4e718ed74d4c2f265e69ed3fdc328c0f1da81450bb78bde\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-31T03:47:02Z\\\",\\\"message\\\":\\\"W0131 03:46:51.337568 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0131 03:46:51.338014 1 crypto.go:601] Generating new CA for check-endpoints-signer@1769831211 cert, and key in /tmp/serving-cert-4099090867/serving-signer.crt, /tmp/serving-cert-4099090867/serving-signer.key\\\\nI0131 03:46:51.617433 1 observer_polling.go:159] Starting file observer\\\\nW0131 03:46:51.621205 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0131 03:46:51.621653 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0131 03:46:51.625463 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-4099090867/tls.crt::/tmp/serving-cert-4099090867/tls.key\\\\\\\"\\\\nF0131 03:47:02.085590 1 cmd.go:182] error 
initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T03:46:51Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:47:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e3b2757893ba910c36404f09734ea47eee3ac5a8cfcc4c06ee4ed2ada48937b1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:46:51Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f40eed75b4eac164f95a4e1a719fe3e1da60abf85d533da313ceeb6f4fb07efd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":f
alse,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f40eed75b4eac164f95a4e1a719fe3e1da60abf85d533da313ceeb6f4fb07efd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T03:46:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T03:46:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T03:46:48Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:47:38Z is after 2025-08-24T17:21:41Z" Jan 31 03:47:38 crc kubenswrapper[4827]: I0131 03:47:38.492229 4827 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-hj2zw_da9e7773-a24b-4e8d-b479-97e2594db0d4/ovnkube-controller/2.log" Jan 31 03:47:38 crc kubenswrapper[4827]: I0131 03:47:38.499222 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"350d4093-0635-4787-bdf3-9b099c5a772b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:46:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:46:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:46:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://19a6d75598a153bb0745bb06aa53961f13fa3ff1fd4addb84b3ecdc66d07ee4e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:46:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3d0070a036855a06f83caec5b6ee636f1a7a1e7f3246b779e0abbcf6aebf259a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:46:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7b5f30db796321372897f564fda3a031f0714999315931e2a77b21fde674f4fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:46:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6f17307ab85479e60722f1f8484fb415c3609584ed9a20ebf17677a86558dc07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://6f17307ab85479e60722f1f8484fb415c3609584ed9a20ebf17677a86558dc07\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T03:46:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T03:46:49Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T03:46:48Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:47:38Z is after 2025-08-24T17:21:41Z" Jan 31 03:47:38 crc kubenswrapper[4827]: I0131 03:47:38.499294 4827 scope.go:117] "RemoveContainer" containerID="d34c4275238ceb97d664f71e8315fbfdff91be63b7babf108440f8f9e387173a" Jan 31 03:47:38 crc kubenswrapper[4827]: E0131 03:47:38.499751 4827 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-hj2zw_openshift-ovn-kubernetes(da9e7773-a24b-4e8d-b479-97e2594db0d4)\"" pod="openshift-ovn-kubernetes/ovnkube-node-hj2zw" podUID="da9e7773-a24b-4e8d-b479-97e2594db0d4" Jan 31 03:47:38 crc kubenswrapper[4827]: I0131 03:47:38.519799 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-jxh94" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e63dbb73-e1a2-4796-83c5-2a88e55566b5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://65bf7844256848a99420ff2a52724a942bd9ee07e0e587b818429ac544865232\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:47:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-94gkn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://00b8856a5712a7470f8bc58fd07644c58113fde5
a273faa89586f36381942c50\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:47:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-94gkn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T03:47:09Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-jxh94\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:47:38Z is after 2025-08-24T17:21:41Z" Jan 31 03:47:38 crc kubenswrapper[4827]: I0131 03:47:38.540570 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-q9q8q" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a696063c-4553-4032-8038-9900f09d4031\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3b899def12de10c70999d453a62d09ed0eec7e6f059a55a7e3f8e4b3ef248205\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:47:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ckwx4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T03:47:09Z\\\"}}\" for pod \"openshift-multus\"/\"multus-q9q8q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:47:38Z is after 2025-08-24T17:21:41Z" Jan 31 03:47:38 crc kubenswrapper[4827]: I0131 03:47:38.540996 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:47:38 crc 
kubenswrapper[4827]: I0131 03:47:38.541093 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:47:38 crc kubenswrapper[4827]: I0131 03:47:38.541140 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:47:38 crc kubenswrapper[4827]: I0131 03:47:38.541161 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:47:38 crc kubenswrapper[4827]: I0131 03:47:38.541172 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:47:38Z","lastTransitionTime":"2026-01-31T03:47:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 03:47:38 crc kubenswrapper[4827]: I0131 03:47:38.563928 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-gjc5t" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b5dbff7a-4ed0-4c17-bd01-1888199225b3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cf2e9dfa02cc1d2d5e655654c4d731363a32dec4d67423160996c24260e04a79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:47:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fs4bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c615ec47f98d56124c11d46f6a3875b5718974234f752615411e14b5d8913dcc\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c615ec47f98d56124c11d46f6a3875b5718974234f752615411e14b5d8913dcc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T03:47:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T03:47:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fs4bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c9d2a6506fd6b67003ff94a906cb0c74a7eefa9f389d5fe3929fcb743b877f8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c9d2a6506fd6b67003ff94a906cb0c74a7eefa9f389d5fe3929fcb743b877f8a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T03:47:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T03:47:11Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fs4bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f5b9fd1e8570701b512508eb10490fa6c4e5fdc63b7fdb0c4810896a46be65a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f5b9fd1e8570701b512508eb10490fa6c4e5fdc63b7fdb0c4810896a46be65a7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T03:47:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T03:47:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fs4bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b944c
8ab4e6079ae5a7a5c52f9b336fc71aa3273ed087499ffa1cc5396053660\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b944c8ab4e6079ae5a7a5c52f9b336fc71aa3273ed087499ffa1cc5396053660\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T03:47:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T03:47:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fs4bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aac07d8c3c396081f637968cd1c64525967cd198273719be0af7bee01d35fb39\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aac07d8c3c396081f637968cd1c64525967cd198273719be0af7bee01d35fb39\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T03:47:16Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-01-31T03:47:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fs4bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://76e43eb316b4b459d9232b72620b1892bf11a896f007d5c879381d8349aa1178\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://76e43eb316b4b459d9232b72620b1892bf11a896f007d5c879381d8349aa1178\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T03:47:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T03:47:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fs4bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T03:47:09Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-gjc5t\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:47:38Z is after 2025-08-24T17:21:41Z" Jan 31 03:47:38 crc kubenswrapper[4827]: I0131 03:47:38.583024 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:08Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:47:38Z is after 2025-08-24T17:21:41Z" Jan 31 03:47:38 crc kubenswrapper[4827]: I0131 03:47:38.604160 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:08Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:47:38Z is after 2025-08-24T17:21:41Z" Jan 31 03:47:38 crc kubenswrapper[4827]: I0131 03:47:38.621688 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-w7v8l" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c10db775-d306-4f15-97dd-b1dfed7c89e5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cbc16c20edbd99491eea6e34116e6014fd587c40800c87bff9643e9c0cb6e185\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:47:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4ch9g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T03:47:09Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-w7v8l\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:47:38Z is after 2025-08-24T17:21:41Z" Jan 31 03:47:38 crc kubenswrapper[4827]: I0131 03:47:38.642445 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-l5njv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f81e67c9-6345-48e1-91e3-794421cb3fdd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e761c28f2d53c1a50d6ce7467012664d9d9d717222b6fc648c642ad97057113f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d7
93426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:47:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rblt7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b967e5b38d5efa02b8cda05cf383b6aa9f24b819d055b86d12afe5290d6d5847\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:47:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rblt7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T03:47:22Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-l5njv\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:47:38Z is after 2025-08-24T17:21:41Z" Jan 31 03:47:38 crc kubenswrapper[4827]: I0131 03:47:38.645661 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:47:38 crc kubenswrapper[4827]: I0131 03:47:38.645718 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:47:38 crc kubenswrapper[4827]: I0131 03:47:38.645741 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:47:38 crc kubenswrapper[4827]: I0131 03:47:38.645767 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:47:38 crc kubenswrapper[4827]: I0131 03:47:38.645785 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:47:38Z","lastTransitionTime":"2026-01-31T03:47:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 03:47:38 crc kubenswrapper[4827]: I0131 03:47:38.662437 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-2shng" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cf80ec31-1f83-4ed6-84e3-055cf9c88bff\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:24Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:24Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hjlkt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hjlkt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T03:47:24Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-2shng\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:47:38Z is after 2025-08-24T17:21:41Z" Jan 31 03:47:38 crc 
kubenswrapper[4827]: I0131 03:47:38.683177 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"64294dab-2843-4893-8b19-59f3ae404e02\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:46:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:46:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:46:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ff49646239dea28804978f1b3b721dcc9454572ff2301d0c92e146dacb2a3e5a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:46:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fb9a8fcde75a532b1906ceea4704ddfecbffeb17456377272e5792f200ee67d5\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:46:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://73a7c7196fc77ac970ea3114b593db0c9bef943c760526721ab5492975884237\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:46:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ce197f7aa8a7bebdecdbd760e9029786331bd9497f724b0a57f57278a5b1b04c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17
ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:46:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T03:46:48Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:47:38Z is after 2025-08-24T17:21:41Z" Jan 31 03:47:38 crc kubenswrapper[4827]: I0131 03:47:38.703355 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:08Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:47:38Z is after 2025-08-24T17:21:41Z" Jan 31 03:47:38 crc kubenswrapper[4827]: I0131 03:47:38.724074 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a0fbdd57c20e0d5bda95b44f22f117b07306ef48d75d372d958559048474af9b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:47:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-31T03:47:38Z is after 2025-08-24T17:21:41Z" Jan 31 03:47:38 crc kubenswrapper[4827]: I0131 03:47:38.743762 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:11Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:11Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://955a3f09f24522656e31cfabe08613a5129fe3d1d0693756697e3ef0469e062a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:47:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\
\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:47:38Z is after 2025-08-24T17:21:41Z" Jan 31 03:47:38 crc kubenswrapper[4827]: I0131 03:47:38.752334 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:47:38 crc kubenswrapper[4827]: I0131 03:47:38.752597 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:47:38 crc kubenswrapper[4827]: I0131 03:47:38.752686 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:47:38 crc kubenswrapper[4827]: I0131 03:47:38.752792 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:47:38 crc kubenswrapper[4827]: I0131 03:47:38.752874 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:47:38Z","lastTransitionTime":"2026-01-31T03:47:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 03:47:38 crc kubenswrapper[4827]: I0131 03:47:38.764084 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://65119775a6ab2e8f81a0443d8093e0570c2c49a1b081b7b694780d07eba7ef4d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:47:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0fdf29d063e0940532d696d183771e56ce37b7b75f82548bf7ff5165ab20ccff\\\",\\\
"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:47:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:47:38Z is after 2025-08-24T17:21:41Z" Jan 31 03:47:38 crc kubenswrapper[4827]: I0131 03:47:38.787581 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hj2zw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"da9e7773-a24b-4e8d-b479-97e2594db0d4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:09Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:09Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bb9ced0bc1beedf1c5779410e90fa61227f5df3de7fa950ab69e1ef44b4a4918\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:47:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mt4z5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2485bd5d17713b9b9184c6b56b1dfda19a229476eabd904d3eda350c30892753\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:47:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mt4z5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://70200918240e4bc9d8f7fcb1d95be3721faa9394058a1a9671da44d9b7a915e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:47:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mt4z5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://32e69930c4476bd3a2f767f012fb4a2e00de7d46da936c6f0d31029994108e48\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:47:12Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mt4z5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ae8d3d61d2579c0afddd6c9728935c463fec8760a9fc830473c905de5f40c44\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:47:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mt4z5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ed1f4882b03bd25a6073ed3ddeb73d5a0da61c8a8af3ebfab5c90a83ce6af80f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:47:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mt4z5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d34c4275238ceb97d664f71e8315fbfdff91be63b7babf108440f8f9e387173a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://df3bf53360747f9123455873f63ec6c0dedaa6e7743b25e0ebd796b4a5d0094b\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-31T03:47:21Z\\\",\\\"message\\\":\\\"/informers/factory.go:160\\\\nI0131 03:47:21.368788 6278 reflector.go:311] Stopping reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0131 03:47:21.368815 6278 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0131 03:47:21.368968 6278 reflector.go:311] Stopping 
reflector *v1.Pod (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0131 03:47:21.369298 6278 reflector.go:311] Stopping reflector *v1alpha1.AdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0131 03:47:21.369588 6278 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0131 03:47:21.369687 6278 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0131 03:47:21.369768 6278 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0131 03:47:21.369822 6278 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0131 03:47:21.369871 6278 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0131 03:47:21.369983 6278 factory.go:656] Stopping watch factory\\\\nI0131 03:47:21.370079 6278 ovnkube.go:599] Stopped ovnkube\\\\nI0131 03:47:21.369737 6278 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0131 03:47:21.370042 6278 handler.go:208] Removed *v1.Node event handler 7\\\\nI01\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T03:47:20Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d34c4275238ceb97d664f71e8315fbfdff91be63b7babf108440f8f9e387173a\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-31T03:47:37Z\\\",\\\"message\\\":\\\"o:140\\\\nI0131 03:47:37.117967 6495 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0131 03:47:37.117998 6495 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0131 03:47:37.118006 6495 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0131 03:47:37.118020 6495 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0131 03:47:37.118032 6495 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0131 03:47:37.118044 6495 
handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0131 03:47:37.118054 6495 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0131 03:47:37.118258 6495 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0131 03:47:37.118439 6495 reflector.go:311] Stopping reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0131 03:47:37.118523 6495 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0131 03:47:37.118541 6495 reflector.go:311] Stopping reflector *v1.EgressFirewall (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140\\\\nI0131 03:47:37.118959 6495 factory.go:656] Stopping \\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T03:47:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var
/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mt4z5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2a0cc10519a103d1d49acd3e239636f7ac9b3a9daa533ce1f485b26ccfffad3d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:47:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mt4z5\\\",\\
\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://96ea399662528dfc8c4bac4c2b710685cb5897e67739704548ca01353d72672c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://96ea399662528dfc8c4bac4c2b710685cb5897e67739704548ca01353d72672c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T03:47:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T03:47:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mt4z5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T03:47:09Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-hj2zw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:47:38Z is after 2025-08-24T17:21:41Z" Jan 31 03:47:38 crc kubenswrapper[4827]: I0131 03:47:38.799583 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-cl9c5" err="failed to 
patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"77746487-d08f-4da6-82a3-bc7d8845841a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5098b5f6a44c89953722806e420dc66eaad66f11faf1c5b526bfc8ebf87364d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:47:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5xwfq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\
\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T03:47:11Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-cl9c5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:47:38Z is after 2025-08-24T17:21:41Z" Jan 31 03:47:38 crc kubenswrapper[4827]: I0131 03:47:38.813299 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-2shng" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cf80ec31-1f83-4ed6-84e3-055cf9c88bff\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:24Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:24Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hjlkt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hjlkt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T03:47:24Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-2shng\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:47:38Z is after 2025-08-24T17:21:41Z" Jan 31 03:47:38 crc 
kubenswrapper[4827]: I0131 03:47:38.826544 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:08Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:47:38Z is after 2025-08-24T17:21:41Z" Jan 31 03:47:38 crc kubenswrapper[4827]: I0131 03:47:38.841318 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:08Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:47:38Z is after 2025-08-24T17:21:41Z" Jan 31 03:47:38 crc kubenswrapper[4827]: I0131 03:47:38.853855 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-w7v8l" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c10db775-d306-4f15-97dd-b1dfed7c89e5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cbc16c20edbd99491eea6e34116e6014fd587c40800c87bff9643e9c0cb6e185\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:47:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4ch9g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T03:47:09Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-w7v8l\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:47:38Z is after 2025-08-24T17:21:41Z" Jan 31 03:47:38 crc kubenswrapper[4827]: I0131 03:47:38.857516 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:47:38 crc kubenswrapper[4827]: I0131 03:47:38.857558 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:47:38 crc kubenswrapper[4827]: I0131 03:47:38.857574 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:47:38 crc kubenswrapper[4827]: I0131 03:47:38.857599 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:47:38 crc kubenswrapper[4827]: I0131 03:47:38.857619 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:47:38Z","lastTransitionTime":"2026-01-31T03:47:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 03:47:38 crc kubenswrapper[4827]: I0131 03:47:38.867858 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-l5njv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f81e67c9-6345-48e1-91e3-794421cb3fdd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e761c28f2d53c1a50d6ce7467012664d9d9d717222b6fc648c642ad97057113f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:47:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled
\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rblt7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b967e5b38d5efa02b8cda05cf383b6aa9f24b819d055b86d12afe5290d6d5847\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:47:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rblt7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T03:47:22Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-l5njv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:47:38Z is after 2025-08-24T17:21:41Z" Jan 31 03:47:38 crc kubenswrapper[4827]: I0131 03:47:38.880692 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"64294dab-2843-4893-8b19-59f3ae404e02\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:46:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:46:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:46:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ff49646239dea28804978f1b3b721dcc9454572ff2301d0c92e146dacb2a3e5a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:46:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fb9a8fcde75a532b1906ceea4704ddfecbffeb17456377272e5792f200ee67d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/ope
nshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:46:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://73a7c7196fc77ac970ea3114b593db0c9bef943c760526721ab5492975884237\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:46:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ce197f7aa8a7bebdecdbd760e9029786331bd9497f724b0a57f57278a5b1b04c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{
\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:46:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T03:46:48Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:47:38Z is after 2025-08-24T17:21:41Z" Jan 31 03:47:38 crc kubenswrapper[4827]: I0131 03:47:38.893792 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:08Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:47:38Z is after 2025-08-24T17:21:41Z" Jan 31 03:47:38 crc kubenswrapper[4827]: I0131 03:47:38.908079 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a0fbdd57c20e0d5bda95b44f22f117b07306ef48d75d372d958559048474af9b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:47:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-31T03:47:38Z is after 2025-08-24T17:21:41Z" Jan 31 03:47:38 crc kubenswrapper[4827]: I0131 03:47:38.919859 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:11Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:11Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://955a3f09f24522656e31cfabe08613a5129fe3d1d0693756697e3ef0469e062a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:47:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\
\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:47:38Z is after 2025-08-24T17:21:41Z" Jan 31 03:47:38 crc kubenswrapper[4827]: I0131 03:47:38.936320 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://65119775a6ab2e8f81a0443d8093e0570c2c49a1b081b7b694780d07eba7ef4d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:47:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\
\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0fdf29d063e0940532d696d183771e56ce37b7b75f82548bf7ff5165ab20ccff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:47:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:47:38Z is after 2025-08-24T17:21:41Z" Jan 31 03:47:38 crc kubenswrapper[4827]: I0131 03:47:38.958781 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hj2zw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"da9e7773-a24b-4e8d-b479-97e2594db0d4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:09Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:09Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bb9ced0bc1beedf1c5779410e90fa61227f5df3de7fa950ab69e1ef44b4a4918\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:47:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mt4z5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2485bd5d17713b9b9184c6b56b1dfda19a229476eabd904d3eda350c30892753\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:47:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mt4z5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://70200918240e4bc9d8f7fcb1d95be3721faa9394058a1a9671da44d9b7a915e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:47:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mt4z5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://32e69930c4476bd3a2f767f012fb4a2e00de7d46da936c6f0d31029994108e48\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:47:12Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mt4z5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ae8d3d61d2579c0afddd6c9728935c463fec8760a9fc830473c905de5f40c44\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:47:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mt4z5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ed1f4882b03bd25a6073ed3ddeb73d5a0da61c8a8af3ebfab5c90a83ce6af80f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:47:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mt4z5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d34c4275238ceb97d664f71e8315fbfdff91be63b7babf108440f8f9e387173a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d34c4275238ceb97d664f71e8315fbfdff91be63b7babf108440f8f9e387173a\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-31T03:47:37Z\\\",\\\"message\\\":\\\"o:140\\\\nI0131 03:47:37.117967 6495 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0131 03:47:37.117998 6495 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0131 03:47:37.118006 6495 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0131 03:47:37.118020 6495 
reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0131 03:47:37.118032 6495 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0131 03:47:37.118044 6495 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0131 03:47:37.118054 6495 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0131 03:47:37.118258 6495 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0131 03:47:37.118439 6495 reflector.go:311] Stopping reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0131 03:47:37.118523 6495 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0131 03:47:37.118541 6495 reflector.go:311] Stopping reflector *v1.EgressFirewall (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140\\\\nI0131 03:47:37.118959 6495 factory.go:656] Stopping \\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T03:47:36Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-hj2zw_openshift-ovn-kubernetes(da9e7773-a24b-4e8d-b479-97e2594db0d4)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mt4z5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2a0cc10519a103d1d49acd3e239636f7ac9b3a9daa533ce1f485b26ccfffad3d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:47:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mt4z5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://96ea399662528dfc8c4bac4c2b710685cb5897e67739704548ca01353d72672c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://96ea399662528dfc8c
4bac4c2b710685cb5897e67739704548ca01353d72672c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T03:47:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T03:47:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mt4z5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T03:47:09Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-hj2zw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:47:38Z is after 2025-08-24T17:21:41Z" Jan 31 03:47:38 crc kubenswrapper[4827]: I0131 03:47:38.960513 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:47:38 crc kubenswrapper[4827]: I0131 03:47:38.960548 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:47:38 crc kubenswrapper[4827]: I0131 03:47:38.960559 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:47:38 crc kubenswrapper[4827]: I0131 03:47:38.960577 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:47:38 crc kubenswrapper[4827]: I0131 03:47:38.960586 4827 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:47:38Z","lastTransitionTime":"2026-01-31T03:47:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 03:47:38 crc kubenswrapper[4827]: I0131 03:47:38.970170 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-cl9c5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"77746487-d08f-4da6-82a3-bc7d8845841a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5098b5f6a44c89953722806e420dc66eaad66f11faf1c5b526bfc8ebf87364d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":tr
ue,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:47:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5xwfq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T03:47:11Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-cl9c5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:47:38Z is after 2025-08-24T17:21:41Z" Jan 31 03:47:38 crc kubenswrapper[4827]: I0131 03:47:38.983260 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-jxh94" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e63dbb73-e1a2-4796-83c5-2a88e55566b5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://65bf7844256848a99420ff2a52724a942bd9ee07e0e587b818429ac544865232\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:47:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-94gkn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://00b8856a5712a7470f8bc58fd07644c58113fde5
a273faa89586f36381942c50\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:47:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-94gkn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T03:47:09Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-jxh94\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:47:38Z is after 2025-08-24T17:21:41Z" Jan 31 03:47:38 crc kubenswrapper[4827]: I0131 03:47:38.997274 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-q9q8q" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a696063c-4553-4032-8038-9900f09d4031\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3b899def12de10c70999d453a62d09ed0eec7e6f059a55a7e3f8e4b3ef248205\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:47:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ckwx4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T03:47:09Z\\\"}}\" for pod \"openshift-multus\"/\"multus-q9q8q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:47:38Z is after 2025-08-24T17:21:41Z" Jan 31 03:47:39 crc kubenswrapper[4827]: I0131 03:47:39.020374 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-gjc5t" err="failed 
to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b5dbff7a-4ed0-4c17-bd01-1888199225b3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cf2e9dfa02cc1d2d5e655654c4d731363a32dec4d67423160996c24260e04a79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:47:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fs4bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c615ec47f98d56124c11d46f6a3875b5718974234f7526154
11e14b5d8913dcc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c615ec47f98d56124c11d46f6a3875b5718974234f752615411e14b5d8913dcc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T03:47:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T03:47:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fs4bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c9d2a6506fd6b67003ff94a906cb0c74a7eefa9f389d5fe3929fcb743b877f8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c9d2a6506fd6b67003ff94a906cb0c74a7eefa9f389d5fe3929fcb743b877f8a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T03:47:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T03:
47:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fs4bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f5b9fd1e8570701b512508eb10490fa6c4e5fdc63b7fdb0c4810896a46be65a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f5b9fd1e8570701b512508eb10490fa6c4e5fdc63b7fdb0c4810896a46be65a7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T03:47:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T03:47:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fs4bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\
\\"cri-o://b944c8ab4e6079ae5a7a5c52f9b336fc71aa3273ed087499ffa1cc5396053660\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b944c8ab4e6079ae5a7a5c52f9b336fc71aa3273ed087499ffa1cc5396053660\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T03:47:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T03:47:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fs4bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aac07d8c3c396081f637968cd1c64525967cd198273719be0af7bee01d35fb39\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aac07d8c3c396081f637968cd1c64525967cd198273719be0af7bee01d35fb39\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T03:47:16Z\\\",\\\"r
eason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T03:47:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fs4bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://76e43eb316b4b459d9232b72620b1892bf11a896f007d5c879381d8349aa1178\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://76e43eb316b4b459d9232b72620b1892bf11a896f007d5c879381d8349aa1178\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T03:47:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T03:47:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fs4bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T03:47:09Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-gjc5t\": Internal error 
occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:47:39Z is after 2025-08-24T17:21:41Z" Jan 31 03:47:39 crc kubenswrapper[4827]: I0131 03:47:39.038823 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4273172d-ec24-4540-85cb-efc58aed3421\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:46:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:46:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:46:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4146d39a1a819389ddaee9e11df66783f6b1b98ee5fb2a244d8ba9ff2ecc88f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:46:50Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6b86ba80bd25344b29df1636aa3d8cc4cecb0e5f9c0005b4896d3ee86cd80626\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:46:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8ae08b52cf4a0a6cc4589bbf0f0fbc8527ebac3455b90185928f6fa0b6c52874\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:46:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://125b31c980495c37837eedbbbd38b795fd6e568a429da5c2914799306e7b3a46\\\",\\\"image\\\":\\
\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cfc1fe8805b7b20bc4e718ed74d4c2f265e69ed3fdc328c0f1da81450bb78bde\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-31T03:47:02Z\\\",\\\"message\\\":\\\"W0131 03:46:51.337568 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0131 03:46:51.338014 1 crypto.go:601] Generating new CA for check-endpoints-signer@1769831211 cert, and key in /tmp/serving-cert-4099090867/serving-signer.crt, /tmp/serving-cert-4099090867/serving-signer.key\\\\nI0131 03:46:51.617433 1 observer_polling.go:159] Starting file observer\\\\nW0131 03:46:51.621205 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0131 03:46:51.621653 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0131 03:46:51.625463 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-4099090867/tls.crt::/tmp/serving-cert-4099090867/tls.key\\\\\\\"\\\\nF0131 03:47:02.085590 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake 
timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T03:46:51Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:47:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e3b2757893ba910c36404f09734ea47eee3ac5a8cfcc4c06ee4ed2ada48937b1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:46:51Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f40eed75b4eac164f95a4e1a719fe3e1da60abf85d533da313ceeb6f4fb07efd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f40eed75b4eac164f95a4e1a719fe3e1da60abf85d533da313ceeb6f4fb07efd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T03:46:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"s
tartedAt\\\":\\\"2026-01-31T03:46:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T03:46:48Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:47:39Z is after 2025-08-24T17:21:41Z" Jan 31 03:47:39 crc kubenswrapper[4827]: I0131 03:47:39.056553 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"350d4093-0635-4787-bdf3-9b099c5a772b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:46:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:46:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:46:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://19a6d75598a153bb0745bb06aa53961f13fa3ff1fd4addb84b3ecdc66d07ee4e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d3472024
3b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:46:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3d0070a036855a06f83caec5b6ee636f1a7a1e7f3246b779e0abbcf6aebf259a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:46:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7b5f30db796321372897f564fda3a031f0714999315931e2a77b21fde674f4fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01
-31T03:46:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6f17307ab85479e60722f1f8484fb415c3609584ed9a20ebf17677a86558dc07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6f17307ab85479e60722f1f8484fb415c3609584ed9a20ebf17677a86558dc07\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T03:46:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T03:46:49Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T03:46:48Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:47:39Z is after 2025-08-24T17:21:41Z" Jan 31 03:47:39 crc kubenswrapper[4827]: I0131 03:47:39.064454 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:47:39 crc kubenswrapper[4827]: I0131 03:47:39.064748 4827 kubelet_node_status.go:724] "Recording 
event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:47:39 crc kubenswrapper[4827]: I0131 03:47:39.065013 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:47:39 crc kubenswrapper[4827]: I0131 03:47:39.065290 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:47:39 crc kubenswrapper[4827]: I0131 03:47:39.065543 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:47:39Z","lastTransitionTime":"2026-01-31T03:47:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 03:47:39 crc kubenswrapper[4827]: I0131 03:47:39.085341 4827 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-08 03:33:54.83698187 +0000 UTC Jan 31 03:47:39 crc kubenswrapper[4827]: I0131 03:47:39.109945 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 31 03:47:39 crc kubenswrapper[4827]: E0131 03:47:39.110635 4827 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 31 03:47:39 crc kubenswrapper[4827]: I0131 03:47:39.170094 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:47:39 crc kubenswrapper[4827]: I0131 03:47:39.170151 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:47:39 crc kubenswrapper[4827]: I0131 03:47:39.170170 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:47:39 crc kubenswrapper[4827]: I0131 03:47:39.170196 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:47:39 crc kubenswrapper[4827]: I0131 03:47:39.170216 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:47:39Z","lastTransitionTime":"2026-01-31T03:47:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 03:47:39 crc kubenswrapper[4827]: I0131 03:47:39.273169 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:47:39 crc kubenswrapper[4827]: I0131 03:47:39.273549 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:47:39 crc kubenswrapper[4827]: I0131 03:47:39.273697 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:47:39 crc kubenswrapper[4827]: I0131 03:47:39.273840 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:47:39 crc kubenswrapper[4827]: I0131 03:47:39.274035 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:47:39Z","lastTransitionTime":"2026-01-31T03:47:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 03:47:39 crc kubenswrapper[4827]: I0131 03:47:39.377209 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:47:39 crc kubenswrapper[4827]: I0131 03:47:39.377598 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:47:39 crc kubenswrapper[4827]: I0131 03:47:39.377737 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:47:39 crc kubenswrapper[4827]: I0131 03:47:39.377910 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:47:39 crc kubenswrapper[4827]: I0131 03:47:39.378075 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:47:39Z","lastTransitionTime":"2026-01-31T03:47:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 03:47:39 crc kubenswrapper[4827]: I0131 03:47:39.481327 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:47:39 crc kubenswrapper[4827]: I0131 03:47:39.481387 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:47:39 crc kubenswrapper[4827]: I0131 03:47:39.481412 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:47:39 crc kubenswrapper[4827]: I0131 03:47:39.481441 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:47:39 crc kubenswrapper[4827]: I0131 03:47:39.481462 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:47:39Z","lastTransitionTime":"2026-01-31T03:47:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 03:47:39 crc kubenswrapper[4827]: I0131 03:47:39.584466 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:47:39 crc kubenswrapper[4827]: I0131 03:47:39.584541 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:47:39 crc kubenswrapper[4827]: I0131 03:47:39.584566 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:47:39 crc kubenswrapper[4827]: I0131 03:47:39.584600 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:47:39 crc kubenswrapper[4827]: I0131 03:47:39.584626 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:47:39Z","lastTransitionTime":"2026-01-31T03:47:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 03:47:39 crc kubenswrapper[4827]: I0131 03:47:39.688011 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:47:39 crc kubenswrapper[4827]: I0131 03:47:39.688351 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:47:39 crc kubenswrapper[4827]: I0131 03:47:39.688513 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:47:39 crc kubenswrapper[4827]: I0131 03:47:39.688725 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:47:39 crc kubenswrapper[4827]: I0131 03:47:39.688871 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:47:39Z","lastTransitionTime":"2026-01-31T03:47:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 03:47:39 crc kubenswrapper[4827]: I0131 03:47:39.791811 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:47:39 crc kubenswrapper[4827]: I0131 03:47:39.791870 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:47:39 crc kubenswrapper[4827]: I0131 03:47:39.791916 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:47:39 crc kubenswrapper[4827]: I0131 03:47:39.791941 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:47:39 crc kubenswrapper[4827]: I0131 03:47:39.791958 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:47:39Z","lastTransitionTime":"2026-01-31T03:47:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 03:47:39 crc kubenswrapper[4827]: I0131 03:47:39.894861 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:47:39 crc kubenswrapper[4827]: I0131 03:47:39.895158 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:47:39 crc kubenswrapper[4827]: I0131 03:47:39.895293 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:47:39 crc kubenswrapper[4827]: I0131 03:47:39.895426 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:47:39 crc kubenswrapper[4827]: I0131 03:47:39.895593 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:47:39Z","lastTransitionTime":"2026-01-31T03:47:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 03:47:39 crc kubenswrapper[4827]: I0131 03:47:39.929595 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 31 03:47:39 crc kubenswrapper[4827]: I0131 03:47:39.929693 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 03:47:39 crc kubenswrapper[4827]: E0131 03:47:39.929752 4827 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-31 03:48:11.929716982 +0000 UTC m=+84.616797471 (durationBeforeRetry 32s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 03:47:39 crc kubenswrapper[4827]: I0131 03:47:39.929811 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 03:47:39 crc kubenswrapper[4827]: I0131 03:47:39.929874 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 31 03:47:39 crc kubenswrapper[4827]: E0131 03:47:39.929826 4827 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 31 03:47:39 crc kubenswrapper[4827]: I0131 03:47:39.929995 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/cf80ec31-1f83-4ed6-84e3-055cf9c88bff-metrics-certs\") pod \"network-metrics-daemon-2shng\" (UID: \"cf80ec31-1f83-4ed6-84e3-055cf9c88bff\") " pod="openshift-multus/network-metrics-daemon-2shng" Jan 31 03:47:39 crc kubenswrapper[4827]: E0131 
03:47:39.930017 4827 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-31 03:48:11.930000191 +0000 UTC m=+84.617080670 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 31 03:47:39 crc kubenswrapper[4827]: E0131 03:47:39.929947 4827 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 31 03:47:39 crc kubenswrapper[4827]: E0131 03:47:39.930201 4827 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 31 03:47:39 crc kubenswrapper[4827]: E0131 03:47:39.930213 4827 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 31 03:47:39 crc kubenswrapper[4827]: E0131 03:47:39.930253 4827 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 31 03:47:39 crc kubenswrapper[4827]: I0131 03:47:39.930062 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: 
\"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 31 03:47:39 crc kubenswrapper[4827]: E0131 03:47:39.930275 4827 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 31 03:47:39 crc kubenswrapper[4827]: E0131 03:47:39.930225 4827 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-31 03:48:11.930174376 +0000 UTC m=+84.617254855 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 31 03:47:39 crc kubenswrapper[4827]: E0131 03:47:39.930455 4827 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 31 03:47:39 crc kubenswrapper[4827]: E0131 03:47:39.930234 4827 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 31 03:47:39 crc kubenswrapper[4827]: E0131 03:47:39.930596 4827 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not 
registered] Jan 31 03:47:39 crc kubenswrapper[4827]: E0131 03:47:39.930482 4827 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-31 03:48:11.930453684 +0000 UTC m=+84.617534183 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 31 03:47:39 crc kubenswrapper[4827]: E0131 03:47:39.930685 4827 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/cf80ec31-1f83-4ed6-84e3-055cf9c88bff-metrics-certs podName:cf80ec31-1f83-4ed6-84e3-055cf9c88bff nodeName:}" failed. No retries permitted until 2026-01-31 03:47:55.93065453 +0000 UTC m=+68.617735009 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/cf80ec31-1f83-4ed6-84e3-055cf9c88bff-metrics-certs") pod "network-metrics-daemon-2shng" (UID: "cf80ec31-1f83-4ed6-84e3-055cf9c88bff") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 31 03:47:39 crc kubenswrapper[4827]: E0131 03:47:39.930710 4827 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-31 03:48:11.930696772 +0000 UTC m=+84.617777261 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 31 03:47:39 crc kubenswrapper[4827]: I0131 03:47:39.999116 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:47:39 crc kubenswrapper[4827]: I0131 03:47:39.999170 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:47:39 crc kubenswrapper[4827]: I0131 03:47:39.999187 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:47:39 crc kubenswrapper[4827]: I0131 03:47:39.999212 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:47:39 crc kubenswrapper[4827]: I0131 03:47:39.999229 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:47:39Z","lastTransitionTime":"2026-01-31T03:47:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 03:47:40 crc kubenswrapper[4827]: I0131 03:47:40.085791 4827 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-21 09:11:50.809978032 +0000 UTC Jan 31 03:47:40 crc kubenswrapper[4827]: I0131 03:47:40.102367 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:47:40 crc kubenswrapper[4827]: I0131 03:47:40.102415 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:47:40 crc kubenswrapper[4827]: I0131 03:47:40.102433 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:47:40 crc kubenswrapper[4827]: I0131 03:47:40.102457 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:47:40 crc kubenswrapper[4827]: I0131 03:47:40.102474 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:47:40Z","lastTransitionTime":"2026-01-31T03:47:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 03:47:40 crc kubenswrapper[4827]: I0131 03:47:40.109219 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 03:47:40 crc kubenswrapper[4827]: I0131 03:47:40.109251 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 31 03:47:40 crc kubenswrapper[4827]: I0131 03:47:40.109289 4827 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-2shng" Jan 31 03:47:40 crc kubenswrapper[4827]: E0131 03:47:40.109435 4827 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 31 03:47:40 crc kubenswrapper[4827]: E0131 03:47:40.109733 4827 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 31 03:47:40 crc kubenswrapper[4827]: E0131 03:47:40.110048 4827 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-2shng" podUID="cf80ec31-1f83-4ed6-84e3-055cf9c88bff" Jan 31 03:47:40 crc kubenswrapper[4827]: I0131 03:47:40.205530 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:47:40 crc kubenswrapper[4827]: I0131 03:47:40.205586 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:47:40 crc kubenswrapper[4827]: I0131 03:47:40.205602 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:47:40 crc kubenswrapper[4827]: I0131 03:47:40.205628 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:47:40 crc kubenswrapper[4827]: I0131 03:47:40.205644 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:47:40Z","lastTransitionTime":"2026-01-31T03:47:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 03:47:40 crc kubenswrapper[4827]: I0131 03:47:40.309300 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:47:40 crc kubenswrapper[4827]: I0131 03:47:40.309361 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:47:40 crc kubenswrapper[4827]: I0131 03:47:40.309378 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:47:40 crc kubenswrapper[4827]: I0131 03:47:40.309400 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:47:40 crc kubenswrapper[4827]: I0131 03:47:40.309417 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:47:40Z","lastTransitionTime":"2026-01-31T03:47:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 03:47:40 crc kubenswrapper[4827]: I0131 03:47:40.412336 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:47:40 crc kubenswrapper[4827]: I0131 03:47:40.412395 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:47:40 crc kubenswrapper[4827]: I0131 03:47:40.412417 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:47:40 crc kubenswrapper[4827]: I0131 03:47:40.412446 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:47:40 crc kubenswrapper[4827]: I0131 03:47:40.412466 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:47:40Z","lastTransitionTime":"2026-01-31T03:47:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 03:47:40 crc kubenswrapper[4827]: I0131 03:47:40.515312 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:47:40 crc kubenswrapper[4827]: I0131 03:47:40.515372 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:47:40 crc kubenswrapper[4827]: I0131 03:47:40.515395 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:47:40 crc kubenswrapper[4827]: I0131 03:47:40.515422 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:47:40 crc kubenswrapper[4827]: I0131 03:47:40.515443 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:47:40Z","lastTransitionTime":"2026-01-31T03:47:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 03:47:40 crc kubenswrapper[4827]: I0131 03:47:40.618389 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:47:40 crc kubenswrapper[4827]: I0131 03:47:40.618460 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:47:40 crc kubenswrapper[4827]: I0131 03:47:40.618477 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:47:40 crc kubenswrapper[4827]: I0131 03:47:40.618502 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:47:40 crc kubenswrapper[4827]: I0131 03:47:40.618519 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:47:40Z","lastTransitionTime":"2026-01-31T03:47:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 03:47:40 crc kubenswrapper[4827]: I0131 03:47:40.722137 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:47:40 crc kubenswrapper[4827]: I0131 03:47:40.722489 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:47:40 crc kubenswrapper[4827]: I0131 03:47:40.722514 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:47:40 crc kubenswrapper[4827]: I0131 03:47:40.722543 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:47:40 crc kubenswrapper[4827]: I0131 03:47:40.722568 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:47:40Z","lastTransitionTime":"2026-01-31T03:47:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 03:47:40 crc kubenswrapper[4827]: I0131 03:47:40.827173 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:47:40 crc kubenswrapper[4827]: I0131 03:47:40.827248 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:47:40 crc kubenswrapper[4827]: I0131 03:47:40.827267 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:47:40 crc kubenswrapper[4827]: I0131 03:47:40.827294 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:47:40 crc kubenswrapper[4827]: I0131 03:47:40.827316 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:47:40Z","lastTransitionTime":"2026-01-31T03:47:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 03:47:40 crc kubenswrapper[4827]: I0131 03:47:40.930801 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:47:40 crc kubenswrapper[4827]: I0131 03:47:40.930864 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:47:40 crc kubenswrapper[4827]: I0131 03:47:40.930908 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:47:40 crc kubenswrapper[4827]: I0131 03:47:40.930933 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:47:40 crc kubenswrapper[4827]: I0131 03:47:40.930950 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:47:40Z","lastTransitionTime":"2026-01-31T03:47:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 03:47:41 crc kubenswrapper[4827]: I0131 03:47:41.034181 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:47:41 crc kubenswrapper[4827]: I0131 03:47:41.034270 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:47:41 crc kubenswrapper[4827]: I0131 03:47:41.034316 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:47:41 crc kubenswrapper[4827]: I0131 03:47:41.034350 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:47:41 crc kubenswrapper[4827]: I0131 03:47:41.034377 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:47:41Z","lastTransitionTime":"2026-01-31T03:47:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 03:47:41 crc kubenswrapper[4827]: I0131 03:47:41.085980 4827 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-09 03:01:26.132894301 +0000 UTC Jan 31 03:47:41 crc kubenswrapper[4827]: I0131 03:47:41.109770 4827 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 31 03:47:41 crc kubenswrapper[4827]: E0131 03:47:41.110088 4827 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 31 03:47:41 crc kubenswrapper[4827]: I0131 03:47:41.137398 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:47:41 crc kubenswrapper[4827]: I0131 03:47:41.137453 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:47:41 crc kubenswrapper[4827]: I0131 03:47:41.137471 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:47:41 crc kubenswrapper[4827]: I0131 03:47:41.137495 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:47:41 crc kubenswrapper[4827]: I0131 03:47:41.137515 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:47:41Z","lastTransitionTime":"2026-01-31T03:47:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 03:47:41 crc kubenswrapper[4827]: I0131 03:47:41.240907 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:47:41 crc kubenswrapper[4827]: I0131 03:47:41.240969 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:47:41 crc kubenswrapper[4827]: I0131 03:47:41.240984 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:47:41 crc kubenswrapper[4827]: I0131 03:47:41.241004 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:47:41 crc kubenswrapper[4827]: I0131 03:47:41.241018 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:47:41Z","lastTransitionTime":"2026-01-31T03:47:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 03:47:41 crc kubenswrapper[4827]: I0131 03:47:41.343558 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:47:41 crc kubenswrapper[4827]: I0131 03:47:41.343631 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:47:41 crc kubenswrapper[4827]: I0131 03:47:41.343646 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:47:41 crc kubenswrapper[4827]: I0131 03:47:41.343663 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:47:41 crc kubenswrapper[4827]: I0131 03:47:41.343703 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:47:41Z","lastTransitionTime":"2026-01-31T03:47:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 03:47:41 crc kubenswrapper[4827]: I0131 03:47:41.446463 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:47:41 crc kubenswrapper[4827]: I0131 03:47:41.446531 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:47:41 crc kubenswrapper[4827]: I0131 03:47:41.446554 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:47:41 crc kubenswrapper[4827]: I0131 03:47:41.446584 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:47:41 crc kubenswrapper[4827]: I0131 03:47:41.446602 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:47:41Z","lastTransitionTime":"2026-01-31T03:47:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 03:47:41 crc kubenswrapper[4827]: I0131 03:47:41.549349 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:47:41 crc kubenswrapper[4827]: I0131 03:47:41.549416 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:47:41 crc kubenswrapper[4827]: I0131 03:47:41.549434 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:47:41 crc kubenswrapper[4827]: I0131 03:47:41.549458 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:47:41 crc kubenswrapper[4827]: I0131 03:47:41.549475 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:47:41Z","lastTransitionTime":"2026-01-31T03:47:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 03:47:41 crc kubenswrapper[4827]: I0131 03:47:41.654135 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:47:41 crc kubenswrapper[4827]: I0131 03:47:41.654219 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:47:41 crc kubenswrapper[4827]: I0131 03:47:41.654260 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:47:41 crc kubenswrapper[4827]: I0131 03:47:41.654298 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:47:41 crc kubenswrapper[4827]: I0131 03:47:41.654319 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:47:41Z","lastTransitionTime":"2026-01-31T03:47:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 03:47:41 crc kubenswrapper[4827]: I0131 03:47:41.757465 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:47:41 crc kubenswrapper[4827]: I0131 03:47:41.757544 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:47:41 crc kubenswrapper[4827]: I0131 03:47:41.757565 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:47:41 crc kubenswrapper[4827]: I0131 03:47:41.757595 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:47:41 crc kubenswrapper[4827]: I0131 03:47:41.757619 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:47:41Z","lastTransitionTime":"2026-01-31T03:47:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 03:47:41 crc kubenswrapper[4827]: I0131 03:47:41.860841 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:47:41 crc kubenswrapper[4827]: I0131 03:47:41.860957 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:47:41 crc kubenswrapper[4827]: I0131 03:47:41.860976 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:47:41 crc kubenswrapper[4827]: I0131 03:47:41.861005 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:47:41 crc kubenswrapper[4827]: I0131 03:47:41.861022 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:47:41Z","lastTransitionTime":"2026-01-31T03:47:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 03:47:41 crc kubenswrapper[4827]: I0131 03:47:41.964172 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:47:41 crc kubenswrapper[4827]: I0131 03:47:41.964265 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:47:41 crc kubenswrapper[4827]: I0131 03:47:41.964282 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:47:41 crc kubenswrapper[4827]: I0131 03:47:41.964304 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:47:41 crc kubenswrapper[4827]: I0131 03:47:41.964323 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:47:41Z","lastTransitionTime":"2026-01-31T03:47:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 03:47:42 crc kubenswrapper[4827]: I0131 03:47:42.067566 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:47:42 crc kubenswrapper[4827]: I0131 03:47:42.067674 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:47:42 crc kubenswrapper[4827]: I0131 03:47:42.067692 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:47:42 crc kubenswrapper[4827]: I0131 03:47:42.067715 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:47:42 crc kubenswrapper[4827]: I0131 03:47:42.067736 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:47:42Z","lastTransitionTime":"2026-01-31T03:47:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 03:47:42 crc kubenswrapper[4827]: I0131 03:47:42.087178 4827 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-01 11:34:54.886174283 +0000 UTC Jan 31 03:47:42 crc kubenswrapper[4827]: I0131 03:47:42.109761 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 03:47:42 crc kubenswrapper[4827]: I0131 03:47:42.109822 4827 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-2shng" Jan 31 03:47:42 crc kubenswrapper[4827]: E0131 03:47:42.109968 4827 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 31 03:47:42 crc kubenswrapper[4827]: I0131 03:47:42.110034 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 31 03:47:42 crc kubenswrapper[4827]: E0131 03:47:42.110229 4827 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-2shng" podUID="cf80ec31-1f83-4ed6-84e3-055cf9c88bff" Jan 31 03:47:42 crc kubenswrapper[4827]: E0131 03:47:42.110363 4827 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 31 03:47:42 crc kubenswrapper[4827]: I0131 03:47:42.170709 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:47:42 crc kubenswrapper[4827]: I0131 03:47:42.170766 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:47:42 crc kubenswrapper[4827]: I0131 03:47:42.170783 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:47:42 crc kubenswrapper[4827]: I0131 03:47:42.170806 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:47:42 crc kubenswrapper[4827]: I0131 03:47:42.170826 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:47:42Z","lastTransitionTime":"2026-01-31T03:47:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 03:47:42 crc kubenswrapper[4827]: I0131 03:47:42.274034 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:47:42 crc kubenswrapper[4827]: I0131 03:47:42.274093 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:47:42 crc kubenswrapper[4827]: I0131 03:47:42.274111 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:47:42 crc kubenswrapper[4827]: I0131 03:47:42.274134 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:47:42 crc kubenswrapper[4827]: I0131 03:47:42.274150 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:47:42Z","lastTransitionTime":"2026-01-31T03:47:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 03:47:42 crc kubenswrapper[4827]: I0131 03:47:42.377365 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:47:42 crc kubenswrapper[4827]: I0131 03:47:42.377433 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:47:42 crc kubenswrapper[4827]: I0131 03:47:42.377457 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:47:42 crc kubenswrapper[4827]: I0131 03:47:42.377488 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:47:42 crc kubenswrapper[4827]: I0131 03:47:42.377509 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:47:42Z","lastTransitionTime":"2026-01-31T03:47:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 03:47:42 crc kubenswrapper[4827]: I0131 03:47:42.480027 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:47:42 crc kubenswrapper[4827]: I0131 03:47:42.480101 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:47:42 crc kubenswrapper[4827]: I0131 03:47:42.480120 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:47:42 crc kubenswrapper[4827]: I0131 03:47:42.480145 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:47:42 crc kubenswrapper[4827]: I0131 03:47:42.480163 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:47:42Z","lastTransitionTime":"2026-01-31T03:47:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 03:47:42 crc kubenswrapper[4827]: I0131 03:47:42.582537 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:47:42 crc kubenswrapper[4827]: I0131 03:47:42.582632 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:47:42 crc kubenswrapper[4827]: I0131 03:47:42.582652 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:47:42 crc kubenswrapper[4827]: I0131 03:47:42.582704 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:47:42 crc kubenswrapper[4827]: I0131 03:47:42.582724 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:47:42Z","lastTransitionTime":"2026-01-31T03:47:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 03:47:42 crc kubenswrapper[4827]: I0131 03:47:42.686444 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:47:42 crc kubenswrapper[4827]: I0131 03:47:42.686509 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:47:42 crc kubenswrapper[4827]: I0131 03:47:42.686525 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:47:42 crc kubenswrapper[4827]: I0131 03:47:42.686550 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:47:42 crc kubenswrapper[4827]: I0131 03:47:42.686569 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:47:42Z","lastTransitionTime":"2026-01-31T03:47:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 03:47:42 crc kubenswrapper[4827]: I0131 03:47:42.789260 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:47:42 crc kubenswrapper[4827]: I0131 03:47:42.789311 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:47:42 crc kubenswrapper[4827]: I0131 03:47:42.789328 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:47:42 crc kubenswrapper[4827]: I0131 03:47:42.789351 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:47:42 crc kubenswrapper[4827]: I0131 03:47:42.789370 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:47:42Z","lastTransitionTime":"2026-01-31T03:47:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 03:47:42 crc kubenswrapper[4827]: I0131 03:47:42.892441 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:47:42 crc kubenswrapper[4827]: I0131 03:47:42.892490 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:47:42 crc kubenswrapper[4827]: I0131 03:47:42.892506 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:47:42 crc kubenswrapper[4827]: I0131 03:47:42.892530 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:47:42 crc kubenswrapper[4827]: I0131 03:47:42.892547 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:47:42Z","lastTransitionTime":"2026-01-31T03:47:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 03:47:42 crc kubenswrapper[4827]: I0131 03:47:42.996077 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:47:42 crc kubenswrapper[4827]: I0131 03:47:42.996131 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:47:42 crc kubenswrapper[4827]: I0131 03:47:42.996150 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:47:42 crc kubenswrapper[4827]: I0131 03:47:42.996177 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:47:42 crc kubenswrapper[4827]: I0131 03:47:42.996193 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:47:42Z","lastTransitionTime":"2026-01-31T03:47:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 03:47:43 crc kubenswrapper[4827]: I0131 03:47:43.087703 4827 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-01 07:46:49.187063604 +0000 UTC Jan 31 03:47:43 crc kubenswrapper[4827]: I0131 03:47:43.099302 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:47:43 crc kubenswrapper[4827]: I0131 03:47:43.099374 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:47:43 crc kubenswrapper[4827]: I0131 03:47:43.099396 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:47:43 crc kubenswrapper[4827]: I0131 03:47:43.099429 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:47:43 crc kubenswrapper[4827]: I0131 03:47:43.099453 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:47:43Z","lastTransitionTime":"2026-01-31T03:47:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 03:47:43 crc kubenswrapper[4827]: I0131 03:47:43.109130 4827 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 31 03:47:43 crc kubenswrapper[4827]: E0131 03:47:43.109331 4827 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 31 03:47:43 crc kubenswrapper[4827]: I0131 03:47:43.202346 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:47:43 crc kubenswrapper[4827]: I0131 03:47:43.202397 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:47:43 crc kubenswrapper[4827]: I0131 03:47:43.202415 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:47:43 crc kubenswrapper[4827]: I0131 03:47:43.202438 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:47:43 crc kubenswrapper[4827]: I0131 03:47:43.202471 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:47:43Z","lastTransitionTime":"2026-01-31T03:47:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 03:47:43 crc kubenswrapper[4827]: I0131 03:47:43.305569 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:47:43 crc kubenswrapper[4827]: I0131 03:47:43.305602 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:47:43 crc kubenswrapper[4827]: I0131 03:47:43.305612 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:47:43 crc kubenswrapper[4827]: I0131 03:47:43.305628 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:47:43 crc kubenswrapper[4827]: I0131 03:47:43.305639 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:47:43Z","lastTransitionTime":"2026-01-31T03:47:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 03:47:43 crc kubenswrapper[4827]: I0131 03:47:43.408228 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:47:43 crc kubenswrapper[4827]: I0131 03:47:43.408276 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:47:43 crc kubenswrapper[4827]: I0131 03:47:43.408293 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:47:43 crc kubenswrapper[4827]: I0131 03:47:43.408316 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:47:43 crc kubenswrapper[4827]: I0131 03:47:43.408333 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:47:43Z","lastTransitionTime":"2026-01-31T03:47:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 03:47:43 crc kubenswrapper[4827]: I0131 03:47:43.510821 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:47:43 crc kubenswrapper[4827]: I0131 03:47:43.510906 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:47:43 crc kubenswrapper[4827]: I0131 03:47:43.510935 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:47:43 crc kubenswrapper[4827]: I0131 03:47:43.510967 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:47:43 crc kubenswrapper[4827]: I0131 03:47:43.510988 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:47:43Z","lastTransitionTime":"2026-01-31T03:47:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 03:47:43 crc kubenswrapper[4827]: I0131 03:47:43.613724 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:47:43 crc kubenswrapper[4827]: I0131 03:47:43.613788 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:47:43 crc kubenswrapper[4827]: I0131 03:47:43.613806 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:47:43 crc kubenswrapper[4827]: I0131 03:47:43.613829 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:47:43 crc kubenswrapper[4827]: I0131 03:47:43.613847 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:47:43Z","lastTransitionTime":"2026-01-31T03:47:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 03:47:43 crc kubenswrapper[4827]: I0131 03:47:43.717069 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:47:43 crc kubenswrapper[4827]: I0131 03:47:43.717109 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:47:43 crc kubenswrapper[4827]: I0131 03:47:43.717119 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:47:43 crc kubenswrapper[4827]: I0131 03:47:43.717134 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:47:43 crc kubenswrapper[4827]: I0131 03:47:43.717143 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:47:43Z","lastTransitionTime":"2026-01-31T03:47:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 03:47:43 crc kubenswrapper[4827]: I0131 03:47:43.820468 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:47:43 crc kubenswrapper[4827]: I0131 03:47:43.820518 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:47:43 crc kubenswrapper[4827]: I0131 03:47:43.820529 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:47:43 crc kubenswrapper[4827]: I0131 03:47:43.820549 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:47:43 crc kubenswrapper[4827]: I0131 03:47:43.820560 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:47:43Z","lastTransitionTime":"2026-01-31T03:47:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 03:47:43 crc kubenswrapper[4827]: I0131 03:47:43.923015 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:47:43 crc kubenswrapper[4827]: I0131 03:47:43.923071 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:47:43 crc kubenswrapper[4827]: I0131 03:47:43.923088 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:47:43 crc kubenswrapper[4827]: I0131 03:47:43.923110 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:47:43 crc kubenswrapper[4827]: I0131 03:47:43.923128 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:47:43Z","lastTransitionTime":"2026-01-31T03:47:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 03:47:44 crc kubenswrapper[4827]: I0131 03:47:44.025124 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:47:44 crc kubenswrapper[4827]: I0131 03:47:44.025180 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:47:44 crc kubenswrapper[4827]: I0131 03:47:44.025195 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:47:44 crc kubenswrapper[4827]: I0131 03:47:44.025215 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:47:44 crc kubenswrapper[4827]: I0131 03:47:44.025230 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:47:44Z","lastTransitionTime":"2026-01-31T03:47:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 03:47:44 crc kubenswrapper[4827]: I0131 03:47:44.088676 4827 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-18 15:46:24.940495437 +0000 UTC Jan 31 03:47:44 crc kubenswrapper[4827]: I0131 03:47:44.109078 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-2shng" Jan 31 03:47:44 crc kubenswrapper[4827]: I0131 03:47:44.109145 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 03:47:44 crc kubenswrapper[4827]: I0131 03:47:44.109094 4827 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 31 03:47:44 crc kubenswrapper[4827]: E0131 03:47:44.109211 4827 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-2shng" podUID="cf80ec31-1f83-4ed6-84e3-055cf9c88bff" Jan 31 03:47:44 crc kubenswrapper[4827]: E0131 03:47:44.109285 4827 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 31 03:47:44 crc kubenswrapper[4827]: E0131 03:47:44.109509 4827 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 31 03:47:44 crc kubenswrapper[4827]: I0131 03:47:44.127519 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:47:44 crc kubenswrapper[4827]: I0131 03:47:44.127573 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:47:44 crc kubenswrapper[4827]: I0131 03:47:44.127586 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:47:44 crc kubenswrapper[4827]: I0131 03:47:44.127600 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:47:44 crc kubenswrapper[4827]: I0131 03:47:44.127612 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:47:44Z","lastTransitionTime":"2026-01-31T03:47:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 03:47:44 crc kubenswrapper[4827]: I0131 03:47:44.230978 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:47:44 crc kubenswrapper[4827]: I0131 03:47:44.231017 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:47:44 crc kubenswrapper[4827]: I0131 03:47:44.231029 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:47:44 crc kubenswrapper[4827]: I0131 03:47:44.231062 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:47:44 crc kubenswrapper[4827]: I0131 03:47:44.231075 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:47:44Z","lastTransitionTime":"2026-01-31T03:47:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 03:47:44 crc kubenswrapper[4827]: I0131 03:47:44.333863 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:47:44 crc kubenswrapper[4827]: I0131 03:47:44.333943 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:47:44 crc kubenswrapper[4827]: I0131 03:47:44.333955 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:47:44 crc kubenswrapper[4827]: I0131 03:47:44.333971 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:47:44 crc kubenswrapper[4827]: I0131 03:47:44.334003 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:47:44Z","lastTransitionTime":"2026-01-31T03:47:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 03:47:44 crc kubenswrapper[4827]: I0131 03:47:44.436643 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:47:44 crc kubenswrapper[4827]: I0131 03:47:44.436711 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:47:44 crc kubenswrapper[4827]: I0131 03:47:44.436729 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:47:44 crc kubenswrapper[4827]: I0131 03:47:44.436755 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:47:44 crc kubenswrapper[4827]: I0131 03:47:44.436773 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:47:44Z","lastTransitionTime":"2026-01-31T03:47:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 03:47:44 crc kubenswrapper[4827]: I0131 03:47:44.539923 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:47:44 crc kubenswrapper[4827]: I0131 03:47:44.540001 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:47:44 crc kubenswrapper[4827]: I0131 03:47:44.540024 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:47:44 crc kubenswrapper[4827]: I0131 03:47:44.540054 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:47:44 crc kubenswrapper[4827]: I0131 03:47:44.540076 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:47:44Z","lastTransitionTime":"2026-01-31T03:47:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 03:47:44 crc kubenswrapper[4827]: I0131 03:47:44.642696 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:47:44 crc kubenswrapper[4827]: I0131 03:47:44.642754 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:47:44 crc kubenswrapper[4827]: I0131 03:47:44.642776 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:47:44 crc kubenswrapper[4827]: I0131 03:47:44.642800 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:47:44 crc kubenswrapper[4827]: I0131 03:47:44.642820 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:47:44Z","lastTransitionTime":"2026-01-31T03:47:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 03:47:44 crc kubenswrapper[4827]: I0131 03:47:44.745073 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:47:44 crc kubenswrapper[4827]: I0131 03:47:44.745138 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:47:44 crc kubenswrapper[4827]: I0131 03:47:44.745155 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:47:44 crc kubenswrapper[4827]: I0131 03:47:44.745179 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:47:44 crc kubenswrapper[4827]: I0131 03:47:44.745196 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:47:44Z","lastTransitionTime":"2026-01-31T03:47:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 03:47:44 crc kubenswrapper[4827]: I0131 03:47:44.847763 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:47:44 crc kubenswrapper[4827]: I0131 03:47:44.847833 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:47:44 crc kubenswrapper[4827]: I0131 03:47:44.847852 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:47:44 crc kubenswrapper[4827]: I0131 03:47:44.847877 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:47:44 crc kubenswrapper[4827]: I0131 03:47:44.847923 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:47:44Z","lastTransitionTime":"2026-01-31T03:47:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 03:47:44 crc kubenswrapper[4827]: I0131 03:47:44.951507 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:47:44 crc kubenswrapper[4827]: I0131 03:47:44.951553 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:47:44 crc kubenswrapper[4827]: I0131 03:47:44.951567 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:47:44 crc kubenswrapper[4827]: I0131 03:47:44.951586 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:47:44 crc kubenswrapper[4827]: I0131 03:47:44.951599 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:47:44Z","lastTransitionTime":"2026-01-31T03:47:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 03:47:44 crc kubenswrapper[4827]: I0131 03:47:44.989518 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:47:44 crc kubenswrapper[4827]: I0131 03:47:44.989571 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:47:44 crc kubenswrapper[4827]: I0131 03:47:44.989584 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:47:44 crc kubenswrapper[4827]: I0131 03:47:44.989606 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:47:44 crc kubenswrapper[4827]: I0131 03:47:44.989621 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:47:44Z","lastTransitionTime":"2026-01-31T03:47:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 03:47:45 crc kubenswrapper[4827]: E0131 03:47:45.009703 4827 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404564Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865364Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T03:47:44Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:44Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T03:47:44Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:44Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T03:47:44Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:44Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T03:47:44Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:44Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"0f4c9cc6-4be7-45e8-ab30-471f307c1c16\\\",\\\"systemUUID\\\":\\\"9b087aa6-4510-46ee-bc39-2317e4ea4d1d\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:47:45Z is after 2025-08-24T17:21:41Z" Jan 31 03:47:45 crc kubenswrapper[4827]: I0131 03:47:45.015694 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:47:45 crc kubenswrapper[4827]: I0131 03:47:45.015752 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:47:45 crc kubenswrapper[4827]: I0131 03:47:45.015770 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:47:45 crc kubenswrapper[4827]: I0131 03:47:45.015799 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:47:45 crc kubenswrapper[4827]: I0131 03:47:45.015820 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:47:45Z","lastTransitionTime":"2026-01-31T03:47:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 03:47:45 crc kubenswrapper[4827]: E0131 03:47:45.038650 4827 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404564Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865364Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T03:47:45Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:45Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T03:47:45Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:45Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T03:47:45Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:45Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T03:47:45Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:45Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"0f4c9cc6-4be7-45e8-ab30-471f307c1c16\\\",\\\"systemUUID\\\":\\\"9b087aa6-4510-46ee-bc39-2317e4ea4d1d\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:47:45Z is after 2025-08-24T17:21:41Z" Jan 31 03:47:45 crc kubenswrapper[4827]: I0131 03:47:45.044327 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:47:45 crc kubenswrapper[4827]: I0131 03:47:45.044419 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:47:45 crc kubenswrapper[4827]: I0131 03:47:45.044444 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:47:45 crc kubenswrapper[4827]: I0131 03:47:45.044960 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:47:45 crc kubenswrapper[4827]: I0131 03:47:45.045222 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:47:45Z","lastTransitionTime":"2026-01-31T03:47:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 03:47:45 crc kubenswrapper[4827]: E0131 03:47:45.066953 4827 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{...}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:47:45Z is after 2025-08-24T17:21:41Z" Jan 31 03:47:45 crc kubenswrapper[4827]: I0131 03:47:45.071819 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:47:45 crc kubenswrapper[4827]: I0131 03:47:45.071866 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:47:45 crc kubenswrapper[4827]: I0131 03:47:45.071919 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:47:45 crc kubenswrapper[4827]: I0131 03:47:45.071951 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:47:45 crc kubenswrapper[4827]: I0131 03:47:45.071973 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:47:45Z","lastTransitionTime":"2026-01-31T03:47:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 03:47:45 crc kubenswrapper[4827]: I0131 03:47:45.089272 4827 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-22 03:40:05.074132434 +0000 UTC Jan 31 03:47:45 crc kubenswrapper[4827]: E0131 03:47:45.096395 4827 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{...}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:47:45Z is after 2025-08-24T17:21:41Z" Jan 31 03:47:45 crc kubenswrapper[4827]: I0131 03:47:45.102016 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:47:45 crc kubenswrapper[4827]: I0131 03:47:45.102121 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:47:45 crc kubenswrapper[4827]: I0131 03:47:45.102176 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:47:45 crc kubenswrapper[4827]: I0131 03:47:45.102211 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:47:45 crc kubenswrapper[4827]: I0131 03:47:45.102274 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:47:45Z","lastTransitionTime":"2026-01-31T03:47:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 03:47:45 crc kubenswrapper[4827]: I0131 03:47:45.109506 4827 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 31 03:47:45 crc kubenswrapper[4827]: E0131 03:47:45.109668 4827 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 31 03:47:45 crc kubenswrapper[4827]: E0131 03:47:45.124781 4827 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404564Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865364Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T03:47:45Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:45Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T03:47:45Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:45Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T03:47:45Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:45Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID 
available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T03:47:45Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:45Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"
registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\
"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb617
3ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"reg
istry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@s
ha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"0f4c9cc6-4be7-45e8-ab30-471f307c1c16\\\",\\\"systemUUID\\\":\\\"9b087aa6-4510-46ee-bc39-2317e4ea4d1d\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:47:45Z is after 2025-08-24T17:21:41Z" Jan 31 03:47:45 crc kubenswrapper[4827]: E0131 03:47:45.125039 4827 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 31 03:47:45 crc kubenswrapper[4827]: I0131 03:47:45.127049 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:47:45 crc kubenswrapper[4827]: I0131 03:47:45.127138 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:47:45 crc kubenswrapper[4827]: I0131 03:47:45.127160 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:47:45 crc kubenswrapper[4827]: I0131 03:47:45.127185 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:47:45 crc kubenswrapper[4827]: I0131 03:47:45.127233 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:47:45Z","lastTransitionTime":"2026-01-31T03:47:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 03:47:45 crc kubenswrapper[4827]: I0131 03:47:45.229309 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:47:45 crc kubenswrapper[4827]: I0131 03:47:45.229350 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:47:45 crc kubenswrapper[4827]: I0131 03:47:45.229362 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:47:45 crc kubenswrapper[4827]: I0131 03:47:45.229377 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:47:45 crc kubenswrapper[4827]: I0131 03:47:45.229388 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:47:45Z","lastTransitionTime":"2026-01-31T03:47:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 03:47:45 crc kubenswrapper[4827]: I0131 03:47:45.332442 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:47:45 crc kubenswrapper[4827]: I0131 03:47:45.332499 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:47:45 crc kubenswrapper[4827]: I0131 03:47:45.332516 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:47:45 crc kubenswrapper[4827]: I0131 03:47:45.332541 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:47:45 crc kubenswrapper[4827]: I0131 03:47:45.332558 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:47:45Z","lastTransitionTime":"2026-01-31T03:47:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 03:47:45 crc kubenswrapper[4827]: I0131 03:47:45.436111 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:47:45 crc kubenswrapper[4827]: I0131 03:47:45.436183 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:47:45 crc kubenswrapper[4827]: I0131 03:47:45.436201 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:47:45 crc kubenswrapper[4827]: I0131 03:47:45.436226 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:47:45 crc kubenswrapper[4827]: I0131 03:47:45.436248 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:47:45Z","lastTransitionTime":"2026-01-31T03:47:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 03:47:45 crc kubenswrapper[4827]: I0131 03:47:45.538265 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:47:45 crc kubenswrapper[4827]: I0131 03:47:45.538314 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:47:45 crc kubenswrapper[4827]: I0131 03:47:45.538329 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:47:45 crc kubenswrapper[4827]: I0131 03:47:45.538348 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:47:45 crc kubenswrapper[4827]: I0131 03:47:45.538362 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:47:45Z","lastTransitionTime":"2026-01-31T03:47:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 03:47:45 crc kubenswrapper[4827]: I0131 03:47:45.642302 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:47:45 crc kubenswrapper[4827]: I0131 03:47:45.642374 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:47:45 crc kubenswrapper[4827]: I0131 03:47:45.642395 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:47:45 crc kubenswrapper[4827]: I0131 03:47:45.642420 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:47:45 crc kubenswrapper[4827]: I0131 03:47:45.642438 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:47:45Z","lastTransitionTime":"2026-01-31T03:47:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 03:47:45 crc kubenswrapper[4827]: I0131 03:47:45.745740 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:47:45 crc kubenswrapper[4827]: I0131 03:47:45.745801 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:47:45 crc kubenswrapper[4827]: I0131 03:47:45.745820 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:47:45 crc kubenswrapper[4827]: I0131 03:47:45.745843 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:47:45 crc kubenswrapper[4827]: I0131 03:47:45.745860 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:47:45Z","lastTransitionTime":"2026-01-31T03:47:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 03:47:45 crc kubenswrapper[4827]: I0131 03:47:45.849239 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:47:45 crc kubenswrapper[4827]: I0131 03:47:45.849303 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:47:45 crc kubenswrapper[4827]: I0131 03:47:45.849320 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:47:45 crc kubenswrapper[4827]: I0131 03:47:45.849359 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:47:45 crc kubenswrapper[4827]: I0131 03:47:45.849378 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:47:45Z","lastTransitionTime":"2026-01-31T03:47:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 03:47:45 crc kubenswrapper[4827]: I0131 03:47:45.952626 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:47:45 crc kubenswrapper[4827]: I0131 03:47:45.952696 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:47:45 crc kubenswrapper[4827]: I0131 03:47:45.952713 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:47:45 crc kubenswrapper[4827]: I0131 03:47:45.952742 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:47:45 crc kubenswrapper[4827]: I0131 03:47:45.952760 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:47:45Z","lastTransitionTime":"2026-01-31T03:47:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 03:47:46 crc kubenswrapper[4827]: I0131 03:47:46.055336 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:47:46 crc kubenswrapper[4827]: I0131 03:47:46.055392 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:47:46 crc kubenswrapper[4827]: I0131 03:47:46.055415 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:47:46 crc kubenswrapper[4827]: I0131 03:47:46.055447 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:47:46 crc kubenswrapper[4827]: I0131 03:47:46.055472 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:47:46Z","lastTransitionTime":"2026-01-31T03:47:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 03:47:46 crc kubenswrapper[4827]: I0131 03:47:46.090153 4827 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-07 09:47:48.513004516 +0000 UTC Jan 31 03:47:46 crc kubenswrapper[4827]: I0131 03:47:46.109606 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 31 03:47:46 crc kubenswrapper[4827]: I0131 03:47:46.109635 4827 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 03:47:46 crc kubenswrapper[4827]: E0131 03:47:46.109766 4827 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 31 03:47:46 crc kubenswrapper[4827]: I0131 03:47:46.109849 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-2shng" Jan 31 03:47:46 crc kubenswrapper[4827]: E0131 03:47:46.110003 4827 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 31 03:47:46 crc kubenswrapper[4827]: E0131 03:47:46.110165 4827 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-2shng" podUID="cf80ec31-1f83-4ed6-84e3-055cf9c88bff" Jan 31 03:47:46 crc kubenswrapper[4827]: I0131 03:47:46.157860 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:47:46 crc kubenswrapper[4827]: I0131 03:47:46.157957 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:47:46 crc kubenswrapper[4827]: I0131 03:47:46.157979 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:47:46 crc kubenswrapper[4827]: I0131 03:47:46.158009 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:47:46 crc kubenswrapper[4827]: I0131 03:47:46.158034 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:47:46Z","lastTransitionTime":"2026-01-31T03:47:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 03:47:46 crc kubenswrapper[4827]: I0131 03:47:46.266208 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:47:46 crc kubenswrapper[4827]: I0131 03:47:46.266257 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:47:46 crc kubenswrapper[4827]: I0131 03:47:46.266274 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:47:46 crc kubenswrapper[4827]: I0131 03:47:46.266297 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:47:46 crc kubenswrapper[4827]: I0131 03:47:46.266316 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:47:46Z","lastTransitionTime":"2026-01-31T03:47:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 03:47:46 crc kubenswrapper[4827]: I0131 03:47:46.369667 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:47:46 crc kubenswrapper[4827]: I0131 03:47:46.369794 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:47:46 crc kubenswrapper[4827]: I0131 03:47:46.369814 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:47:46 crc kubenswrapper[4827]: I0131 03:47:46.369864 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:47:46 crc kubenswrapper[4827]: I0131 03:47:46.369932 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:47:46Z","lastTransitionTime":"2026-01-31T03:47:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 03:47:46 crc kubenswrapper[4827]: I0131 03:47:46.473761 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:47:46 crc kubenswrapper[4827]: I0131 03:47:46.474042 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:47:46 crc kubenswrapper[4827]: I0131 03:47:46.474431 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:47:46 crc kubenswrapper[4827]: I0131 03:47:46.474717 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:47:46 crc kubenswrapper[4827]: I0131 03:47:46.475031 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:47:46Z","lastTransitionTime":"2026-01-31T03:47:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 03:47:46 crc kubenswrapper[4827]: I0131 03:47:46.578942 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:47:46 crc kubenswrapper[4827]: I0131 03:47:46.579008 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:47:46 crc kubenswrapper[4827]: I0131 03:47:46.579027 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:47:46 crc kubenswrapper[4827]: I0131 03:47:46.579054 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:47:46 crc kubenswrapper[4827]: I0131 03:47:46.579078 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:47:46Z","lastTransitionTime":"2026-01-31T03:47:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 03:47:46 crc kubenswrapper[4827]: I0131 03:47:46.683162 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:47:46 crc kubenswrapper[4827]: I0131 03:47:46.683262 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:47:46 crc kubenswrapper[4827]: I0131 03:47:46.683282 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:47:46 crc kubenswrapper[4827]: I0131 03:47:46.683659 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:47:46 crc kubenswrapper[4827]: I0131 03:47:46.684014 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:47:46Z","lastTransitionTime":"2026-01-31T03:47:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 03:47:46 crc kubenswrapper[4827]: I0131 03:47:46.788396 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:47:46 crc kubenswrapper[4827]: I0131 03:47:46.788478 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:47:46 crc kubenswrapper[4827]: I0131 03:47:46.788497 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:47:46 crc kubenswrapper[4827]: I0131 03:47:46.788527 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:47:46 crc kubenswrapper[4827]: I0131 03:47:46.788547 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:47:46Z","lastTransitionTime":"2026-01-31T03:47:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 03:47:46 crc kubenswrapper[4827]: I0131 03:47:46.891338 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:47:46 crc kubenswrapper[4827]: I0131 03:47:46.891386 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:47:46 crc kubenswrapper[4827]: I0131 03:47:46.891399 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:47:46 crc kubenswrapper[4827]: I0131 03:47:46.891417 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:47:46 crc kubenswrapper[4827]: I0131 03:47:46.891428 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:47:46Z","lastTransitionTime":"2026-01-31T03:47:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 03:47:46 crc kubenswrapper[4827]: I0131 03:47:46.994217 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:47:46 crc kubenswrapper[4827]: I0131 03:47:46.994291 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:47:46 crc kubenswrapper[4827]: I0131 03:47:46.994310 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:47:46 crc kubenswrapper[4827]: I0131 03:47:46.994340 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:47:46 crc kubenswrapper[4827]: I0131 03:47:46.994364 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:47:46Z","lastTransitionTime":"2026-01-31T03:47:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 03:47:47 crc kubenswrapper[4827]: I0131 03:47:47.091258 4827 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-17 04:09:47.868814679 +0000 UTC Jan 31 03:47:47 crc kubenswrapper[4827]: I0131 03:47:47.097552 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:47:47 crc kubenswrapper[4827]: I0131 03:47:47.097606 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:47:47 crc kubenswrapper[4827]: I0131 03:47:47.097628 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:47:47 crc kubenswrapper[4827]: I0131 03:47:47.097654 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:47:47 crc kubenswrapper[4827]: I0131 03:47:47.097675 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:47:47Z","lastTransitionTime":"2026-01-31T03:47:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 03:47:47 crc kubenswrapper[4827]: I0131 03:47:47.109816 4827 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 31 03:47:47 crc kubenswrapper[4827]: E0131 03:47:47.110049 4827 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 31 03:47:47 crc kubenswrapper[4827]: I0131 03:47:47.200732 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:47:47 crc kubenswrapper[4827]: I0131 03:47:47.200803 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:47:47 crc kubenswrapper[4827]: I0131 03:47:47.200824 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:47:47 crc kubenswrapper[4827]: I0131 03:47:47.200853 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:47:47 crc kubenswrapper[4827]: I0131 03:47:47.200873 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:47:47Z","lastTransitionTime":"2026-01-31T03:47:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 03:47:47 crc kubenswrapper[4827]: I0131 03:47:47.304347 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:47:47 crc kubenswrapper[4827]: I0131 03:47:47.304475 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:47:47 crc kubenswrapper[4827]: I0131 03:47:47.304495 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:47:47 crc kubenswrapper[4827]: I0131 03:47:47.304583 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:47:47 crc kubenswrapper[4827]: I0131 03:47:47.304664 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:47:47Z","lastTransitionTime":"2026-01-31T03:47:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 03:47:47 crc kubenswrapper[4827]: I0131 03:47:47.407875 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:47:47 crc kubenswrapper[4827]: I0131 03:47:47.407999 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:47:47 crc kubenswrapper[4827]: I0131 03:47:47.408023 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:47:47 crc kubenswrapper[4827]: I0131 03:47:47.408059 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:47:47 crc kubenswrapper[4827]: I0131 03:47:47.408084 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:47:47Z","lastTransitionTime":"2026-01-31T03:47:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 03:47:47 crc kubenswrapper[4827]: I0131 03:47:47.511485 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:47:47 crc kubenswrapper[4827]: I0131 03:47:47.511565 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:47:47 crc kubenswrapper[4827]: I0131 03:47:47.511584 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:47:47 crc kubenswrapper[4827]: I0131 03:47:47.511615 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:47:47 crc kubenswrapper[4827]: I0131 03:47:47.511640 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:47:47Z","lastTransitionTime":"2026-01-31T03:47:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 03:47:47 crc kubenswrapper[4827]: I0131 03:47:47.615005 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:47:47 crc kubenswrapper[4827]: I0131 03:47:47.615103 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:47:47 crc kubenswrapper[4827]: I0131 03:47:47.615127 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:47:47 crc kubenswrapper[4827]: I0131 03:47:47.615153 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:47:47 crc kubenswrapper[4827]: I0131 03:47:47.615172 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:47:47Z","lastTransitionTime":"2026-01-31T03:47:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 03:47:47 crc kubenswrapper[4827]: I0131 03:47:47.718233 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:47:47 crc kubenswrapper[4827]: I0131 03:47:47.718313 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:47:47 crc kubenswrapper[4827]: I0131 03:47:47.718335 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:47:47 crc kubenswrapper[4827]: I0131 03:47:47.718360 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:47:47 crc kubenswrapper[4827]: I0131 03:47:47.718380 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:47:47Z","lastTransitionTime":"2026-01-31T03:47:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 03:47:47 crc kubenswrapper[4827]: I0131 03:47:47.822339 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:47:47 crc kubenswrapper[4827]: I0131 03:47:47.822411 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:47:47 crc kubenswrapper[4827]: I0131 03:47:47.822429 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:47:47 crc kubenswrapper[4827]: I0131 03:47:47.822458 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:47:47 crc kubenswrapper[4827]: I0131 03:47:47.822476 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:47:47Z","lastTransitionTime":"2026-01-31T03:47:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 03:47:47 crc kubenswrapper[4827]: I0131 03:47:47.926056 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:47:47 crc kubenswrapper[4827]: I0131 03:47:47.926123 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:47:47 crc kubenswrapper[4827]: I0131 03:47:47.926145 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:47:47 crc kubenswrapper[4827]: I0131 03:47:47.926179 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:47:47 crc kubenswrapper[4827]: I0131 03:47:47.926203 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:47:47Z","lastTransitionTime":"2026-01-31T03:47:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 03:47:48 crc kubenswrapper[4827]: I0131 03:47:48.028582 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:47:48 crc kubenswrapper[4827]: I0131 03:47:48.028646 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:47:48 crc kubenswrapper[4827]: I0131 03:47:48.028664 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:47:48 crc kubenswrapper[4827]: I0131 03:47:48.028692 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:47:48 crc kubenswrapper[4827]: I0131 03:47:48.028712 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:47:48Z","lastTransitionTime":"2026-01-31T03:47:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 03:47:48 crc kubenswrapper[4827]: I0131 03:47:48.091868 4827 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-15 02:44:31.212515025 +0000 UTC Jan 31 03:47:48 crc kubenswrapper[4827]: I0131 03:47:48.110152 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 31 03:47:48 crc kubenswrapper[4827]: I0131 03:47:48.110192 4827 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 03:47:48 crc kubenswrapper[4827]: E0131 03:47:48.110451 4827 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 31 03:47:48 crc kubenswrapper[4827]: I0131 03:47:48.110186 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-2shng" Jan 31 03:47:48 crc kubenswrapper[4827]: E0131 03:47:48.110603 4827 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 31 03:47:48 crc kubenswrapper[4827]: E0131 03:47:48.111391 4827 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-2shng" podUID="cf80ec31-1f83-4ed6-84e3-055cf9c88bff" Jan 31 03:47:48 crc kubenswrapper[4827]: I0131 03:47:48.128512 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-w7v8l" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c10db775-d306-4f15-97dd-b1dfed7c89e5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cbc16c20edbd99491eea6e34116e6014fd587c40800c87bff9643e9c0cb6e185\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:47:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/service
account\\\",\\\"name\\\":\\\"kube-api-access-4ch9g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T03:47:09Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-w7v8l\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:47:48Z is after 2025-08-24T17:21:41Z" Jan 31 03:47:48 crc kubenswrapper[4827]: I0131 03:47:48.131026 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:47:48 crc kubenswrapper[4827]: I0131 03:47:48.131099 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:47:48 crc kubenswrapper[4827]: I0131 03:47:48.131112 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:47:48 crc kubenswrapper[4827]: I0131 03:47:48.131129 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:47:48 crc kubenswrapper[4827]: I0131 03:47:48.131142 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:47:48Z","lastTransitionTime":"2026-01-31T03:47:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 03:47:48 crc kubenswrapper[4827]: I0131 03:47:48.148740 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-l5njv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f81e67c9-6345-48e1-91e3-794421cb3fdd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e761c28f2d53c1a50d6ce7467012664d9d9d717222b6fc648c642ad97057113f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:47:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled
\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rblt7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b967e5b38d5efa02b8cda05cf383b6aa9f24b819d055b86d12afe5290d6d5847\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:47:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rblt7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T03:47:22Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-l5njv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:47:48Z is after 2025-08-24T17:21:41Z" Jan 31 03:47:48 crc kubenswrapper[4827]: I0131 03:47:48.164825 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-2shng" err="failed to 
patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cf80ec31-1f83-4ed6-84e3-055cf9c88bff\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:24Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:24Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hjlkt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hjlkt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T03:47:24Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-2shng\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:47:48Z is after 2025-08-24T17:21:41Z" Jan 31 03:47:48 crc 
kubenswrapper[4827]: I0131 03:47:48.178812 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:08Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:47:48Z is after 2025-08-24T17:21:41Z" Jan 31 03:47:48 crc kubenswrapper[4827]: I0131 03:47:48.192022 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:08Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:47:48Z is after 2025-08-24T17:21:41Z" Jan 31 03:47:48 crc kubenswrapper[4827]: I0131 03:47:48.214836 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a0fbdd57c20e0d5bda95b44f22f117b07306ef48d75d372d958559048474af9b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:47:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-31T03:47:48Z is after 2025-08-24T17:21:41Z" Jan 31 03:47:48 crc kubenswrapper[4827]: I0131 03:47:48.228298 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:11Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:11Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://955a3f09f24522656e31cfabe08613a5129fe3d1d0693756697e3ef0469e062a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:47:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\
\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:47:48Z is after 2025-08-24T17:21:41Z" Jan 31 03:47:48 crc kubenswrapper[4827]: I0131 03:47:48.234079 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:47:48 crc kubenswrapper[4827]: I0131 03:47:48.234151 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:47:48 crc kubenswrapper[4827]: I0131 03:47:48.234174 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:47:48 crc kubenswrapper[4827]: I0131 03:47:48.234203 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:47:48 crc kubenswrapper[4827]: I0131 03:47:48.234225 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:47:48Z","lastTransitionTime":"2026-01-31T03:47:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 03:47:48 crc kubenswrapper[4827]: I0131 03:47:48.248439 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"64294dab-2843-4893-8b19-59f3ae404e02\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:46:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:46:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:46:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ff49646239dea28804978f1b3b721dcc9454572ff2301d0c92e146dacb2a3e5a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:46:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fb9a8fcde75
a532b1906ceea4704ddfecbffeb17456377272e5792f200ee67d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:46:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://73a7c7196fc77ac970ea3114b593db0c9bef943c760526721ab5492975884237\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:46:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ce197f7aa8a7bebdecdbd760e9029786331bd9497f724b0a57f57278a5b1b04c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:850
6ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:46:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T03:46:48Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:47:48Z is after 2025-08-24T17:21:41Z" Jan 31 03:47:48 crc kubenswrapper[4827]: I0131 03:47:48.267719 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:08Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:47:48Z is after 2025-08-24T17:21:41Z" Jan 31 03:47:48 crc kubenswrapper[4827]: I0131 03:47:48.285045 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-cl9c5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"77746487-d08f-4da6-82a3-bc7d8845841a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5098b5f6a44c89953722806e420dc66eaad66f11faf1c5b526bfc8ebf87364d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:47:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5xwfq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T03:47:11Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-cl9c5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:47:48Z is after 2025-08-24T17:21:41Z" Jan 31 03:47:48 crc kubenswrapper[4827]: I0131 03:47:48.306946 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://65119775a6ab2e8f81a0443d8093e0570c2c49a1b081b7b694780d07eba7ef4d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31
T03:47:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0fdf29d063e0940532d696d183771e56ce37b7b75f82548bf7ff5165ab20ccff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:47:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:47:48Z is after 2025-08-24T17:21:41Z" Jan 31 03:47:48 crc kubenswrapper[4827]: I0131 03:47:48.340050 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Jan 31 03:47:48 crc kubenswrapper[4827]: I0131 03:47:48.340128 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:47:48 crc kubenswrapper[4827]: I0131 03:47:48.340153 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:47:48 crc kubenswrapper[4827]: I0131 03:47:48.340187 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:47:48 crc kubenswrapper[4827]: I0131 03:47:48.340213 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:47:48Z","lastTransitionTime":"2026-01-31T03:47:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 03:47:48 crc kubenswrapper[4827]: I0131 03:47:48.340129 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hj2zw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"da9e7773-a24b-4e8d-b479-97e2594db0d4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:09Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:09Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bb9ced0bc1beedf1c5779410e90fa61227f5df3de7fa950ab69e1ef44b4a4918\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:47:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mt4z5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2485bd5d17713b9b9184c6b56b1dfda19a229476eabd904d3eda350c30892753\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:47:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mt4z5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://70200918240e4bc9d8f7fcb1d95be3721faa9394058a1a9671da44d9b7a915e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:47:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mt4z5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://32e69930c4476bd3a2f767f012fb4a2e00de7d46da936c6f0d31029994108e48\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:47:12Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mt4z5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ae8d3d61d2579c0afddd6c9728935c463fec8760a9fc830473c905de5f40c44\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:47:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mt4z5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ed1f4882b03bd25a6073ed3ddeb73d5a0da61c8a8af3ebfab5c90a83ce6af80f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:47:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mt4z5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d34c4275238ceb97d664f71e8315fbfdff91be63b7babf108440f8f9e387173a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d34c4275238ceb97d664f71e8315fbfdff91be63b7babf108440f8f9e387173a\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-31T03:47:37Z\\\",\\\"message\\\":\\\"o:140\\\\nI0131 03:47:37.117967 6495 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0131 03:47:37.117998 6495 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0131 03:47:37.118006 6495 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0131 03:47:37.118020 6495 
reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0131 03:47:37.118032 6495 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0131 03:47:37.118044 6495 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0131 03:47:37.118054 6495 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0131 03:47:37.118258 6495 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0131 03:47:37.118439 6495 reflector.go:311] Stopping reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0131 03:47:37.118523 6495 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0131 03:47:37.118541 6495 reflector.go:311] Stopping reflector *v1.EgressFirewall (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140\\\\nI0131 03:47:37.118959 6495 factory.go:656] Stopping \\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T03:47:36Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-hj2zw_openshift-ovn-kubernetes(da9e7773-a24b-4e8d-b479-97e2594db0d4)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mt4z5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2a0cc10519a103d1d49acd3e239636f7ac9b3a9daa533ce1f485b26ccfffad3d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:47:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mt4z5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://96ea399662528dfc8c4bac4c2b710685cb5897e67739704548ca01353d72672c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://96ea399662528dfc8c
4bac4c2b710685cb5897e67739704548ca01353d72672c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T03:47:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T03:47:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mt4z5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T03:47:09Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-hj2zw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:47:48Z is after 2025-08-24T17:21:41Z" Jan 31 03:47:48 crc kubenswrapper[4827]: I0131 03:47:48.361332 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"350d4093-0635-4787-bdf3-9b099c5a772b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:46:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:46:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:46:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://19a6d75598a153bb0745bb06aa53961f13fa3ff1fd4addb84b3ecdc66d07ee4e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:46:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3d0070a036855a06f83caec5b6ee636f1a7a1e7f3246b779e0abbcf6aebf259a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:46:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7b5f30db796321372897f564fda3a031f0714999315931e2a77b21fde674f4fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:46:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6f17307ab85479e60722f1f8484fb415c3609584ed9a20ebf17677a86558dc07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://6f17307ab85479e60722f1f8484fb415c3609584ed9a20ebf17677a86558dc07\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T03:46:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T03:46:49Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T03:46:48Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:47:48Z is after 2025-08-24T17:21:41Z" Jan 31 03:47:48 crc kubenswrapper[4827]: I0131 03:47:48.382938 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-jxh94" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e63dbb73-e1a2-4796-83c5-2a88e55566b5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://65bf7844256848a99420ff2a52724a942bd9ee07e0e587b818429ac544865232\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:47:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-94gkn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://00b8856a5712a7470f8bc58fd07644c58113fde5
a273faa89586f36381942c50\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:47:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-94gkn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T03:47:09Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-jxh94\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:47:48Z is after 2025-08-24T17:21:41Z" Jan 31 03:47:48 crc kubenswrapper[4827]: I0131 03:47:48.405765 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-q9q8q" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a696063c-4553-4032-8038-9900f09d4031\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3b899def12de10c70999d453a62d09ed0eec7e6f059a55a7e3f8e4b3ef248205\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:47:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ckwx4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T03:47:09Z\\\"}}\" for pod \"openshift-multus\"/\"multus-q9q8q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:47:48Z is after 2025-08-24T17:21:41Z" Jan 31 03:47:48 crc kubenswrapper[4827]: I0131 03:47:48.435033 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-gjc5t" err="failed 
to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b5dbff7a-4ed0-4c17-bd01-1888199225b3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cf2e9dfa02cc1d2d5e655654c4d731363a32dec4d67423160996c24260e04a79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:47:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fs4bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c615ec47f98d56124c11d46f6a3875b5718974234f7526154
11e14b5d8913dcc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c615ec47f98d56124c11d46f6a3875b5718974234f752615411e14b5d8913dcc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T03:47:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T03:47:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fs4bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c9d2a6506fd6b67003ff94a906cb0c74a7eefa9f389d5fe3929fcb743b877f8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c9d2a6506fd6b67003ff94a906cb0c74a7eefa9f389d5fe3929fcb743b877f8a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T03:47:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T03:
47:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fs4bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f5b9fd1e8570701b512508eb10490fa6c4e5fdc63b7fdb0c4810896a46be65a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f5b9fd1e8570701b512508eb10490fa6c4e5fdc63b7fdb0c4810896a46be65a7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T03:47:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T03:47:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fs4bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\
\\"cri-o://b944c8ab4e6079ae5a7a5c52f9b336fc71aa3273ed087499ffa1cc5396053660\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b944c8ab4e6079ae5a7a5c52f9b336fc71aa3273ed087499ffa1cc5396053660\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T03:47:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T03:47:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fs4bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aac07d8c3c396081f637968cd1c64525967cd198273719be0af7bee01d35fb39\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aac07d8c3c396081f637968cd1c64525967cd198273719be0af7bee01d35fb39\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T03:47:16Z\\\",\\\"r
eason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T03:47:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fs4bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://76e43eb316b4b459d9232b72620b1892bf11a896f007d5c879381d8349aa1178\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://76e43eb316b4b459d9232b72620b1892bf11a896f007d5c879381d8349aa1178\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T03:47:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T03:47:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fs4bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T03:47:09Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-gjc5t\": Internal error 
occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:47:48Z is after 2025-08-24T17:21:41Z" Jan 31 03:47:48 crc kubenswrapper[4827]: I0131 03:47:48.443127 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:47:48 crc kubenswrapper[4827]: I0131 03:47:48.443204 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:47:48 crc kubenswrapper[4827]: I0131 03:47:48.443222 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:47:48 crc kubenswrapper[4827]: I0131 03:47:48.443248 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:47:48 crc kubenswrapper[4827]: I0131 03:47:48.443269 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:47:48Z","lastTransitionTime":"2026-01-31T03:47:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 03:47:48 crc kubenswrapper[4827]: I0131 03:47:48.460532 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4273172d-ec24-4540-85cb-efc58aed3421\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:46:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:46:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:46:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4146d39a1a819389ddaee9e11df66783f6b1b98ee5fb2a244d8ba9ff2ecc88f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:46:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-d
ir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6b86ba80bd25344b29df1636aa3d8cc4cecb0e5f9c0005b4896d3ee86cd80626\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:46:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8ae08b52cf4a0a6cc4589bbf0f0fbc8527ebac3455b90185928f6fa0b6c52874\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:46:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://125b31c980495c37837eedbbbd38b795fd6e568a429da5c2914799306e7b3a46\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945
c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cfc1fe8805b7b20bc4e718ed74d4c2f265e69ed3fdc328c0f1da81450bb78bde\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-31T03:47:02Z\\\",\\\"message\\\":\\\"W0131 03:46:51.337568 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0131 03:46:51.338014 1 crypto.go:601] Generating new CA for check-endpoints-signer@1769831211 cert, and key in /tmp/serving-cert-4099090867/serving-signer.crt, /tmp/serving-cert-4099090867/serving-signer.key\\\\nI0131 03:46:51.617433 1 observer_polling.go:159] Starting file observer\\\\nW0131 03:46:51.621205 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0131 03:46:51.621653 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0131 03:46:51.625463 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-4099090867/tls.crt::/tmp/serving-cert-4099090867/tls.key\\\\\\\"\\\\nF0131 03:47:02.085590 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake 
timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T03:46:51Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:47:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e3b2757893ba910c36404f09734ea47eee3ac5a8cfcc4c06ee4ed2ada48937b1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:46:51Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f40eed75b4eac164f95a4e1a719fe3e1da60abf85d533da313ceeb6f4fb07efd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f40eed75b4eac164f95a4e1a719fe3e1da60abf85d533da313ceeb6f4fb07efd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T03:46:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"s
tartedAt\\\":\\\"2026-01-31T03:46:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T03:46:48Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:47:48Z is after 2025-08-24T17:21:41Z" Jan 31 03:47:48 crc kubenswrapper[4827]: I0131 03:47:48.545681 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:47:48 crc kubenswrapper[4827]: I0131 03:47:48.546096 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:47:48 crc kubenswrapper[4827]: I0131 03:47:48.546246 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:47:48 crc kubenswrapper[4827]: I0131 03:47:48.546381 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:47:48 crc kubenswrapper[4827]: I0131 03:47:48.546508 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:47:48Z","lastTransitionTime":"2026-01-31T03:47:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 03:47:48 crc kubenswrapper[4827]: I0131 03:47:48.652563 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:47:48 crc kubenswrapper[4827]: I0131 03:47:48.652624 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:47:48 crc kubenswrapper[4827]: I0131 03:47:48.652641 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:47:48 crc kubenswrapper[4827]: I0131 03:47:48.652669 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:47:48 crc kubenswrapper[4827]: I0131 03:47:48.652691 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:47:48Z","lastTransitionTime":"2026-01-31T03:47:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 03:47:48 crc kubenswrapper[4827]: I0131 03:47:48.756390 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:47:48 crc kubenswrapper[4827]: I0131 03:47:48.756460 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:47:48 crc kubenswrapper[4827]: I0131 03:47:48.756480 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:47:48 crc kubenswrapper[4827]: I0131 03:47:48.756506 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:47:48 crc kubenswrapper[4827]: I0131 03:47:48.756524 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:47:48Z","lastTransitionTime":"2026-01-31T03:47:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 03:47:48 crc kubenswrapper[4827]: I0131 03:47:48.860140 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:47:48 crc kubenswrapper[4827]: I0131 03:47:48.860594 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:47:48 crc kubenswrapper[4827]: I0131 03:47:48.860812 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:47:48 crc kubenswrapper[4827]: I0131 03:47:48.861043 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:47:48 crc kubenswrapper[4827]: I0131 03:47:48.861170 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:47:48Z","lastTransitionTime":"2026-01-31T03:47:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 03:47:48 crc kubenswrapper[4827]: I0131 03:47:48.964411 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:47:48 crc kubenswrapper[4827]: I0131 03:47:48.964557 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:47:48 crc kubenswrapper[4827]: I0131 03:47:48.964587 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:47:48 crc kubenswrapper[4827]: I0131 03:47:48.964621 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:47:48 crc kubenswrapper[4827]: I0131 03:47:48.964647 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:47:48Z","lastTransitionTime":"2026-01-31T03:47:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 03:47:49 crc kubenswrapper[4827]: I0131 03:47:49.068032 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:47:49 crc kubenswrapper[4827]: I0131 03:47:49.068101 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:47:49 crc kubenswrapper[4827]: I0131 03:47:49.068123 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:47:49 crc kubenswrapper[4827]: I0131 03:47:49.068155 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:47:49 crc kubenswrapper[4827]: I0131 03:47:49.068183 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:47:49Z","lastTransitionTime":"2026-01-31T03:47:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 03:47:49 crc kubenswrapper[4827]: I0131 03:47:49.093456 4827 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-09 23:34:18.871691768 +0000 UTC Jan 31 03:47:49 crc kubenswrapper[4827]: I0131 03:47:49.109825 4827 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 31 03:47:49 crc kubenswrapper[4827]: E0131 03:47:49.110055 4827 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 31 03:47:49 crc kubenswrapper[4827]: I0131 03:47:49.171282 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:47:49 crc kubenswrapper[4827]: I0131 03:47:49.171335 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:47:49 crc kubenswrapper[4827]: I0131 03:47:49.171346 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:47:49 crc kubenswrapper[4827]: I0131 03:47:49.171363 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:47:49 crc kubenswrapper[4827]: I0131 03:47:49.171374 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:47:49Z","lastTransitionTime":"2026-01-31T03:47:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 03:47:49 crc kubenswrapper[4827]: I0131 03:47:49.275176 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:47:49 crc kubenswrapper[4827]: I0131 03:47:49.275265 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:47:49 crc kubenswrapper[4827]: I0131 03:47:49.275285 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:47:49 crc kubenswrapper[4827]: I0131 03:47:49.275319 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:47:49 crc kubenswrapper[4827]: I0131 03:47:49.275340 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:47:49Z","lastTransitionTime":"2026-01-31T03:47:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 03:47:49 crc kubenswrapper[4827]: I0131 03:47:49.379757 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:47:49 crc kubenswrapper[4827]: I0131 03:47:49.379855 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:47:49 crc kubenswrapper[4827]: I0131 03:47:49.379912 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:47:49 crc kubenswrapper[4827]: I0131 03:47:49.379949 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:47:49 crc kubenswrapper[4827]: I0131 03:47:49.379979 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:47:49Z","lastTransitionTime":"2026-01-31T03:47:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 03:47:49 crc kubenswrapper[4827]: I0131 03:47:49.484576 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:47:49 crc kubenswrapper[4827]: I0131 03:47:49.484643 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:47:49 crc kubenswrapper[4827]: I0131 03:47:49.484660 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:47:49 crc kubenswrapper[4827]: I0131 03:47:49.484685 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:47:49 crc kubenswrapper[4827]: I0131 03:47:49.484709 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:47:49Z","lastTransitionTime":"2026-01-31T03:47:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 03:47:49 crc kubenswrapper[4827]: I0131 03:47:49.588271 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:47:49 crc kubenswrapper[4827]: I0131 03:47:49.588356 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:47:49 crc kubenswrapper[4827]: I0131 03:47:49.588383 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:47:49 crc kubenswrapper[4827]: I0131 03:47:49.588415 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:47:49 crc kubenswrapper[4827]: I0131 03:47:49.588435 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:47:49Z","lastTransitionTime":"2026-01-31T03:47:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 03:47:49 crc kubenswrapper[4827]: I0131 03:47:49.691329 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:47:49 crc kubenswrapper[4827]: I0131 03:47:49.691416 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:47:49 crc kubenswrapper[4827]: I0131 03:47:49.691565 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:47:49 crc kubenswrapper[4827]: I0131 03:47:49.691621 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:47:49 crc kubenswrapper[4827]: I0131 03:47:49.691647 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:47:49Z","lastTransitionTime":"2026-01-31T03:47:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 03:47:49 crc kubenswrapper[4827]: I0131 03:47:49.795534 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:47:49 crc kubenswrapper[4827]: I0131 03:47:49.795600 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:47:49 crc kubenswrapper[4827]: I0131 03:47:49.795620 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:47:49 crc kubenswrapper[4827]: I0131 03:47:49.795650 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:47:49 crc kubenswrapper[4827]: I0131 03:47:49.795672 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:47:49Z","lastTransitionTime":"2026-01-31T03:47:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 03:47:49 crc kubenswrapper[4827]: I0131 03:47:49.899177 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:47:49 crc kubenswrapper[4827]: I0131 03:47:49.899239 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:47:49 crc kubenswrapper[4827]: I0131 03:47:49.899258 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:47:49 crc kubenswrapper[4827]: I0131 03:47:49.899293 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:47:49 crc kubenswrapper[4827]: I0131 03:47:49.899314 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:47:49Z","lastTransitionTime":"2026-01-31T03:47:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 03:47:50 crc kubenswrapper[4827]: I0131 03:47:50.003279 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:47:50 crc kubenswrapper[4827]: I0131 03:47:50.003356 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:47:50 crc kubenswrapper[4827]: I0131 03:47:50.003375 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:47:50 crc kubenswrapper[4827]: I0131 03:47:50.003404 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:47:50 crc kubenswrapper[4827]: I0131 03:47:50.003423 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:47:50Z","lastTransitionTime":"2026-01-31T03:47:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 03:47:50 crc kubenswrapper[4827]: I0131 03:47:50.093602 4827 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-11 08:47:33.028817395 +0000 UTC Jan 31 03:47:50 crc kubenswrapper[4827]: I0131 03:47:50.107217 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:47:50 crc kubenswrapper[4827]: I0131 03:47:50.107643 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:47:50 crc kubenswrapper[4827]: I0131 03:47:50.107667 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:47:50 crc kubenswrapper[4827]: I0131 03:47:50.107700 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:47:50 crc kubenswrapper[4827]: I0131 03:47:50.107720 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:47:50Z","lastTransitionTime":"2026-01-31T03:47:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 03:47:50 crc kubenswrapper[4827]: I0131 03:47:50.109652 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 03:47:50 crc kubenswrapper[4827]: I0131 03:47:50.109839 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 31 03:47:50 crc kubenswrapper[4827]: I0131 03:47:50.110106 4827 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-2shng" Jan 31 03:47:50 crc kubenswrapper[4827]: E0131 03:47:50.110245 4827 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 31 03:47:50 crc kubenswrapper[4827]: E0131 03:47:50.110474 4827 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-2shng" podUID="cf80ec31-1f83-4ed6-84e3-055cf9c88bff" Jan 31 03:47:50 crc kubenswrapper[4827]: E0131 03:47:50.110096 4827 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 31 03:47:50 crc kubenswrapper[4827]: I0131 03:47:50.211230 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:47:50 crc kubenswrapper[4827]: I0131 03:47:50.211291 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:47:50 crc kubenswrapper[4827]: I0131 03:47:50.211309 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:47:50 crc kubenswrapper[4827]: I0131 03:47:50.211334 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:47:50 crc kubenswrapper[4827]: I0131 03:47:50.211354 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:47:50Z","lastTransitionTime":"2026-01-31T03:47:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 03:47:50 crc kubenswrapper[4827]: I0131 03:47:50.314587 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:47:50 crc kubenswrapper[4827]: I0131 03:47:50.314667 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:47:50 crc kubenswrapper[4827]: I0131 03:47:50.314687 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:47:50 crc kubenswrapper[4827]: I0131 03:47:50.314710 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:47:50 crc kubenswrapper[4827]: I0131 03:47:50.314727 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:47:50Z","lastTransitionTime":"2026-01-31T03:47:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 03:47:50 crc kubenswrapper[4827]: I0131 03:47:50.418340 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:47:50 crc kubenswrapper[4827]: I0131 03:47:50.418409 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:47:50 crc kubenswrapper[4827]: I0131 03:47:50.418428 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:47:50 crc kubenswrapper[4827]: I0131 03:47:50.418457 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:47:50 crc kubenswrapper[4827]: I0131 03:47:50.418483 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:47:50Z","lastTransitionTime":"2026-01-31T03:47:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 03:47:50 crc kubenswrapper[4827]: I0131 03:47:50.522102 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:47:50 crc kubenswrapper[4827]: I0131 03:47:50.522169 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:47:50 crc kubenswrapper[4827]: I0131 03:47:50.522190 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:47:50 crc kubenswrapper[4827]: I0131 03:47:50.522220 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:47:50 crc kubenswrapper[4827]: I0131 03:47:50.522242 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:47:50Z","lastTransitionTime":"2026-01-31T03:47:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 03:47:50 crc kubenswrapper[4827]: I0131 03:47:50.625555 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:47:50 crc kubenswrapper[4827]: I0131 03:47:50.625635 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:47:50 crc kubenswrapper[4827]: I0131 03:47:50.625655 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:47:50 crc kubenswrapper[4827]: I0131 03:47:50.625682 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:47:50 crc kubenswrapper[4827]: I0131 03:47:50.625702 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:47:50Z","lastTransitionTime":"2026-01-31T03:47:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 03:47:50 crc kubenswrapper[4827]: I0131 03:47:50.729272 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:47:50 crc kubenswrapper[4827]: I0131 03:47:50.729318 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:47:50 crc kubenswrapper[4827]: I0131 03:47:50.729331 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:47:50 crc kubenswrapper[4827]: I0131 03:47:50.729350 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:47:50 crc kubenswrapper[4827]: I0131 03:47:50.729363 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:47:50Z","lastTransitionTime":"2026-01-31T03:47:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 03:47:50 crc kubenswrapper[4827]: I0131 03:47:50.833755 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:47:50 crc kubenswrapper[4827]: I0131 03:47:50.833811 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:47:50 crc kubenswrapper[4827]: I0131 03:47:50.833829 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:47:50 crc kubenswrapper[4827]: I0131 03:47:50.833855 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:47:50 crc kubenswrapper[4827]: I0131 03:47:50.833873 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:47:50Z","lastTransitionTime":"2026-01-31T03:47:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 03:47:50 crc kubenswrapper[4827]: I0131 03:47:50.937708 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:47:50 crc kubenswrapper[4827]: I0131 03:47:50.937778 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:47:50 crc kubenswrapper[4827]: I0131 03:47:50.937799 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:47:50 crc kubenswrapper[4827]: I0131 03:47:50.937829 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:47:50 crc kubenswrapper[4827]: I0131 03:47:50.937850 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:47:50Z","lastTransitionTime":"2026-01-31T03:47:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 03:47:51 crc kubenswrapper[4827]: I0131 03:47:51.041616 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:47:51 crc kubenswrapper[4827]: I0131 03:47:51.041708 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:47:51 crc kubenswrapper[4827]: I0131 03:47:51.041733 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:47:51 crc kubenswrapper[4827]: I0131 03:47:51.041764 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:47:51 crc kubenswrapper[4827]: I0131 03:47:51.041788 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:47:51Z","lastTransitionTime":"2026-01-31T03:47:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 03:47:51 crc kubenswrapper[4827]: I0131 03:47:51.094496 4827 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-16 15:52:30.833181015 +0000 UTC Jan 31 03:47:51 crc kubenswrapper[4827]: I0131 03:47:51.109930 4827 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 31 03:47:51 crc kubenswrapper[4827]: E0131 03:47:51.110119 4827 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 31 03:47:51 crc kubenswrapper[4827]: I0131 03:47:51.145815 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:47:51 crc kubenswrapper[4827]: I0131 03:47:51.145917 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:47:51 crc kubenswrapper[4827]: I0131 03:47:51.145940 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:47:51 crc kubenswrapper[4827]: I0131 03:47:51.145976 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:47:51 crc kubenswrapper[4827]: I0131 03:47:51.145997 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:47:51Z","lastTransitionTime":"2026-01-31T03:47:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 03:47:51 crc kubenswrapper[4827]: I0131 03:47:51.249704 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:47:51 crc kubenswrapper[4827]: I0131 03:47:51.249753 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:47:51 crc kubenswrapper[4827]: I0131 03:47:51.249769 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:47:51 crc kubenswrapper[4827]: I0131 03:47:51.249791 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:47:51 crc kubenswrapper[4827]: I0131 03:47:51.249807 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:47:51Z","lastTransitionTime":"2026-01-31T03:47:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 03:47:51 crc kubenswrapper[4827]: I0131 03:47:51.354337 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:47:51 crc kubenswrapper[4827]: I0131 03:47:51.354401 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:47:51 crc kubenswrapper[4827]: I0131 03:47:51.354418 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:47:51 crc kubenswrapper[4827]: I0131 03:47:51.354441 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:47:51 crc kubenswrapper[4827]: I0131 03:47:51.354458 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:47:51Z","lastTransitionTime":"2026-01-31T03:47:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 03:47:51 crc kubenswrapper[4827]: I0131 03:47:51.457513 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:47:51 crc kubenswrapper[4827]: I0131 03:47:51.457611 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:47:51 crc kubenswrapper[4827]: I0131 03:47:51.457634 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:47:51 crc kubenswrapper[4827]: I0131 03:47:51.457671 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:47:51 crc kubenswrapper[4827]: I0131 03:47:51.457693 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:47:51Z","lastTransitionTime":"2026-01-31T03:47:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 03:47:51 crc kubenswrapper[4827]: I0131 03:47:51.560009 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:47:51 crc kubenswrapper[4827]: I0131 03:47:51.560074 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:47:51 crc kubenswrapper[4827]: I0131 03:47:51.560100 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:47:51 crc kubenswrapper[4827]: I0131 03:47:51.560129 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:47:51 crc kubenswrapper[4827]: I0131 03:47:51.560150 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:47:51Z","lastTransitionTime":"2026-01-31T03:47:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 03:47:51 crc kubenswrapper[4827]: I0131 03:47:51.666148 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:47:51 crc kubenswrapper[4827]: I0131 03:47:51.666225 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:47:51 crc kubenswrapper[4827]: I0131 03:47:51.666252 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:47:51 crc kubenswrapper[4827]: I0131 03:47:51.666286 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:47:51 crc kubenswrapper[4827]: I0131 03:47:51.666309 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:47:51Z","lastTransitionTime":"2026-01-31T03:47:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 03:47:51 crc kubenswrapper[4827]: I0131 03:47:51.770224 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:47:51 crc kubenswrapper[4827]: I0131 03:47:51.770295 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:47:51 crc kubenswrapper[4827]: I0131 03:47:51.770319 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:47:51 crc kubenswrapper[4827]: I0131 03:47:51.770348 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:47:51 crc kubenswrapper[4827]: I0131 03:47:51.770403 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:47:51Z","lastTransitionTime":"2026-01-31T03:47:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 03:47:51 crc kubenswrapper[4827]: I0131 03:47:51.874198 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:47:51 crc kubenswrapper[4827]: I0131 03:47:51.874246 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:47:51 crc kubenswrapper[4827]: I0131 03:47:51.874263 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:47:51 crc kubenswrapper[4827]: I0131 03:47:51.874287 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:47:51 crc kubenswrapper[4827]: I0131 03:47:51.874307 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:47:51Z","lastTransitionTime":"2026-01-31T03:47:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 03:47:51 crc kubenswrapper[4827]: I0131 03:47:51.977987 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:47:51 crc kubenswrapper[4827]: I0131 03:47:51.978069 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:47:51 crc kubenswrapper[4827]: I0131 03:47:51.978092 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:47:51 crc kubenswrapper[4827]: I0131 03:47:51.978123 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:47:51 crc kubenswrapper[4827]: I0131 03:47:51.978147 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:47:51Z","lastTransitionTime":"2026-01-31T03:47:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 03:47:52 crc kubenswrapper[4827]: I0131 03:47:52.081647 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:47:52 crc kubenswrapper[4827]: I0131 03:47:52.081732 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:47:52 crc kubenswrapper[4827]: I0131 03:47:52.081757 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:47:52 crc kubenswrapper[4827]: I0131 03:47:52.081788 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:47:52 crc kubenswrapper[4827]: I0131 03:47:52.081809 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:47:52Z","lastTransitionTime":"2026-01-31T03:47:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 03:47:52 crc kubenswrapper[4827]: I0131 03:47:52.095624 4827 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-17 02:33:30.347415237 +0000 UTC Jan 31 03:47:52 crc kubenswrapper[4827]: I0131 03:47:52.110168 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 31 03:47:52 crc kubenswrapper[4827]: I0131 03:47:52.110168 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 03:47:52 crc kubenswrapper[4827]: I0131 03:47:52.110345 4827 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-2shng" Jan 31 03:47:52 crc kubenswrapper[4827]: E0131 03:47:52.110562 4827 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 31 03:47:52 crc kubenswrapper[4827]: E0131 03:47:52.110786 4827 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 31 03:47:52 crc kubenswrapper[4827]: E0131 03:47:52.110960 4827 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-2shng" podUID="cf80ec31-1f83-4ed6-84e3-055cf9c88bff" Jan 31 03:47:52 crc kubenswrapper[4827]: I0131 03:47:52.339568 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:47:52 crc kubenswrapper[4827]: I0131 03:47:52.339697 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:47:52 crc kubenswrapper[4827]: I0131 03:47:52.339718 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:47:52 crc kubenswrapper[4827]: I0131 03:47:52.339744 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:47:52 crc kubenswrapper[4827]: I0131 03:47:52.339763 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:47:52Z","lastTransitionTime":"2026-01-31T03:47:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 03:47:52 crc kubenswrapper[4827]: I0131 03:47:52.442740 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:47:52 crc kubenswrapper[4827]: I0131 03:47:52.442787 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:47:52 crc kubenswrapper[4827]: I0131 03:47:52.442800 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:47:52 crc kubenswrapper[4827]: I0131 03:47:52.442817 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:47:52 crc kubenswrapper[4827]: I0131 03:47:52.442835 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:47:52Z","lastTransitionTime":"2026-01-31T03:47:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 03:47:52 crc kubenswrapper[4827]: I0131 03:47:52.546719 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:47:52 crc kubenswrapper[4827]: I0131 03:47:52.546782 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:47:52 crc kubenswrapper[4827]: I0131 03:47:52.546800 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:47:52 crc kubenswrapper[4827]: I0131 03:47:52.546828 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:47:52 crc kubenswrapper[4827]: I0131 03:47:52.546849 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:47:52Z","lastTransitionTime":"2026-01-31T03:47:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 03:47:52 crc kubenswrapper[4827]: I0131 03:47:52.652210 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:47:52 crc kubenswrapper[4827]: I0131 03:47:52.652273 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:47:52 crc kubenswrapper[4827]: I0131 03:47:52.652292 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:47:52 crc kubenswrapper[4827]: I0131 03:47:52.652322 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:47:52 crc kubenswrapper[4827]: I0131 03:47:52.652347 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:47:52Z","lastTransitionTime":"2026-01-31T03:47:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 03:47:52 crc kubenswrapper[4827]: I0131 03:47:52.754704 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:47:52 crc kubenswrapper[4827]: I0131 03:47:52.754738 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:47:52 crc kubenswrapper[4827]: I0131 03:47:52.754747 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:47:52 crc kubenswrapper[4827]: I0131 03:47:52.754761 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:47:52 crc kubenswrapper[4827]: I0131 03:47:52.754770 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:47:52Z","lastTransitionTime":"2026-01-31T03:47:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 03:47:52 crc kubenswrapper[4827]: I0131 03:47:52.856751 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:47:52 crc kubenswrapper[4827]: I0131 03:47:52.856778 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:47:52 crc kubenswrapper[4827]: I0131 03:47:52.856786 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:47:52 crc kubenswrapper[4827]: I0131 03:47:52.856797 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:47:52 crc kubenswrapper[4827]: I0131 03:47:52.856806 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:47:52Z","lastTransitionTime":"2026-01-31T03:47:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 03:47:52 crc kubenswrapper[4827]: I0131 03:47:52.959126 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:47:52 crc kubenswrapper[4827]: I0131 03:47:52.959170 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:47:52 crc kubenswrapper[4827]: I0131 03:47:52.959178 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:47:52 crc kubenswrapper[4827]: I0131 03:47:52.959191 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:47:52 crc kubenswrapper[4827]: I0131 03:47:52.959199 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:47:52Z","lastTransitionTime":"2026-01-31T03:47:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 03:47:53 crc kubenswrapper[4827]: I0131 03:47:53.061474 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:47:53 crc kubenswrapper[4827]: I0131 03:47:53.061588 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:47:53 crc kubenswrapper[4827]: I0131 03:47:53.061611 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:47:53 crc kubenswrapper[4827]: I0131 03:47:53.061649 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:47:53 crc kubenswrapper[4827]: I0131 03:47:53.061674 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:47:53Z","lastTransitionTime":"2026-01-31T03:47:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 03:47:53 crc kubenswrapper[4827]: I0131 03:47:53.096628 4827 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-06 02:41:11.154818786 +0000 UTC Jan 31 03:47:53 crc kubenswrapper[4827]: I0131 03:47:53.110184 4827 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 31 03:47:53 crc kubenswrapper[4827]: E0131 03:47:53.110778 4827 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 31 03:47:53 crc kubenswrapper[4827]: I0131 03:47:53.111145 4827 scope.go:117] "RemoveContainer" containerID="d34c4275238ceb97d664f71e8315fbfdff91be63b7babf108440f8f9e387173a" Jan 31 03:47:53 crc kubenswrapper[4827]: E0131 03:47:53.111507 4827 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-hj2zw_openshift-ovn-kubernetes(da9e7773-a24b-4e8d-b479-97e2594db0d4)\"" pod="openshift-ovn-kubernetes/ovnkube-node-hj2zw" podUID="da9e7773-a24b-4e8d-b479-97e2594db0d4" Jan 31 03:47:53 crc kubenswrapper[4827]: I0131 03:47:53.165073 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:47:53 crc kubenswrapper[4827]: I0131 03:47:53.165125 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:47:53 crc kubenswrapper[4827]: I0131 03:47:53.165138 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:47:53 crc kubenswrapper[4827]: I0131 03:47:53.165161 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:47:53 crc kubenswrapper[4827]: I0131 03:47:53.165176 4827 setters.go:603] "Node became not ready" 
node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:47:53Z","lastTransitionTime":"2026-01-31T03:47:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 03:47:53 crc kubenswrapper[4827]: I0131 03:47:53.268470 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:47:53 crc kubenswrapper[4827]: I0131 03:47:53.268542 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:47:53 crc kubenswrapper[4827]: I0131 03:47:53.268564 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:47:53 crc kubenswrapper[4827]: I0131 03:47:53.268594 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:47:53 crc kubenswrapper[4827]: I0131 03:47:53.268620 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:47:53Z","lastTransitionTime":"2026-01-31T03:47:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 03:47:53 crc kubenswrapper[4827]: I0131 03:47:53.371674 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:47:53 crc kubenswrapper[4827]: I0131 03:47:53.371717 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:47:53 crc kubenswrapper[4827]: I0131 03:47:53.371732 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:47:53 crc kubenswrapper[4827]: I0131 03:47:53.371752 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:47:53 crc kubenswrapper[4827]: I0131 03:47:53.371767 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:47:53Z","lastTransitionTime":"2026-01-31T03:47:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 03:47:53 crc kubenswrapper[4827]: I0131 03:47:53.474539 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:47:53 crc kubenswrapper[4827]: I0131 03:47:53.474587 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:47:53 crc kubenswrapper[4827]: I0131 03:47:53.474601 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:47:53 crc kubenswrapper[4827]: I0131 03:47:53.474619 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:47:53 crc kubenswrapper[4827]: I0131 03:47:53.474631 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:47:53Z","lastTransitionTime":"2026-01-31T03:47:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 03:47:53 crc kubenswrapper[4827]: I0131 03:47:53.577188 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:47:53 crc kubenswrapper[4827]: I0131 03:47:53.577233 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:47:53 crc kubenswrapper[4827]: I0131 03:47:53.577251 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:47:53 crc kubenswrapper[4827]: I0131 03:47:53.577273 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:47:53 crc kubenswrapper[4827]: I0131 03:47:53.577292 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:47:53Z","lastTransitionTime":"2026-01-31T03:47:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 03:47:53 crc kubenswrapper[4827]: I0131 03:47:53.679652 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:47:53 crc kubenswrapper[4827]: I0131 03:47:53.679715 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:47:53 crc kubenswrapper[4827]: I0131 03:47:53.679750 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:47:53 crc kubenswrapper[4827]: I0131 03:47:53.679783 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:47:53 crc kubenswrapper[4827]: I0131 03:47:53.679805 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:47:53Z","lastTransitionTime":"2026-01-31T03:47:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 03:47:53 crc kubenswrapper[4827]: I0131 03:47:53.783220 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:47:53 crc kubenswrapper[4827]: I0131 03:47:53.783253 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:47:53 crc kubenswrapper[4827]: I0131 03:47:53.783262 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:47:53 crc kubenswrapper[4827]: I0131 03:47:53.783276 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:47:53 crc kubenswrapper[4827]: I0131 03:47:53.783285 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:47:53Z","lastTransitionTime":"2026-01-31T03:47:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 03:47:53 crc kubenswrapper[4827]: I0131 03:47:53.885674 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:47:53 crc kubenswrapper[4827]: I0131 03:47:53.885715 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:47:53 crc kubenswrapper[4827]: I0131 03:47:53.885727 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:47:53 crc kubenswrapper[4827]: I0131 03:47:53.885745 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:47:53 crc kubenswrapper[4827]: I0131 03:47:53.885757 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:47:53Z","lastTransitionTime":"2026-01-31T03:47:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 03:47:53 crc kubenswrapper[4827]: I0131 03:47:53.987920 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:47:53 crc kubenswrapper[4827]: I0131 03:47:53.987961 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:47:53 crc kubenswrapper[4827]: I0131 03:47:53.987999 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:47:53 crc kubenswrapper[4827]: I0131 03:47:53.988019 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:47:53 crc kubenswrapper[4827]: I0131 03:47:53.988031 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:47:53Z","lastTransitionTime":"2026-01-31T03:47:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 03:47:54 crc kubenswrapper[4827]: I0131 03:47:54.090413 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:47:54 crc kubenswrapper[4827]: I0131 03:47:54.090453 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:47:54 crc kubenswrapper[4827]: I0131 03:47:54.090465 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:47:54 crc kubenswrapper[4827]: I0131 03:47:54.090484 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:47:54 crc kubenswrapper[4827]: I0131 03:47:54.090495 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:47:54Z","lastTransitionTime":"2026-01-31T03:47:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 03:47:54 crc kubenswrapper[4827]: I0131 03:47:54.096761 4827 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-09 03:55:24.073537104 +0000 UTC Jan 31 03:47:54 crc kubenswrapper[4827]: I0131 03:47:54.109082 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-2shng" Jan 31 03:47:54 crc kubenswrapper[4827]: I0131 03:47:54.109157 4827 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 31 03:47:54 crc kubenswrapper[4827]: E0131 03:47:54.109201 4827 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-2shng" podUID="cf80ec31-1f83-4ed6-84e3-055cf9c88bff" Jan 31 03:47:54 crc kubenswrapper[4827]: E0131 03:47:54.109355 4827 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 31 03:47:54 crc kubenswrapper[4827]: I0131 03:47:54.109567 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 03:47:54 crc kubenswrapper[4827]: E0131 03:47:54.109721 4827 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 31 03:47:54 crc kubenswrapper[4827]: I0131 03:47:54.193641 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:47:54 crc kubenswrapper[4827]: I0131 03:47:54.193726 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:47:54 crc kubenswrapper[4827]: I0131 03:47:54.193744 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:47:54 crc kubenswrapper[4827]: I0131 03:47:54.193775 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:47:54 crc kubenswrapper[4827]: I0131 03:47:54.193796 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:47:54Z","lastTransitionTime":"2026-01-31T03:47:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 03:47:54 crc kubenswrapper[4827]: I0131 03:47:54.296236 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:47:54 crc kubenswrapper[4827]: I0131 03:47:54.296280 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:47:54 crc kubenswrapper[4827]: I0131 03:47:54.296289 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:47:54 crc kubenswrapper[4827]: I0131 03:47:54.296304 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:47:54 crc kubenswrapper[4827]: I0131 03:47:54.296313 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:47:54Z","lastTransitionTime":"2026-01-31T03:47:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 03:47:54 crc kubenswrapper[4827]: I0131 03:47:54.398986 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:47:54 crc kubenswrapper[4827]: I0131 03:47:54.399169 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:47:54 crc kubenswrapper[4827]: I0131 03:47:54.399188 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:47:54 crc kubenswrapper[4827]: I0131 03:47:54.399216 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:47:54 crc kubenswrapper[4827]: I0131 03:47:54.399237 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:47:54Z","lastTransitionTime":"2026-01-31T03:47:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 03:47:54 crc kubenswrapper[4827]: I0131 03:47:54.501610 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:47:54 crc kubenswrapper[4827]: I0131 03:47:54.501650 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:47:54 crc kubenswrapper[4827]: I0131 03:47:54.501662 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:47:54 crc kubenswrapper[4827]: I0131 03:47:54.501680 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:47:54 crc kubenswrapper[4827]: I0131 03:47:54.501694 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:47:54Z","lastTransitionTime":"2026-01-31T03:47:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 03:47:54 crc kubenswrapper[4827]: I0131 03:47:54.606913 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:47:54 crc kubenswrapper[4827]: I0131 03:47:54.606962 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:47:54 crc kubenswrapper[4827]: I0131 03:47:54.606976 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:47:54 crc kubenswrapper[4827]: I0131 03:47:54.607002 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:47:54 crc kubenswrapper[4827]: I0131 03:47:54.607015 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:47:54Z","lastTransitionTime":"2026-01-31T03:47:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 03:47:54 crc kubenswrapper[4827]: I0131 03:47:54.709687 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:47:54 crc kubenswrapper[4827]: I0131 03:47:54.709724 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:47:54 crc kubenswrapper[4827]: I0131 03:47:54.709741 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:47:54 crc kubenswrapper[4827]: I0131 03:47:54.709760 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:47:54 crc kubenswrapper[4827]: I0131 03:47:54.709773 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:47:54Z","lastTransitionTime":"2026-01-31T03:47:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 03:47:54 crc kubenswrapper[4827]: I0131 03:47:54.812359 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:47:54 crc kubenswrapper[4827]: I0131 03:47:54.812389 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:47:54 crc kubenswrapper[4827]: I0131 03:47:54.812401 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:47:54 crc kubenswrapper[4827]: I0131 03:47:54.812418 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:47:54 crc kubenswrapper[4827]: I0131 03:47:54.812428 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:47:54Z","lastTransitionTime":"2026-01-31T03:47:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 03:47:54 crc kubenswrapper[4827]: I0131 03:47:54.915266 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:47:54 crc kubenswrapper[4827]: I0131 03:47:54.915348 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:47:54 crc kubenswrapper[4827]: I0131 03:47:54.915375 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:47:54 crc kubenswrapper[4827]: I0131 03:47:54.915409 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:47:54 crc kubenswrapper[4827]: I0131 03:47:54.915433 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:47:54Z","lastTransitionTime":"2026-01-31T03:47:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 03:47:55 crc kubenswrapper[4827]: I0131 03:47:55.018684 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:47:55 crc kubenswrapper[4827]: I0131 03:47:55.018759 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:47:55 crc kubenswrapper[4827]: I0131 03:47:55.018780 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:47:55 crc kubenswrapper[4827]: I0131 03:47:55.018807 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:47:55 crc kubenswrapper[4827]: I0131 03:47:55.018826 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:47:55Z","lastTransitionTime":"2026-01-31T03:47:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 03:47:55 crc kubenswrapper[4827]: I0131 03:47:55.097035 4827 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-08 19:07:56.889243407 +0000 UTC Jan 31 03:47:55 crc kubenswrapper[4827]: I0131 03:47:55.109596 4827 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 31 03:47:55 crc kubenswrapper[4827]: E0131 03:47:55.109776 4827 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 31 03:47:55 crc kubenswrapper[4827]: I0131 03:47:55.121194 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:47:55 crc kubenswrapper[4827]: I0131 03:47:55.121230 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:47:55 crc kubenswrapper[4827]: I0131 03:47:55.121245 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:47:55 crc kubenswrapper[4827]: I0131 03:47:55.121261 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:47:55 crc kubenswrapper[4827]: I0131 03:47:55.121273 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:47:55Z","lastTransitionTime":"2026-01-31T03:47:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 03:47:55 crc kubenswrapper[4827]: I0131 03:47:55.224197 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:47:55 crc kubenswrapper[4827]: I0131 03:47:55.224258 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:47:55 crc kubenswrapper[4827]: I0131 03:47:55.224276 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:47:55 crc kubenswrapper[4827]: I0131 03:47:55.224304 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:47:55 crc kubenswrapper[4827]: I0131 03:47:55.224322 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:47:55Z","lastTransitionTime":"2026-01-31T03:47:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 03:47:55 crc kubenswrapper[4827]: I0131 03:47:55.326596 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:47:55 crc kubenswrapper[4827]: I0131 03:47:55.326663 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:47:55 crc kubenswrapper[4827]: I0131 03:47:55.326680 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:47:55 crc kubenswrapper[4827]: I0131 03:47:55.326705 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:47:55 crc kubenswrapper[4827]: I0131 03:47:55.326722 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:47:55Z","lastTransitionTime":"2026-01-31T03:47:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 03:47:55 crc kubenswrapper[4827]: I0131 03:47:55.404488 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:47:55 crc kubenswrapper[4827]: I0131 03:47:55.404529 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:47:55 crc kubenswrapper[4827]: I0131 03:47:55.404544 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:47:55 crc kubenswrapper[4827]: I0131 03:47:55.404559 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:47:55 crc kubenswrapper[4827]: I0131 03:47:55.404570 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:47:55Z","lastTransitionTime":"2026-01-31T03:47:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 03:47:55 crc kubenswrapper[4827]: E0131 03:47:55.422079 4827 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404564Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865364Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T03:47:55Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:55Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T03:47:55Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:55Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T03:47:55Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:55Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T03:47:55Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:55Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"0f4c9cc6-4be7-45e8-ab30-471f307c1c16\\\",\\\"systemUUID\\\":\\\"9b087aa6-4510-46ee-bc39-2317e4ea4d1d\\\"},\\\"runtimeHan
dlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:47:55Z is after 2025-08-24T17:21:41Z" Jan 31 03:47:55 crc kubenswrapper[4827]: I0131 03:47:55.426686 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:47:55 crc kubenswrapper[4827]: I0131 03:47:55.426722 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:47:55 crc kubenswrapper[4827]: I0131 03:47:55.426735 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:47:55 crc kubenswrapper[4827]: I0131 03:47:55.426754 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:47:55 crc kubenswrapper[4827]: I0131 03:47:55.426767 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:47:55Z","lastTransitionTime":"2026-01-31T03:47:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 03:47:55 crc kubenswrapper[4827]: E0131 03:47:55.441150 4827 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404564Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865364Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T03:47:55Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:55Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T03:47:55Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:55Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T03:47:55Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:55Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T03:47:55Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:55Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"0f4c9cc6-4be7-45e8-ab30-471f307c1c16\\\",\\\"systemUUID\\\":\\\"9b087aa6-4510-46ee-bc39-2317e4ea4d1d\\\"},\\\"runtimeHan
dlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:47:55Z is after 2025-08-24T17:21:41Z" Jan 31 03:47:55 crc kubenswrapper[4827]: I0131 03:47:55.445997 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:47:55 crc kubenswrapper[4827]: I0131 03:47:55.446058 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:47:55 crc kubenswrapper[4827]: I0131 03:47:55.446079 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:47:55 crc kubenswrapper[4827]: I0131 03:47:55.446105 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:47:55 crc kubenswrapper[4827]: I0131 03:47:55.446122 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:47:55Z","lastTransitionTime":"2026-01-31T03:47:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 03:47:55 crc kubenswrapper[4827]: E0131 03:47:55.460518 4827 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404564Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865364Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T03:47:55Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:55Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T03:47:55Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:55Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T03:47:55Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:55Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T03:47:55Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:55Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"0f4c9cc6-4be7-45e8-ab30-471f307c1c16\\\",\\\"systemUUID\\\":\\\"9b087aa6-4510-46ee-bc39-2317e4ea4d1d\\\"},\\\"runtimeHan
dlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:47:55Z is after 2025-08-24T17:21:41Z" Jan 31 03:47:55 crc kubenswrapper[4827]: I0131 03:47:55.464277 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:47:55 crc kubenswrapper[4827]: I0131 03:47:55.464316 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:47:55 crc kubenswrapper[4827]: I0131 03:47:55.464328 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:47:55 crc kubenswrapper[4827]: I0131 03:47:55.464347 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:47:55 crc kubenswrapper[4827]: I0131 03:47:55.464359 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:47:55Z","lastTransitionTime":"2026-01-31T03:47:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 03:47:55 crc kubenswrapper[4827]: E0131 03:47:55.496251 4827 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 31 03:47:55 crc kubenswrapper[4827]: I0131 03:47:55.497734 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:47:55 crc kubenswrapper[4827]: I0131 03:47:55.497761 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:47:55 crc kubenswrapper[4827]: I0131 03:47:55.497770 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:47:55 crc kubenswrapper[4827]: I0131 03:47:55.497783 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:47:55 crc kubenswrapper[4827]: I0131 03:47:55.497793 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:47:55Z","lastTransitionTime":"2026-01-31T03:47:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in 
/etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 03:47:55 crc kubenswrapper[4827]: I0131 03:47:55.600531 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:47:55 crc kubenswrapper[4827]: I0131 03:47:55.600585 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:47:55 crc kubenswrapper[4827]: I0131 03:47:55.600605 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:47:55 crc kubenswrapper[4827]: I0131 03:47:55.600631 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:47:55 crc kubenswrapper[4827]: I0131 03:47:55.600652 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:47:55Z","lastTransitionTime":"2026-01-31T03:47:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 03:47:55 crc kubenswrapper[4827]: I0131 03:47:55.703021 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:47:55 crc kubenswrapper[4827]: I0131 03:47:55.703091 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:47:55 crc kubenswrapper[4827]: I0131 03:47:55.703116 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:47:55 crc kubenswrapper[4827]: I0131 03:47:55.703149 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:47:55 crc kubenswrapper[4827]: I0131 03:47:55.703175 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:47:55Z","lastTransitionTime":"2026-01-31T03:47:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 03:47:55 crc kubenswrapper[4827]: I0131 03:47:55.805729 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:47:55 crc kubenswrapper[4827]: I0131 03:47:55.805760 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:47:55 crc kubenswrapper[4827]: I0131 03:47:55.805773 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:47:55 crc kubenswrapper[4827]: I0131 03:47:55.805788 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:47:55 crc kubenswrapper[4827]: I0131 03:47:55.805801 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:47:55Z","lastTransitionTime":"2026-01-31T03:47:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 03:47:55 crc kubenswrapper[4827]: I0131 03:47:55.907997 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:47:55 crc kubenswrapper[4827]: I0131 03:47:55.908034 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:47:55 crc kubenswrapper[4827]: I0131 03:47:55.908043 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:47:55 crc kubenswrapper[4827]: I0131 03:47:55.908058 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:47:55 crc kubenswrapper[4827]: I0131 03:47:55.908069 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:47:55Z","lastTransitionTime":"2026-01-31T03:47:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 03:47:55 crc kubenswrapper[4827]: I0131 03:47:55.980988 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/cf80ec31-1f83-4ed6-84e3-055cf9c88bff-metrics-certs\") pod \"network-metrics-daemon-2shng\" (UID: \"cf80ec31-1f83-4ed6-84e3-055cf9c88bff\") " pod="openshift-multus/network-metrics-daemon-2shng" Jan 31 03:47:55 crc kubenswrapper[4827]: E0131 03:47:55.981152 4827 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 31 03:47:55 crc kubenswrapper[4827]: E0131 03:47:55.981246 4827 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/cf80ec31-1f83-4ed6-84e3-055cf9c88bff-metrics-certs podName:cf80ec31-1f83-4ed6-84e3-055cf9c88bff nodeName:}" failed. No retries permitted until 2026-01-31 03:48:27.981221724 +0000 UTC m=+100.668302203 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/cf80ec31-1f83-4ed6-84e3-055cf9c88bff-metrics-certs") pod "network-metrics-daemon-2shng" (UID: "cf80ec31-1f83-4ed6-84e3-055cf9c88bff") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 31 03:47:56 crc kubenswrapper[4827]: I0131 03:47:56.010592 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:47:56 crc kubenswrapper[4827]: I0131 03:47:56.010642 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:47:56 crc kubenswrapper[4827]: I0131 03:47:56.010654 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:47:56 crc kubenswrapper[4827]: I0131 03:47:56.010671 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:47:56 crc kubenswrapper[4827]: I0131 03:47:56.010683 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:47:56Z","lastTransitionTime":"2026-01-31T03:47:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 03:47:56 crc kubenswrapper[4827]: I0131 03:47:56.097757 4827 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-14 00:56:17.893599279 +0000 UTC Jan 31 03:47:56 crc kubenswrapper[4827]: I0131 03:47:56.109251 4827 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 03:47:56 crc kubenswrapper[4827]: I0131 03:47:56.109321 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 31 03:47:56 crc kubenswrapper[4827]: I0131 03:47:56.109274 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-2shng" Jan 31 03:47:56 crc kubenswrapper[4827]: E0131 03:47:56.109457 4827 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 31 03:47:56 crc kubenswrapper[4827]: E0131 03:47:56.109649 4827 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 31 03:47:56 crc kubenswrapper[4827]: E0131 03:47:56.109780 4827 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-2shng" podUID="cf80ec31-1f83-4ed6-84e3-055cf9c88bff" Jan 31 03:47:56 crc kubenswrapper[4827]: I0131 03:47:56.113549 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:47:56 crc kubenswrapper[4827]: I0131 03:47:56.113594 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:47:56 crc kubenswrapper[4827]: I0131 03:47:56.113605 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:47:56 crc kubenswrapper[4827]: I0131 03:47:56.113622 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:47:56 crc kubenswrapper[4827]: I0131 03:47:56.113636 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:47:56Z","lastTransitionTime":"2026-01-31T03:47:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 03:47:56 crc kubenswrapper[4827]: I0131 03:47:56.217294 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:47:56 crc kubenswrapper[4827]: I0131 03:47:56.217367 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:47:56 crc kubenswrapper[4827]: I0131 03:47:56.217388 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:47:56 crc kubenswrapper[4827]: I0131 03:47:56.217418 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:47:56 crc kubenswrapper[4827]: I0131 03:47:56.217442 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:47:56Z","lastTransitionTime":"2026-01-31T03:47:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 03:47:56 crc kubenswrapper[4827]: I0131 03:47:56.321074 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:47:56 crc kubenswrapper[4827]: I0131 03:47:56.321119 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:47:56 crc kubenswrapper[4827]: I0131 03:47:56.321128 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:47:56 crc kubenswrapper[4827]: I0131 03:47:56.321144 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:47:56 crc kubenswrapper[4827]: I0131 03:47:56.321155 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:47:56Z","lastTransitionTime":"2026-01-31T03:47:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 03:47:56 crc kubenswrapper[4827]: I0131 03:47:56.423943 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:47:56 crc kubenswrapper[4827]: I0131 03:47:56.423988 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:47:56 crc kubenswrapper[4827]: I0131 03:47:56.423997 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:47:56 crc kubenswrapper[4827]: I0131 03:47:56.424013 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:47:56 crc kubenswrapper[4827]: I0131 03:47:56.424024 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:47:56Z","lastTransitionTime":"2026-01-31T03:47:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 03:47:56 crc kubenswrapper[4827]: I0131 03:47:56.526992 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:47:56 crc kubenswrapper[4827]: I0131 03:47:56.527068 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:47:56 crc kubenswrapper[4827]: I0131 03:47:56.527094 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:47:56 crc kubenswrapper[4827]: I0131 03:47:56.527126 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:47:56 crc kubenswrapper[4827]: I0131 03:47:56.527153 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:47:56Z","lastTransitionTime":"2026-01-31T03:47:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 03:47:56 crc kubenswrapper[4827]: I0131 03:47:56.630346 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:47:56 crc kubenswrapper[4827]: I0131 03:47:56.630409 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:47:56 crc kubenswrapper[4827]: I0131 03:47:56.630429 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:47:56 crc kubenswrapper[4827]: I0131 03:47:56.630454 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:47:56 crc kubenswrapper[4827]: I0131 03:47:56.630470 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:47:56Z","lastTransitionTime":"2026-01-31T03:47:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 03:47:56 crc kubenswrapper[4827]: I0131 03:47:56.732682 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:47:56 crc kubenswrapper[4827]: I0131 03:47:56.732717 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:47:56 crc kubenswrapper[4827]: I0131 03:47:56.732726 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:47:56 crc kubenswrapper[4827]: I0131 03:47:56.732739 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:47:56 crc kubenswrapper[4827]: I0131 03:47:56.732748 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:47:56Z","lastTransitionTime":"2026-01-31T03:47:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 03:47:56 crc kubenswrapper[4827]: I0131 03:47:56.835944 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:47:56 crc kubenswrapper[4827]: I0131 03:47:56.835979 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:47:56 crc kubenswrapper[4827]: I0131 03:47:56.835988 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:47:56 crc kubenswrapper[4827]: I0131 03:47:56.836001 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:47:56 crc kubenswrapper[4827]: I0131 03:47:56.836013 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:47:56Z","lastTransitionTime":"2026-01-31T03:47:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 03:47:56 crc kubenswrapper[4827]: I0131 03:47:56.939110 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:47:56 crc kubenswrapper[4827]: I0131 03:47:56.939151 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:47:56 crc kubenswrapper[4827]: I0131 03:47:56.939162 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:47:56 crc kubenswrapper[4827]: I0131 03:47:56.939176 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:47:56 crc kubenswrapper[4827]: I0131 03:47:56.939184 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:47:56Z","lastTransitionTime":"2026-01-31T03:47:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 03:47:57 crc kubenswrapper[4827]: I0131 03:47:57.041506 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:47:57 crc kubenswrapper[4827]: I0131 03:47:57.041555 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:47:57 crc kubenswrapper[4827]: I0131 03:47:57.041564 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:47:57 crc kubenswrapper[4827]: I0131 03:47:57.041577 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:47:57 crc kubenswrapper[4827]: I0131 03:47:57.041589 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:47:57Z","lastTransitionTime":"2026-01-31T03:47:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 03:47:57 crc kubenswrapper[4827]: I0131 03:47:57.098333 4827 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-23 06:03:07.94250092 +0000 UTC Jan 31 03:47:57 crc kubenswrapper[4827]: I0131 03:47:57.109615 4827 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 31 03:47:57 crc kubenswrapper[4827]: E0131 03:47:57.109746 4827 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 31 03:47:57 crc kubenswrapper[4827]: I0131 03:47:57.144524 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:47:57 crc kubenswrapper[4827]: I0131 03:47:57.144562 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:47:57 crc kubenswrapper[4827]: I0131 03:47:57.144572 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:47:57 crc kubenswrapper[4827]: I0131 03:47:57.144586 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:47:57 crc kubenswrapper[4827]: I0131 03:47:57.144596 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:47:57Z","lastTransitionTime":"2026-01-31T03:47:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 03:47:57 crc kubenswrapper[4827]: I0131 03:47:57.246805 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:47:57 crc kubenswrapper[4827]: I0131 03:47:57.246933 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:47:57 crc kubenswrapper[4827]: I0131 03:47:57.246958 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:47:57 crc kubenswrapper[4827]: I0131 03:47:57.247009 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:47:57 crc kubenswrapper[4827]: I0131 03:47:57.247033 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:47:57Z","lastTransitionTime":"2026-01-31T03:47:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 03:47:57 crc kubenswrapper[4827]: I0131 03:47:57.349661 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:47:57 crc kubenswrapper[4827]: I0131 03:47:57.349705 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:47:57 crc kubenswrapper[4827]: I0131 03:47:57.349715 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:47:57 crc kubenswrapper[4827]: I0131 03:47:57.349730 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:47:57 crc kubenswrapper[4827]: I0131 03:47:57.349740 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:47:57Z","lastTransitionTime":"2026-01-31T03:47:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 03:47:57 crc kubenswrapper[4827]: I0131 03:47:57.452456 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:47:57 crc kubenswrapper[4827]: I0131 03:47:57.452500 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:47:57 crc kubenswrapper[4827]: I0131 03:47:57.452510 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:47:57 crc kubenswrapper[4827]: I0131 03:47:57.452524 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:47:57 crc kubenswrapper[4827]: I0131 03:47:57.452533 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:47:57Z","lastTransitionTime":"2026-01-31T03:47:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 03:47:57 crc kubenswrapper[4827]: I0131 03:47:57.555286 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:47:57 crc kubenswrapper[4827]: I0131 03:47:57.555333 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:47:57 crc kubenswrapper[4827]: I0131 03:47:57.555346 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:47:57 crc kubenswrapper[4827]: I0131 03:47:57.555370 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:47:57 crc kubenswrapper[4827]: I0131 03:47:57.555382 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:47:57Z","lastTransitionTime":"2026-01-31T03:47:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 03:47:57 crc kubenswrapper[4827]: I0131 03:47:57.567735 4827 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-q9q8q_a696063c-4553-4032-8038-9900f09d4031/kube-multus/0.log" Jan 31 03:47:57 crc kubenswrapper[4827]: I0131 03:47:57.567791 4827 generic.go:334] "Generic (PLEG): container finished" podID="a696063c-4553-4032-8038-9900f09d4031" containerID="3b899def12de10c70999d453a62d09ed0eec7e6f059a55a7e3f8e4b3ef248205" exitCode=1 Jan 31 03:47:57 crc kubenswrapper[4827]: I0131 03:47:57.567825 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-q9q8q" event={"ID":"a696063c-4553-4032-8038-9900f09d4031","Type":"ContainerDied","Data":"3b899def12de10c70999d453a62d09ed0eec7e6f059a55a7e3f8e4b3ef248205"} Jan 31 03:47:57 crc kubenswrapper[4827]: I0131 03:47:57.568270 4827 scope.go:117] "RemoveContainer" containerID="3b899def12de10c70999d453a62d09ed0eec7e6f059a55a7e3f8e4b3ef248205" Jan 31 03:47:57 crc kubenswrapper[4827]: I0131 03:47:57.582614 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a0fbdd57c20e0d5bda95b44f22f117b07306ef48d75d372d958559048474af9b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:47:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-31T03:47:57Z is after 2025-08-24T17:21:41Z" Jan 31 03:47:57 crc kubenswrapper[4827]: I0131 03:47:57.596736 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:11Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:11Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://955a3f09f24522656e31cfabe08613a5129fe3d1d0693756697e3ef0469e062a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:47:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\
\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:47:57Z is after 2025-08-24T17:21:41Z" Jan 31 03:47:57 crc kubenswrapper[4827]: I0131 03:47:57.613678 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"64294dab-2843-4893-8b19-59f3ae404e02\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:46:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:46:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:46:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ff49646239dea28804978f1b3b721dcc9454572ff2301d0c92e146dacb2a3e5a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"rest
artCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:46:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fb9a8fcde75a532b1906ceea4704ddfecbffeb17456377272e5792f200ee67d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:46:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://73a7c7196fc77ac970ea3114b593db0c9bef943c760526721ab5492975884237\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:46:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-di
r\\\"}]},{\\\"containerID\\\":\\\"cri-o://ce197f7aa8a7bebdecdbd760e9029786331bd9497f724b0a57f57278a5b1b04c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:46:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T03:46:48Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:47:57Z is after 2025-08-24T17:21:41Z" Jan 31 03:47:57 crc kubenswrapper[4827]: I0131 03:47:57.631758 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:08Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:47:57Z is after 2025-08-24T17:21:41Z" Jan 31 03:47:57 crc kubenswrapper[4827]: I0131 03:47:57.645607 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-cl9c5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"77746487-d08f-4da6-82a3-bc7d8845841a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5098b5f6a44c89953722806e420dc66eaad66f11faf1c5b526bfc8ebf87364d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:47:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5xwfq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T03:47:11Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-cl9c5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:47:57Z is after 2025-08-24T17:21:41Z" Jan 31 03:47:57 crc kubenswrapper[4827]: I0131 03:47:57.658163 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:47:57 crc kubenswrapper[4827]: I0131 03:47:57.658206 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:47:57 crc kubenswrapper[4827]: I0131 03:47:57.658223 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:47:57 crc kubenswrapper[4827]: I0131 03:47:57.658244 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:47:57 crc kubenswrapper[4827]: I0131 03:47:57.658258 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:47:57Z","lastTransitionTime":"2026-01-31T03:47:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 03:47:57 crc kubenswrapper[4827]: I0131 03:47:57.665421 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://65119775a6ab2e8f81a0443d8093e0570c2c49a1b081b7b694780d07eba7ef4d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:47:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0fdf29d063e0940532d696d183771e56ce37b7b75f82548bf7ff5165ab20ccff\\\",\\\
"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:47:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:47:57Z is after 2025-08-24T17:21:41Z" Jan 31 03:47:57 crc kubenswrapper[4827]: I0131 03:47:57.692308 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hj2zw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"da9e7773-a24b-4e8d-b479-97e2594db0d4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:09Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:09Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bb9ced0bc1beedf1c5779410e90fa61227f5df3de7fa950ab69e1ef44b4a4918\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:47:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mt4z5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2485bd5d17713b9b9184c6b56b1dfda19a229476eabd904d3eda350c30892753\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:47:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mt4z5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://70200918240e4bc9d8f7fcb1d95be3721faa9394058a1a9671da44d9b7a915e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:47:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mt4z5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://32e69930c4476bd3a2f767f012fb4a2e00de7d46da936c6f0d31029994108e48\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:47:12Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mt4z5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ae8d3d61d2579c0afddd6c9728935c463fec8760a9fc830473c905de5f40c44\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:47:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mt4z5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ed1f4882b03bd25a6073ed3ddeb73d5a0da61c8a8af3ebfab5c90a83ce6af80f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:47:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mt4z5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d34c4275238ceb97d664f71e8315fbfdff91be63b7babf108440f8f9e387173a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d34c4275238ceb97d664f71e8315fbfdff91be63b7babf108440f8f9e387173a\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-31T03:47:37Z\\\",\\\"message\\\":\\\"o:140\\\\nI0131 03:47:37.117967 6495 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0131 03:47:37.117998 6495 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0131 03:47:37.118006 6495 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0131 03:47:37.118020 6495 
reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0131 03:47:37.118032 6495 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0131 03:47:37.118044 6495 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0131 03:47:37.118054 6495 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0131 03:47:37.118258 6495 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0131 03:47:37.118439 6495 reflector.go:311] Stopping reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0131 03:47:37.118523 6495 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0131 03:47:37.118541 6495 reflector.go:311] Stopping reflector *v1.EgressFirewall (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140\\\\nI0131 03:47:37.118959 6495 factory.go:656] Stopping \\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T03:47:36Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-hj2zw_openshift-ovn-kubernetes(da9e7773-a24b-4e8d-b479-97e2594db0d4)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mt4z5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2a0cc10519a103d1d49acd3e239636f7ac9b3a9daa533ce1f485b26ccfffad3d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:47:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mt4z5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://96ea399662528dfc8c4bac4c2b710685cb5897e67739704548ca01353d72672c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://96ea399662528dfc8c
4bac4c2b710685cb5897e67739704548ca01353d72672c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T03:47:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T03:47:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mt4z5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T03:47:09Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-hj2zw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:47:57Z is after 2025-08-24T17:21:41Z" Jan 31 03:47:57 crc kubenswrapper[4827]: I0131 03:47:57.705217 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"350d4093-0635-4787-bdf3-9b099c5a772b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:46:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:46:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:46:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://19a6d75598a153bb0745bb06aa53961f13fa3ff1fd4addb84b3ecdc66d07ee4e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:46:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3d0070a036855a06f83caec5b6ee636f1a7a1e7f3246b779e0abbcf6aebf259a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:46:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7b5f30db796321372897f564fda3a031f0714999315931e2a77b21fde674f4fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:46:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6f17307ab85479e60722f1f8484fb415c3609584ed9a20ebf17677a86558dc07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://6f17307ab85479e60722f1f8484fb415c3609584ed9a20ebf17677a86558dc07\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T03:46:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T03:46:49Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T03:46:48Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:47:57Z is after 2025-08-24T17:21:41Z" Jan 31 03:47:57 crc kubenswrapper[4827]: I0131 03:47:57.718978 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-jxh94" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e63dbb73-e1a2-4796-83c5-2a88e55566b5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://65bf7844256848a99420ff2a52724a942bd9ee07e0e587b818429ac544865232\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:47:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-94gkn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://00b8856a5712a7470f8bc58fd07644c58113fde5
a273faa89586f36381942c50\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:47:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-94gkn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T03:47:09Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-jxh94\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:47:57Z is after 2025-08-24T17:21:41Z" Jan 31 03:47:57 crc kubenswrapper[4827]: I0131 03:47:57.733102 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-q9q8q" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a696063c-4553-4032-8038-9900f09d4031\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:57Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:57Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3b899def12de10c70999d453a62d09ed0eec7e6f059a55a7e3f8e4b3ef248205\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3b899def12de10c70999d453a62d09ed0eec7e6f059a55a7e3f8e4b3ef248205\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-31T03:47:56Z\\\",\\\"message\\\":\\\"2026-01-31T03:47:11+00:00 [cnibincopy] Successfully copied files in 
/usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_ed6be960-9431-4aca-83c0-58d0cb3d0931\\\\n2026-01-31T03:47:11+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_ed6be960-9431-4aca-83c0-58d0cb3d0931 to /host/opt/cni/bin/\\\\n2026-01-31T03:47:11Z [verbose] multus-daemon started\\\\n2026-01-31T03:47:11Z [verbose] Readiness Indicator file check\\\\n2026-01-31T03:47:56Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T03:47:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/c
ni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ckwx4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T03:47:09Z\\\"}}\" for pod \"openshift-multus\"/\"multus-q9q8q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:47:57Z is after 2025-08-24T17:21:41Z" Jan 31 03:47:57 crc kubenswrapper[4827]: I0131 03:47:57.751938 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-gjc5t" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b5dbff7a-4ed0-4c17-bd01-1888199225b3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cf2e9dfa02cc1d2d5e655654c4d731363a32dec4d67423160996c24260e04a79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:47:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fs4bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c615ec47f98d56124c11d46f6a3875b5718974234f752615411e14b5d8913dcc\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c615ec47f98d56124c11d46f6a3875b5718974234f752615411e14b5d8913dcc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T03:47:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T03:47:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fs4bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c9d2a6506fd6b67003ff94a906cb0c74a7eefa9f389d5fe3929fcb743b877f8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c9d2a6506fd6b67003ff94a906cb0c74a7eefa9f389d5fe3929fcb743b877f8a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T03:47:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T03:47:11Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fs4bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f5b9fd1e8570701b512508eb10490fa6c4e5fdc63b7fdb0c4810896a46be65a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f5b9fd1e8570701b512508eb10490fa6c4e5fdc63b7fdb0c4810896a46be65a7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T03:47:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T03:47:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fs4bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b944c
8ab4e6079ae5a7a5c52f9b336fc71aa3273ed087499ffa1cc5396053660\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b944c8ab4e6079ae5a7a5c52f9b336fc71aa3273ed087499ffa1cc5396053660\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T03:47:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T03:47:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fs4bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aac07d8c3c396081f637968cd1c64525967cd198273719be0af7bee01d35fb39\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aac07d8c3c396081f637968cd1c64525967cd198273719be0af7bee01d35fb39\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T03:47:16Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-01-31T03:47:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fs4bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://76e43eb316b4b459d9232b72620b1892bf11a896f007d5c879381d8349aa1178\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://76e43eb316b4b459d9232b72620b1892bf11a896f007d5c879381d8349aa1178\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T03:47:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T03:47:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fs4bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T03:47:09Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-gjc5t\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:47:57Z is after 2025-08-24T17:21:41Z" Jan 31 03:47:57 crc kubenswrapper[4827]: I0131 03:47:57.760800 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:47:57 crc kubenswrapper[4827]: I0131 03:47:57.760843 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:47:57 crc kubenswrapper[4827]: I0131 03:47:57.760853 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:47:57 crc kubenswrapper[4827]: I0131 03:47:57.761280 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:47:57 crc kubenswrapper[4827]: I0131 03:47:57.761311 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:47:57Z","lastTransitionTime":"2026-01-31T03:47:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 03:47:57 crc kubenswrapper[4827]: I0131 03:47:57.773397 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4273172d-ec24-4540-85cb-efc58aed3421\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:46:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:46:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:46:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4146d39a1a819389ddaee9e11df66783f6b1b98ee5fb2a244d8ba9ff2ecc88f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:46:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-d
ir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6b86ba80bd25344b29df1636aa3d8cc4cecb0e5f9c0005b4896d3ee86cd80626\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:46:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8ae08b52cf4a0a6cc4589bbf0f0fbc8527ebac3455b90185928f6fa0b6c52874\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:46:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://125b31c980495c37837eedbbbd38b795fd6e568a429da5c2914799306e7b3a46\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945
c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cfc1fe8805b7b20bc4e718ed74d4c2f265e69ed3fdc328c0f1da81450bb78bde\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-31T03:47:02Z\\\",\\\"message\\\":\\\"W0131 03:46:51.337568 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0131 03:46:51.338014 1 crypto.go:601] Generating new CA for check-endpoints-signer@1769831211 cert, and key in /tmp/serving-cert-4099090867/serving-signer.crt, /tmp/serving-cert-4099090867/serving-signer.key\\\\nI0131 03:46:51.617433 1 observer_polling.go:159] Starting file observer\\\\nW0131 03:46:51.621205 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0131 03:46:51.621653 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0131 03:46:51.625463 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-4099090867/tls.crt::/tmp/serving-cert-4099090867/tls.key\\\\\\\"\\\\nF0131 03:47:02.085590 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake 
timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T03:46:51Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:47:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e3b2757893ba910c36404f09734ea47eee3ac5a8cfcc4c06ee4ed2ada48937b1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:46:51Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f40eed75b4eac164f95a4e1a719fe3e1da60abf85d533da313ceeb6f4fb07efd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f40eed75b4eac164f95a4e1a719fe3e1da60abf85d533da313ceeb6f4fb07efd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T03:46:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"s
tartedAt\\\":\\\"2026-01-31T03:46:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T03:46:48Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:47:57Z is after 2025-08-24T17:21:41Z" Jan 31 03:47:57 crc kubenswrapper[4827]: I0131 03:47:57.786113 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-w7v8l" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c10db775-d306-4f15-97dd-b1dfed7c89e5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cbc16c20edbd99491eea6e34116e6014fd587c40800c87bff9643e9c0cb6e185\\\",\\\"image\\\":\\\"quay.io/openshift-rele
ase-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:47:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4ch9g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T03:47:09Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-w7v8l\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:47:57Z is after 2025-08-24T17:21:41Z" Jan 31 03:47:57 crc kubenswrapper[4827]: I0131 03:47:57.799976 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-l5njv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f81e67c9-6345-48e1-91e3-794421cb3fdd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e761c28f2d53c1a50d6ce7467012664d9d9d717222b6fc648c642ad97057113f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:47:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rblt7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b967e5b38d5efa02b8cda05cf383b6aa9f24b
819d055b86d12afe5290d6d5847\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:47:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rblt7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T03:47:22Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-l5njv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:47:57Z is after 2025-08-24T17:21:41Z" Jan 31 03:47:57 crc kubenswrapper[4827]: I0131 03:47:57.812332 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-2shng" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cf80ec31-1f83-4ed6-84e3-055cf9c88bff\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:24Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:24Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hjlkt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hjlkt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T03:47:24Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-2shng\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:47:57Z is after 2025-08-24T17:21:41Z" Jan 31 03:47:57 crc 
kubenswrapper[4827]: I0131 03:47:57.825442 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:08Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:47:57Z is after 2025-08-24T17:21:41Z" Jan 31 03:47:57 crc kubenswrapper[4827]: I0131 03:47:57.840192 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:08Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:47:57Z is after 2025-08-24T17:21:41Z" Jan 31 03:47:57 crc kubenswrapper[4827]: I0131 03:47:57.864448 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:47:57 crc kubenswrapper[4827]: I0131 03:47:57.864494 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:47:57 crc kubenswrapper[4827]: I0131 03:47:57.864507 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:47:57 crc 
kubenswrapper[4827]: I0131 03:47:57.864523 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:47:57 crc kubenswrapper[4827]: I0131 03:47:57.864534 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:47:57Z","lastTransitionTime":"2026-01-31T03:47:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 03:47:57 crc kubenswrapper[4827]: I0131 03:47:57.966862 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:47:57 crc kubenswrapper[4827]: I0131 03:47:57.966914 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:47:57 crc kubenswrapper[4827]: I0131 03:47:57.966926 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:47:57 crc kubenswrapper[4827]: I0131 03:47:57.966942 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:47:57 crc kubenswrapper[4827]: I0131 03:47:57.966953 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:47:57Z","lastTransitionTime":"2026-01-31T03:47:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 03:47:58 crc kubenswrapper[4827]: I0131 03:47:58.068918 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:47:58 crc kubenswrapper[4827]: I0131 03:47:58.068961 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:47:58 crc kubenswrapper[4827]: I0131 03:47:58.068970 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:47:58 crc kubenswrapper[4827]: I0131 03:47:58.068988 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:47:58 crc kubenswrapper[4827]: I0131 03:47:58.068997 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:47:58Z","lastTransitionTime":"2026-01-31T03:47:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 03:47:58 crc kubenswrapper[4827]: I0131 03:47:58.099354 4827 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-01 12:20:30.052939763 +0000 UTC Jan 31 03:47:58 crc kubenswrapper[4827]: I0131 03:47:58.109919 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 31 03:47:58 crc kubenswrapper[4827]: I0131 03:47:58.110035 4827 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 03:47:58 crc kubenswrapper[4827]: E0131 03:47:58.110122 4827 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 31 03:47:58 crc kubenswrapper[4827]: E0131 03:47:58.110033 4827 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 31 03:47:58 crc kubenswrapper[4827]: I0131 03:47:58.110156 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-2shng" Jan 31 03:47:58 crc kubenswrapper[4827]: E0131 03:47:58.110246 4827 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-2shng" podUID="cf80ec31-1f83-4ed6-84e3-055cf9c88bff" Jan 31 03:47:58 crc kubenswrapper[4827]: I0131 03:47:58.124021 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://65119775a6ab2e8f81a0443d8093e0570c2c49a1b081b7b694780d07eba7ef4d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:47:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0fdf29d063e
0940532d696d183771e56ce37b7b75f82548bf7ff5165ab20ccff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:47:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:47:58Z is after 2025-08-24T17:21:41Z" Jan 31 03:47:58 crc kubenswrapper[4827]: I0131 03:47:58.144728 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hj2zw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"da9e7773-a24b-4e8d-b479-97e2594db0d4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:09Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:09Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bb9ced0bc1beedf1c5779410e90fa61227f5df3de7fa950ab69e1ef44b4a4918\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:47:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mt4z5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2485bd5d17713b9b9184c6b56b1dfda19a229476eabd904d3eda350c30892753\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:47:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mt4z5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://70200918240e4bc9d8f7fcb1d95be3721faa9394058a1a9671da44d9b7a915e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:47:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mt4z5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://32e69930c4476bd3a2f767f012fb4a2e00de7d46da936c6f0d31029994108e48\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:47:12Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mt4z5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ae8d3d61d2579c0afddd6c9728935c463fec8760a9fc830473c905de5f40c44\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:47:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mt4z5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ed1f4882b03bd25a6073ed3ddeb73d5a0da61c8a8af3ebfab5c90a83ce6af80f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:47:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mt4z5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d34c4275238ceb97d664f71e8315fbfdff91be63b7babf108440f8f9e387173a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d34c4275238ceb97d664f71e8315fbfdff91be63b7babf108440f8f9e387173a\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-31T03:47:37Z\\\",\\\"message\\\":\\\"o:140\\\\nI0131 03:47:37.117967 6495 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0131 03:47:37.117998 6495 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0131 03:47:37.118006 6495 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0131 03:47:37.118020 6495 
reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0131 03:47:37.118032 6495 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0131 03:47:37.118044 6495 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0131 03:47:37.118054 6495 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0131 03:47:37.118258 6495 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0131 03:47:37.118439 6495 reflector.go:311] Stopping reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0131 03:47:37.118523 6495 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0131 03:47:37.118541 6495 reflector.go:311] Stopping reflector *v1.EgressFirewall (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140\\\\nI0131 03:47:37.118959 6495 factory.go:656] Stopping \\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T03:47:36Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-hj2zw_openshift-ovn-kubernetes(da9e7773-a24b-4e8d-b479-97e2594db0d4)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mt4z5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2a0cc10519a103d1d49acd3e239636f7ac9b3a9daa533ce1f485b26ccfffad3d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:47:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mt4z5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://96ea399662528dfc8c4bac4c2b710685cb5897e67739704548ca01353d72672c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://96ea399662528dfc8c
4bac4c2b710685cb5897e67739704548ca01353d72672c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T03:47:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T03:47:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mt4z5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T03:47:09Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-hj2zw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:47:58Z is after 2025-08-24T17:21:41Z" Jan 31 03:47:58 crc kubenswrapper[4827]: I0131 03:47:58.153905 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-cl9c5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"77746487-d08f-4da6-82a3-bc7d8845841a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5098b5f6a44c89953722806e420dc66eaad66f11faf1c5b526bfc8ebf87364d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:47:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5xwfq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T03:47:11Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-cl9c5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:47:58Z is after 2025-08-24T17:21:41Z" Jan 31 03:47:58 crc kubenswrapper[4827]: I0131 03:47:58.164988 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-q9q8q" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a696063c-4553-4032-8038-9900f09d4031\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:57Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:57Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3b899def12de10c70999d453a62d09ed0eec7e6f059a55a7e3f8e4b3ef248205\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3b899def12de10c70999d453a62d09ed0eec7e6f059a55a7e3f8e4b3ef248205\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-31T03:47:56Z\\\",\\\"message\\\":\\\"2026-01-31T03:47:11+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_ed6be960-9431-4aca-83c0-58d0cb3d0931\\\\n2026-01-31T03:47:11+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_ed6be960-9431-4aca-83c0-58d0cb3d0931 to /host/opt/cni/bin/\\\\n2026-01-31T03:47:11Z [verbose] multus-daemon started\\\\n2026-01-31T03:47:11Z [verbose] Readiness Indicator file check\\\\n2026-01-31T03:47:56Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T03:47:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ckwx4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\
\":\\\"2026-01-31T03:47:09Z\\\"}}\" for pod \"openshift-multus\"/\"multus-q9q8q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:47:58Z is after 2025-08-24T17:21:41Z" Jan 31 03:47:58 crc kubenswrapper[4827]: I0131 03:47:58.171262 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:47:58 crc kubenswrapper[4827]: I0131 03:47:58.171291 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:47:58 crc kubenswrapper[4827]: I0131 03:47:58.171303 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:47:58 crc kubenswrapper[4827]: I0131 03:47:58.171320 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:47:58 crc kubenswrapper[4827]: I0131 03:47:58.171333 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:47:58Z","lastTransitionTime":"2026-01-31T03:47:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 03:47:58 crc kubenswrapper[4827]: I0131 03:47:58.179432 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-gjc5t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b5dbff7a-4ed0-4c17-bd01-1888199225b3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cf2e9dfa02cc1d2d5e655654c4d731363a32dec4d67423160996c24260e04a79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:47:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fs4bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOn
ly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c615ec47f98d56124c11d46f6a3875b5718974234f752615411e14b5d8913dcc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c615ec47f98d56124c11d46f6a3875b5718974234f752615411e14b5d8913dcc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T03:47:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T03:47:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fs4bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c9d2a6506fd6b67003ff94a906cb0c74a7eefa9f389d5fe3929fcb743b877f8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"
containerID\\\":\\\"cri-o://c9d2a6506fd6b67003ff94a906cb0c74a7eefa9f389d5fe3929fcb743b877f8a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T03:47:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T03:47:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fs4bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f5b9fd1e8570701b512508eb10490fa6c4e5fdc63b7fdb0c4810896a46be65a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f5b9fd1e8570701b512508eb10490fa6c4e5fdc63b7fdb0c4810896a46be65a7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T03:47:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T03:47:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveRead
Only\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fs4bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b944c8ab4e6079ae5a7a5c52f9b336fc71aa3273ed087499ffa1cc5396053660\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b944c8ab4e6079ae5a7a5c52f9b336fc71aa3273ed087499ffa1cc5396053660\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T03:47:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T03:47:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fs4bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aac07d8c3c396081f637968cd1c64525967cd198273719be0af7bee01d35fb39\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\"
:0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aac07d8c3c396081f637968cd1c64525967cd198273719be0af7bee01d35fb39\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T03:47:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T03:47:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fs4bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://76e43eb316b4b459d9232b72620b1892bf11a896f007d5c879381d8349aa1178\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://76e43eb316b4b459d9232b72620b1892bf11a896f007d5c879381d8349aa1178\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T03:47:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T03:47:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fs4bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\"
,\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T03:47:09Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-gjc5t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:47:58Z is after 2025-08-24T17:21:41Z" Jan 31 03:47:58 crc kubenswrapper[4827]: I0131 03:47:58.192306 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4273172d-ec24-4540-85cb-efc58aed3421\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:46:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:46:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:46:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4146d39a1a819389ddaee9e11df66783f6b1b98ee5fb2a244d8ba9ff2ecc88f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\
\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:46:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6b86ba80bd25344b29df1636aa3d8cc4cecb0e5f9c0005b4896d3ee86cd80626\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:46:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8ae08b52cf4a0a6cc4589bbf0f0fbc8527ebac3455b90185928f6fa0b6c52874\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:46:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"re
source-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://125b31c980495c37837eedbbbd38b795fd6e568a429da5c2914799306e7b3a46\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cfc1fe8805b7b20bc4e718ed74d4c2f265e69ed3fdc328c0f1da81450bb78bde\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-31T03:47:02Z\\\",\\\"message\\\":\\\"W0131 03:46:51.337568 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0131 03:46:51.338014 1 crypto.go:601] Generating new CA for check-endpoints-signer@1769831211 cert, and key in /tmp/serving-cert-4099090867/serving-signer.crt, /tmp/serving-cert-4099090867/serving-signer.key\\\\nI0131 03:46:51.617433 1 observer_polling.go:159] Starting file observer\\\\nW0131 03:46:51.621205 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0131 03:46:51.621653 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0131 03:46:51.625463 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-4099090867/tls.crt::/tmp/serving-cert-4099090867/tls.key\\\\\\\"\\\\nF0131 03:47:02.085590 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T03:46:51Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:47:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e3b2757893ba910c36404f09734ea47eee3ac5a8cfcc4c06ee4ed2ada48937b1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:46:51Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f40eed75b4eac164f95a4e1a719fe3e1da60abf85d533da313ceeb6f4fb07efd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f40eed75b4eac164f95a4e1a719fe3e1da
60abf85d533da313ceeb6f4fb07efd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T03:46:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T03:46:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T03:46:48Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:47:58Z is after 2025-08-24T17:21:41Z" Jan 31 03:47:58 crc kubenswrapper[4827]: I0131 03:47:58.205660 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"350d4093-0635-4787-bdf3-9b099c5a772b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:46:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:46:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:46:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://19a6d75598a153bb0745bb06aa53961f13fa3ff1fd4addb84b3ecdc66d07ee4e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:46:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3d0070a036855a06f83caec5b6ee636f1a7a1e7f3246b779e0abbcf6aebf259a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:46:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7b5f30db796321372897f564fda3a031f0714999315931e2a77b21fde674f4fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:46:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6f17307ab85479e60722f1f8484fb415c3609584ed9a20ebf17677a86558dc07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://6f17307ab85479e60722f1f8484fb415c3609584ed9a20ebf17677a86558dc07\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T03:46:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T03:46:49Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T03:46:48Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:47:58Z is after 2025-08-24T17:21:41Z" Jan 31 03:47:58 crc kubenswrapper[4827]: I0131 03:47:58.220081 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-jxh94" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e63dbb73-e1a2-4796-83c5-2a88e55566b5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://65bf7844256848a99420ff2a52724a942bd9ee07e0e587b818429ac544865232\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:47:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-94gkn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://00b8856a5712a7470f8bc58fd07644c58113fde5
a273faa89586f36381942c50\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:47:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-94gkn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T03:47:09Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-jxh94\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:47:58Z is after 2025-08-24T17:21:41Z" Jan 31 03:47:58 crc kubenswrapper[4827]: I0131 03:47:58.236110 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:08Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:47:58Z is after 2025-08-24T17:21:41Z" Jan 31 03:47:58 crc kubenswrapper[4827]: I0131 03:47:58.253697 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:08Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:47:58Z is after 2025-08-24T17:21:41Z" Jan 31 03:47:58 crc kubenswrapper[4827]: I0131 03:47:58.265837 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-w7v8l" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c10db775-d306-4f15-97dd-b1dfed7c89e5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cbc16c20edbd99491eea6e34116e6014fd587c40800c87bff9643e9c0cb6e185\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:47:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4ch9g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T03:47:09Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-w7v8l\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:47:58Z is after 2025-08-24T17:21:41Z" Jan 31 03:47:58 crc kubenswrapper[4827]: I0131 03:47:58.273656 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:47:58 crc kubenswrapper[4827]: I0131 03:47:58.273681 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:47:58 crc kubenswrapper[4827]: I0131 03:47:58.273690 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:47:58 crc kubenswrapper[4827]: I0131 03:47:58.273703 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:47:58 crc kubenswrapper[4827]: I0131 03:47:58.273713 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:47:58Z","lastTransitionTime":"2026-01-31T03:47:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 03:47:58 crc kubenswrapper[4827]: I0131 03:47:58.279309 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-l5njv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f81e67c9-6345-48e1-91e3-794421cb3fdd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e761c28f2d53c1a50d6ce7467012664d9d9d717222b6fc648c642ad97057113f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:47:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled
\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rblt7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b967e5b38d5efa02b8cda05cf383b6aa9f24b819d055b86d12afe5290d6d5847\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:47:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rblt7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T03:47:22Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-l5njv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:47:58Z is after 2025-08-24T17:21:41Z" Jan 31 03:47:58 crc kubenswrapper[4827]: I0131 03:47:58.291623 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-2shng" err="failed to 
patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cf80ec31-1f83-4ed6-84e3-055cf9c88bff\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:24Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:24Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hjlkt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hjlkt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T03:47:24Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-2shng\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:47:58Z is after 2025-08-24T17:21:41Z" Jan 31 03:47:58 crc 
kubenswrapper[4827]: I0131 03:47:58.306852 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"64294dab-2843-4893-8b19-59f3ae404e02\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:46:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:46:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:46:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ff49646239dea28804978f1b3b721dcc9454572ff2301d0c92e146dacb2a3e5a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:46:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fb9a8fcde75a532b1906ceea4704ddfecbffeb17456377272e5792f200ee67d5\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:46:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://73a7c7196fc77ac970ea3114b593db0c9bef943c760526721ab5492975884237\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:46:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ce197f7aa8a7bebdecdbd760e9029786331bd9497f724b0a57f57278a5b1b04c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17
ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:46:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T03:46:48Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:47:58Z is after 2025-08-24T17:21:41Z" Jan 31 03:47:58 crc kubenswrapper[4827]: I0131 03:47:58.325231 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:08Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:47:58Z is after 2025-08-24T17:21:41Z" Jan 31 03:47:58 crc kubenswrapper[4827]: I0131 03:47:58.342903 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a0fbdd57c20e0d5bda95b44f22f117b07306ef48d75d372d958559048474af9b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:47:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-31T03:47:58Z is after 2025-08-24T17:21:41Z" Jan 31 03:47:58 crc kubenswrapper[4827]: I0131 03:47:58.355282 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:11Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:11Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://955a3f09f24522656e31cfabe08613a5129fe3d1d0693756697e3ef0469e062a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:47:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\
\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:47:58Z is after 2025-08-24T17:21:41Z" Jan 31 03:47:58 crc kubenswrapper[4827]: I0131 03:47:58.375869 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:47:58 crc kubenswrapper[4827]: I0131 03:47:58.375917 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:47:58 crc kubenswrapper[4827]: I0131 03:47:58.375929 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:47:58 crc kubenswrapper[4827]: I0131 03:47:58.375944 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:47:58 crc kubenswrapper[4827]: I0131 03:47:58.375955 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:47:58Z","lastTransitionTime":"2026-01-31T03:47:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 03:47:58 crc kubenswrapper[4827]: I0131 03:47:58.478017 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:47:58 crc kubenswrapper[4827]: I0131 03:47:58.478059 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:47:58 crc kubenswrapper[4827]: I0131 03:47:58.478067 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:47:58 crc kubenswrapper[4827]: I0131 03:47:58.478081 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:47:58 crc kubenswrapper[4827]: I0131 03:47:58.478091 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:47:58Z","lastTransitionTime":"2026-01-31T03:47:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 03:47:58 crc kubenswrapper[4827]: I0131 03:47:58.571795 4827 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-q9q8q_a696063c-4553-4032-8038-9900f09d4031/kube-multus/0.log" Jan 31 03:47:58 crc kubenswrapper[4827]: I0131 03:47:58.571848 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-q9q8q" event={"ID":"a696063c-4553-4032-8038-9900f09d4031","Type":"ContainerStarted","Data":"b6b5307b128e69f814b56eb826859eb4b02d6645d1665c1f5d205492590135ef"} Jan 31 03:47:58 crc kubenswrapper[4827]: I0131 03:47:58.579801 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:47:58 crc kubenswrapper[4827]: I0131 03:47:58.579840 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:47:58 crc kubenswrapper[4827]: I0131 03:47:58.579852 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:47:58 crc kubenswrapper[4827]: I0131 03:47:58.579869 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:47:58 crc kubenswrapper[4827]: I0131 03:47:58.579901 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:47:58Z","lastTransitionTime":"2026-01-31T03:47:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 03:47:58 crc kubenswrapper[4827]: I0131 03:47:58.589853 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://65119775a6ab2e8f81a0443d8093e0570c2c49a1b081b7b694780d07eba7ef4d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:47:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0fdf29d063e0940532d696d183771e56ce37b7b75f82548bf7ff5165ab20ccff\\\",\\\
"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:47:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:47:58Z is after 2025-08-24T17:21:41Z" Jan 31 03:47:58 crc kubenswrapper[4827]: I0131 03:47:58.608407 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hj2zw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"da9e7773-a24b-4e8d-b479-97e2594db0d4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:09Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:09Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bb9ced0bc1beedf1c5779410e90fa61227f5df3de7fa950ab69e1ef44b4a4918\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:47:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mt4z5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2485bd5d17713b9b9184c6b56b1dfda19a229476eabd904d3eda350c30892753\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:47:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mt4z5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://70200918240e4bc9d8f7fcb1d95be3721faa9394058a1a9671da44d9b7a915e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:47:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mt4z5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://32e69930c4476bd3a2f767f012fb4a2e00de7d46da936c6f0d31029994108e48\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:47:12Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mt4z5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ae8d3d61d2579c0afddd6c9728935c463fec8760a9fc830473c905de5f40c44\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:47:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mt4z5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ed1f4882b03bd25a6073ed3ddeb73d5a0da61c8a8af3ebfab5c90a83ce6af80f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:47:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mt4z5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d34c4275238ceb97d664f71e8315fbfdff91be63b7babf108440f8f9e387173a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d34c4275238ceb97d664f71e8315fbfdff91be63b7babf108440f8f9e387173a\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-31T03:47:37Z\\\",\\\"message\\\":\\\"o:140\\\\nI0131 03:47:37.117967 6495 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0131 03:47:37.117998 6495 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0131 03:47:37.118006 6495 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0131 03:47:37.118020 6495 
reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0131 03:47:37.118032 6495 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0131 03:47:37.118044 6495 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0131 03:47:37.118054 6495 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0131 03:47:37.118258 6495 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0131 03:47:37.118439 6495 reflector.go:311] Stopping reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0131 03:47:37.118523 6495 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0131 03:47:37.118541 6495 reflector.go:311] Stopping reflector *v1.EgressFirewall (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140\\\\nI0131 03:47:37.118959 6495 factory.go:656] Stopping \\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T03:47:36Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-hj2zw_openshift-ovn-kubernetes(da9e7773-a24b-4e8d-b479-97e2594db0d4)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mt4z5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2a0cc10519a103d1d49acd3e239636f7ac9b3a9daa533ce1f485b26ccfffad3d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:47:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mt4z5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://96ea399662528dfc8c4bac4c2b710685cb5897e67739704548ca01353d72672c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://96ea399662528dfc8c
4bac4c2b710685cb5897e67739704548ca01353d72672c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T03:47:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T03:47:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mt4z5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T03:47:09Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-hj2zw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:47:58Z is after 2025-08-24T17:21:41Z" Jan 31 03:47:58 crc kubenswrapper[4827]: I0131 03:47:58.618295 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-cl9c5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"77746487-d08f-4da6-82a3-bc7d8845841a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5098b5f6a44c89953722806e420dc66eaad66f11faf1c5b526bfc8ebf87364d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:47:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5xwfq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T03:47:11Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-cl9c5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:47:58Z is after 2025-08-24T17:21:41Z" Jan 31 03:47:58 crc kubenswrapper[4827]: I0131 03:47:58.627581 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-jxh94" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e63dbb73-e1a2-4796-83c5-2a88e55566b5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://65bf7844256848a99420ff2a52724a942bd9ee07e0e587b818429ac544865232\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshif
t-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:47:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-94gkn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://00b8856a5712a7470f8bc58fd07644c58113fde5a273faa89586f36381942c50\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:47:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-94gkn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T03:47:09Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-jxh94\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call 
webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:47:58Z is after 2025-08-24T17:21:41Z" Jan 31 03:47:58 crc kubenswrapper[4827]: I0131 03:47:58.640811 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-q9q8q" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a696063c-4553-4032-8038-9900f09d4031\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b6b5307b128e69f814b56eb826859eb4b02d6645d1665c1f5d205492590135ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3b899def12de10c70999d453a62d09ed0eec7e6f059a55a7e3f8e4b3ef248205\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-31T03:47:56Z\\\",\\\"message\\\":\\\"2026-01-31T03:47:11+00:0
0 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_ed6be960-9431-4aca-83c0-58d0cb3d0931\\\\n2026-01-31T03:47:11+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_ed6be960-9431-4aca-83c0-58d0cb3d0931 to /host/opt/cni/bin/\\\\n2026-01-31T03:47:11Z [verbose] multus-daemon started\\\\n2026-01-31T03:47:11Z [verbose] Readiness Indicator file check\\\\n2026-01-31T03:47:56Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T03:47:10Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:47:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"n
ame\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ckwx4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T03:47:09Z\\\"}}\" for pod \"openshift-multus\"/\"multus-q9q8q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:47:58Z is after 2025-08-24T17:21:41Z" Jan 31 03:47:58 crc kubenswrapper[4827]: I0131 03:47:58.655600 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-gjc5t" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b5dbff7a-4ed0-4c17-bd01-1888199225b3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cf2e9dfa02cc1d2d5e655654c4d731363a32dec4d67423160996c24260e04a79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:47:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fs4bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c615ec47f98d56124c11d46f6a3875b5718974234f752615411e14b5d8913dcc\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c615ec47f98d56124c11d46f6a3875b5718974234f752615411e14b5d8913dcc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T03:47:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T03:47:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fs4bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c9d2a6506fd6b67003ff94a906cb0c74a7eefa9f389d5fe3929fcb743b877f8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c9d2a6506fd6b67003ff94a906cb0c74a7eefa9f389d5fe3929fcb743b877f8a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T03:47:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T03:47:11Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fs4bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f5b9fd1e8570701b512508eb10490fa6c4e5fdc63b7fdb0c4810896a46be65a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f5b9fd1e8570701b512508eb10490fa6c4e5fdc63b7fdb0c4810896a46be65a7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T03:47:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T03:47:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fs4bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b944c
8ab4e6079ae5a7a5c52f9b336fc71aa3273ed087499ffa1cc5396053660\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b944c8ab4e6079ae5a7a5c52f9b336fc71aa3273ed087499ffa1cc5396053660\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T03:47:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T03:47:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fs4bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aac07d8c3c396081f637968cd1c64525967cd198273719be0af7bee01d35fb39\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aac07d8c3c396081f637968cd1c64525967cd198273719be0af7bee01d35fb39\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T03:47:16Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-01-31T03:47:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fs4bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://76e43eb316b4b459d9232b72620b1892bf11a896f007d5c879381d8349aa1178\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://76e43eb316b4b459d9232b72620b1892bf11a896f007d5c879381d8349aa1178\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T03:47:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T03:47:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fs4bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T03:47:09Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-gjc5t\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:47:58Z is after 2025-08-24T17:21:41Z" Jan 31 03:47:58 crc kubenswrapper[4827]: I0131 03:47:58.668588 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4273172d-ec24-4540-85cb-efc58aed3421\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:46:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:46:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:46:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4146d39a1a819389ddaee9e11df66783f6b1b98ee5fb2a244d8ba9ff2ecc88f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:46:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\"
:\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6b86ba80bd25344b29df1636aa3d8cc4cecb0e5f9c0005b4896d3ee86cd80626\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:46:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8ae08b52cf4a0a6cc4589bbf0f0fbc8527ebac3455b90185928f6fa0b6c52874\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:46:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://125b31c980495c37837eedbbbd38b795fd6e568a429da5c2914799306e7b3a46\\\",\\\"image\\\":\\\"quay.io/crcont/
openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cfc1fe8805b7b20bc4e718ed74d4c2f265e69ed3fdc328c0f1da81450bb78bde\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-31T03:47:02Z\\\",\\\"message\\\":\\\"W0131 03:46:51.337568 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0131 03:46:51.338014 1 crypto.go:601] Generating new CA for check-endpoints-signer@1769831211 cert, and key in /tmp/serving-cert-4099090867/serving-signer.crt, /tmp/serving-cert-4099090867/serving-signer.key\\\\nI0131 03:46:51.617433 1 observer_polling.go:159] Starting file observer\\\\nW0131 03:46:51.621205 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0131 03:46:51.621653 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0131 03:46:51.625463 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-4099090867/tls.crt::/tmp/serving-cert-4099090867/tls.key\\\\\\\"\\\\nF0131 03:47:02.085590 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake 
timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T03:46:51Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:47:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e3b2757893ba910c36404f09734ea47eee3ac5a8cfcc4c06ee4ed2ada48937b1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:46:51Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f40eed75b4eac164f95a4e1a719fe3e1da60abf85d533da313ceeb6f4fb07efd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f40eed75b4eac164f95a4e1a719fe3e1da60abf85d533da313ceeb6f4fb07efd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T03:46:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"s
tartedAt\\\":\\\"2026-01-31T03:46:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T03:46:48Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:47:58Z is after 2025-08-24T17:21:41Z" Jan 31 03:47:58 crc kubenswrapper[4827]: I0131 03:47:58.680110 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"350d4093-0635-4787-bdf3-9b099c5a772b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:46:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:46:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:46:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://19a6d75598a153bb0745bb06aa53961f13fa3ff1fd4addb84b3ecdc66d07ee4e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d3472024
3b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:46:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3d0070a036855a06f83caec5b6ee636f1a7a1e7f3246b779e0abbcf6aebf259a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:46:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7b5f30db796321372897f564fda3a031f0714999315931e2a77b21fde674f4fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01
-31T03:46:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6f17307ab85479e60722f1f8484fb415c3609584ed9a20ebf17677a86558dc07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6f17307ab85479e60722f1f8484fb415c3609584ed9a20ebf17677a86558dc07\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T03:46:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T03:46:49Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T03:46:48Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:47:58Z is after 2025-08-24T17:21:41Z" Jan 31 03:47:58 crc kubenswrapper[4827]: I0131 03:47:58.681335 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:47:58 crc kubenswrapper[4827]: I0131 03:47:58.681363 4827 kubelet_node_status.go:724] "Recording 
event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:47:58 crc kubenswrapper[4827]: I0131 03:47:58.681376 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:47:58 crc kubenswrapper[4827]: I0131 03:47:58.681391 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:47:58 crc kubenswrapper[4827]: I0131 03:47:58.681404 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:47:58Z","lastTransitionTime":"2026-01-31T03:47:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 03:47:58 crc kubenswrapper[4827]: I0131 03:47:58.691970 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-2shng" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cf80ec31-1f83-4ed6-84e3-055cf9c88bff\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:24Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:24Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hjlkt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hjlkt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T03:47:24Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-2shng\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:47:58Z is after 2025-08-24T17:21:41Z" Jan 31 03:47:58 crc kubenswrapper[4827]: I0131 03:47:58.706348 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:08Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:47:58Z is after 2025-08-24T17:21:41Z" Jan 31 03:47:58 crc kubenswrapper[4827]: I0131 03:47:58.722649 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:08Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:47:58Z is after 2025-08-24T17:21:41Z" Jan 31 03:47:58 crc kubenswrapper[4827]: I0131 03:47:58.733306 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-w7v8l" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c10db775-d306-4f15-97dd-b1dfed7c89e5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cbc16c20edbd99491eea6e34116e6014fd587c40800c87bff9643e9c0cb6e185\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:47:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4ch9g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T03:47:09Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-w7v8l\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:47:58Z is after 2025-08-24T17:21:41Z" Jan 31 03:47:58 crc kubenswrapper[4827]: I0131 03:47:58.745190 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-l5njv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f81e67c9-6345-48e1-91e3-794421cb3fdd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e761c28f2d53c1a50d6ce7467012664d9d9d717222b6fc648c642ad97057113f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d7
93426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:47:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rblt7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b967e5b38d5efa02b8cda05cf383b6aa9f24b819d055b86d12afe5290d6d5847\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:47:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rblt7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T03:47:22Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-l5njv\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:47:58Z is after 2025-08-24T17:21:41Z" Jan 31 03:47:58 crc kubenswrapper[4827]: I0131 03:47:58.756993 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"64294dab-2843-4893-8b19-59f3ae404e02\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:46:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:46:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:46:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ff49646239dea28804978f1b3b721dcc9454572ff2301d0c92e146dacb2a3e5a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:46:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"
mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fb9a8fcde75a532b1906ceea4704ddfecbffeb17456377272e5792f200ee67d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:46:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://73a7c7196fc77ac970ea3114b593db0c9bef943c760526721ab5492975884237\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:46:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ce197f7aa8a7bebdecdbd760e9029786331bd9497f724b0a57f57278a5b1b04c\\\",\\\"image\\\":\\\"quay.io/crcont/
openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:46:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T03:46:48Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:47:58Z is after 2025-08-24T17:21:41Z" Jan 31 03:47:58 crc kubenswrapper[4827]: I0131 03:47:58.767864 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:08Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:47:58Z is after 2025-08-24T17:21:41Z" Jan 31 03:47:58 crc kubenswrapper[4827]: I0131 03:47:58.781828 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a0fbdd57c20e0d5bda95b44f22f117b07306ef48d75d372d958559048474af9b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:47:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-31T03:47:58Z is after 2025-08-24T17:21:41Z" Jan 31 03:47:58 crc kubenswrapper[4827]: I0131 03:47:58.783602 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:47:58 crc kubenswrapper[4827]: I0131 03:47:58.783641 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:47:58 crc kubenswrapper[4827]: I0131 03:47:58.783652 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:47:58 crc kubenswrapper[4827]: I0131 03:47:58.783667 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:47:58 crc kubenswrapper[4827]: I0131 03:47:58.783677 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:47:58Z","lastTransitionTime":"2026-01-31T03:47:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 03:47:58 crc kubenswrapper[4827]: I0131 03:47:58.795025 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:11Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:11Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://955a3f09f24522656e31cfabe08613a5129fe3d1d0693756697e3ef0469e062a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:47:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:47:58Z is after 2025-08-24T17:21:41Z" Jan 31 03:47:58 crc kubenswrapper[4827]: I0131 03:47:58.885409 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:47:58 crc kubenswrapper[4827]: I0131 03:47:58.885461 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:47:58 crc kubenswrapper[4827]: I0131 03:47:58.885478 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:47:58 crc kubenswrapper[4827]: I0131 03:47:58.885497 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:47:58 crc kubenswrapper[4827]: I0131 03:47:58.885512 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:47:58Z","lastTransitionTime":"2026-01-31T03:47:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 03:47:58 crc kubenswrapper[4827]: I0131 03:47:58.987741 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:47:58 crc kubenswrapper[4827]: I0131 03:47:58.987781 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:47:58 crc kubenswrapper[4827]: I0131 03:47:58.987793 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:47:58 crc kubenswrapper[4827]: I0131 03:47:58.987807 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:47:58 crc kubenswrapper[4827]: I0131 03:47:58.987817 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:47:58Z","lastTransitionTime":"2026-01-31T03:47:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 03:47:59 crc kubenswrapper[4827]: I0131 03:47:59.090819 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:47:59 crc kubenswrapper[4827]: I0131 03:47:59.090930 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:47:59 crc kubenswrapper[4827]: I0131 03:47:59.090949 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:47:59 crc kubenswrapper[4827]: I0131 03:47:59.090979 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:47:59 crc kubenswrapper[4827]: I0131 03:47:59.091000 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:47:59Z","lastTransitionTime":"2026-01-31T03:47:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 03:47:59 crc kubenswrapper[4827]: I0131 03:47:59.099827 4827 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-29 06:55:18.033400836 +0000 UTC Jan 31 03:47:59 crc kubenswrapper[4827]: I0131 03:47:59.109280 4827 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 31 03:47:59 crc kubenswrapper[4827]: E0131 03:47:59.109390 4827 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 31 03:47:59 crc kubenswrapper[4827]: I0131 03:47:59.193491 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:47:59 crc kubenswrapper[4827]: I0131 03:47:59.193547 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:47:59 crc kubenswrapper[4827]: I0131 03:47:59.193556 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:47:59 crc kubenswrapper[4827]: I0131 03:47:59.193569 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:47:59 crc kubenswrapper[4827]: I0131 03:47:59.193580 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:47:59Z","lastTransitionTime":"2026-01-31T03:47:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 03:47:59 crc kubenswrapper[4827]: I0131 03:47:59.296535 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:47:59 crc kubenswrapper[4827]: I0131 03:47:59.296569 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:47:59 crc kubenswrapper[4827]: I0131 03:47:59.296579 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:47:59 crc kubenswrapper[4827]: I0131 03:47:59.296595 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:47:59 crc kubenswrapper[4827]: I0131 03:47:59.296605 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:47:59Z","lastTransitionTime":"2026-01-31T03:47:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 03:47:59 crc kubenswrapper[4827]: I0131 03:47:59.399979 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:47:59 crc kubenswrapper[4827]: I0131 03:47:59.400046 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:47:59 crc kubenswrapper[4827]: I0131 03:47:59.400058 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:47:59 crc kubenswrapper[4827]: I0131 03:47:59.400081 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:47:59 crc kubenswrapper[4827]: I0131 03:47:59.400098 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:47:59Z","lastTransitionTime":"2026-01-31T03:47:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 03:47:59 crc kubenswrapper[4827]: I0131 03:47:59.502560 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:47:59 crc kubenswrapper[4827]: I0131 03:47:59.502641 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:47:59 crc kubenswrapper[4827]: I0131 03:47:59.502654 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:47:59 crc kubenswrapper[4827]: I0131 03:47:59.502677 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:47:59 crc kubenswrapper[4827]: I0131 03:47:59.502693 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:47:59Z","lastTransitionTime":"2026-01-31T03:47:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 03:47:59 crc kubenswrapper[4827]: I0131 03:47:59.606678 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:47:59 crc kubenswrapper[4827]: I0131 03:47:59.606766 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:47:59 crc kubenswrapper[4827]: I0131 03:47:59.606787 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:47:59 crc kubenswrapper[4827]: I0131 03:47:59.606849 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:47:59 crc kubenswrapper[4827]: I0131 03:47:59.606869 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:47:59Z","lastTransitionTime":"2026-01-31T03:47:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 03:47:59 crc kubenswrapper[4827]: I0131 03:47:59.709897 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:47:59 crc kubenswrapper[4827]: I0131 03:47:59.709939 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:47:59 crc kubenswrapper[4827]: I0131 03:47:59.709949 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:47:59 crc kubenswrapper[4827]: I0131 03:47:59.709964 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:47:59 crc kubenswrapper[4827]: I0131 03:47:59.709974 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:47:59Z","lastTransitionTime":"2026-01-31T03:47:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 03:47:59 crc kubenswrapper[4827]: I0131 03:47:59.813755 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:47:59 crc kubenswrapper[4827]: I0131 03:47:59.813818 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:47:59 crc kubenswrapper[4827]: I0131 03:47:59.813838 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:47:59 crc kubenswrapper[4827]: I0131 03:47:59.813864 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:47:59 crc kubenswrapper[4827]: I0131 03:47:59.813978 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:47:59Z","lastTransitionTime":"2026-01-31T03:47:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 03:47:59 crc kubenswrapper[4827]: I0131 03:47:59.916338 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:47:59 crc kubenswrapper[4827]: I0131 03:47:59.916399 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:47:59 crc kubenswrapper[4827]: I0131 03:47:59.916413 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:47:59 crc kubenswrapper[4827]: I0131 03:47:59.916435 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:47:59 crc kubenswrapper[4827]: I0131 03:47:59.916450 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:47:59Z","lastTransitionTime":"2026-01-31T03:47:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 03:48:00 crc kubenswrapper[4827]: I0131 03:48:00.018833 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:48:00 crc kubenswrapper[4827]: I0131 03:48:00.018934 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:48:00 crc kubenswrapper[4827]: I0131 03:48:00.018956 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:48:00 crc kubenswrapper[4827]: I0131 03:48:00.018984 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:48:00 crc kubenswrapper[4827]: I0131 03:48:00.019002 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:48:00Z","lastTransitionTime":"2026-01-31T03:48:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 03:48:00 crc kubenswrapper[4827]: I0131 03:48:00.100820 4827 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-22 09:06:35.939533779 +0000 UTC Jan 31 03:48:00 crc kubenswrapper[4827]: I0131 03:48:00.109311 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 03:48:00 crc kubenswrapper[4827]: I0131 03:48:00.109395 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-2shng" Jan 31 03:48:00 crc kubenswrapper[4827]: I0131 03:48:00.109318 4827 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 31 03:48:00 crc kubenswrapper[4827]: E0131 03:48:00.109540 4827 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-2shng" podUID="cf80ec31-1f83-4ed6-84e3-055cf9c88bff" Jan 31 03:48:00 crc kubenswrapper[4827]: E0131 03:48:00.109474 4827 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 31 03:48:00 crc kubenswrapper[4827]: E0131 03:48:00.109778 4827 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 31 03:48:00 crc kubenswrapper[4827]: I0131 03:48:00.121179 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:48:00 crc kubenswrapper[4827]: I0131 03:48:00.121254 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:48:00 crc kubenswrapper[4827]: I0131 03:48:00.121277 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:48:00 crc kubenswrapper[4827]: I0131 03:48:00.121306 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:48:00 crc kubenswrapper[4827]: I0131 03:48:00.121327 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:48:00Z","lastTransitionTime":"2026-01-31T03:48:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 03:48:00 crc kubenswrapper[4827]: I0131 03:48:00.224561 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:48:00 crc kubenswrapper[4827]: I0131 03:48:00.224638 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:48:00 crc kubenswrapper[4827]: I0131 03:48:00.224664 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:48:00 crc kubenswrapper[4827]: I0131 03:48:00.224699 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:48:00 crc kubenswrapper[4827]: I0131 03:48:00.224808 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:48:00Z","lastTransitionTime":"2026-01-31T03:48:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 03:48:00 crc kubenswrapper[4827]: I0131 03:48:00.328332 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:48:00 crc kubenswrapper[4827]: I0131 03:48:00.328392 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:48:00 crc kubenswrapper[4827]: I0131 03:48:00.328409 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:48:00 crc kubenswrapper[4827]: I0131 03:48:00.328434 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:48:00 crc kubenswrapper[4827]: I0131 03:48:00.328454 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:48:00Z","lastTransitionTime":"2026-01-31T03:48:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 03:48:00 crc kubenswrapper[4827]: I0131 03:48:00.431139 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:48:00 crc kubenswrapper[4827]: I0131 03:48:00.431214 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:48:00 crc kubenswrapper[4827]: I0131 03:48:00.431233 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:48:00 crc kubenswrapper[4827]: I0131 03:48:00.431266 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:48:00 crc kubenswrapper[4827]: I0131 03:48:00.431287 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:48:00Z","lastTransitionTime":"2026-01-31T03:48:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 03:48:00 crc kubenswrapper[4827]: I0131 03:48:00.533364 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:48:00 crc kubenswrapper[4827]: I0131 03:48:00.533414 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:48:00 crc kubenswrapper[4827]: I0131 03:48:00.533425 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:48:00 crc kubenswrapper[4827]: I0131 03:48:00.533444 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:48:00 crc kubenswrapper[4827]: I0131 03:48:00.533456 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:48:00Z","lastTransitionTime":"2026-01-31T03:48:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 03:48:00 crc kubenswrapper[4827]: I0131 03:48:00.635795 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:48:00 crc kubenswrapper[4827]: I0131 03:48:00.635920 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:48:00 crc kubenswrapper[4827]: I0131 03:48:00.635947 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:48:00 crc kubenswrapper[4827]: I0131 03:48:00.635984 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:48:00 crc kubenswrapper[4827]: I0131 03:48:00.636015 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:48:00Z","lastTransitionTime":"2026-01-31T03:48:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 03:48:00 crc kubenswrapper[4827]: I0131 03:48:00.738819 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:48:00 crc kubenswrapper[4827]: I0131 03:48:00.738909 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:48:00 crc kubenswrapper[4827]: I0131 03:48:00.738929 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:48:00 crc kubenswrapper[4827]: I0131 03:48:00.738952 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:48:00 crc kubenswrapper[4827]: I0131 03:48:00.738969 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:48:00Z","lastTransitionTime":"2026-01-31T03:48:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 03:48:00 crc kubenswrapper[4827]: I0131 03:48:00.842120 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:48:00 crc kubenswrapper[4827]: I0131 03:48:00.842175 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:48:00 crc kubenswrapper[4827]: I0131 03:48:00.842194 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:48:00 crc kubenswrapper[4827]: I0131 03:48:00.842217 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:48:00 crc kubenswrapper[4827]: I0131 03:48:00.842236 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:48:00Z","lastTransitionTime":"2026-01-31T03:48:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 03:48:00 crc kubenswrapper[4827]: I0131 03:48:00.945597 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:48:00 crc kubenswrapper[4827]: I0131 03:48:00.945664 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:48:00 crc kubenswrapper[4827]: I0131 03:48:00.945683 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:48:00 crc kubenswrapper[4827]: I0131 03:48:00.945709 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:48:00 crc kubenswrapper[4827]: I0131 03:48:00.945727 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:48:00Z","lastTransitionTime":"2026-01-31T03:48:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 03:48:01 crc kubenswrapper[4827]: I0131 03:48:01.048110 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:48:01 crc kubenswrapper[4827]: I0131 03:48:01.048142 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:48:01 crc kubenswrapper[4827]: I0131 03:48:01.048149 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:48:01 crc kubenswrapper[4827]: I0131 03:48:01.048161 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:48:01 crc kubenswrapper[4827]: I0131 03:48:01.048171 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:48:01Z","lastTransitionTime":"2026-01-31T03:48:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 03:48:01 crc kubenswrapper[4827]: I0131 03:48:01.101918 4827 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-28 03:25:37.450583017 +0000 UTC Jan 31 03:48:01 crc kubenswrapper[4827]: I0131 03:48:01.109526 4827 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 31 03:48:01 crc kubenswrapper[4827]: E0131 03:48:01.109713 4827 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 31 03:48:01 crc kubenswrapper[4827]: I0131 03:48:01.150620 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:48:01 crc kubenswrapper[4827]: I0131 03:48:01.150662 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:48:01 crc kubenswrapper[4827]: I0131 03:48:01.150672 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:48:01 crc kubenswrapper[4827]: I0131 03:48:01.150688 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:48:01 crc kubenswrapper[4827]: I0131 03:48:01.150699 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:48:01Z","lastTransitionTime":"2026-01-31T03:48:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 03:48:01 crc kubenswrapper[4827]: I0131 03:48:01.253972 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:48:01 crc kubenswrapper[4827]: I0131 03:48:01.254040 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:48:01 crc kubenswrapper[4827]: I0131 03:48:01.254055 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:48:01 crc kubenswrapper[4827]: I0131 03:48:01.254079 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:48:01 crc kubenswrapper[4827]: I0131 03:48:01.254095 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:48:01Z","lastTransitionTime":"2026-01-31T03:48:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 03:48:01 crc kubenswrapper[4827]: I0131 03:48:01.356811 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:48:01 crc kubenswrapper[4827]: I0131 03:48:01.356913 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:48:01 crc kubenswrapper[4827]: I0131 03:48:01.356932 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:48:01 crc kubenswrapper[4827]: I0131 03:48:01.356957 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:48:01 crc kubenswrapper[4827]: I0131 03:48:01.356979 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:48:01Z","lastTransitionTime":"2026-01-31T03:48:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 03:48:01 crc kubenswrapper[4827]: I0131 03:48:01.459793 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:48:01 crc kubenswrapper[4827]: I0131 03:48:01.459836 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:48:01 crc kubenswrapper[4827]: I0131 03:48:01.459847 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:48:01 crc kubenswrapper[4827]: I0131 03:48:01.459865 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:48:01 crc kubenswrapper[4827]: I0131 03:48:01.459894 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:48:01Z","lastTransitionTime":"2026-01-31T03:48:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 03:48:01 crc kubenswrapper[4827]: I0131 03:48:01.562313 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:48:01 crc kubenswrapper[4827]: I0131 03:48:01.562362 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:48:01 crc kubenswrapper[4827]: I0131 03:48:01.562370 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:48:01 crc kubenswrapper[4827]: I0131 03:48:01.562384 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:48:01 crc kubenswrapper[4827]: I0131 03:48:01.562393 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:48:01Z","lastTransitionTime":"2026-01-31T03:48:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 03:48:01 crc kubenswrapper[4827]: I0131 03:48:01.664841 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:48:01 crc kubenswrapper[4827]: I0131 03:48:01.664922 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:48:01 crc kubenswrapper[4827]: I0131 03:48:01.664941 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:48:01 crc kubenswrapper[4827]: I0131 03:48:01.664965 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:48:01 crc kubenswrapper[4827]: I0131 03:48:01.664981 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:48:01Z","lastTransitionTime":"2026-01-31T03:48:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 03:48:01 crc kubenswrapper[4827]: I0131 03:48:01.767517 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:48:01 crc kubenswrapper[4827]: I0131 03:48:01.767560 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:48:01 crc kubenswrapper[4827]: I0131 03:48:01.767569 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:48:01 crc kubenswrapper[4827]: I0131 03:48:01.767584 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:48:01 crc kubenswrapper[4827]: I0131 03:48:01.767593 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:48:01Z","lastTransitionTime":"2026-01-31T03:48:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 03:48:01 crc kubenswrapper[4827]: I0131 03:48:01.870179 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:48:01 crc kubenswrapper[4827]: I0131 03:48:01.870235 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:48:01 crc kubenswrapper[4827]: I0131 03:48:01.870252 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:48:01 crc kubenswrapper[4827]: I0131 03:48:01.870275 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:48:01 crc kubenswrapper[4827]: I0131 03:48:01.870297 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:48:01Z","lastTransitionTime":"2026-01-31T03:48:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 03:48:01 crc kubenswrapper[4827]: I0131 03:48:01.977347 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:48:01 crc kubenswrapper[4827]: I0131 03:48:01.977398 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:48:01 crc kubenswrapper[4827]: I0131 03:48:01.977415 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:48:01 crc kubenswrapper[4827]: I0131 03:48:01.977438 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:48:01 crc kubenswrapper[4827]: I0131 03:48:01.977455 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:48:01Z","lastTransitionTime":"2026-01-31T03:48:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 03:48:02 crc kubenswrapper[4827]: I0131 03:48:02.080612 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:48:02 crc kubenswrapper[4827]: I0131 03:48:02.080946 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:48:02 crc kubenswrapper[4827]: I0131 03:48:02.081170 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:48:02 crc kubenswrapper[4827]: I0131 03:48:02.081338 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:48:02 crc kubenswrapper[4827]: I0131 03:48:02.081498 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:48:02Z","lastTransitionTime":"2026-01-31T03:48:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 03:48:02 crc kubenswrapper[4827]: I0131 03:48:02.102833 4827 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-16 06:15:48.773697346 +0000 UTC Jan 31 03:48:02 crc kubenswrapper[4827]: I0131 03:48:02.109227 4827 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 03:48:02 crc kubenswrapper[4827]: E0131 03:48:02.109522 4827 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 31 03:48:02 crc kubenswrapper[4827]: I0131 03:48:02.109314 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 31 03:48:02 crc kubenswrapper[4827]: E0131 03:48:02.109801 4827 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 31 03:48:02 crc kubenswrapper[4827]: I0131 03:48:02.109261 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-2shng" Jan 31 03:48:02 crc kubenswrapper[4827]: E0131 03:48:02.110919 4827 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-2shng" podUID="cf80ec31-1f83-4ed6-84e3-055cf9c88bff" Jan 31 03:48:02 crc kubenswrapper[4827]: I0131 03:48:02.184411 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:48:02 crc kubenswrapper[4827]: I0131 03:48:02.184445 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:48:02 crc kubenswrapper[4827]: I0131 03:48:02.184453 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:48:02 crc kubenswrapper[4827]: I0131 03:48:02.184467 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:48:02 crc kubenswrapper[4827]: I0131 03:48:02.184476 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:48:02Z","lastTransitionTime":"2026-01-31T03:48:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 03:48:02 crc kubenswrapper[4827]: I0131 03:48:02.287502 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:48:02 crc kubenswrapper[4827]: I0131 03:48:02.287537 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:48:02 crc kubenswrapper[4827]: I0131 03:48:02.287549 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:48:02 crc kubenswrapper[4827]: I0131 03:48:02.287565 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:48:02 crc kubenswrapper[4827]: I0131 03:48:02.287576 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:48:02Z","lastTransitionTime":"2026-01-31T03:48:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 03:48:02 crc kubenswrapper[4827]: I0131 03:48:02.389715 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:48:02 crc kubenswrapper[4827]: I0131 03:48:02.389776 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:48:02 crc kubenswrapper[4827]: I0131 03:48:02.389801 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:48:02 crc kubenswrapper[4827]: I0131 03:48:02.389829 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:48:02 crc kubenswrapper[4827]: I0131 03:48:02.389849 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:48:02Z","lastTransitionTime":"2026-01-31T03:48:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 03:48:02 crc kubenswrapper[4827]: I0131 03:48:02.492792 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:48:02 crc kubenswrapper[4827]: I0131 03:48:02.492836 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:48:02 crc kubenswrapper[4827]: I0131 03:48:02.492894 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:48:02 crc kubenswrapper[4827]: I0131 03:48:02.492914 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:48:02 crc kubenswrapper[4827]: I0131 03:48:02.492926 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:48:02Z","lastTransitionTime":"2026-01-31T03:48:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 03:48:02 crc kubenswrapper[4827]: I0131 03:48:02.595958 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:48:02 crc kubenswrapper[4827]: I0131 03:48:02.596074 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:48:02 crc kubenswrapper[4827]: I0131 03:48:02.596092 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:48:02 crc kubenswrapper[4827]: I0131 03:48:02.596116 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:48:02 crc kubenswrapper[4827]: I0131 03:48:02.596136 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:48:02Z","lastTransitionTime":"2026-01-31T03:48:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 03:48:02 crc kubenswrapper[4827]: I0131 03:48:02.699506 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:48:02 crc kubenswrapper[4827]: I0131 03:48:02.699577 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:48:02 crc kubenswrapper[4827]: I0131 03:48:02.699599 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:48:02 crc kubenswrapper[4827]: I0131 03:48:02.699628 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:48:02 crc kubenswrapper[4827]: I0131 03:48:02.699649 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:48:02Z","lastTransitionTime":"2026-01-31T03:48:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 03:48:02 crc kubenswrapper[4827]: I0131 03:48:02.802854 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:48:02 crc kubenswrapper[4827]: I0131 03:48:02.802996 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:48:02 crc kubenswrapper[4827]: I0131 03:48:02.803018 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:48:02 crc kubenswrapper[4827]: I0131 03:48:02.803043 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:48:02 crc kubenswrapper[4827]: I0131 03:48:02.803063 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:48:02Z","lastTransitionTime":"2026-01-31T03:48:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 03:48:02 crc kubenswrapper[4827]: I0131 03:48:02.905913 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:48:02 crc kubenswrapper[4827]: I0131 03:48:02.905952 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:48:02 crc kubenswrapper[4827]: I0131 03:48:02.905964 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:48:02 crc kubenswrapper[4827]: I0131 03:48:02.906250 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:48:02 crc kubenswrapper[4827]: I0131 03:48:02.906264 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:48:02Z","lastTransitionTime":"2026-01-31T03:48:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 03:48:03 crc kubenswrapper[4827]: I0131 03:48:03.009416 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:48:03 crc kubenswrapper[4827]: I0131 03:48:03.009470 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:48:03 crc kubenswrapper[4827]: I0131 03:48:03.009492 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:48:03 crc kubenswrapper[4827]: I0131 03:48:03.009517 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:48:03 crc kubenswrapper[4827]: I0131 03:48:03.009539 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:48:03Z","lastTransitionTime":"2026-01-31T03:48:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 03:48:03 crc kubenswrapper[4827]: I0131 03:48:03.103376 4827 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-02 13:08:55.348182184 +0000 UTC Jan 31 03:48:03 crc kubenswrapper[4827]: I0131 03:48:03.109760 4827 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 31 03:48:03 crc kubenswrapper[4827]: E0131 03:48:03.109993 4827 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 31 03:48:03 crc kubenswrapper[4827]: I0131 03:48:03.111626 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:48:03 crc kubenswrapper[4827]: I0131 03:48:03.111697 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:48:03 crc kubenswrapper[4827]: I0131 03:48:03.111716 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:48:03 crc kubenswrapper[4827]: I0131 03:48:03.111747 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:48:03 crc kubenswrapper[4827]: I0131 03:48:03.111797 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:48:03Z","lastTransitionTime":"2026-01-31T03:48:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 03:48:03 crc kubenswrapper[4827]: I0131 03:48:03.214348 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:48:03 crc kubenswrapper[4827]: I0131 03:48:03.214399 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:48:03 crc kubenswrapper[4827]: I0131 03:48:03.214417 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:48:03 crc kubenswrapper[4827]: I0131 03:48:03.214476 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:48:03 crc kubenswrapper[4827]: I0131 03:48:03.214498 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:48:03Z","lastTransitionTime":"2026-01-31T03:48:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 03:48:03 crc kubenswrapper[4827]: I0131 03:48:03.317602 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:48:03 crc kubenswrapper[4827]: I0131 03:48:03.317657 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:48:03 crc kubenswrapper[4827]: I0131 03:48:03.317678 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:48:03 crc kubenswrapper[4827]: I0131 03:48:03.317706 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:48:03 crc kubenswrapper[4827]: I0131 03:48:03.317726 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:48:03Z","lastTransitionTime":"2026-01-31T03:48:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 03:48:03 crc kubenswrapper[4827]: I0131 03:48:03.420944 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:48:03 crc kubenswrapper[4827]: I0131 03:48:03.420996 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:48:03 crc kubenswrapper[4827]: I0131 03:48:03.421015 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:48:03 crc kubenswrapper[4827]: I0131 03:48:03.421039 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:48:03 crc kubenswrapper[4827]: I0131 03:48:03.421057 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:48:03Z","lastTransitionTime":"2026-01-31T03:48:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 03:48:03 crc kubenswrapper[4827]: I0131 03:48:03.524229 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:48:03 crc kubenswrapper[4827]: I0131 03:48:03.524275 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:48:03 crc kubenswrapper[4827]: I0131 03:48:03.524288 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:48:03 crc kubenswrapper[4827]: I0131 03:48:03.524305 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:48:03 crc kubenswrapper[4827]: I0131 03:48:03.524316 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:48:03Z","lastTransitionTime":"2026-01-31T03:48:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 03:48:03 crc kubenswrapper[4827]: I0131 03:48:03.627055 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:48:03 crc kubenswrapper[4827]: I0131 03:48:03.627184 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:48:03 crc kubenswrapper[4827]: I0131 03:48:03.627204 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:48:03 crc kubenswrapper[4827]: I0131 03:48:03.627227 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:48:03 crc kubenswrapper[4827]: I0131 03:48:03.627243 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:48:03Z","lastTransitionTime":"2026-01-31T03:48:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 03:48:03 crc kubenswrapper[4827]: I0131 03:48:03.729997 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:48:03 crc kubenswrapper[4827]: I0131 03:48:03.730071 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:48:03 crc kubenswrapper[4827]: I0131 03:48:03.730089 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:48:03 crc kubenswrapper[4827]: I0131 03:48:03.730115 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:48:03 crc kubenswrapper[4827]: I0131 03:48:03.730137 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:48:03Z","lastTransitionTime":"2026-01-31T03:48:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 03:48:03 crc kubenswrapper[4827]: I0131 03:48:03.833132 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:48:03 crc kubenswrapper[4827]: I0131 03:48:03.833186 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:48:03 crc kubenswrapper[4827]: I0131 03:48:03.833195 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:48:03 crc kubenswrapper[4827]: I0131 03:48:03.833211 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:48:03 crc kubenswrapper[4827]: I0131 03:48:03.833219 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:48:03Z","lastTransitionTime":"2026-01-31T03:48:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 03:48:03 crc kubenswrapper[4827]: I0131 03:48:03.937115 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:48:03 crc kubenswrapper[4827]: I0131 03:48:03.937177 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:48:03 crc kubenswrapper[4827]: I0131 03:48:03.937201 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:48:03 crc kubenswrapper[4827]: I0131 03:48:03.937233 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:48:03 crc kubenswrapper[4827]: I0131 03:48:03.937256 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:48:03Z","lastTransitionTime":"2026-01-31T03:48:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 03:48:04 crc kubenswrapper[4827]: I0131 03:48:04.041160 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:48:04 crc kubenswrapper[4827]: I0131 03:48:04.041361 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:48:04 crc kubenswrapper[4827]: I0131 03:48:04.041390 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:48:04 crc kubenswrapper[4827]: I0131 03:48:04.041456 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:48:04 crc kubenswrapper[4827]: I0131 03:48:04.041481 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:48:04Z","lastTransitionTime":"2026-01-31T03:48:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 03:48:04 crc kubenswrapper[4827]: I0131 03:48:04.103679 4827 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-16 12:51:08.620587871 +0000 UTC Jan 31 03:48:04 crc kubenswrapper[4827]: I0131 03:48:04.109323 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 31 03:48:04 crc kubenswrapper[4827]: I0131 03:48:04.109388 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 03:48:04 crc kubenswrapper[4827]: I0131 03:48:04.109418 4827 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-2shng" Jan 31 03:48:04 crc kubenswrapper[4827]: E0131 03:48:04.109542 4827 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 31 03:48:04 crc kubenswrapper[4827]: E0131 03:48:04.109673 4827 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 31 03:48:04 crc kubenswrapper[4827]: E0131 03:48:04.109823 4827 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-2shng" podUID="cf80ec31-1f83-4ed6-84e3-055cf9c88bff" Jan 31 03:48:04 crc kubenswrapper[4827]: I0131 03:48:04.143773 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:48:04 crc kubenswrapper[4827]: I0131 03:48:04.143834 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:48:04 crc kubenswrapper[4827]: I0131 03:48:04.143851 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:48:04 crc kubenswrapper[4827]: I0131 03:48:04.143874 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:48:04 crc kubenswrapper[4827]: I0131 03:48:04.143916 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:48:04Z","lastTransitionTime":"2026-01-31T03:48:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 03:48:04 crc kubenswrapper[4827]: I0131 03:48:04.247781 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:48:04 crc kubenswrapper[4827]: I0131 03:48:04.247850 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:48:04 crc kubenswrapper[4827]: I0131 03:48:04.247867 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:48:04 crc kubenswrapper[4827]: I0131 03:48:04.247924 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:48:04 crc kubenswrapper[4827]: I0131 03:48:04.247951 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:48:04Z","lastTransitionTime":"2026-01-31T03:48:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 03:48:04 crc kubenswrapper[4827]: I0131 03:48:04.350959 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:48:04 crc kubenswrapper[4827]: I0131 03:48:04.351002 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:48:04 crc kubenswrapper[4827]: I0131 03:48:04.351011 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:48:04 crc kubenswrapper[4827]: I0131 03:48:04.351027 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:48:04 crc kubenswrapper[4827]: I0131 03:48:04.351039 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:48:04Z","lastTransitionTime":"2026-01-31T03:48:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 03:48:04 crc kubenswrapper[4827]: I0131 03:48:04.453799 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:48:04 crc kubenswrapper[4827]: I0131 03:48:04.453837 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:48:04 crc kubenswrapper[4827]: I0131 03:48:04.453845 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:48:04 crc kubenswrapper[4827]: I0131 03:48:04.453860 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:48:04 crc kubenswrapper[4827]: I0131 03:48:04.453870 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:48:04Z","lastTransitionTime":"2026-01-31T03:48:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 03:48:04 crc kubenswrapper[4827]: I0131 03:48:04.557626 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:48:04 crc kubenswrapper[4827]: I0131 03:48:04.557684 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:48:04 crc kubenswrapper[4827]: I0131 03:48:04.557702 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:48:04 crc kubenswrapper[4827]: I0131 03:48:04.557727 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:48:04 crc kubenswrapper[4827]: I0131 03:48:04.557746 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:48:04Z","lastTransitionTime":"2026-01-31T03:48:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 03:48:04 crc kubenswrapper[4827]: I0131 03:48:04.661302 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:48:04 crc kubenswrapper[4827]: I0131 03:48:04.661366 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:48:04 crc kubenswrapper[4827]: I0131 03:48:04.661387 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:48:04 crc kubenswrapper[4827]: I0131 03:48:04.661412 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:48:04 crc kubenswrapper[4827]: I0131 03:48:04.661429 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:48:04Z","lastTransitionTime":"2026-01-31T03:48:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 03:48:04 crc kubenswrapper[4827]: I0131 03:48:04.764567 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:48:04 crc kubenswrapper[4827]: I0131 03:48:04.764614 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:48:04 crc kubenswrapper[4827]: I0131 03:48:04.764625 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:48:04 crc kubenswrapper[4827]: I0131 03:48:04.764640 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:48:04 crc kubenswrapper[4827]: I0131 03:48:04.764649 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:48:04Z","lastTransitionTime":"2026-01-31T03:48:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 03:48:04 crc kubenswrapper[4827]: I0131 03:48:04.868040 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:48:04 crc kubenswrapper[4827]: I0131 03:48:04.868120 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:48:04 crc kubenswrapper[4827]: I0131 03:48:04.868143 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:48:04 crc kubenswrapper[4827]: I0131 03:48:04.868176 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:48:04 crc kubenswrapper[4827]: I0131 03:48:04.868203 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:48:04Z","lastTransitionTime":"2026-01-31T03:48:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 03:48:04 crc kubenswrapper[4827]: I0131 03:48:04.972699 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:48:04 crc kubenswrapper[4827]: I0131 03:48:04.972764 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:48:04 crc kubenswrapper[4827]: I0131 03:48:04.972781 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:48:04 crc kubenswrapper[4827]: I0131 03:48:04.972811 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:48:04 crc kubenswrapper[4827]: I0131 03:48:04.972829 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:48:04Z","lastTransitionTime":"2026-01-31T03:48:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 03:48:05 crc kubenswrapper[4827]: I0131 03:48:05.076470 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:48:05 crc kubenswrapper[4827]: I0131 03:48:05.076534 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:48:05 crc kubenswrapper[4827]: I0131 03:48:05.076554 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:48:05 crc kubenswrapper[4827]: I0131 03:48:05.076577 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:48:05 crc kubenswrapper[4827]: I0131 03:48:05.076594 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:48:05Z","lastTransitionTime":"2026-01-31T03:48:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 03:48:05 crc kubenswrapper[4827]: I0131 03:48:05.104320 4827 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-26 04:00:57.844955288 +0000 UTC Jan 31 03:48:05 crc kubenswrapper[4827]: I0131 03:48:05.109753 4827 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 31 03:48:05 crc kubenswrapper[4827]: E0131 03:48:05.109974 4827 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 31 03:48:05 crc kubenswrapper[4827]: I0131 03:48:05.179580 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:48:05 crc kubenswrapper[4827]: I0131 03:48:05.179638 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:48:05 crc kubenswrapper[4827]: I0131 03:48:05.179654 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:48:05 crc kubenswrapper[4827]: I0131 03:48:05.179678 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:48:05 crc kubenswrapper[4827]: I0131 03:48:05.179694 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:48:05Z","lastTransitionTime":"2026-01-31T03:48:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 03:48:05 crc kubenswrapper[4827]: I0131 03:48:05.282960 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:48:05 crc kubenswrapper[4827]: I0131 03:48:05.283033 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:48:05 crc kubenswrapper[4827]: I0131 03:48:05.283058 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:48:05 crc kubenswrapper[4827]: I0131 03:48:05.283092 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:48:05 crc kubenswrapper[4827]: I0131 03:48:05.283117 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:48:05Z","lastTransitionTime":"2026-01-31T03:48:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 03:48:05 crc kubenswrapper[4827]: I0131 03:48:05.385687 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:48:05 crc kubenswrapper[4827]: I0131 03:48:05.385753 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:48:05 crc kubenswrapper[4827]: I0131 03:48:05.385771 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:48:05 crc kubenswrapper[4827]: I0131 03:48:05.385796 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:48:05 crc kubenswrapper[4827]: I0131 03:48:05.385815 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:48:05Z","lastTransitionTime":"2026-01-31T03:48:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 03:48:05 crc kubenswrapper[4827]: I0131 03:48:05.489200 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:48:05 crc kubenswrapper[4827]: I0131 03:48:05.489280 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:48:05 crc kubenswrapper[4827]: I0131 03:48:05.489303 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:48:05 crc kubenswrapper[4827]: I0131 03:48:05.489327 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:48:05 crc kubenswrapper[4827]: I0131 03:48:05.489348 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:48:05Z","lastTransitionTime":"2026-01-31T03:48:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 03:48:05 crc kubenswrapper[4827]: I0131 03:48:05.592350 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:48:05 crc kubenswrapper[4827]: I0131 03:48:05.592422 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:48:05 crc kubenswrapper[4827]: I0131 03:48:05.592449 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:48:05 crc kubenswrapper[4827]: I0131 03:48:05.592479 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:48:05 crc kubenswrapper[4827]: I0131 03:48:05.592497 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:48:05Z","lastTransitionTime":"2026-01-31T03:48:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 03:48:05 crc kubenswrapper[4827]: I0131 03:48:05.622471 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:48:05 crc kubenswrapper[4827]: I0131 03:48:05.622564 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:48:05 crc kubenswrapper[4827]: I0131 03:48:05.622582 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:48:05 crc kubenswrapper[4827]: I0131 03:48:05.622640 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:48:05 crc kubenswrapper[4827]: I0131 03:48:05.622660 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:48:05Z","lastTransitionTime":"2026-01-31T03:48:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 03:48:05 crc kubenswrapper[4827]: E0131 03:48:05.643608 4827 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404564Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865364Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T03:48:05Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:05Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T03:48:05Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:05Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T03:48:05Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:05Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T03:48:05Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:05Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"0f4c9cc6-4be7-45e8-ab30-471f307c1c16\\\",\\\"systemUUID\\\":\\\"9b087aa6-4510-46ee-bc39-2317e4ea4d1d\\\"},\\\"runtimeHan
dlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:48:05Z is after 2025-08-24T17:21:41Z" Jan 31 03:48:05 crc kubenswrapper[4827]: I0131 03:48:05.649961 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:48:05 crc kubenswrapper[4827]: I0131 03:48:05.650197 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:48:05 crc kubenswrapper[4827]: I0131 03:48:05.650215 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:48:05 crc kubenswrapper[4827]: I0131 03:48:05.650238 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:48:05 crc kubenswrapper[4827]: I0131 03:48:05.650255 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:48:05Z","lastTransitionTime":"2026-01-31T03:48:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 03:48:05 crc kubenswrapper[4827]: E0131 03:48:05.672141 4827 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404564Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865364Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T03:48:05Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:05Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T03:48:05Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:05Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T03:48:05Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:05Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T03:48:05Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:05Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"0f4c9cc6-4be7-45e8-ab30-471f307c1c16\\\",\\\"systemUUID\\\":\\\"9b087aa6-4510-46ee-bc39-2317e4ea4d1d\\\"},\\\"runtimeHan
dlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:48:05Z is after 2025-08-24T17:21:41Z" Jan 31 03:48:05 crc kubenswrapper[4827]: I0131 03:48:05.676967 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:48:05 crc kubenswrapper[4827]: I0131 03:48:05.677026 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:48:05 crc kubenswrapper[4827]: I0131 03:48:05.677046 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:48:05 crc kubenswrapper[4827]: I0131 03:48:05.677069 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:48:05 crc kubenswrapper[4827]: I0131 03:48:05.677085 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:48:05Z","lastTransitionTime":"2026-01-31T03:48:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 03:48:05 crc kubenswrapper[4827]: E0131 03:48:05.697485 4827 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404564Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865364Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T03:48:05Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:05Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T03:48:05Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:05Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T03:48:05Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:05Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T03:48:05Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:05Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"0f4c9cc6-4be7-45e8-ab30-471f307c1c16\\\",\\\"systemUUID\\\":\\\"9b087aa6-4510-46ee-bc39-2317e4ea4d1d\\\"},\\\"runtimeHan
dlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:48:05Z is after 2025-08-24T17:21:41Z" Jan 31 03:48:05 crc kubenswrapper[4827]: I0131 03:48:05.702031 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:48:05 crc kubenswrapper[4827]: I0131 03:48:05.702083 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:48:05 crc kubenswrapper[4827]: I0131 03:48:05.702100 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:48:05 crc kubenswrapper[4827]: I0131 03:48:05.702122 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:48:05 crc kubenswrapper[4827]: I0131 03:48:05.702139 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:48:05Z","lastTransitionTime":"2026-01-31T03:48:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 03:48:05 crc kubenswrapper[4827]: E0131 03:48:05.722372 4827 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404564Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865364Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T03:48:05Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:05Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T03:48:05Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:05Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T03:48:05Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:05Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T03:48:05Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:05Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"0f4c9cc6-4be7-45e8-ab30-471f307c1c16\\\",\\\"systemUUID\\\":\\\"9b087aa6-4510-46ee-bc39-2317e4ea4d1d\\\"},\\\"runtimeHan
dlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:48:05Z is after 2025-08-24T17:21:41Z" Jan 31 03:48:05 crc kubenswrapper[4827]: I0131 03:48:05.726953 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:48:05 crc kubenswrapper[4827]: I0131 03:48:05.727024 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:48:05 crc kubenswrapper[4827]: I0131 03:48:05.727042 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:48:05 crc kubenswrapper[4827]: I0131 03:48:05.727066 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:48:05 crc kubenswrapper[4827]: I0131 03:48:05.727086 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:48:05Z","lastTransitionTime":"2026-01-31T03:48:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 03:48:05 crc kubenswrapper[4827]: E0131 03:48:05.747598 4827 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404564Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865364Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T03:48:05Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:05Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T03:48:05Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:05Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T03:48:05Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:05Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T03:48:05Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:05Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"0f4c9cc6-4be7-45e8-ab30-471f307c1c16\\\",\\\"systemUUID\\\":\\\"9b087aa6-4510-46ee-bc39-2317e4ea4d1d\\\"},\\\"runtimeHan
dlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:48:05Z is after 2025-08-24T17:21:41Z" Jan 31 03:48:05 crc kubenswrapper[4827]: E0131 03:48:05.747955 4827 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 31 03:48:05 crc kubenswrapper[4827]: I0131 03:48:05.752742 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:48:05 crc kubenswrapper[4827]: I0131 03:48:05.752823 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:48:05 crc kubenswrapper[4827]: I0131 03:48:05.752845 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:48:05 crc kubenswrapper[4827]: I0131 03:48:05.752909 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:48:05 crc kubenswrapper[4827]: I0131 03:48:05.752934 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:48:05Z","lastTransitionTime":"2026-01-31T03:48:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in 
/etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 03:48:05 crc kubenswrapper[4827]: I0131 03:48:05.855784 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:48:05 crc kubenswrapper[4827]: I0131 03:48:05.855856 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:48:05 crc kubenswrapper[4827]: I0131 03:48:05.855874 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:48:05 crc kubenswrapper[4827]: I0131 03:48:05.855926 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:48:05 crc kubenswrapper[4827]: I0131 03:48:05.855945 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:48:05Z","lastTransitionTime":"2026-01-31T03:48:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 03:48:05 crc kubenswrapper[4827]: I0131 03:48:05.958992 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:48:05 crc kubenswrapper[4827]: I0131 03:48:05.959060 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:48:05 crc kubenswrapper[4827]: I0131 03:48:05.959081 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:48:05 crc kubenswrapper[4827]: I0131 03:48:05.959107 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:48:05 crc kubenswrapper[4827]: I0131 03:48:05.959126 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:48:05Z","lastTransitionTime":"2026-01-31T03:48:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 03:48:06 crc kubenswrapper[4827]: I0131 03:48:06.062296 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:48:06 crc kubenswrapper[4827]: I0131 03:48:06.062344 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:48:06 crc kubenswrapper[4827]: I0131 03:48:06.062360 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:48:06 crc kubenswrapper[4827]: I0131 03:48:06.062381 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:48:06 crc kubenswrapper[4827]: I0131 03:48:06.062469 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:48:06Z","lastTransitionTime":"2026-01-31T03:48:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 03:48:06 crc kubenswrapper[4827]: I0131 03:48:06.104873 4827 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-06 08:21:29.824945547 +0000 UTC Jan 31 03:48:06 crc kubenswrapper[4827]: I0131 03:48:06.109346 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 31 03:48:06 crc kubenswrapper[4827]: I0131 03:48:06.109422 4827 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 03:48:06 crc kubenswrapper[4827]: E0131 03:48:06.109532 4827 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 31 03:48:06 crc kubenswrapper[4827]: I0131 03:48:06.109561 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-2shng" Jan 31 03:48:06 crc kubenswrapper[4827]: E0131 03:48:06.109782 4827 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-2shng" podUID="cf80ec31-1f83-4ed6-84e3-055cf9c88bff" Jan 31 03:48:06 crc kubenswrapper[4827]: E0131 03:48:06.109969 4827 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 31 03:48:06 crc kubenswrapper[4827]: I0131 03:48:06.164492 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:48:06 crc kubenswrapper[4827]: I0131 03:48:06.164556 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:48:06 crc kubenswrapper[4827]: I0131 03:48:06.164579 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:48:06 crc kubenswrapper[4827]: I0131 03:48:06.164608 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:48:06 crc kubenswrapper[4827]: I0131 03:48:06.164684 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:48:06Z","lastTransitionTime":"2026-01-31T03:48:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 03:48:06 crc kubenswrapper[4827]: I0131 03:48:06.268461 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:48:06 crc kubenswrapper[4827]: I0131 03:48:06.268535 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:48:06 crc kubenswrapper[4827]: I0131 03:48:06.268559 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:48:06 crc kubenswrapper[4827]: I0131 03:48:06.268590 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:48:06 crc kubenswrapper[4827]: I0131 03:48:06.268614 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:48:06Z","lastTransitionTime":"2026-01-31T03:48:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 03:48:06 crc kubenswrapper[4827]: I0131 03:48:06.371639 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:48:06 crc kubenswrapper[4827]: I0131 03:48:06.371707 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:48:06 crc kubenswrapper[4827]: I0131 03:48:06.371732 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:48:06 crc kubenswrapper[4827]: I0131 03:48:06.371762 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:48:06 crc kubenswrapper[4827]: I0131 03:48:06.371784 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:48:06Z","lastTransitionTime":"2026-01-31T03:48:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 03:48:06 crc kubenswrapper[4827]: I0131 03:48:06.474387 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:48:06 crc kubenswrapper[4827]: I0131 03:48:06.474423 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:48:06 crc kubenswrapper[4827]: I0131 03:48:06.474435 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:48:06 crc kubenswrapper[4827]: I0131 03:48:06.474454 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:48:06 crc kubenswrapper[4827]: I0131 03:48:06.474467 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:48:06Z","lastTransitionTime":"2026-01-31T03:48:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 03:48:06 crc kubenswrapper[4827]: I0131 03:48:06.577392 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:48:06 crc kubenswrapper[4827]: I0131 03:48:06.577448 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:48:06 crc kubenswrapper[4827]: I0131 03:48:06.577469 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:48:06 crc kubenswrapper[4827]: I0131 03:48:06.577496 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:48:06 crc kubenswrapper[4827]: I0131 03:48:06.577517 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:48:06Z","lastTransitionTime":"2026-01-31T03:48:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 03:48:06 crc kubenswrapper[4827]: I0131 03:48:06.680533 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:48:06 crc kubenswrapper[4827]: I0131 03:48:06.680589 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:48:06 crc kubenswrapper[4827]: I0131 03:48:06.680606 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:48:06 crc kubenswrapper[4827]: I0131 03:48:06.680630 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:48:06 crc kubenswrapper[4827]: I0131 03:48:06.680646 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:48:06Z","lastTransitionTime":"2026-01-31T03:48:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 03:48:06 crc kubenswrapper[4827]: I0131 03:48:06.783133 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:48:06 crc kubenswrapper[4827]: I0131 03:48:06.783467 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:48:06 crc kubenswrapper[4827]: I0131 03:48:06.783626 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:48:06 crc kubenswrapper[4827]: I0131 03:48:06.783776 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:48:06 crc kubenswrapper[4827]: I0131 03:48:06.783963 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:48:06Z","lastTransitionTime":"2026-01-31T03:48:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 03:48:06 crc kubenswrapper[4827]: I0131 03:48:06.887432 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:48:06 crc kubenswrapper[4827]: I0131 03:48:06.887536 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:48:06 crc kubenswrapper[4827]: I0131 03:48:06.887585 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:48:06 crc kubenswrapper[4827]: I0131 03:48:06.887615 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:48:06 crc kubenswrapper[4827]: I0131 03:48:06.887634 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:48:06Z","lastTransitionTime":"2026-01-31T03:48:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 03:48:06 crc kubenswrapper[4827]: I0131 03:48:06.989725 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:48:06 crc kubenswrapper[4827]: I0131 03:48:06.989797 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:48:06 crc kubenswrapper[4827]: I0131 03:48:06.989820 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:48:06 crc kubenswrapper[4827]: I0131 03:48:06.989849 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:48:06 crc kubenswrapper[4827]: I0131 03:48:06.989871 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:48:06Z","lastTransitionTime":"2026-01-31T03:48:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 03:48:07 crc kubenswrapper[4827]: I0131 03:48:07.092497 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:48:07 crc kubenswrapper[4827]: I0131 03:48:07.092854 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:48:07 crc kubenswrapper[4827]: I0131 03:48:07.093139 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:48:07 crc kubenswrapper[4827]: I0131 03:48:07.093322 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:48:07 crc kubenswrapper[4827]: I0131 03:48:07.093461 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:48:07Z","lastTransitionTime":"2026-01-31T03:48:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 03:48:07 crc kubenswrapper[4827]: I0131 03:48:07.105066 4827 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-21 22:35:48.785099274 +0000 UTC Jan 31 03:48:07 crc kubenswrapper[4827]: I0131 03:48:07.109404 4827 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 31 03:48:07 crc kubenswrapper[4827]: E0131 03:48:07.109761 4827 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 31 03:48:07 crc kubenswrapper[4827]: I0131 03:48:07.196969 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:48:07 crc kubenswrapper[4827]: I0131 03:48:07.197241 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:48:07 crc kubenswrapper[4827]: I0131 03:48:07.197448 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:48:07 crc kubenswrapper[4827]: I0131 03:48:07.197650 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:48:07 crc kubenswrapper[4827]: I0131 03:48:07.197917 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:48:07Z","lastTransitionTime":"2026-01-31T03:48:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 03:48:07 crc kubenswrapper[4827]: I0131 03:48:07.301284 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:48:07 crc kubenswrapper[4827]: I0131 03:48:07.301376 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:48:07 crc kubenswrapper[4827]: I0131 03:48:07.301445 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:48:07 crc kubenswrapper[4827]: I0131 03:48:07.301472 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:48:07 crc kubenswrapper[4827]: I0131 03:48:07.301490 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:48:07Z","lastTransitionTime":"2026-01-31T03:48:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 03:48:07 crc kubenswrapper[4827]: I0131 03:48:07.403765 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:48:07 crc kubenswrapper[4827]: I0131 03:48:07.403918 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:48:07 crc kubenswrapper[4827]: I0131 03:48:07.403962 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:48:07 crc kubenswrapper[4827]: I0131 03:48:07.403992 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:48:07 crc kubenswrapper[4827]: I0131 03:48:07.404015 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:48:07Z","lastTransitionTime":"2026-01-31T03:48:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 03:48:07 crc kubenswrapper[4827]: I0131 03:48:07.507483 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:48:07 crc kubenswrapper[4827]: I0131 03:48:07.507535 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:48:07 crc kubenswrapper[4827]: I0131 03:48:07.507552 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:48:07 crc kubenswrapper[4827]: I0131 03:48:07.507574 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:48:07 crc kubenswrapper[4827]: I0131 03:48:07.507590 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:48:07Z","lastTransitionTime":"2026-01-31T03:48:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 03:48:07 crc kubenswrapper[4827]: I0131 03:48:07.610238 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:48:07 crc kubenswrapper[4827]: I0131 03:48:07.610301 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:48:07 crc kubenswrapper[4827]: I0131 03:48:07.610318 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:48:07 crc kubenswrapper[4827]: I0131 03:48:07.610342 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:48:07 crc kubenswrapper[4827]: I0131 03:48:07.610364 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:48:07Z","lastTransitionTime":"2026-01-31T03:48:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 03:48:07 crc kubenswrapper[4827]: I0131 03:48:07.713119 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:48:07 crc kubenswrapper[4827]: I0131 03:48:07.713187 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:48:07 crc kubenswrapper[4827]: I0131 03:48:07.713211 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:48:07 crc kubenswrapper[4827]: I0131 03:48:07.713242 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:48:07 crc kubenswrapper[4827]: I0131 03:48:07.713262 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:48:07Z","lastTransitionTime":"2026-01-31T03:48:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 03:48:07 crc kubenswrapper[4827]: I0131 03:48:07.815734 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:48:07 crc kubenswrapper[4827]: I0131 03:48:07.815798 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:48:07 crc kubenswrapper[4827]: I0131 03:48:07.815815 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:48:07 crc kubenswrapper[4827]: I0131 03:48:07.815840 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:48:07 crc kubenswrapper[4827]: I0131 03:48:07.815860 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:48:07Z","lastTransitionTime":"2026-01-31T03:48:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 03:48:07 crc kubenswrapper[4827]: I0131 03:48:07.919102 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:48:07 crc kubenswrapper[4827]: I0131 03:48:07.919203 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:48:07 crc kubenswrapper[4827]: I0131 03:48:07.919224 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:48:07 crc kubenswrapper[4827]: I0131 03:48:07.919247 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:48:07 crc kubenswrapper[4827]: I0131 03:48:07.919264 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:48:07Z","lastTransitionTime":"2026-01-31T03:48:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 03:48:08 crc kubenswrapper[4827]: I0131 03:48:08.022590 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:48:08 crc kubenswrapper[4827]: I0131 03:48:08.022654 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:48:08 crc kubenswrapper[4827]: I0131 03:48:08.022676 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:48:08 crc kubenswrapper[4827]: I0131 03:48:08.022708 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:48:08 crc kubenswrapper[4827]: I0131 03:48:08.022730 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:48:08Z","lastTransitionTime":"2026-01-31T03:48:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 03:48:08 crc kubenswrapper[4827]: I0131 03:48:08.106161 4827 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-25 12:08:27.14095928 +0000 UTC Jan 31 03:48:08 crc kubenswrapper[4827]: I0131 03:48:08.109563 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 31 03:48:08 crc kubenswrapper[4827]: E0131 03:48:08.109975 4827 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 31 03:48:08 crc kubenswrapper[4827]: I0131 03:48:08.110045 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 03:48:08 crc kubenswrapper[4827]: I0131 03:48:08.110141 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-2shng" Jan 31 03:48:08 crc kubenswrapper[4827]: E0131 03:48:08.110210 4827 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 31 03:48:08 crc kubenswrapper[4827]: E0131 03:48:08.110386 4827 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-2shng" podUID="cf80ec31-1f83-4ed6-84e3-055cf9c88bff" Jan 31 03:48:08 crc kubenswrapper[4827]: I0131 03:48:08.112117 4827 scope.go:117] "RemoveContainer" containerID="d34c4275238ceb97d664f71e8315fbfdff91be63b7babf108440f8f9e387173a" Jan 31 03:48:08 crc kubenswrapper[4827]: I0131 03:48:08.125079 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:48:08 crc kubenswrapper[4827]: I0131 03:48:08.125125 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:48:08 crc kubenswrapper[4827]: I0131 03:48:08.125142 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:48:08 crc kubenswrapper[4827]: I0131 03:48:08.125164 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:48:08 crc kubenswrapper[4827]: I0131 03:48:08.125184 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:48:08Z","lastTransitionTime":"2026-01-31T03:48:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 03:48:08 crc kubenswrapper[4827]: I0131 03:48:08.135292 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4273172d-ec24-4540-85cb-efc58aed3421\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:46:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:46:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:46:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4146d39a1a819389ddaee9e11df66783f6b1b98ee5fb2a244d8ba9ff2ecc88f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:46:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-d
ir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6b86ba80bd25344b29df1636aa3d8cc4cecb0e5f9c0005b4896d3ee86cd80626\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:46:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8ae08b52cf4a0a6cc4589bbf0f0fbc8527ebac3455b90185928f6fa0b6c52874\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:46:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://125b31c980495c37837eedbbbd38b795fd6e568a429da5c2914799306e7b3a46\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945
c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cfc1fe8805b7b20bc4e718ed74d4c2f265e69ed3fdc328c0f1da81450bb78bde\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-31T03:47:02Z\\\",\\\"message\\\":\\\"W0131 03:46:51.337568 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0131 03:46:51.338014 1 crypto.go:601] Generating new CA for check-endpoints-signer@1769831211 cert, and key in /tmp/serving-cert-4099090867/serving-signer.crt, /tmp/serving-cert-4099090867/serving-signer.key\\\\nI0131 03:46:51.617433 1 observer_polling.go:159] Starting file observer\\\\nW0131 03:46:51.621205 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0131 03:46:51.621653 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0131 03:46:51.625463 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-4099090867/tls.crt::/tmp/serving-cert-4099090867/tls.key\\\\\\\"\\\\nF0131 03:47:02.085590 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake 
timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T03:46:51Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:47:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e3b2757893ba910c36404f09734ea47eee3ac5a8cfcc4c06ee4ed2ada48937b1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:46:51Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f40eed75b4eac164f95a4e1a719fe3e1da60abf85d533da313ceeb6f4fb07efd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f40eed75b4eac164f95a4e1a719fe3e1da60abf85d533da313ceeb6f4fb07efd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T03:46:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"s
tartedAt\\\":\\\"2026-01-31T03:46:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T03:46:48Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:48:08Z is after 2025-08-24T17:21:41Z" Jan 31 03:48:08 crc kubenswrapper[4827]: I0131 03:48:08.156046 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"350d4093-0635-4787-bdf3-9b099c5a772b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:46:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:46:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:46:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://19a6d75598a153bb0745bb06aa53961f13fa3ff1fd4addb84b3ecdc66d07ee4e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d3472024
3b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:46:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3d0070a036855a06f83caec5b6ee636f1a7a1e7f3246b779e0abbcf6aebf259a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:46:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7b5f30db796321372897f564fda3a031f0714999315931e2a77b21fde674f4fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01
-31T03:46:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6f17307ab85479e60722f1f8484fb415c3609584ed9a20ebf17677a86558dc07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6f17307ab85479e60722f1f8484fb415c3609584ed9a20ebf17677a86558dc07\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T03:46:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T03:46:49Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T03:46:48Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:48:08Z is after 2025-08-24T17:21:41Z" Jan 31 03:48:08 crc kubenswrapper[4827]: I0131 03:48:08.178221 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-jxh94" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e63dbb73-e1a2-4796-83c5-2a88e55566b5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://65bf7844256848a99420ff2a52724a942bd9ee07e0e587b818429ac544865232\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:47:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-94gkn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://00b8856a5712a7470f8bc58fd07644c58113fde5
a273faa89586f36381942c50\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:47:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-94gkn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T03:47:09Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-jxh94\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:48:08Z is after 2025-08-24T17:21:41Z" Jan 31 03:48:08 crc kubenswrapper[4827]: I0131 03:48:08.201645 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-q9q8q" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a696063c-4553-4032-8038-9900f09d4031\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b6b5307b128e69f814b56eb826859eb4b02d6645d1665c1f5d205492590135ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3b899def12de10c70999d453a62d09ed0eec7e6f059a55a7e3f8e4b3ef248205\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-31T03:47:56Z\\\",\\\"message\\\":\\\"2026-01-31T03:47:11+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_ed6be960-9431-4aca-83c0-58d0cb3d0931\\\\n2026-01-31T03:47:11+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_ed6be960-9431-4aca-83c0-58d0cb3d0931 to /host/opt/cni/bin/\\\\n2026-01-31T03:47:11Z [verbose] multus-daemon started\\\\n2026-01-31T03:47:11Z [verbose] 
Readiness Indicator file check\\\\n2026-01-31T03:47:56Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T03:47:10Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:47:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/
var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ckwx4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T03:47:09Z\\\"}}\" for pod \"openshift-multus\"/\"multus-q9q8q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:48:08Z is after 2025-08-24T17:21:41Z" Jan 31 03:48:08 crc kubenswrapper[4827]: I0131 03:48:08.228052 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:48:08 crc kubenswrapper[4827]: I0131 03:48:08.228125 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:48:08 crc kubenswrapper[4827]: I0131 03:48:08.228150 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:48:08 crc kubenswrapper[4827]: I0131 03:48:08.228179 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:48:08 crc kubenswrapper[4827]: I0131 03:48:08.228201 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:48:08Z","lastTransitionTime":"2026-01-31T03:48:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 03:48:08 crc kubenswrapper[4827]: I0131 03:48:08.232735 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-gjc5t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b5dbff7a-4ed0-4c17-bd01-1888199225b3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cf2e9dfa02cc1d2d5e655654c4d731363a32dec4d67423160996c24260e04a79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:47:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fs4bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOn
ly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c615ec47f98d56124c11d46f6a3875b5718974234f752615411e14b5d8913dcc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c615ec47f98d56124c11d46f6a3875b5718974234f752615411e14b5d8913dcc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T03:47:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T03:47:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fs4bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c9d2a6506fd6b67003ff94a906cb0c74a7eefa9f389d5fe3929fcb743b877f8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"
containerID\\\":\\\"cri-o://c9d2a6506fd6b67003ff94a906cb0c74a7eefa9f389d5fe3929fcb743b877f8a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T03:47:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T03:47:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fs4bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f5b9fd1e8570701b512508eb10490fa6c4e5fdc63b7fdb0c4810896a46be65a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f5b9fd1e8570701b512508eb10490fa6c4e5fdc63b7fdb0c4810896a46be65a7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T03:47:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T03:47:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveRead
Only\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fs4bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b944c8ab4e6079ae5a7a5c52f9b336fc71aa3273ed087499ffa1cc5396053660\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b944c8ab4e6079ae5a7a5c52f9b336fc71aa3273ed087499ffa1cc5396053660\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T03:47:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T03:47:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fs4bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aac07d8c3c396081f637968cd1c64525967cd198273719be0af7bee01d35fb39\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\"
:0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aac07d8c3c396081f637968cd1c64525967cd198273719be0af7bee01d35fb39\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T03:47:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T03:47:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fs4bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://76e43eb316b4b459d9232b72620b1892bf11a896f007d5c879381d8349aa1178\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://76e43eb316b4b459d9232b72620b1892bf11a896f007d5c879381d8349aa1178\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T03:47:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T03:47:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fs4bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\"
,\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T03:47:09Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-gjc5t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:48:08Z is after 2025-08-24T17:21:41Z" Jan 31 03:48:08 crc kubenswrapper[4827]: I0131 03:48:08.259297 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:08Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:48:08Z is after 2025-08-24T17:21:41Z" Jan 31 03:48:08 crc kubenswrapper[4827]: I0131 03:48:08.282063 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:08Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:48:08Z is after 2025-08-24T17:21:41Z" Jan 31 03:48:08 crc kubenswrapper[4827]: I0131 03:48:08.299697 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-w7v8l" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c10db775-d306-4f15-97dd-b1dfed7c89e5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cbc16c20edbd99491eea6e34116e6014fd587c40800c87bff9643e9c0cb6e185\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:47:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4ch9g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T03:47:09Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-w7v8l\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:48:08Z is after 2025-08-24T17:21:41Z" Jan 31 03:48:08 crc kubenswrapper[4827]: I0131 03:48:08.316449 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-l5njv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f81e67c9-6345-48e1-91e3-794421cb3fdd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e761c28f2d53c1a50d6ce7467012664d9d9d717222b6fc648c642ad97057113f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d7
93426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:47:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rblt7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b967e5b38d5efa02b8cda05cf383b6aa9f24b819d055b86d12afe5290d6d5847\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:47:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rblt7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T03:47:22Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-l5njv\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:48:08Z is after 2025-08-24T17:21:41Z" Jan 31 03:48:08 crc kubenswrapper[4827]: I0131 03:48:08.331189 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-2shng" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cf80ec31-1f83-4ed6-84e3-055cf9c88bff\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:24Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:24Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hjlkt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hjlkt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T03:47:24Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-2shng\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:48:08Z is after 2025-08-24T17:21:41Z" Jan 31 03:48:08 crc 
kubenswrapper[4827]: I0131 03:48:08.331789 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:48:08 crc kubenswrapper[4827]: I0131 03:48:08.331844 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:48:08 crc kubenswrapper[4827]: I0131 03:48:08.331868 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:48:08 crc kubenswrapper[4827]: I0131 03:48:08.331933 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:48:08 crc kubenswrapper[4827]: I0131 03:48:08.331970 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:48:08Z","lastTransitionTime":"2026-01-31T03:48:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 03:48:08 crc kubenswrapper[4827]: I0131 03:48:08.354863 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"64294dab-2843-4893-8b19-59f3ae404e02\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:46:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:46:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:46:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ff49646239dea28804978f1b3b721dcc9454572ff2301d0c92e146dacb2a3e5a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:46:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fb9a8fcde75
a532b1906ceea4704ddfecbffeb17456377272e5792f200ee67d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:46:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://73a7c7196fc77ac970ea3114b593db0c9bef943c760526721ab5492975884237\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:46:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ce197f7aa8a7bebdecdbd760e9029786331bd9497f724b0a57f57278a5b1b04c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:850
6ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:46:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T03:46:48Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:48:08Z is after 2025-08-24T17:21:41Z" Jan 31 03:48:08 crc kubenswrapper[4827]: I0131 03:48:08.375233 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:08Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:48:08Z is after 2025-08-24T17:21:41Z" Jan 31 03:48:08 crc kubenswrapper[4827]: I0131 03:48:08.397406 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a0fbdd57c20e0d5bda95b44f22f117b07306ef48d75d372d958559048474af9b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:47:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-31T03:48:08Z is after 2025-08-24T17:21:41Z" Jan 31 03:48:08 crc kubenswrapper[4827]: I0131 03:48:08.418476 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:11Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:11Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://955a3f09f24522656e31cfabe08613a5129fe3d1d0693756697e3ef0469e062a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:47:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\
\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:48:08Z is after 2025-08-24T17:21:41Z" Jan 31 03:48:08 crc kubenswrapper[4827]: I0131 03:48:08.432893 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://65119775a6ab2e8f81a0443d8093e0570c2c49a1b081b7b694780d07eba7ef4d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:47:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\
\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0fdf29d063e0940532d696d183771e56ce37b7b75f82548bf7ff5165ab20ccff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:47:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:48:08Z is after 2025-08-24T17:21:41Z" Jan 31 03:48:08 crc kubenswrapper[4827]: I0131 03:48:08.435872 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:48:08 crc kubenswrapper[4827]: I0131 03:48:08.435972 4827 kubelet_node_status.go:724] "Recording event message 
for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:48:08 crc kubenswrapper[4827]: I0131 03:48:08.436019 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:48:08 crc kubenswrapper[4827]: I0131 03:48:08.436038 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:48:08 crc kubenswrapper[4827]: I0131 03:48:08.436050 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:48:08Z","lastTransitionTime":"2026-01-31T03:48:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 03:48:08 crc kubenswrapper[4827]: I0131 03:48:08.463654 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hj2zw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"da9e7773-a24b-4e8d-b479-97e2594db0d4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:09Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:09Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bb9ced0bc1beedf1c5779410e90fa61227f5df3de7fa950ab69e1ef44b4a4918\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:47:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mt4z5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2485bd5d17713b9b9184c6b56b1dfda19a229476eabd904d3eda350c30892753\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:47:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mt4z5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://70200918240e4bc9d8f7fcb1d95be3721faa9394058a1a9671da44d9b7a915e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:47:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mt4z5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://32e69930c4476bd3a2f767f012fb4a2e00de7d46da936c6f0d31029994108e48\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d20994829
19d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:47:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mt4z5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ae8d3d61d2579c0afddd6c9728935c463fec8760a9fc830473c905de5f40c44\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:47:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mt4z5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ed1f4882b03bd25a6073ed3ddeb73d5a0da61c8a8af3ebfab5c90a83ce6af80f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cd
d47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:47:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mt4z5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d34c4275238ceb97d664f71e8315fbfdff91be63b7babf108440f8f9e387173a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d34c4275238ceb97d664f71e8315fbfdff91be63b7babf108440f8f9e387173a\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-31T03:47:37Z\\\",\\\"message\\\":\\\"o:140\\\\nI0131 03:47:37.117967 6495 handler.go:190] Sending *v1.EgressFirewall event 
handler 9 for removal\\\\nI0131 03:47:37.117998 6495 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0131 03:47:37.118006 6495 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0131 03:47:37.118020 6495 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0131 03:47:37.118032 6495 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0131 03:47:37.118044 6495 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0131 03:47:37.118054 6495 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0131 03:47:37.118258 6495 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0131 03:47:37.118439 6495 reflector.go:311] Stopping reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0131 03:47:37.118523 6495 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0131 03:47:37.118541 6495 reflector.go:311] Stopping reflector *v1.EgressFirewall (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140\\\\nI0131 03:47:37.118959 6495 factory.go:656] Stopping \\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T03:47:36Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-hj2zw_openshift-ovn-kubernetes(da9e7773-a24b-4e8d-b479-97e2594db0d4)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mt4z5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2a0cc10519a103d1d49acd3e239636f7ac9b3a9daa533ce1f485b26ccfffad3d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:47:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mt4z5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://96ea399662528dfc8c4bac4c2b710685cb5897e67739704548ca01353d72672c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://96ea399662528dfc8c
4bac4c2b710685cb5897e67739704548ca01353d72672c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T03:47:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T03:47:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mt4z5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T03:47:09Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-hj2zw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:48:08Z is after 2025-08-24T17:21:41Z" Jan 31 03:48:08 crc kubenswrapper[4827]: I0131 03:48:08.474485 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-cl9c5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"77746487-d08f-4da6-82a3-bc7d8845841a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5098b5f6a44c89953722806e420dc66eaad66f11faf1c5b526bfc8ebf87364d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:47:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5xwfq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T03:47:11Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-cl9c5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:48:08Z is after 2025-08-24T17:21:41Z" Jan 31 03:48:08 crc kubenswrapper[4827]: I0131 03:48:08.538125 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:48:08 crc kubenswrapper[4827]: I0131 03:48:08.538182 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:48:08 crc kubenswrapper[4827]: I0131 03:48:08.538200 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:48:08 crc kubenswrapper[4827]: I0131 03:48:08.538226 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:48:08 crc kubenswrapper[4827]: I0131 03:48:08.538244 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:48:08Z","lastTransitionTime":"2026-01-31T03:48:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 03:48:08 crc kubenswrapper[4827]: I0131 03:48:08.610514 4827 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-hj2zw_da9e7773-a24b-4e8d-b479-97e2594db0d4/ovnkube-controller/2.log" Jan 31 03:48:08 crc kubenswrapper[4827]: I0131 03:48:08.614089 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hj2zw" event={"ID":"da9e7773-a24b-4e8d-b479-97e2594db0d4","Type":"ContainerStarted","Data":"c4e52d1c48e767ecfcf769c9cb24d896cb851fb3a791c2f80e30c3746e971ad4"} Jan 31 03:48:08 crc kubenswrapper[4827]: I0131 03:48:08.619374 4827 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-hj2zw" Jan 31 03:48:08 crc kubenswrapper[4827]: I0131 03:48:08.638679 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-gjc5t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b5dbff7a-4ed0-4c17-bd01-1888199225b3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cf2e9dfa02cc1d2d5
e655654c4d731363a32dec4d67423160996c24260e04a79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:47:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fs4bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c615ec47f98d56124c11d46f6a3875b5718974234f752615411e14b5d8913dcc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c615ec47f98d56124c11d46f6a3875b5718974234f752615411e14b5d8913dcc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T03:47:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T03:47:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disab
led\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fs4bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c9d2a6506fd6b67003ff94a906cb0c74a7eefa9f389d5fe3929fcb743b877f8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c9d2a6506fd6b67003ff94a906cb0c74a7eefa9f389d5fe3929fcb743b877f8a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T03:47:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T03:47:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fs4bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f5b9fd1e8570701b512508eb10490fa6c4e5fdc63b7fdb0c4810896a46be65a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64
b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f5b9fd1e8570701b512508eb10490fa6c4e5fdc63b7fdb0c4810896a46be65a7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T03:47:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T03:47:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fs4bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b944c8ab4e6079ae5a7a5c52f9b336fc71aa3273ed087499ffa1cc5396053660\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b944c8ab4e6079ae5a7a5c52f9b336fc71aa3273ed087499ffa1cc5396053660\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T03:47:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T03:47:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"r
eadOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fs4bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aac07d8c3c396081f637968cd1c64525967cd198273719be0af7bee01d35fb39\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aac07d8c3c396081f637968cd1c64525967cd198273719be0af7bee01d35fb39\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T03:47:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T03:47:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fs4bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://76e43eb316b4b459d9232b72620b1892bf11a896f007d5c879381d8349aa1178\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"rea
dy\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://76e43eb316b4b459d9232b72620b1892bf11a896f007d5c879381d8349aa1178\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T03:47:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T03:47:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fs4bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T03:47:09Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-gjc5t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:48:08Z is after 2025-08-24T17:21:41Z" Jan 31 03:48:08 crc kubenswrapper[4827]: I0131 03:48:08.643297 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:48:08 crc kubenswrapper[4827]: I0131 03:48:08.643343 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:48:08 crc kubenswrapper[4827]: I0131 03:48:08.643359 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:48:08 crc kubenswrapper[4827]: I0131 03:48:08.643381 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:48:08 crc kubenswrapper[4827]: I0131 
03:48:08.643397 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:48:08Z","lastTransitionTime":"2026-01-31T03:48:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 03:48:08 crc kubenswrapper[4827]: I0131 03:48:08.660988 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4273172d-ec24-4540-85cb-efc58aed3421\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:46:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:46:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:46:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4146d39a1a819389ddaee9e11df66783f6b1b98ee5fb2a244d8ba9ff2ecc88f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\
\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:46:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6b86ba80bd25344b29df1636aa3d8cc4cecb0e5f9c0005b4896d3ee86cd80626\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:46:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8ae08b52cf4a0a6cc4589bbf0f0fbc8527ebac3455b90185928f6fa0b6c52874\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:46:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs
\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://125b31c980495c37837eedbbbd38b795fd6e568a429da5c2914799306e7b3a46\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cfc1fe8805b7b20bc4e718ed74d4c2f265e69ed3fdc328c0f1da81450bb78bde\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-31T03:47:02Z\\\",\\\"message\\\":\\\"W0131 03:46:51.337568 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0131 03:46:51.338014 1 crypto.go:601] Generating new CA for check-endpoints-signer@1769831211 cert, and key in /tmp/serving-cert-4099090867/serving-signer.crt, /tmp/serving-cert-4099090867/serving-signer.key\\\\nI0131 03:46:51.617433 1 observer_polling.go:159] Starting file observer\\\\nW0131 03:46:51.621205 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0131 03:46:51.621653 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0131 03:46:51.625463 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-4099090867/tls.crt::/tmp/serving-cert-4099090867/tls.key\\\\\\\"\\\\nF0131 03:47:02.085590 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake 
timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T03:46:51Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:47:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e3b2757893ba910c36404f09734ea47eee3ac5a8cfcc4c06ee4ed2ada48937b1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:46:51Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f40eed75b4eac164f95a4e1a719fe3e1da60abf85d533da313ceeb6f4fb07efd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f40eed75b4eac164f95a4e1a719fe3e1da60abf85d533da313ceeb6f4fb07efd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T03:46:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"s
tartedAt\\\":\\\"2026-01-31T03:46:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T03:46:48Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:48:08Z is after 2025-08-24T17:21:41Z" Jan 31 03:48:08 crc kubenswrapper[4827]: I0131 03:48:08.677383 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"350d4093-0635-4787-bdf3-9b099c5a772b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:46:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:46:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:46:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://19a6d75598a153bb0745bb06aa53961f13fa3ff1fd4addb84b3ecdc66d07ee4e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d3472024
3b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:46:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3d0070a036855a06f83caec5b6ee636f1a7a1e7f3246b779e0abbcf6aebf259a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:46:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7b5f30db796321372897f564fda3a031f0714999315931e2a77b21fde674f4fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01
-31T03:46:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6f17307ab85479e60722f1f8484fb415c3609584ed9a20ebf17677a86558dc07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6f17307ab85479e60722f1f8484fb415c3609584ed9a20ebf17677a86558dc07\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T03:46:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T03:46:49Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T03:46:48Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:48:08Z is after 2025-08-24T17:21:41Z" Jan 31 03:48:08 crc kubenswrapper[4827]: I0131 03:48:08.703373 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-jxh94" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e63dbb73-e1a2-4796-83c5-2a88e55566b5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://65bf7844256848a99420ff2a52724a942bd9ee07e0e587b818429ac544865232\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:47:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-94gkn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://00b8856a5712a7470f8bc58fd07644c58113fde5
a273faa89586f36381942c50\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:47:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-94gkn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T03:47:09Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-jxh94\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:48:08Z is after 2025-08-24T17:21:41Z" Jan 31 03:48:08 crc kubenswrapper[4827]: I0131 03:48:08.738466 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-q9q8q" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a696063c-4553-4032-8038-9900f09d4031\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b6b5307b128e69f814b56eb826859eb4b02d6645d1665c1f5d205492590135ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3b899def12de10c70999d453a62d09ed0eec7e6f059a55a7e3f8e4b3ef248205\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-31T03:47:56Z\\\",\\\"message\\\":\\\"2026-01-31T03:47:11+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_ed6be960-9431-4aca-83c0-58d0cb3d0931\\\\n2026-01-31T03:47:11+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_ed6be960-9431-4aca-83c0-58d0cb3d0931 to /host/opt/cni/bin/\\\\n2026-01-31T03:47:11Z [verbose] multus-daemon started\\\\n2026-01-31T03:47:11Z [verbose] 
Readiness Indicator file check\\\\n2026-01-31T03:47:56Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T03:47:10Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:47:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/
var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ckwx4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T03:47:09Z\\\"}}\" for pod \"openshift-multus\"/\"multus-q9q8q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:48:08Z is after 2025-08-24T17:21:41Z" Jan 31 03:48:08 crc kubenswrapper[4827]: I0131 03:48:08.745903 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:48:08 crc kubenswrapper[4827]: I0131 03:48:08.745934 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:48:08 crc kubenswrapper[4827]: I0131 03:48:08.745943 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:48:08 crc kubenswrapper[4827]: I0131 03:48:08.745958 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:48:08 crc kubenswrapper[4827]: I0131 03:48:08.745969 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:48:08Z","lastTransitionTime":"2026-01-31T03:48:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 03:48:08 crc kubenswrapper[4827]: I0131 03:48:08.766918 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:08Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:48:08Z is after 2025-08-24T17:21:41Z" Jan 31 03:48:08 crc kubenswrapper[4827]: I0131 03:48:08.793465 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:08Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:48:08Z is after 2025-08-24T17:21:41Z" Jan 31 03:48:08 crc kubenswrapper[4827]: I0131 03:48:08.804819 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-w7v8l" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c10db775-d306-4f15-97dd-b1dfed7c89e5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cbc16c20edbd99491eea6e34116e6014fd587c40800c87bff9643e9c0cb6e185\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:47:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4ch9g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T03:47:09Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-w7v8l\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:48:08Z is after 2025-08-24T17:21:41Z" Jan 31 03:48:08 crc kubenswrapper[4827]: I0131 03:48:08.814903 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-l5njv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f81e67c9-6345-48e1-91e3-794421cb3fdd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e761c28f2d53c1a50d6ce7467012664d9d9d717222b6fc648c642ad97057113f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d7
93426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:47:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rblt7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b967e5b38d5efa02b8cda05cf383b6aa9f24b819d055b86d12afe5290d6d5847\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:47:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rblt7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T03:47:22Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-l5njv\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:48:08Z is after 2025-08-24T17:21:41Z" Jan 31 03:48:08 crc kubenswrapper[4827]: I0131 03:48:08.825671 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-2shng" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cf80ec31-1f83-4ed6-84e3-055cf9c88bff\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:24Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:24Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hjlkt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hjlkt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T03:47:24Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-2shng\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:48:08Z is after 2025-08-24T17:21:41Z" Jan 31 03:48:08 crc 
kubenswrapper[4827]: I0131 03:48:08.837636 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"64294dab-2843-4893-8b19-59f3ae404e02\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:46:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:46:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:46:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ff49646239dea28804978f1b3b721dcc9454572ff2301d0c92e146dacb2a3e5a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:46:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fb9a8fcde75a532b1906ceea4704ddfecbffeb17456377272e5792f200ee67d5\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:46:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://73a7c7196fc77ac970ea3114b593db0c9bef943c760526721ab5492975884237\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:46:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ce197f7aa8a7bebdecdbd760e9029786331bd9497f724b0a57f57278a5b1b04c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17
ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:46:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T03:46:48Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:48:08Z is after 2025-08-24T17:21:41Z" Jan 31 03:48:08 crc kubenswrapper[4827]: I0131 03:48:08.848234 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:48:08 crc kubenswrapper[4827]: I0131 03:48:08.848257 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:48:08 crc kubenswrapper[4827]: I0131 03:48:08.848265 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:48:08 crc kubenswrapper[4827]: I0131 03:48:08.848279 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:48:08 crc kubenswrapper[4827]: I0131 03:48:08.848287 4827 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:48:08Z","lastTransitionTime":"2026-01-31T03:48:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 03:48:08 crc kubenswrapper[4827]: I0131 03:48:08.850184 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:08Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:48:08Z is after 2025-08-24T17:21:41Z" Jan 31 03:48:08 crc kubenswrapper[4827]: I0131 03:48:08.896388 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a0fbdd57c20e0d5bda95b44f22f117b07306ef48d75d372d958559048474af9b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:47:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-31T03:48:08Z is after 2025-08-24T17:21:41Z" Jan 31 03:48:08 crc kubenswrapper[4827]: I0131 03:48:08.915895 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:11Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:11Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://955a3f09f24522656e31cfabe08613a5129fe3d1d0693756697e3ef0469e062a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:47:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\
\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:48:08Z is after 2025-08-24T17:21:41Z" Jan 31 03:48:08 crc kubenswrapper[4827]: I0131 03:48:08.933948 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://65119775a6ab2e8f81a0443d8093e0570c2c49a1b081b7b694780d07eba7ef4d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:47:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\
\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0fdf29d063e0940532d696d183771e56ce37b7b75f82548bf7ff5165ab20ccff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:47:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:48:08Z is after 2025-08-24T17:21:41Z" Jan 31 03:48:08 crc kubenswrapper[4827]: I0131 03:48:08.952540 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:48:08 crc kubenswrapper[4827]: I0131 03:48:08.952575 4827 kubelet_node_status.go:724] "Recording event message 
for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:48:08 crc kubenswrapper[4827]: I0131 03:48:08.952584 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:48:08 crc kubenswrapper[4827]: I0131 03:48:08.952602 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:48:08 crc kubenswrapper[4827]: I0131 03:48:08.952612 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:48:08Z","lastTransitionTime":"2026-01-31T03:48:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 03:48:08 crc kubenswrapper[4827]: I0131 03:48:08.958764 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hj2zw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"da9e7773-a24b-4e8d-b479-97e2594db0d4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:09Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:09Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bb9ced0bc1beedf1c5779410e90fa61227f5df3de7fa950ab69e1ef44b4a4918\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:47:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mt4z5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2485bd5d17713b9b9184c6b56b1dfda19a229476eabd904d3eda350c30892753\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:47:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mt4z5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://70200918240e4bc9d8f7fcb1d95be3721faa9394058a1a9671da44d9b7a915e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:47:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mt4z5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://32e69930c4476bd3a2f767f012fb4a2e00de7d46da936c6f0d31029994108e48\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d20994829
19d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:47:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mt4z5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ae8d3d61d2579c0afddd6c9728935c463fec8760a9fc830473c905de5f40c44\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:47:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mt4z5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ed1f4882b03bd25a6073ed3ddeb73d5a0da61c8a8af3ebfab5c90a83ce6af80f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cd
d47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:47:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mt4z5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c4e52d1c48e767ecfcf769c9cb24d896cb851fb3a791c2f80e30c3746e971ad4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d34c4275238ceb97d664f71e8315fbfdff91be63b7babf108440f8f9e387173a\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-31T03:47:37Z\\\",\\\"message\\\":\\\"o:140\\\\nI0131 03:47:37.117967 6495 handler.go:190] Sending *v1.EgressFirewall event 
handler 9 for removal\\\\nI0131 03:47:37.117998 6495 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0131 03:47:37.118006 6495 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0131 03:47:37.118020 6495 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0131 03:47:37.118032 6495 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0131 03:47:37.118044 6495 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0131 03:47:37.118054 6495 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0131 03:47:37.118258 6495 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0131 03:47:37.118439 6495 reflector.go:311] Stopping reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0131 03:47:37.118523 6495 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0131 03:47:37.118541 6495 reflector.go:311] Stopping reflector *v1.EgressFirewall (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140\\\\nI0131 03:47:37.118959 6495 factory.go:656] Stopping 
\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T03:47:36Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:48:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":
\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mt4z5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2a0cc10519a103d1d49acd3e239636f7ac9b3a9daa533ce1f485b26ccfffad3d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:47:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mt4z5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://96ea399662528dfc8c4bac4c2b710685cb5897e67739704548ca01353d72672c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\"
:true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://96ea399662528dfc8c4bac4c2b710685cb5897e67739704548ca01353d72672c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T03:47:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T03:47:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mt4z5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T03:47:09Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-hj2zw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:48:08Z is after 2025-08-24T17:21:41Z" Jan 31 03:48:08 crc kubenswrapper[4827]: I0131 03:48:08.972718 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-cl9c5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"77746487-d08f-4da6-82a3-bc7d8845841a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5098b5f6a44c89953722806e420dc66eaad66f11faf1c5b526bfc8ebf87364d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:47:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5xwfq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T03:47:11Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-cl9c5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:48:08Z is after 2025-08-24T17:21:41Z" Jan 31 03:48:09 crc kubenswrapper[4827]: I0131 03:48:09.054734 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:48:09 crc kubenswrapper[4827]: I0131 03:48:09.054767 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:48:09 crc kubenswrapper[4827]: I0131 03:48:09.054779 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:48:09 crc kubenswrapper[4827]: I0131 03:48:09.054796 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:48:09 crc kubenswrapper[4827]: I0131 03:48:09.054807 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:48:09Z","lastTransitionTime":"2026-01-31T03:48:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 03:48:09 crc kubenswrapper[4827]: I0131 03:48:09.107290 4827 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-07 13:31:12.818804798 +0000 UTC Jan 31 03:48:09 crc kubenswrapper[4827]: I0131 03:48:09.109636 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 31 03:48:09 crc kubenswrapper[4827]: E0131 03:48:09.109765 4827 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 31 03:48:09 crc kubenswrapper[4827]: I0131 03:48:09.156926 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:48:09 crc kubenswrapper[4827]: I0131 03:48:09.156970 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:48:09 crc kubenswrapper[4827]: I0131 03:48:09.156978 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:48:09 crc kubenswrapper[4827]: I0131 03:48:09.156990 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:48:09 crc kubenswrapper[4827]: I0131 03:48:09.156999 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:48:09Z","lastTransitionTime":"2026-01-31T03:48:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false 
reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 03:48:09 crc kubenswrapper[4827]: I0131 03:48:09.259542 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:48:09 crc kubenswrapper[4827]: I0131 03:48:09.259588 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:48:09 crc kubenswrapper[4827]: I0131 03:48:09.259603 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:48:09 crc kubenswrapper[4827]: I0131 03:48:09.259624 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:48:09 crc kubenswrapper[4827]: I0131 03:48:09.259645 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:48:09Z","lastTransitionTime":"2026-01-31T03:48:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 03:48:09 crc kubenswrapper[4827]: I0131 03:48:09.362451 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:48:09 crc kubenswrapper[4827]: I0131 03:48:09.362521 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:48:09 crc kubenswrapper[4827]: I0131 03:48:09.362560 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:48:09 crc kubenswrapper[4827]: I0131 03:48:09.362600 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:48:09 crc kubenswrapper[4827]: I0131 03:48:09.362624 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:48:09Z","lastTransitionTime":"2026-01-31T03:48:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 03:48:09 crc kubenswrapper[4827]: I0131 03:48:09.465612 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:48:09 crc kubenswrapper[4827]: I0131 03:48:09.465687 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:48:09 crc kubenswrapper[4827]: I0131 03:48:09.465715 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:48:09 crc kubenswrapper[4827]: I0131 03:48:09.465746 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:48:09 crc kubenswrapper[4827]: I0131 03:48:09.465767 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:48:09Z","lastTransitionTime":"2026-01-31T03:48:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 03:48:09 crc kubenswrapper[4827]: I0131 03:48:09.568419 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:48:09 crc kubenswrapper[4827]: I0131 03:48:09.568479 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:48:09 crc kubenswrapper[4827]: I0131 03:48:09.568501 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:48:09 crc kubenswrapper[4827]: I0131 03:48:09.568524 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:48:09 crc kubenswrapper[4827]: I0131 03:48:09.568541 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:48:09Z","lastTransitionTime":"2026-01-31T03:48:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 03:48:09 crc kubenswrapper[4827]: I0131 03:48:09.620597 4827 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-hj2zw_da9e7773-a24b-4e8d-b479-97e2594db0d4/ovnkube-controller/3.log" Jan 31 03:48:09 crc kubenswrapper[4827]: I0131 03:48:09.621580 4827 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-hj2zw_da9e7773-a24b-4e8d-b479-97e2594db0d4/ovnkube-controller/2.log" Jan 31 03:48:09 crc kubenswrapper[4827]: I0131 03:48:09.625861 4827 generic.go:334] "Generic (PLEG): container finished" podID="da9e7773-a24b-4e8d-b479-97e2594db0d4" containerID="c4e52d1c48e767ecfcf769c9cb24d896cb851fb3a791c2f80e30c3746e971ad4" exitCode=1 Jan 31 03:48:09 crc kubenswrapper[4827]: I0131 03:48:09.625936 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hj2zw" event={"ID":"da9e7773-a24b-4e8d-b479-97e2594db0d4","Type":"ContainerDied","Data":"c4e52d1c48e767ecfcf769c9cb24d896cb851fb3a791c2f80e30c3746e971ad4"} Jan 31 03:48:09 crc kubenswrapper[4827]: I0131 03:48:09.625990 4827 scope.go:117] "RemoveContainer" containerID="d34c4275238ceb97d664f71e8315fbfdff91be63b7babf108440f8f9e387173a" Jan 31 03:48:09 crc kubenswrapper[4827]: I0131 03:48:09.627042 4827 scope.go:117] "RemoveContainer" containerID="c4e52d1c48e767ecfcf769c9cb24d896cb851fb3a791c2f80e30c3746e971ad4" Jan 31 03:48:09 crc kubenswrapper[4827]: E0131 03:48:09.627428 4827 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-hj2zw_openshift-ovn-kubernetes(da9e7773-a24b-4e8d-b479-97e2594db0d4)\"" pod="openshift-ovn-kubernetes/ovnkube-node-hj2zw" podUID="da9e7773-a24b-4e8d-b479-97e2594db0d4" Jan 31 03:48:09 crc kubenswrapper[4827]: I0131 03:48:09.647701 4827 status_manager.go:875] "Failed to update 
status for pod" pod="openshift-machine-config-operator/machine-config-daemon-jxh94" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e63dbb73-e1a2-4796-83c5-2a88e55566b5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://65bf7844256848a99420ff2a52724a942bd9ee07e0e587b818429ac544865232\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:47:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-94gkn\\\",\\\"readOnly\\\":true,\\\"rec
ursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://00b8856a5712a7470f8bc58fd07644c58113fde5a273faa89586f36381942c50\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:47:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-94gkn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T03:47:09Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-jxh94\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:48:09Z is after 2025-08-24T17:21:41Z" Jan 31 03:48:09 crc kubenswrapper[4827]: I0131 03:48:09.669782 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-q9q8q" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a696063c-4553-4032-8038-9900f09d4031\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b6b5307b128e69f814b56eb826859eb4b02d6645d1665c1f5d205492590135ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3b899def12de10c70999d453a62d09ed0eec7e6f059a55a7e3f8e4b3ef248205\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-31T03:47:56Z\\\",\\\"message\\\":\\\"2026-01-31T03:47:11+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_ed6be960-9431-4aca-83c0-58d0cb3d0931\\\\n2026-01-31T03:47:11+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_ed6be960-9431-4aca-83c0-58d0cb3d0931 to /host/opt/cni/bin/\\\\n2026-01-31T03:47:11Z [verbose] multus-daemon started\\\\n2026-01-31T03:47:11Z [verbose] 
Readiness Indicator file check\\\\n2026-01-31T03:47:56Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T03:47:10Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:47:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/
var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ckwx4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T03:47:09Z\\\"}}\" for pod \"openshift-multus\"/\"multus-q9q8q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:48:09Z is after 2025-08-24T17:21:41Z" Jan 31 03:48:09 crc kubenswrapper[4827]: I0131 03:48:09.672271 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:48:09 crc kubenswrapper[4827]: I0131 03:48:09.672338 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:48:09 crc kubenswrapper[4827]: I0131 03:48:09.672357 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:48:09 crc kubenswrapper[4827]: I0131 03:48:09.672390 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:48:09 crc kubenswrapper[4827]: I0131 03:48:09.672411 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:48:09Z","lastTransitionTime":"2026-01-31T03:48:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 03:48:09 crc kubenswrapper[4827]: I0131 03:48:09.695150 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-gjc5t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b5dbff7a-4ed0-4c17-bd01-1888199225b3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cf2e9dfa02cc1d2d5e655654c4d731363a32dec4d67423160996c24260e04a79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:47:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fs4bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOn
ly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c615ec47f98d56124c11d46f6a3875b5718974234f752615411e14b5d8913dcc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c615ec47f98d56124c11d46f6a3875b5718974234f752615411e14b5d8913dcc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T03:47:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T03:47:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fs4bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c9d2a6506fd6b67003ff94a906cb0c74a7eefa9f389d5fe3929fcb743b877f8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"
containerID\\\":\\\"cri-o://c9d2a6506fd6b67003ff94a906cb0c74a7eefa9f389d5fe3929fcb743b877f8a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T03:47:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T03:47:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fs4bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f5b9fd1e8570701b512508eb10490fa6c4e5fdc63b7fdb0c4810896a46be65a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f5b9fd1e8570701b512508eb10490fa6c4e5fdc63b7fdb0c4810896a46be65a7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T03:47:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T03:47:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveRead
Only\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fs4bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b944c8ab4e6079ae5a7a5c52f9b336fc71aa3273ed087499ffa1cc5396053660\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b944c8ab4e6079ae5a7a5c52f9b336fc71aa3273ed087499ffa1cc5396053660\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T03:47:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T03:47:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fs4bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aac07d8c3c396081f637968cd1c64525967cd198273719be0af7bee01d35fb39\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\"
:0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aac07d8c3c396081f637968cd1c64525967cd198273719be0af7bee01d35fb39\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T03:47:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T03:47:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fs4bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://76e43eb316b4b459d9232b72620b1892bf11a896f007d5c879381d8349aa1178\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://76e43eb316b4b459d9232b72620b1892bf11a896f007d5c879381d8349aa1178\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T03:47:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T03:47:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fs4bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\"
,\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T03:47:09Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-gjc5t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:48:09Z is after 2025-08-24T17:21:41Z" Jan 31 03:48:09 crc kubenswrapper[4827]: I0131 03:48:09.718836 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4273172d-ec24-4540-85cb-efc58aed3421\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:46:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:46:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:46:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4146d39a1a819389ddaee9e11df66783f6b1b98ee5fb2a244d8ba9ff2ecc88f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\
\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:46:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6b86ba80bd25344b29df1636aa3d8cc4cecb0e5f9c0005b4896d3ee86cd80626\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:46:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8ae08b52cf4a0a6cc4589bbf0f0fbc8527ebac3455b90185928f6fa0b6c52874\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:46:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"re
source-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://125b31c980495c37837eedbbbd38b795fd6e568a429da5c2914799306e7b3a46\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cfc1fe8805b7b20bc4e718ed74d4c2f265e69ed3fdc328c0f1da81450bb78bde\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-31T03:47:02Z\\\",\\\"message\\\":\\\"W0131 03:46:51.337568 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0131 03:46:51.338014 1 crypto.go:601] Generating new CA for check-endpoints-signer@1769831211 cert, and key in /tmp/serving-cert-4099090867/serving-signer.crt, /tmp/serving-cert-4099090867/serving-signer.key\\\\nI0131 03:46:51.617433 1 observer_polling.go:159] Starting file observer\\\\nW0131 03:46:51.621205 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0131 03:46:51.621653 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0131 03:46:51.625463 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-4099090867/tls.crt::/tmp/serving-cert-4099090867/tls.key\\\\\\\"\\\\nF0131 03:47:02.085590 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T03:46:51Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:47:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e3b2757893ba910c36404f09734ea47eee3ac5a8cfcc4c06ee4ed2ada48937b1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:46:51Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f40eed75b4eac164f95a4e1a719fe3e1da60abf85d533da313ceeb6f4fb07efd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f40eed75b4eac164f95a4e1a719fe3e1da
60abf85d533da313ceeb6f4fb07efd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T03:46:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T03:46:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T03:46:48Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:48:09Z is after 2025-08-24T17:21:41Z" Jan 31 03:48:09 crc kubenswrapper[4827]: I0131 03:48:09.738172 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"350d4093-0635-4787-bdf3-9b099c5a772b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:46:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:46:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:46:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://19a6d75598a153bb0745bb06aa53961f13fa3ff1fd4addb84b3ecdc66d07ee4e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:46:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3d0070a036855a06f83caec5b6ee636f1a7a1e7f3246b779e0abbcf6aebf259a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:46:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7b5f30db796321372897f564fda3a031f0714999315931e2a77b21fde674f4fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:46:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6f17307ab85479e60722f1f8484fb415c3609584ed9a20ebf17677a86558dc07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://6f17307ab85479e60722f1f8484fb415c3609584ed9a20ebf17677a86558dc07\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T03:46:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T03:46:49Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T03:46:48Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:48:09Z is after 2025-08-24T17:21:41Z" Jan 31 03:48:09 crc kubenswrapper[4827]: I0131 03:48:09.756491 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-l5njv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f81e67c9-6345-48e1-91e3-794421cb3fdd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e761c28f2d53c1a50d6ce7467012664d9d9d717222b6fc648c642ad97057113f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:47:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rblt7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b967e5b38d5efa02b8cda05cf383b6aa9f24b
819d055b86d12afe5290d6d5847\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:47:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rblt7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T03:47:22Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-l5njv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:48:09Z is after 2025-08-24T17:21:41Z" Jan 31 03:48:09 crc kubenswrapper[4827]: I0131 03:48:09.773646 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-2shng" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cf80ec31-1f83-4ed6-84e3-055cf9c88bff\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:24Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:24Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hjlkt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hjlkt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T03:47:24Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-2shng\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:48:09Z is after 2025-08-24T17:21:41Z" Jan 31 03:48:09 crc 
kubenswrapper[4827]: I0131 03:48:09.775803 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:48:09 crc kubenswrapper[4827]: I0131 03:48:09.775857 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:48:09 crc kubenswrapper[4827]: I0131 03:48:09.775874 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:48:09 crc kubenswrapper[4827]: I0131 03:48:09.775920 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:48:09 crc kubenswrapper[4827]: I0131 03:48:09.775938 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:48:09Z","lastTransitionTime":"2026-01-31T03:48:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 03:48:09 crc kubenswrapper[4827]: I0131 03:48:09.793495 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:08Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:48:09Z is after 2025-08-24T17:21:41Z" Jan 31 03:48:09 crc kubenswrapper[4827]: I0131 03:48:09.815291 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:08Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:48:09Z is after 2025-08-24T17:21:41Z" Jan 31 03:48:09 crc kubenswrapper[4827]: I0131 03:48:09.832823 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-w7v8l" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c10db775-d306-4f15-97dd-b1dfed7c89e5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cbc16c20edbd99491eea6e34116e6014fd587c40800c87bff9643e9c0cb6e185\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:47:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4ch9g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T03:47:09Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-w7v8l\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:48:09Z is after 2025-08-24T17:21:41Z" Jan 31 03:48:09 crc kubenswrapper[4827]: I0131 03:48:09.849997 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:11Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:11Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://955a3f09f24522656e31cfabe08613a5129fe3d1d0693756697e3ef0469e062a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:47:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-al
erter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:48:09Z is after 2025-08-24T17:21:41Z" Jan 31 03:48:09 crc kubenswrapper[4827]: I0131 03:48:09.867663 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"64294dab-2843-4893-8b19-59f3ae404e02\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:46:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:46:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:46:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ff49646239dea28804978f1b3b721dcc9454572ff2301d0c92e146dacb2a3e5a\\\",\\\"image\\\":\\\"quay.io/opensh
ift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:46:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fb9a8fcde75a532b1906ceea4704ddfecbffeb17456377272e5792f200ee67d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:46:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://73a7c7196fc77ac970ea3114b593db0c9bef943c760526721ab5492975884237\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\
\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:46:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ce197f7aa8a7bebdecdbd760e9029786331bd9497f724b0a57f57278a5b1b04c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:46:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T03:46:48Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:48:09Z is after 2025-08-24T17:21:41Z" Jan 31 03:48:09 crc kubenswrapper[4827]: I0131 03:48:09.878610 4827 kubelet_node_status.go:724] "Recording 
event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:48:09 crc kubenswrapper[4827]: I0131 03:48:09.878669 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:48:09 crc kubenswrapper[4827]: I0131 03:48:09.878691 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:48:09 crc kubenswrapper[4827]: I0131 03:48:09.878720 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:48:09 crc kubenswrapper[4827]: I0131 03:48:09.878741 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:48:09Z","lastTransitionTime":"2026-01-31T03:48:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 03:48:09 crc kubenswrapper[4827]: I0131 03:48:09.885698 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:08Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:48:09Z is after 2025-08-24T17:21:41Z" Jan 31 03:48:09 crc kubenswrapper[4827]: I0131 03:48:09.903799 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a0fbdd57c20e0d5bda95b44f22f117b07306ef48d75d372d958559048474af9b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:47:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-31T03:48:09Z is after 2025-08-24T17:21:41Z" Jan 31 03:48:09 crc kubenswrapper[4827]: I0131 03:48:09.921115 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://65119775a6ab2e8f81a0443d8093e0570c2c49a1b081b7b694780d07eba7ef4d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:47:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"c
ri-o://0fdf29d063e0940532d696d183771e56ce37b7b75f82548bf7ff5165ab20ccff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:47:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:48:09Z is after 2025-08-24T17:21:41Z" Jan 31 03:48:09 crc kubenswrapper[4827]: I0131 03:48:09.952613 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hj2zw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"da9e7773-a24b-4e8d-b479-97e2594db0d4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:09Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:09Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bb9ced0bc1beedf1c5779410e90fa61227f5df3de7fa950ab69e1ef44b4a4918\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:47:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mt4z5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2485bd5d17713b9b9184c6b56b1dfda19a229476eabd904d3eda350c30892753\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:47:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mt4z5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://70200918240e4bc9d8f7fcb1d95be3721faa9394058a1a9671da44d9b7a915e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:47:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mt4z5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://32e69930c4476bd3a2f767f012fb4a2e00de7d46da936c6f0d31029994108e48\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:47:12Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mt4z5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ae8d3d61d2579c0afddd6c9728935c463fec8760a9fc830473c905de5f40c44\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:47:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mt4z5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ed1f4882b03bd25a6073ed3ddeb73d5a0da61c8a8af3ebfab5c90a83ce6af80f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:47:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mt4z5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c4e52d1c48e767ecfcf769c9cb24d896cb851fb3a791c2f80e30c3746e971ad4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d34c4275238ceb97d664f71e8315fbfdff91be63b7babf108440f8f9e387173a\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-31T03:47:37Z\\\",\\\"message\\\":\\\"o:140\\\\nI0131 03:47:37.117967 6495 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0131 03:47:37.117998 6495 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0131 03:47:37.118006 6495 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0131 03:47:37.118020 6495 
reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0131 03:47:37.118032 6495 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0131 03:47:37.118044 6495 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0131 03:47:37.118054 6495 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0131 03:47:37.118258 6495 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0131 03:47:37.118439 6495 reflector.go:311] Stopping reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0131 03:47:37.118523 6495 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0131 03:47:37.118541 6495 reflector.go:311] Stopping reflector *v1.EgressFirewall (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140\\\\nI0131 03:47:37.118959 6495 factory.go:656] Stopping \\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T03:47:36Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c4e52d1c48e767ecfcf769c9cb24d896cb851fb3a791c2f80e30c3746e971ad4\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-31T03:48:09Z\\\",\\\"message\\\":\\\"ns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:960d98b2-dc64-4e93-a4b6-9b19847af71e}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {c02bd945-d57b-49ff-9cd3-202ed3574b26}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:} {Op:update Table:NAT Row:map[external_ip:192.168.126.11 logical_ip:10.217.0.59 options:{GoMap:map[stateless:false]} type:snat] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == 
{dce28c51-c9f1-478b-97c8-7e209d6e7cbe}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:} {Op:mutate Table:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:nat Mutator:insert Value:{GoSet:[{GoUUID:dce28c51-c9f1-478b-97c8-7e209d6e7cbe}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {e3c4661a-36a6-47f0-a6c0-a4ee741f2224}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0131 03:48:09.179127 6900 model_client.go:398] Mutate operations generated as: [{Op:mutate Table:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:nat Mutator:insert Value:{GoSet:[{GoUUID:73135118-cf1b-4568-bd31-2f50308bf69d}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {e3c4661a-36a6-47f0-a6c0-a4ee741f2224}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0131 03:48:09.179620 6900 model_client.go:382] Update operations generated as: [{Op:update 
Table\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T03:48:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\
\"kube-api-access-mt4z5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2a0cc10519a103d1d49acd3e239636f7ac9b3a9daa533ce1f485b26ccfffad3d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:47:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mt4z5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://96ea399662528dfc8c4bac4c2b710685cb5897e67739704548ca01353d72672c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://96ea399662528dfc8c4bac4c2b710685cb5897e67739704548ca01353d7267
2c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T03:47:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T03:47:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mt4z5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T03:47:09Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-hj2zw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:48:09Z is after 2025-08-24T17:21:41Z" Jan 31 03:48:09 crc kubenswrapper[4827]: I0131 03:48:09.968681 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-cl9c5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"77746487-d08f-4da6-82a3-bc7d8845841a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5098b5f6a44c89953722806e420dc66eaad66f11faf1c5b526bfc8ebf87364d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:47:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5xwfq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T03:47:11Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-cl9c5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:48:09Z is after 2025-08-24T17:21:41Z" Jan 31 03:48:09 crc kubenswrapper[4827]: I0131 03:48:09.981593 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:48:09 crc kubenswrapper[4827]: I0131 03:48:09.981619 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:48:09 crc kubenswrapper[4827]: I0131 03:48:09.981628 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:48:09 crc kubenswrapper[4827]: I0131 03:48:09.981640 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:48:09 crc kubenswrapper[4827]: I0131 03:48:09.981649 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:48:09Z","lastTransitionTime":"2026-01-31T03:48:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 03:48:10 crc kubenswrapper[4827]: I0131 03:48:10.085093 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:48:10 crc kubenswrapper[4827]: I0131 03:48:10.085174 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:48:10 crc kubenswrapper[4827]: I0131 03:48:10.085203 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:48:10 crc kubenswrapper[4827]: I0131 03:48:10.085234 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:48:10 crc kubenswrapper[4827]: I0131 03:48:10.085256 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:48:10Z","lastTransitionTime":"2026-01-31T03:48:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 03:48:10 crc kubenswrapper[4827]: I0131 03:48:10.108347 4827 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-01 03:30:22.280132623 +0000 UTC Jan 31 03:48:10 crc kubenswrapper[4827]: I0131 03:48:10.109708 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 31 03:48:10 crc kubenswrapper[4827]: I0131 03:48:10.109769 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 03:48:10 crc kubenswrapper[4827]: I0131 03:48:10.109771 4827 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-2shng" Jan 31 03:48:10 crc kubenswrapper[4827]: E0131 03:48:10.109947 4827 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 31 03:48:10 crc kubenswrapper[4827]: E0131 03:48:10.110073 4827 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 31 03:48:10 crc kubenswrapper[4827]: E0131 03:48:10.110220 4827 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-2shng" podUID="cf80ec31-1f83-4ed6-84e3-055cf9c88bff" Jan 31 03:48:10 crc kubenswrapper[4827]: I0131 03:48:10.188253 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:48:10 crc kubenswrapper[4827]: I0131 03:48:10.188306 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:48:10 crc kubenswrapper[4827]: I0131 03:48:10.188325 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:48:10 crc kubenswrapper[4827]: I0131 03:48:10.188348 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:48:10 crc kubenswrapper[4827]: I0131 03:48:10.188368 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:48:10Z","lastTransitionTime":"2026-01-31T03:48:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 03:48:10 crc kubenswrapper[4827]: I0131 03:48:10.291605 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:48:10 crc kubenswrapper[4827]: I0131 03:48:10.291667 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:48:10 crc kubenswrapper[4827]: I0131 03:48:10.291690 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:48:10 crc kubenswrapper[4827]: I0131 03:48:10.291719 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:48:10 crc kubenswrapper[4827]: I0131 03:48:10.291741 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:48:10Z","lastTransitionTime":"2026-01-31T03:48:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 03:48:10 crc kubenswrapper[4827]: I0131 03:48:10.395093 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:48:10 crc kubenswrapper[4827]: I0131 03:48:10.395164 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:48:10 crc kubenswrapper[4827]: I0131 03:48:10.395182 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:48:10 crc kubenswrapper[4827]: I0131 03:48:10.395206 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:48:10 crc kubenswrapper[4827]: I0131 03:48:10.395225 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:48:10Z","lastTransitionTime":"2026-01-31T03:48:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 03:48:10 crc kubenswrapper[4827]: I0131 03:48:10.498547 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:48:10 crc kubenswrapper[4827]: I0131 03:48:10.498609 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:48:10 crc kubenswrapper[4827]: I0131 03:48:10.498631 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:48:10 crc kubenswrapper[4827]: I0131 03:48:10.498664 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:48:10 crc kubenswrapper[4827]: I0131 03:48:10.498685 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:48:10Z","lastTransitionTime":"2026-01-31T03:48:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 03:48:10 crc kubenswrapper[4827]: I0131 03:48:10.602455 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:48:10 crc kubenswrapper[4827]: I0131 03:48:10.602519 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:48:10 crc kubenswrapper[4827]: I0131 03:48:10.602541 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:48:10 crc kubenswrapper[4827]: I0131 03:48:10.602571 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:48:10 crc kubenswrapper[4827]: I0131 03:48:10.602593 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:48:10Z","lastTransitionTime":"2026-01-31T03:48:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 03:48:10 crc kubenswrapper[4827]: I0131 03:48:10.632133 4827 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-hj2zw_da9e7773-a24b-4e8d-b479-97e2594db0d4/ovnkube-controller/3.log" Jan 31 03:48:10 crc kubenswrapper[4827]: I0131 03:48:10.637487 4827 scope.go:117] "RemoveContainer" containerID="c4e52d1c48e767ecfcf769c9cb24d896cb851fb3a791c2f80e30c3746e971ad4" Jan 31 03:48:10 crc kubenswrapper[4827]: E0131 03:48:10.637786 4827 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-hj2zw_openshift-ovn-kubernetes(da9e7773-a24b-4e8d-b479-97e2594db0d4)\"" pod="openshift-ovn-kubernetes/ovnkube-node-hj2zw" podUID="da9e7773-a24b-4e8d-b479-97e2594db0d4" Jan 31 03:48:10 crc kubenswrapper[4827]: I0131 03:48:10.661045 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://65119775a6ab2e8f81a0443d8093e0570c2c49a1b081b7b694780d07eba7ef4d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:47:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0fdf29d063e0940532d696d183771e56ce37b7b75f82548bf7ff5165ab20ccff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:47:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:48:10Z is after 2025-08-24T17:21:41Z" Jan 31 03:48:10 crc kubenswrapper[4827]: I0131 03:48:10.692694 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hj2zw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"da9e7773-a24b-4e8d-b479-97e2594db0d4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:09Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:09Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bb9ced0bc1beedf1c5779410e90fa61227f5df3de7fa950ab69e1ef44b4a4918\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:47:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mt4z5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2485bd5d17713b9b9184c6b56b1dfda19a229476eabd904d3eda350c30892753\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:47:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mt4z5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://70200918240e4bc9d8f7fcb1d95be3721faa9394058a1a9671da44d9b7a915e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:47:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mt4z5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://32e69930c4476bd3a2f767f012fb4a2e00de7d46da936c6f0d31029994108e48\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:47:12Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mt4z5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ae8d3d61d2579c0afddd6c9728935c463fec8760a9fc830473c905de5f40c44\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:47:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mt4z5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ed1f4882b03bd25a6073ed3ddeb73d5a0da61c8a8af3ebfab5c90a83ce6af80f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:47:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mt4z5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c4e52d1c48e767ecfcf769c9cb24d896cb851fb3a791c2f80e30c3746e971ad4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c4e52d1c48e767ecfcf769c9cb24d896cb851fb3a791c2f80e30c3746e971ad4\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-31T03:48:09Z\\\",\\\"message\\\":\\\"ns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:960d98b2-dc64-4e93-a4b6-9b19847af71e}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {c02bd945-d57b-49ff-9cd3-202ed3574b26}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:} {Op:update Table:NAT 
Row:map[external_ip:192.168.126.11 logical_ip:10.217.0.59 options:{GoMap:map[stateless:false]} type:snat] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {dce28c51-c9f1-478b-97c8-7e209d6e7cbe}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:} {Op:mutate Table:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:nat Mutator:insert Value:{GoSet:[{GoUUID:dce28c51-c9f1-478b-97c8-7e209d6e7cbe}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {e3c4661a-36a6-47f0-a6c0-a4ee741f2224}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0131 03:48:09.179127 6900 model_client.go:398] Mutate operations generated as: [{Op:mutate Table:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:nat Mutator:insert Value:{GoSet:[{GoUUID:73135118-cf1b-4568-bd31-2f50308bf69d}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {e3c4661a-36a6-47f0-a6c0-a4ee741f2224}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0131 03:48:09.179620 6900 model_client.go:382] Update operations generated as: [{Op:update Table\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T03:48:08Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=ovnkube-controller 
pod=ovnkube-node-hj2zw_openshift-ovn-kubernetes(da9e7773-a24b-4e8d-b479-97e2594db0d4)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mt4z5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2a0cc10519a103d1d49acd3e239636f7ac9b3a9daa533ce1f485b26ccfffad3d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:47:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mt4z5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://96ea399662528dfc8c4bac4c2b710685cb5897e67739704548ca01353d72672c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://96ea399662528dfc8c
4bac4c2b710685cb5897e67739704548ca01353d72672c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T03:47:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T03:47:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mt4z5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T03:47:09Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-hj2zw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:48:10Z is after 2025-08-24T17:21:41Z" Jan 31 03:48:10 crc kubenswrapper[4827]: I0131 03:48:10.706068 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:48:10 crc kubenswrapper[4827]: I0131 03:48:10.706121 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:48:10 crc kubenswrapper[4827]: I0131 03:48:10.706137 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:48:10 crc kubenswrapper[4827]: I0131 03:48:10.706160 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:48:10 crc kubenswrapper[4827]: I0131 03:48:10.706179 4827 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:48:10Z","lastTransitionTime":"2026-01-31T03:48:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 03:48:10 crc kubenswrapper[4827]: I0131 03:48:10.711988 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-cl9c5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"77746487-d08f-4da6-82a3-bc7d8845841a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5098b5f6a44c89953722806e420dc66eaad66f11faf1c5b526bfc8ebf87364d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":tr
ue,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:47:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5xwfq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T03:47:11Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-cl9c5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:48:10Z is after 2025-08-24T17:21:41Z" Jan 31 03:48:10 crc kubenswrapper[4827]: I0131 03:48:10.729464 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-jxh94" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e63dbb73-e1a2-4796-83c5-2a88e55566b5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://65bf7844256848a99420ff2a52724a942bd9ee07e0e587b818429ac544865232\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:47:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-94gkn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://00b8856a5712a7470f8bc58fd07644c58113fde5
a273faa89586f36381942c50\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:47:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-94gkn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T03:47:09Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-jxh94\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:48:10Z is after 2025-08-24T17:21:41Z" Jan 31 03:48:10 crc kubenswrapper[4827]: I0131 03:48:10.764225 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-q9q8q" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a696063c-4553-4032-8038-9900f09d4031\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b6b5307b128e69f814b56eb826859eb4b02d6645d1665c1f5d205492590135ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3b899def12de10c70999d453a62d09ed0eec7e6f059a55a7e3f8e4b3ef248205\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-31T03:47:56Z\\\",\\\"message\\\":\\\"2026-01-31T03:47:11+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_ed6be960-9431-4aca-83c0-58d0cb3d0931\\\\n2026-01-31T03:47:11+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_ed6be960-9431-4aca-83c0-58d0cb3d0931 to /host/opt/cni/bin/\\\\n2026-01-31T03:47:11Z [verbose] multus-daemon started\\\\n2026-01-31T03:47:11Z [verbose] 
Readiness Indicator file check\\\\n2026-01-31T03:47:56Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T03:47:10Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:47:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/
var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ckwx4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T03:47:09Z\\\"}}\" for pod \"openshift-multus\"/\"multus-q9q8q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:48:10Z is after 2025-08-24T17:21:41Z" Jan 31 03:48:10 crc kubenswrapper[4827]: I0131 03:48:10.786746 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-gjc5t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b5dbff7a-4ed0-4c17-bd01-1888199225b3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cf2
e9dfa02cc1d2d5e655654c4d731363a32dec4d67423160996c24260e04a79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:47:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fs4bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c615ec47f98d56124c11d46f6a3875b5718974234f752615411e14b5d8913dcc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c615ec47f98d56124c11d46f6a3875b5718974234f752615411e14b5d8913dcc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T03:47:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T03:47:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly
\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fs4bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c9d2a6506fd6b67003ff94a906cb0c74a7eefa9f389d5fe3929fcb743b877f8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c9d2a6506fd6b67003ff94a906cb0c74a7eefa9f389d5fe3929fcb743b877f8a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T03:47:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T03:47:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fs4bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f5b9fd1e8570701b512508eb10490fa6c4e5fdc63b7fdb0c4810896a46be65a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203b
b2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f5b9fd1e8570701b512508eb10490fa6c4e5fdc63b7fdb0c4810896a46be65a7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T03:47:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T03:47:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fs4bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b944c8ab4e6079ae5a7a5c52f9b336fc71aa3273ed087499ffa1cc5396053660\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b944c8ab4e6079ae5a7a5c52f9b336fc71aa3273ed087499ffa1cc5396053660\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T03:47:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T03:47:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-rel
ease\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fs4bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aac07d8c3c396081f637968cd1c64525967cd198273719be0af7bee01d35fb39\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aac07d8c3c396081f637968cd1c64525967cd198273719be0af7bee01d35fb39\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T03:47:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T03:47:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fs4bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://76e43eb316b4b459d9232b72620b1892bf11a896f007d5c879381d8349aa1178\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-c
ni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://76e43eb316b4b459d9232b72620b1892bf11a896f007d5c879381d8349aa1178\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T03:47:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T03:47:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fs4bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T03:47:09Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-gjc5t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:48:10Z is after 2025-08-24T17:21:41Z" Jan 31 03:48:10 crc kubenswrapper[4827]: I0131 03:48:10.809047 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:48:10 crc kubenswrapper[4827]: I0131 03:48:10.809127 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:48:10 crc kubenswrapper[4827]: I0131 03:48:10.809147 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:48:10 crc kubenswrapper[4827]: I0131 03:48:10.809180 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:48:10 crc kubenswrapper[4827]: 
I0131 03:48:10.809203 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:48:10Z","lastTransitionTime":"2026-01-31T03:48:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 03:48:10 crc kubenswrapper[4827]: I0131 03:48:10.810572 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4273172d-ec24-4540-85cb-efc58aed3421\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:46:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:46:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:46:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4146d39a1a819389ddaee9e11df66783f6b1b98ee5fb2a244d8ba9ff2ecc88f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"r
eady\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:46:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6b86ba80bd25344b29df1636aa3d8cc4cecb0e5f9c0005b4896d3ee86cd80626\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:46:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8ae08b52cf4a0a6cc4589bbf0f0fbc8527ebac3455b90185928f6fa0b6c52874\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:46:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod
-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://125b31c980495c37837eedbbbd38b795fd6e568a429da5c2914799306e7b3a46\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cfc1fe8805b7b20bc4e718ed74d4c2f265e69ed3fdc328c0f1da81450bb78bde\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-31T03:47:02Z\\\",\\\"message\\\":\\\"W0131 03:46:51.337568 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0131 03:46:51.338014 1 crypto.go:601] Generating new CA for check-endpoints-signer@1769831211 cert, and key in /tmp/serving-cert-4099090867/serving-signer.crt, /tmp/serving-cert-4099090867/serving-signer.key\\\\nI0131 03:46:51.617433 1 observer_polling.go:159] Starting file observer\\\\nW0131 03:46:51.621205 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0131 03:46:51.621653 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0131 03:46:51.625463 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-4099090867/tls.crt::/tmp/serving-cert-4099090867/tls.key\\\\\\\"\\\\nF0131 03:47:02.085590 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake 
timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T03:46:51Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:47:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e3b2757893ba910c36404f09734ea47eee3ac5a8cfcc4c06ee4ed2ada48937b1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:46:51Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f40eed75b4eac164f95a4e1a719fe3e1da60abf85d533da313ceeb6f4fb07efd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f40eed75b4eac164f95a4e1a719fe3e1da60abf85d533da313ceeb6f4fb07efd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T03:46:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"s
tartedAt\\\":\\\"2026-01-31T03:46:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T03:46:48Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:48:10Z is after 2025-08-24T17:21:41Z" Jan 31 03:48:10 crc kubenswrapper[4827]: I0131 03:48:10.829876 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"350d4093-0635-4787-bdf3-9b099c5a772b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:46:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:46:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:46:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://19a6d75598a153bb0745bb06aa53961f13fa3ff1fd4addb84b3ecdc66d07ee4e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d3472024
3b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:46:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3d0070a036855a06f83caec5b6ee636f1a7a1e7f3246b779e0abbcf6aebf259a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:46:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7b5f30db796321372897f564fda3a031f0714999315931e2a77b21fde674f4fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01
-31T03:46:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6f17307ab85479e60722f1f8484fb415c3609584ed9a20ebf17677a86558dc07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6f17307ab85479e60722f1f8484fb415c3609584ed9a20ebf17677a86558dc07\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T03:46:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T03:46:49Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T03:46:48Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:48:10Z is after 2025-08-24T17:21:41Z" Jan 31 03:48:10 crc kubenswrapper[4827]: I0131 03:48:10.848457 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-l5njv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f81e67c9-6345-48e1-91e3-794421cb3fdd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e761c28f2d53c1a50d6ce7467012664d9d9d717222b6fc648c642ad97057113f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:47:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rblt7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b967e5b38d5efa02b8cda05cf383b6aa9f24b
819d055b86d12afe5290d6d5847\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:47:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rblt7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T03:47:22Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-l5njv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:48:10Z is after 2025-08-24T17:21:41Z" Jan 31 03:48:10 crc kubenswrapper[4827]: I0131 03:48:10.866397 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-2shng" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cf80ec31-1f83-4ed6-84e3-055cf9c88bff\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:24Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:24Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hjlkt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hjlkt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T03:47:24Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-2shng\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:48:10Z is after 2025-08-24T17:21:41Z" Jan 31 03:48:10 crc 
kubenswrapper[4827]: I0131 03:48:10.886798 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:08Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:48:10Z is after 2025-08-24T17:21:41Z" Jan 31 03:48:10 crc kubenswrapper[4827]: I0131 03:48:10.905653 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:08Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:48:10Z is after 2025-08-24T17:21:41Z" Jan 31 03:48:10 crc kubenswrapper[4827]: I0131 03:48:10.911453 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:48:10 crc kubenswrapper[4827]: I0131 03:48:10.911512 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:48:10 crc kubenswrapper[4827]: I0131 03:48:10.911535 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:48:10 crc 
kubenswrapper[4827]: I0131 03:48:10.911565 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:48:10 crc kubenswrapper[4827]: I0131 03:48:10.911588 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:48:10Z","lastTransitionTime":"2026-01-31T03:48:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 03:48:10 crc kubenswrapper[4827]: I0131 03:48:10.925538 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-w7v8l" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c10db775-d306-4f15-97dd-b1dfed7c89e5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cbc16c20edbd99491eea6e34116e6014fd587c40800c87bff9643e9c0cb6e185\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7
f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:47:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4ch9g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T03:47:09Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-w7v8l\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:48:10Z is after 2025-08-24T17:21:41Z" Jan 31 03:48:10 crc kubenswrapper[4827]: I0131 03:48:10.944835 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:11Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:11Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://955a3f09f24522656e31cfabe08613a5129fe3d1d0693756697e3ef0469e062a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:47:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-31T03:48:10Z is after 2025-08-24T17:21:41Z" Jan 31 03:48:10 crc kubenswrapper[4827]: I0131 03:48:10.964345 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"64294dab-2843-4893-8b19-59f3ae404e02\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:46:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:46:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:46:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ff49646239dea28804978f1b3b721dcc9454572ff2301d0c92e146dacb2a3e5a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:46:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cer
t-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fb9a8fcde75a532b1906ceea4704ddfecbffeb17456377272e5792f200ee67d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:46:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://73a7c7196fc77ac970ea3114b593db0c9bef943c760526721ab5492975884237\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:46:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ce197f7aa8a7bebdecdbd760e9029786331bd9497f724b0a57f57278a5b1b04c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshif
t-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:46:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T03:46:48Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:48:10Z is after 2025-08-24T17:21:41Z" Jan 31 03:48:10 crc kubenswrapper[4827]: I0131 03:48:10.984403 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:08Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:48:10Z is after 2025-08-24T17:21:41Z" Jan 31 03:48:11 crc kubenswrapper[4827]: I0131 03:48:11.010387 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a0fbdd57c20e0d5bda95b44f22f117b07306ef48d75d372d958559048474af9b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:47:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-31T03:48:11Z is after 2025-08-24T17:21:41Z" Jan 31 03:48:11 crc kubenswrapper[4827]: I0131 03:48:11.015104 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:48:11 crc kubenswrapper[4827]: I0131 03:48:11.015156 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:48:11 crc kubenswrapper[4827]: I0131 03:48:11.015179 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:48:11 crc kubenswrapper[4827]: I0131 03:48:11.015208 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:48:11 crc kubenswrapper[4827]: I0131 03:48:11.015230 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:48:11Z","lastTransitionTime":"2026-01-31T03:48:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 03:48:11 crc kubenswrapper[4827]: I0131 03:48:11.109666 4827 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-13 05:10:49.314620291 +0000 UTC Jan 31 03:48:11 crc kubenswrapper[4827]: I0131 03:48:11.110395 4827 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 31 03:48:11 crc kubenswrapper[4827]: E0131 03:48:11.110531 4827 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 31 03:48:11 crc kubenswrapper[4827]: I0131 03:48:11.118111 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:48:11 crc kubenswrapper[4827]: I0131 03:48:11.118159 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:48:11 crc kubenswrapper[4827]: I0131 03:48:11.118176 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:48:11 crc kubenswrapper[4827]: I0131 03:48:11.118197 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:48:11 crc kubenswrapper[4827]: I0131 03:48:11.118212 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:48:11Z","lastTransitionTime":"2026-01-31T03:48:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 03:48:11 crc kubenswrapper[4827]: I0131 03:48:11.127910 4827 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/kube-rbac-proxy-crio-crc"] Jan 31 03:48:11 crc kubenswrapper[4827]: I0131 03:48:11.221283 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:48:11 crc kubenswrapper[4827]: I0131 03:48:11.221336 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:48:11 crc kubenswrapper[4827]: I0131 03:48:11.221353 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:48:11 crc kubenswrapper[4827]: I0131 03:48:11.221378 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:48:11 crc kubenswrapper[4827]: I0131 03:48:11.221398 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:48:11Z","lastTransitionTime":"2026-01-31T03:48:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 03:48:11 crc kubenswrapper[4827]: I0131 03:48:11.323962 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:48:11 crc kubenswrapper[4827]: I0131 03:48:11.324019 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:48:11 crc kubenswrapper[4827]: I0131 03:48:11.324039 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:48:11 crc kubenswrapper[4827]: I0131 03:48:11.324064 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:48:11 crc kubenswrapper[4827]: I0131 03:48:11.324081 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:48:11Z","lastTransitionTime":"2026-01-31T03:48:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 03:48:11 crc kubenswrapper[4827]: I0131 03:48:11.426577 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:48:11 crc kubenswrapper[4827]: I0131 03:48:11.426629 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:48:11 crc kubenswrapper[4827]: I0131 03:48:11.426646 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:48:11 crc kubenswrapper[4827]: I0131 03:48:11.426671 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:48:11 crc kubenswrapper[4827]: I0131 03:48:11.426689 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:48:11Z","lastTransitionTime":"2026-01-31T03:48:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 03:48:11 crc kubenswrapper[4827]: I0131 03:48:11.530355 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:48:11 crc kubenswrapper[4827]: I0131 03:48:11.530715 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:48:11 crc kubenswrapper[4827]: I0131 03:48:11.530733 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:48:11 crc kubenswrapper[4827]: I0131 03:48:11.530757 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:48:11 crc kubenswrapper[4827]: I0131 03:48:11.530774 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:48:11Z","lastTransitionTime":"2026-01-31T03:48:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 03:48:11 crc kubenswrapper[4827]: I0131 03:48:11.634037 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:48:11 crc kubenswrapper[4827]: I0131 03:48:11.634109 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:48:11 crc kubenswrapper[4827]: I0131 03:48:11.634136 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:48:11 crc kubenswrapper[4827]: I0131 03:48:11.634168 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:48:11 crc kubenswrapper[4827]: I0131 03:48:11.634191 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:48:11Z","lastTransitionTime":"2026-01-31T03:48:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 03:48:11 crc kubenswrapper[4827]: I0131 03:48:11.737070 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:48:11 crc kubenswrapper[4827]: I0131 03:48:11.737131 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:48:11 crc kubenswrapper[4827]: I0131 03:48:11.737152 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:48:11 crc kubenswrapper[4827]: I0131 03:48:11.737178 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:48:11 crc kubenswrapper[4827]: I0131 03:48:11.737195 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:48:11Z","lastTransitionTime":"2026-01-31T03:48:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 03:48:11 crc kubenswrapper[4827]: I0131 03:48:11.839811 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:48:11 crc kubenswrapper[4827]: I0131 03:48:11.839873 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:48:11 crc kubenswrapper[4827]: I0131 03:48:11.839926 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:48:11 crc kubenswrapper[4827]: I0131 03:48:11.839957 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:48:11 crc kubenswrapper[4827]: I0131 03:48:11.839977 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:48:11Z","lastTransitionTime":"2026-01-31T03:48:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 03:48:11 crc kubenswrapper[4827]: I0131 03:48:11.943197 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:48:11 crc kubenswrapper[4827]: I0131 03:48:11.943252 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:48:11 crc kubenswrapper[4827]: I0131 03:48:11.943270 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:48:11 crc kubenswrapper[4827]: I0131 03:48:11.943297 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:48:11 crc kubenswrapper[4827]: I0131 03:48:11.943314 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:48:11Z","lastTransitionTime":"2026-01-31T03:48:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 03:48:11 crc kubenswrapper[4827]: I0131 03:48:11.958073 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 31 03:48:11 crc kubenswrapper[4827]: I0131 03:48:11.958180 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 03:48:11 crc kubenswrapper[4827]: I0131 03:48:11.958227 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 31 03:48:11 crc kubenswrapper[4827]: E0131 03:48:11.958318 4827 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-31 03:49:15.958293203 +0000 UTC m=+148.645373682 (durationBeforeRetry 1m4s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 03:48:11 crc kubenswrapper[4827]: I0131 03:48:11.958376 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 31 03:48:11 crc kubenswrapper[4827]: E0131 03:48:11.958384 4827 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 31 03:48:11 crc kubenswrapper[4827]: I0131 03:48:11.958432 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 03:48:11 crc kubenswrapper[4827]: E0131 03:48:11.958462 4827 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 31 03:48:11 crc kubenswrapper[4827]: E0131 03:48:11.958520 4827 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object 
"openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 31 03:48:11 crc kubenswrapper[4827]: E0131 03:48:11.958545 4827 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 31 03:48:11 crc kubenswrapper[4827]: E0131 03:48:11.958568 4827 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 31 03:48:11 crc kubenswrapper[4827]: E0131 03:48:11.958593 4827 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 31 03:48:11 crc kubenswrapper[4827]: E0131 03:48:11.958676 4827 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 31 03:48:11 crc kubenswrapper[4827]: E0131 03:48:11.958696 4827 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 31 03:48:11 crc kubenswrapper[4827]: E0131 03:48:11.958492 4827 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-31 03:49:15.958464378 +0000 UTC m=+148.645544927 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 31 03:48:11 crc kubenswrapper[4827]: E0131 03:48:11.958954 4827 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-31 03:49:15.958868662 +0000 UTC m=+148.645949151 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 31 03:48:11 crc kubenswrapper[4827]: E0131 03:48:11.959025 4827 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-31 03:49:15.959002017 +0000 UTC m=+148.646082566 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 31 03:48:11 crc kubenswrapper[4827]: E0131 03:48:11.959102 4827 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-31 03:49:15.959085809 +0000 UTC m=+148.646166308 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 31 03:48:12 crc kubenswrapper[4827]: I0131 03:48:12.045965 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:48:12 crc kubenswrapper[4827]: I0131 03:48:12.046013 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:48:12 crc kubenswrapper[4827]: I0131 03:48:12.046044 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:48:12 crc kubenswrapper[4827]: I0131 03:48:12.046067 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:48:12 crc kubenswrapper[4827]: I0131 03:48:12.046077 4827 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:48:12Z","lastTransitionTime":"2026-01-31T03:48:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 03:48:12 crc kubenswrapper[4827]: I0131 03:48:12.109823 4827 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-06 04:57:00.899346415 +0000 UTC Jan 31 03:48:12 crc kubenswrapper[4827]: I0131 03:48:12.109996 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 03:48:12 crc kubenswrapper[4827]: I0131 03:48:12.109997 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 31 03:48:12 crc kubenswrapper[4827]: I0131 03:48:12.110111 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-2shng" Jan 31 03:48:12 crc kubenswrapper[4827]: E0131 03:48:12.110272 4827 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 31 03:48:12 crc kubenswrapper[4827]: E0131 03:48:12.110368 4827 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 31 03:48:12 crc kubenswrapper[4827]: E0131 03:48:12.110528 4827 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-2shng" podUID="cf80ec31-1f83-4ed6-84e3-055cf9c88bff" Jan 31 03:48:12 crc kubenswrapper[4827]: I0131 03:48:12.148908 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:48:12 crc kubenswrapper[4827]: I0131 03:48:12.148958 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:48:12 crc kubenswrapper[4827]: I0131 03:48:12.148978 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:48:12 crc kubenswrapper[4827]: I0131 03:48:12.149003 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:48:12 crc kubenswrapper[4827]: I0131 03:48:12.149020 4827 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:48:12Z","lastTransitionTime":"2026-01-31T03:48:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 03:48:12 crc kubenswrapper[4827]: I0131 03:48:12.252628 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:48:12 crc kubenswrapper[4827]: I0131 03:48:12.252684 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:48:12 crc kubenswrapper[4827]: I0131 03:48:12.252708 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:48:12 crc kubenswrapper[4827]: I0131 03:48:12.252737 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:48:12 crc kubenswrapper[4827]: I0131 03:48:12.252759 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:48:12Z","lastTransitionTime":"2026-01-31T03:48:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 03:48:12 crc kubenswrapper[4827]: I0131 03:48:12.355321 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:48:12 crc kubenswrapper[4827]: I0131 03:48:12.355733 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:48:12 crc kubenswrapper[4827]: I0131 03:48:12.355966 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:48:12 crc kubenswrapper[4827]: I0131 03:48:12.356165 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:48:12 crc kubenswrapper[4827]: I0131 03:48:12.356325 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:48:12Z","lastTransitionTime":"2026-01-31T03:48:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 03:48:12 crc kubenswrapper[4827]: I0131 03:48:12.460097 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:48:12 crc kubenswrapper[4827]: I0131 03:48:12.460180 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:48:12 crc kubenswrapper[4827]: I0131 03:48:12.460206 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:48:12 crc kubenswrapper[4827]: I0131 03:48:12.460235 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:48:12 crc kubenswrapper[4827]: I0131 03:48:12.460258 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:48:12Z","lastTransitionTime":"2026-01-31T03:48:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 03:48:12 crc kubenswrapper[4827]: I0131 03:48:12.563294 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:48:12 crc kubenswrapper[4827]: I0131 03:48:12.563376 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:48:12 crc kubenswrapper[4827]: I0131 03:48:12.563395 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:48:12 crc kubenswrapper[4827]: I0131 03:48:12.563421 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:48:12 crc kubenswrapper[4827]: I0131 03:48:12.563438 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:48:12Z","lastTransitionTime":"2026-01-31T03:48:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 03:48:12 crc kubenswrapper[4827]: I0131 03:48:12.667985 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:48:12 crc kubenswrapper[4827]: I0131 03:48:12.668032 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:48:12 crc kubenswrapper[4827]: I0131 03:48:12.668043 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:48:12 crc kubenswrapper[4827]: I0131 03:48:12.668061 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:48:12 crc kubenswrapper[4827]: I0131 03:48:12.668074 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:48:12Z","lastTransitionTime":"2026-01-31T03:48:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 03:48:12 crc kubenswrapper[4827]: I0131 03:48:12.770498 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:48:12 crc kubenswrapper[4827]: I0131 03:48:12.770551 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:48:12 crc kubenswrapper[4827]: I0131 03:48:12.770574 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:48:12 crc kubenswrapper[4827]: I0131 03:48:12.770602 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:48:12 crc kubenswrapper[4827]: I0131 03:48:12.770627 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:48:12Z","lastTransitionTime":"2026-01-31T03:48:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 03:48:12 crc kubenswrapper[4827]: I0131 03:48:12.873340 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:48:12 crc kubenswrapper[4827]: I0131 03:48:12.873403 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:48:12 crc kubenswrapper[4827]: I0131 03:48:12.873415 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:48:12 crc kubenswrapper[4827]: I0131 03:48:12.873435 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:48:12 crc kubenswrapper[4827]: I0131 03:48:12.873448 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:48:12Z","lastTransitionTime":"2026-01-31T03:48:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 03:48:12 crc kubenswrapper[4827]: I0131 03:48:12.976499 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:48:12 crc kubenswrapper[4827]: I0131 03:48:12.976540 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:48:12 crc kubenswrapper[4827]: I0131 03:48:12.976552 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:48:12 crc kubenswrapper[4827]: I0131 03:48:12.976568 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:48:12 crc kubenswrapper[4827]: I0131 03:48:12.976584 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:48:12Z","lastTransitionTime":"2026-01-31T03:48:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 03:48:13 crc kubenswrapper[4827]: I0131 03:48:13.079332 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:48:13 crc kubenswrapper[4827]: I0131 03:48:13.079380 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:48:13 crc kubenswrapper[4827]: I0131 03:48:13.079392 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:48:13 crc kubenswrapper[4827]: I0131 03:48:13.079410 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:48:13 crc kubenswrapper[4827]: I0131 03:48:13.079424 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:48:13Z","lastTransitionTime":"2026-01-31T03:48:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 03:48:13 crc kubenswrapper[4827]: I0131 03:48:13.108968 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 31 03:48:13 crc kubenswrapper[4827]: E0131 03:48:13.109143 4827 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 31 03:48:13 crc kubenswrapper[4827]: I0131 03:48:13.110012 4827 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-11 23:08:37.923170462 +0000 UTC Jan 31 03:48:13 crc kubenswrapper[4827]: I0131 03:48:13.184023 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:48:13 crc kubenswrapper[4827]: I0131 03:48:13.184113 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:48:13 crc kubenswrapper[4827]: I0131 03:48:13.184139 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:48:13 crc kubenswrapper[4827]: I0131 03:48:13.184173 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:48:13 crc kubenswrapper[4827]: I0131 03:48:13.184210 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:48:13Z","lastTransitionTime":"2026-01-31T03:48:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 03:48:13 crc kubenswrapper[4827]: I0131 03:48:13.287976 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:48:13 crc kubenswrapper[4827]: I0131 03:48:13.288039 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:48:13 crc kubenswrapper[4827]: I0131 03:48:13.288057 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:48:13 crc kubenswrapper[4827]: I0131 03:48:13.288084 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:48:13 crc kubenswrapper[4827]: I0131 03:48:13.288102 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:48:13Z","lastTransitionTime":"2026-01-31T03:48:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 03:48:13 crc kubenswrapper[4827]: I0131 03:48:13.391968 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:48:13 crc kubenswrapper[4827]: I0131 03:48:13.392036 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:48:13 crc kubenswrapper[4827]: I0131 03:48:13.392054 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:48:13 crc kubenswrapper[4827]: I0131 03:48:13.392126 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:48:13 crc kubenswrapper[4827]: I0131 03:48:13.392148 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:48:13Z","lastTransitionTime":"2026-01-31T03:48:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 03:48:13 crc kubenswrapper[4827]: I0131 03:48:13.495163 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:48:13 crc kubenswrapper[4827]: I0131 03:48:13.495222 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:48:13 crc kubenswrapper[4827]: I0131 03:48:13.495237 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:48:13 crc kubenswrapper[4827]: I0131 03:48:13.495257 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:48:13 crc kubenswrapper[4827]: I0131 03:48:13.495270 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:48:13Z","lastTransitionTime":"2026-01-31T03:48:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 03:48:13 crc kubenswrapper[4827]: I0131 03:48:13.598332 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:48:13 crc kubenswrapper[4827]: I0131 03:48:13.598634 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:48:13 crc kubenswrapper[4827]: I0131 03:48:13.599099 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:48:13 crc kubenswrapper[4827]: I0131 03:48:13.599268 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:48:13 crc kubenswrapper[4827]: I0131 03:48:13.599417 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:48:13Z","lastTransitionTime":"2026-01-31T03:48:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 03:48:13 crc kubenswrapper[4827]: I0131 03:48:13.701963 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:48:13 crc kubenswrapper[4827]: I0131 03:48:13.702000 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:48:13 crc kubenswrapper[4827]: I0131 03:48:13.702009 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:48:13 crc kubenswrapper[4827]: I0131 03:48:13.702024 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:48:13 crc kubenswrapper[4827]: I0131 03:48:13.702033 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:48:13Z","lastTransitionTime":"2026-01-31T03:48:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 03:48:13 crc kubenswrapper[4827]: I0131 03:48:13.805574 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:48:13 crc kubenswrapper[4827]: I0131 03:48:13.805630 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:48:13 crc kubenswrapper[4827]: I0131 03:48:13.805643 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:48:13 crc kubenswrapper[4827]: I0131 03:48:13.805661 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:48:13 crc kubenswrapper[4827]: I0131 03:48:13.805673 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:48:13Z","lastTransitionTime":"2026-01-31T03:48:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 03:48:13 crc kubenswrapper[4827]: I0131 03:48:13.907958 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:48:13 crc kubenswrapper[4827]: I0131 03:48:13.908003 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:48:13 crc kubenswrapper[4827]: I0131 03:48:13.908018 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:48:13 crc kubenswrapper[4827]: I0131 03:48:13.908042 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:48:13 crc kubenswrapper[4827]: I0131 03:48:13.908058 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:48:13Z","lastTransitionTime":"2026-01-31T03:48:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 03:48:14 crc kubenswrapper[4827]: I0131 03:48:14.011984 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:48:14 crc kubenswrapper[4827]: I0131 03:48:14.012051 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:48:14 crc kubenswrapper[4827]: I0131 03:48:14.012077 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:48:14 crc kubenswrapper[4827]: I0131 03:48:14.012107 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:48:14 crc kubenswrapper[4827]: I0131 03:48:14.012131 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:48:14Z","lastTransitionTime":"2026-01-31T03:48:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 03:48:14 crc kubenswrapper[4827]: I0131 03:48:14.109746 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 03:48:14 crc kubenswrapper[4827]: I0131 03:48:14.109841 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 31 03:48:14 crc kubenswrapper[4827]: I0131 03:48:14.109762 4827 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-2shng" Jan 31 03:48:14 crc kubenswrapper[4827]: E0131 03:48:14.110054 4827 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 31 03:48:14 crc kubenswrapper[4827]: I0131 03:48:14.110120 4827 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-14 16:55:14.981842417 +0000 UTC Jan 31 03:48:14 crc kubenswrapper[4827]: E0131 03:48:14.110197 4827 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-2shng" podUID="cf80ec31-1f83-4ed6-84e3-055cf9c88bff" Jan 31 03:48:14 crc kubenswrapper[4827]: E0131 03:48:14.110284 4827 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 31 03:48:14 crc kubenswrapper[4827]: I0131 03:48:14.114871 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:48:14 crc kubenswrapper[4827]: I0131 03:48:14.114960 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:48:14 crc kubenswrapper[4827]: I0131 03:48:14.114988 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:48:14 crc kubenswrapper[4827]: I0131 03:48:14.115021 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:48:14 crc kubenswrapper[4827]: I0131 03:48:14.115045 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:48:14Z","lastTransitionTime":"2026-01-31T03:48:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 03:48:14 crc kubenswrapper[4827]: I0131 03:48:14.218198 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:48:14 crc kubenswrapper[4827]: I0131 03:48:14.218273 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:48:14 crc kubenswrapper[4827]: I0131 03:48:14.218294 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:48:14 crc kubenswrapper[4827]: I0131 03:48:14.218326 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:48:14 crc kubenswrapper[4827]: I0131 03:48:14.218346 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:48:14Z","lastTransitionTime":"2026-01-31T03:48:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 03:48:14 crc kubenswrapper[4827]: I0131 03:48:14.321551 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:48:14 crc kubenswrapper[4827]: I0131 03:48:14.321595 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:48:14 crc kubenswrapper[4827]: I0131 03:48:14.321606 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:48:14 crc kubenswrapper[4827]: I0131 03:48:14.321622 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:48:14 crc kubenswrapper[4827]: I0131 03:48:14.321634 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:48:14Z","lastTransitionTime":"2026-01-31T03:48:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 03:48:14 crc kubenswrapper[4827]: I0131 03:48:14.425426 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:48:14 crc kubenswrapper[4827]: I0131 03:48:14.425486 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:48:14 crc kubenswrapper[4827]: I0131 03:48:14.425510 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:48:14 crc kubenswrapper[4827]: I0131 03:48:14.425539 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:48:14 crc kubenswrapper[4827]: I0131 03:48:14.425557 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:48:14Z","lastTransitionTime":"2026-01-31T03:48:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 03:48:14 crc kubenswrapper[4827]: I0131 03:48:14.528610 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:48:14 crc kubenswrapper[4827]: I0131 03:48:14.528685 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:48:14 crc kubenswrapper[4827]: I0131 03:48:14.528707 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:48:14 crc kubenswrapper[4827]: I0131 03:48:14.528736 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:48:14 crc kubenswrapper[4827]: I0131 03:48:14.528757 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:48:14Z","lastTransitionTime":"2026-01-31T03:48:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 03:48:14 crc kubenswrapper[4827]: I0131 03:48:14.632538 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:48:14 crc kubenswrapper[4827]: I0131 03:48:14.632622 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:48:14 crc kubenswrapper[4827]: I0131 03:48:14.632642 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:48:14 crc kubenswrapper[4827]: I0131 03:48:14.632673 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:48:14 crc kubenswrapper[4827]: I0131 03:48:14.632699 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:48:14Z","lastTransitionTime":"2026-01-31T03:48:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 03:48:14 crc kubenswrapper[4827]: I0131 03:48:14.737682 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:48:14 crc kubenswrapper[4827]: I0131 03:48:14.737759 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:48:14 crc kubenswrapper[4827]: I0131 03:48:14.737778 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:48:14 crc kubenswrapper[4827]: I0131 03:48:14.737807 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:48:14 crc kubenswrapper[4827]: I0131 03:48:14.737826 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:48:14Z","lastTransitionTime":"2026-01-31T03:48:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 03:48:14 crc kubenswrapper[4827]: I0131 03:48:14.841052 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:48:14 crc kubenswrapper[4827]: I0131 03:48:14.841115 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:48:14 crc kubenswrapper[4827]: I0131 03:48:14.841133 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:48:14 crc kubenswrapper[4827]: I0131 03:48:14.841159 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:48:14 crc kubenswrapper[4827]: I0131 03:48:14.841176 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:48:14Z","lastTransitionTime":"2026-01-31T03:48:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 03:48:14 crc kubenswrapper[4827]: I0131 03:48:14.944509 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:48:14 crc kubenswrapper[4827]: I0131 03:48:14.944594 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:48:14 crc kubenswrapper[4827]: I0131 03:48:14.944612 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:48:14 crc kubenswrapper[4827]: I0131 03:48:14.944646 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:48:14 crc kubenswrapper[4827]: I0131 03:48:14.944669 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:48:14Z","lastTransitionTime":"2026-01-31T03:48:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 03:48:15 crc kubenswrapper[4827]: I0131 03:48:15.049113 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:48:15 crc kubenswrapper[4827]: I0131 03:48:15.049219 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:48:15 crc kubenswrapper[4827]: I0131 03:48:15.049237 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:48:15 crc kubenswrapper[4827]: I0131 03:48:15.049262 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:48:15 crc kubenswrapper[4827]: I0131 03:48:15.049279 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:48:15Z","lastTransitionTime":"2026-01-31T03:48:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 03:48:15 crc kubenswrapper[4827]: I0131 03:48:15.109647 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 31 03:48:15 crc kubenswrapper[4827]: E0131 03:48:15.109944 4827 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 31 03:48:15 crc kubenswrapper[4827]: I0131 03:48:15.110323 4827 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-17 01:28:54.639478262 +0000 UTC Jan 31 03:48:15 crc kubenswrapper[4827]: I0131 03:48:15.152740 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:48:15 crc kubenswrapper[4827]: I0131 03:48:15.152812 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:48:15 crc kubenswrapper[4827]: I0131 03:48:15.152829 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:48:15 crc kubenswrapper[4827]: I0131 03:48:15.152856 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:48:15 crc kubenswrapper[4827]: I0131 03:48:15.152876 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:48:15Z","lastTransitionTime":"2026-01-31T03:48:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 03:48:15 crc kubenswrapper[4827]: I0131 03:48:15.256249 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:48:15 crc kubenswrapper[4827]: I0131 03:48:15.256337 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:48:15 crc kubenswrapper[4827]: I0131 03:48:15.256364 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:48:15 crc kubenswrapper[4827]: I0131 03:48:15.256393 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:48:15 crc kubenswrapper[4827]: I0131 03:48:15.256419 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:48:15Z","lastTransitionTime":"2026-01-31T03:48:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 03:48:15 crc kubenswrapper[4827]: I0131 03:48:15.360239 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:48:15 crc kubenswrapper[4827]: I0131 03:48:15.360351 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:48:15 crc kubenswrapper[4827]: I0131 03:48:15.360421 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:48:15 crc kubenswrapper[4827]: I0131 03:48:15.360448 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:48:15 crc kubenswrapper[4827]: I0131 03:48:15.360467 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:48:15Z","lastTransitionTime":"2026-01-31T03:48:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 03:48:15 crc kubenswrapper[4827]: I0131 03:48:15.464362 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:48:15 crc kubenswrapper[4827]: I0131 03:48:15.464453 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:48:15 crc kubenswrapper[4827]: I0131 03:48:15.464478 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:48:15 crc kubenswrapper[4827]: I0131 03:48:15.464515 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:48:15 crc kubenswrapper[4827]: I0131 03:48:15.464539 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:48:15Z","lastTransitionTime":"2026-01-31T03:48:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 03:48:15 crc kubenswrapper[4827]: I0131 03:48:15.567751 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:48:15 crc kubenswrapper[4827]: I0131 03:48:15.567828 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:48:15 crc kubenswrapper[4827]: I0131 03:48:15.567849 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:48:15 crc kubenswrapper[4827]: I0131 03:48:15.567877 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:48:15 crc kubenswrapper[4827]: I0131 03:48:15.567936 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:48:15Z","lastTransitionTime":"2026-01-31T03:48:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 03:48:15 crc kubenswrapper[4827]: I0131 03:48:15.670775 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:48:15 crc kubenswrapper[4827]: I0131 03:48:15.670869 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:48:15 crc kubenswrapper[4827]: I0131 03:48:15.670933 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:48:15 crc kubenswrapper[4827]: I0131 03:48:15.670967 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:48:15 crc kubenswrapper[4827]: I0131 03:48:15.670992 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:48:15Z","lastTransitionTime":"2026-01-31T03:48:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 03:48:15 crc kubenswrapper[4827]: I0131 03:48:15.774816 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:48:15 crc kubenswrapper[4827]: I0131 03:48:15.774912 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:48:15 crc kubenswrapper[4827]: I0131 03:48:15.774935 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:48:15 crc kubenswrapper[4827]: I0131 03:48:15.774965 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:48:15 crc kubenswrapper[4827]: I0131 03:48:15.774986 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:48:15Z","lastTransitionTime":"2026-01-31T03:48:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 03:48:15 crc kubenswrapper[4827]: I0131 03:48:15.877493 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:48:15 crc kubenswrapper[4827]: I0131 03:48:15.877552 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:48:15 crc kubenswrapper[4827]: I0131 03:48:15.877569 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:48:15 crc kubenswrapper[4827]: I0131 03:48:15.877592 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:48:15 crc kubenswrapper[4827]: I0131 03:48:15.877610 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:48:15Z","lastTransitionTime":"2026-01-31T03:48:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 03:48:15 crc kubenswrapper[4827]: I0131 03:48:15.980510 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:48:15 crc kubenswrapper[4827]: I0131 03:48:15.980579 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:48:15 crc kubenswrapper[4827]: I0131 03:48:15.980603 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:48:15 crc kubenswrapper[4827]: I0131 03:48:15.980632 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:48:15 crc kubenswrapper[4827]: I0131 03:48:15.980653 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:48:15Z","lastTransitionTime":"2026-01-31T03:48:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 03:48:15 crc kubenswrapper[4827]: I0131 03:48:15.990544 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:48:15 crc kubenswrapper[4827]: I0131 03:48:15.990607 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:48:15 crc kubenswrapper[4827]: I0131 03:48:15.990624 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:48:15 crc kubenswrapper[4827]: I0131 03:48:15.990648 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:48:15 crc kubenswrapper[4827]: I0131 03:48:15.990667 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:48:15Z","lastTransitionTime":"2026-01-31T03:48:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 03:48:16 crc kubenswrapper[4827]: E0131 03:48:16.011523 4827 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404564Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865364Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T03:48:15Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:15Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T03:48:15Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:15Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T03:48:15Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:15Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T03:48:15Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:15Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"0f4c9cc6-4be7-45e8-ab30-471f307c1c16\\\",\\\"systemUUID\\\":\\\"9b087aa6-4510-46ee-bc39-2317e4ea4d1d\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:48:16Z is after 2025-08-24T17:21:41Z" Jan 31 03:48:16 crc kubenswrapper[4827]: I0131 03:48:16.017926 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:48:16 crc kubenswrapper[4827]: I0131 03:48:16.017995 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:48:16 crc kubenswrapper[4827]: I0131 03:48:16.018019 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:48:16 crc kubenswrapper[4827]: I0131 03:48:16.018048 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:48:16 crc kubenswrapper[4827]: I0131 03:48:16.018066 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:48:16Z","lastTransitionTime":"2026-01-31T03:48:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 03:48:16 crc kubenswrapper[4827]: E0131 03:48:16.038379 4827 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404564Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865364Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T03:48:16Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:16Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T03:48:16Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:16Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T03:48:16Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:16Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T03:48:16Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:16Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[ [image list identical to the previous status-patch attempt, elided] ],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"0f4c9cc6-4be7-45e8-ab30-471f307c1c16\\\",\\\"systemUUID\\\":\\\"9b087aa6-4510-46ee-bc39-2317e4ea4d1d\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:48:16Z is after 2025-08-24T17:21:41Z" Jan 31 03:48:16 crc kubenswrapper[4827]: I0131 03:48:16.044770 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:48:16 crc kubenswrapper[4827]: I0131 03:48:16.044868 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:48:16 crc kubenswrapper[4827]: I0131 03:48:16.044937 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:48:16 crc kubenswrapper[4827]: I0131 03:48:16.044966 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:48:16 crc kubenswrapper[4827]: I0131 03:48:16.045021 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:48:16Z","lastTransitionTime":"2026-01-31T03:48:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 03:48:16 crc kubenswrapper[4827]: E0131 03:48:16.066352 4827 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404564Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865364Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T03:48:16Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:16Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T03:48:16Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:16Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T03:48:16Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:16Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T03:48:16Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:16Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[ [image list identical to the previous status-patch attempt, elided] ],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"0f4c9cc6-4be7-45e8-ab30-471f307c1c16\\\",\\\"systemUUID\\\":\\\"9b087aa6-4510-46ee-bc39-2317e4ea4d1d\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:48:16Z is after 2025-08-24T17:21:41Z" Jan 31 03:48:16 crc kubenswrapper[4827]: I0131 03:48:16.071496 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:48:16 crc kubenswrapper[4827]: I0131 03:48:16.071540 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:48:16 crc kubenswrapper[4827]: I0131 03:48:16.071583 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:48:16 crc kubenswrapper[4827]: I0131 03:48:16.071660 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:48:16 crc kubenswrapper[4827]: I0131 03:48:16.071683 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:48:16Z","lastTransitionTime":"2026-01-31T03:48:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 03:48:16 crc kubenswrapper[4827]: E0131 03:48:16.093383 4827 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404564Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865364Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T03:48:16Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:16Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T03:48:16Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:16Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T03:48:16Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:16Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T03:48:16Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:16Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"0f4c9cc6-4be7-45e8-ab30-471f307c1c16\\\",\\\"systemUUID\\\":\\\"9b087aa6-4510-46ee-bc39-2317e4ea4d1d\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:48:16Z is after 2025-08-24T17:21:41Z" Jan 31 03:48:16 crc kubenswrapper[4827]: I0131 03:48:16.098982 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:48:16 crc kubenswrapper[4827]: I0131 03:48:16.099037 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:48:16 crc kubenswrapper[4827]: I0131 03:48:16.099058 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:48:16 crc kubenswrapper[4827]: I0131 03:48:16.099083 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:48:16 crc kubenswrapper[4827]: I0131 03:48:16.099100 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:48:16Z","lastTransitionTime":"2026-01-31T03:48:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 03:48:16 crc kubenswrapper[4827]: I0131 03:48:16.109027 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 31 03:48:16 crc kubenswrapper[4827]: I0131 03:48:16.109044 4827 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 03:48:16 crc kubenswrapper[4827]: E0131 03:48:16.109207 4827 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 31 03:48:16 crc kubenswrapper[4827]: I0131 03:48:16.109376 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-2shng" Jan 31 03:48:16 crc kubenswrapper[4827]: E0131 03:48:16.109591 4827 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 31 03:48:16 crc kubenswrapper[4827]: E0131 03:48:16.109668 4827 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-2shng" podUID="cf80ec31-1f83-4ed6-84e3-055cf9c88bff" Jan 31 03:48:16 crc kubenswrapper[4827]: I0131 03:48:16.110506 4827 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-04 20:22:50.6183588 +0000 UTC Jan 31 03:48:16 crc kubenswrapper[4827]: E0131 03:48:16.120676 4827 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404564Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865364Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T03:48:16Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:16Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T03:48:16Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:16Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T03:48:16Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:16Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T03:48:16Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:16Z\\\",\\\"message\\\":\\\"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index
@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\
\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sh
a256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io
/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\
\":{\\\"bootID\\\":\\\"0f4c9cc6-4be7-45e8-ab30-471f307c1c16\\\",\\\"systemUUID\\\":\\\"9b087aa6-4510-46ee-bc39-2317e4ea4d1d\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:48:16Z is after 2025-08-24T17:21:41Z" Jan 31 03:48:16 crc kubenswrapper[4827]: E0131 03:48:16.120962 4827 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 31 03:48:16 crc kubenswrapper[4827]: I0131 03:48:16.123448 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:48:16 crc kubenswrapper[4827]: I0131 03:48:16.123539 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:48:16 crc kubenswrapper[4827]: I0131 03:48:16.123567 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:48:16 crc kubenswrapper[4827]: I0131 03:48:16.123637 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:48:16 crc kubenswrapper[4827]: I0131 03:48:16.123674 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:48:16Z","lastTransitionTime":"2026-01-31T03:48:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 03:48:16 crc kubenswrapper[4827]: I0131 03:48:16.227587 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:48:16 crc kubenswrapper[4827]: I0131 03:48:16.227619 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:48:16 crc kubenswrapper[4827]: I0131 03:48:16.227630 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:48:16 crc kubenswrapper[4827]: I0131 03:48:16.227646 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:48:16 crc kubenswrapper[4827]: I0131 03:48:16.227657 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:48:16Z","lastTransitionTime":"2026-01-31T03:48:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 03:48:16 crc kubenswrapper[4827]: I0131 03:48:16.331524 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:48:16 crc kubenswrapper[4827]: I0131 03:48:16.331990 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:48:16 crc kubenswrapper[4827]: I0131 03:48:16.332149 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:48:16 crc kubenswrapper[4827]: I0131 03:48:16.332306 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:48:16 crc kubenswrapper[4827]: I0131 03:48:16.332458 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:48:16Z","lastTransitionTime":"2026-01-31T03:48:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 03:48:16 crc kubenswrapper[4827]: I0131 03:48:16.436386 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:48:16 crc kubenswrapper[4827]: I0131 03:48:16.436446 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:48:16 crc kubenswrapper[4827]: I0131 03:48:16.436466 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:48:16 crc kubenswrapper[4827]: I0131 03:48:16.436490 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:48:16 crc kubenswrapper[4827]: I0131 03:48:16.436508 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:48:16Z","lastTransitionTime":"2026-01-31T03:48:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 03:48:16 crc kubenswrapper[4827]: I0131 03:48:16.539478 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:48:16 crc kubenswrapper[4827]: I0131 03:48:16.539578 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:48:16 crc kubenswrapper[4827]: I0131 03:48:16.539626 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:48:16 crc kubenswrapper[4827]: I0131 03:48:16.539651 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:48:16 crc kubenswrapper[4827]: I0131 03:48:16.539667 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:48:16Z","lastTransitionTime":"2026-01-31T03:48:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 03:48:16 crc kubenswrapper[4827]: I0131 03:48:16.642715 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:48:16 crc kubenswrapper[4827]: I0131 03:48:16.642770 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:48:16 crc kubenswrapper[4827]: I0131 03:48:16.642796 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:48:16 crc kubenswrapper[4827]: I0131 03:48:16.642824 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:48:16 crc kubenswrapper[4827]: I0131 03:48:16.642846 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:48:16Z","lastTransitionTime":"2026-01-31T03:48:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 03:48:16 crc kubenswrapper[4827]: I0131 03:48:16.746219 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:48:16 crc kubenswrapper[4827]: I0131 03:48:16.746285 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:48:16 crc kubenswrapper[4827]: I0131 03:48:16.746303 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:48:16 crc kubenswrapper[4827]: I0131 03:48:16.746328 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:48:16 crc kubenswrapper[4827]: I0131 03:48:16.746346 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:48:16Z","lastTransitionTime":"2026-01-31T03:48:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 03:48:16 crc kubenswrapper[4827]: I0131 03:48:16.849418 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:48:16 crc kubenswrapper[4827]: I0131 03:48:16.849470 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:48:16 crc kubenswrapper[4827]: I0131 03:48:16.849487 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:48:16 crc kubenswrapper[4827]: I0131 03:48:16.849510 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:48:16 crc kubenswrapper[4827]: I0131 03:48:16.849526 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:48:16Z","lastTransitionTime":"2026-01-31T03:48:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 03:48:16 crc kubenswrapper[4827]: I0131 03:48:16.952610 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:48:16 crc kubenswrapper[4827]: I0131 03:48:16.952669 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:48:16 crc kubenswrapper[4827]: I0131 03:48:16.952693 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:48:16 crc kubenswrapper[4827]: I0131 03:48:16.952721 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:48:16 crc kubenswrapper[4827]: I0131 03:48:16.952742 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:48:16Z","lastTransitionTime":"2026-01-31T03:48:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 03:48:17 crc kubenswrapper[4827]: I0131 03:48:17.056751 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:48:17 crc kubenswrapper[4827]: I0131 03:48:17.056815 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:48:17 crc kubenswrapper[4827]: I0131 03:48:17.056840 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:48:17 crc kubenswrapper[4827]: I0131 03:48:17.056867 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:48:17 crc kubenswrapper[4827]: I0131 03:48:17.056921 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:48:17Z","lastTransitionTime":"2026-01-31T03:48:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 03:48:17 crc kubenswrapper[4827]: I0131 03:48:17.109159 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 31 03:48:17 crc kubenswrapper[4827]: E0131 03:48:17.109356 4827 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 31 03:48:17 crc kubenswrapper[4827]: I0131 03:48:17.111382 4827 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-17 18:43:56.045084638 +0000 UTC Jan 31 03:48:17 crc kubenswrapper[4827]: I0131 03:48:17.160220 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:48:17 crc kubenswrapper[4827]: I0131 03:48:17.160272 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:48:17 crc kubenswrapper[4827]: I0131 03:48:17.160290 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:48:17 crc kubenswrapper[4827]: I0131 03:48:17.160311 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:48:17 crc kubenswrapper[4827]: I0131 03:48:17.160371 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:48:17Z","lastTransitionTime":"2026-01-31T03:48:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 03:48:17 crc kubenswrapper[4827]: I0131 03:48:17.263375 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:48:17 crc kubenswrapper[4827]: I0131 03:48:17.264307 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:48:17 crc kubenswrapper[4827]: I0131 03:48:17.264325 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:48:17 crc kubenswrapper[4827]: I0131 03:48:17.264349 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:48:17 crc kubenswrapper[4827]: I0131 03:48:17.264368 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:48:17Z","lastTransitionTime":"2026-01-31T03:48:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 03:48:17 crc kubenswrapper[4827]: I0131 03:48:17.367604 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:48:17 crc kubenswrapper[4827]: I0131 03:48:17.367671 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:48:17 crc kubenswrapper[4827]: I0131 03:48:17.367693 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:48:17 crc kubenswrapper[4827]: I0131 03:48:17.367720 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:48:17 crc kubenswrapper[4827]: I0131 03:48:17.367738 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:48:17Z","lastTransitionTime":"2026-01-31T03:48:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 03:48:17 crc kubenswrapper[4827]: I0131 03:48:17.471109 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:48:17 crc kubenswrapper[4827]: I0131 03:48:17.471164 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:48:17 crc kubenswrapper[4827]: I0131 03:48:17.471181 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:48:17 crc kubenswrapper[4827]: I0131 03:48:17.471206 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:48:17 crc kubenswrapper[4827]: I0131 03:48:17.471224 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:48:17Z","lastTransitionTime":"2026-01-31T03:48:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 03:48:17 crc kubenswrapper[4827]: I0131 03:48:17.574386 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:48:17 crc kubenswrapper[4827]: I0131 03:48:17.574537 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:48:17 crc kubenswrapper[4827]: I0131 03:48:17.574559 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:48:17 crc kubenswrapper[4827]: I0131 03:48:17.574584 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:48:17 crc kubenswrapper[4827]: I0131 03:48:17.574634 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:48:17Z","lastTransitionTime":"2026-01-31T03:48:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 03:48:17 crc kubenswrapper[4827]: I0131 03:48:17.677209 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:48:17 crc kubenswrapper[4827]: I0131 03:48:17.677282 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:48:17 crc kubenswrapper[4827]: I0131 03:48:17.677300 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:48:17 crc kubenswrapper[4827]: I0131 03:48:17.677324 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:48:17 crc kubenswrapper[4827]: I0131 03:48:17.677345 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:48:17Z","lastTransitionTime":"2026-01-31T03:48:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 03:48:17 crc kubenswrapper[4827]: I0131 03:48:17.780275 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:48:17 crc kubenswrapper[4827]: I0131 03:48:17.780315 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:48:17 crc kubenswrapper[4827]: I0131 03:48:17.780326 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:48:17 crc kubenswrapper[4827]: I0131 03:48:17.780346 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:48:17 crc kubenswrapper[4827]: I0131 03:48:17.780357 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:48:17Z","lastTransitionTime":"2026-01-31T03:48:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 03:48:17 crc kubenswrapper[4827]: I0131 03:48:17.884637 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:48:17 crc kubenswrapper[4827]: I0131 03:48:17.884699 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:48:17 crc kubenswrapper[4827]: I0131 03:48:17.884717 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:48:17 crc kubenswrapper[4827]: I0131 03:48:17.884741 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:48:17 crc kubenswrapper[4827]: I0131 03:48:17.884758 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:48:17Z","lastTransitionTime":"2026-01-31T03:48:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 03:48:17 crc kubenswrapper[4827]: I0131 03:48:17.987245 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:48:17 crc kubenswrapper[4827]: I0131 03:48:17.987308 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:48:17 crc kubenswrapper[4827]: I0131 03:48:17.987325 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:48:17 crc kubenswrapper[4827]: I0131 03:48:17.987347 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:48:17 crc kubenswrapper[4827]: I0131 03:48:17.987364 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:48:17Z","lastTransitionTime":"2026-01-31T03:48:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 03:48:18 crc kubenswrapper[4827]: I0131 03:48:18.092042 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:48:18 crc kubenswrapper[4827]: I0131 03:48:18.092434 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:48:18 crc kubenswrapper[4827]: I0131 03:48:18.092598 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:48:18 crc kubenswrapper[4827]: I0131 03:48:18.092760 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:48:18 crc kubenswrapper[4827]: I0131 03:48:18.093115 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:48:18Z","lastTransitionTime":"2026-01-31T03:48:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 03:48:18 crc kubenswrapper[4827]: I0131 03:48:18.109850 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 03:48:18 crc kubenswrapper[4827]: I0131 03:48:18.109930 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-2shng" Jan 31 03:48:18 crc kubenswrapper[4827]: E0131 03:48:18.110077 4827 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 31 03:48:18 crc kubenswrapper[4827]: I0131 03:48:18.110087 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 31 03:48:18 crc kubenswrapper[4827]: E0131 03:48:18.110196 4827 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-2shng" podUID="cf80ec31-1f83-4ed6-84e3-055cf9c88bff" Jan 31 03:48:18 crc kubenswrapper[4827]: E0131 03:48:18.110318 4827 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 31 03:48:18 crc kubenswrapper[4827]: I0131 03:48:18.112215 4827 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-29 17:51:37.428077121 +0000 UTC Jan 31 03:48:18 crc kubenswrapper[4827]: I0131 03:48:18.132674 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:08Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:48:18Z is after 2025-08-24T17:21:41Z" Jan 31 03:48:18 crc kubenswrapper[4827]: I0131 03:48:18.153671 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:08Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:48:18Z is after 2025-08-24T17:21:41Z" Jan 31 03:48:18 crc kubenswrapper[4827]: I0131 03:48:18.170446 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-w7v8l" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c10db775-d306-4f15-97dd-b1dfed7c89e5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cbc16c20edbd99491eea6e34116e6014fd587c40800c87bff9643e9c0cb6e185\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:47:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4ch9g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T03:47:09Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-w7v8l\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:48:18Z is after 2025-08-24T17:21:41Z" Jan 31 03:48:18 crc kubenswrapper[4827]: I0131 03:48:18.187719 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-l5njv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f81e67c9-6345-48e1-91e3-794421cb3fdd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e761c28f2d53c1a50d6ce7467012664d9d9d717222b6fc648c642ad97057113f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d7
93426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:47:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rblt7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b967e5b38d5efa02b8cda05cf383b6aa9f24b819d055b86d12afe5290d6d5847\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:47:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rblt7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T03:47:22Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-l5njv\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:48:18Z is after 2025-08-24T17:21:41Z" Jan 31 03:48:18 crc kubenswrapper[4827]: I0131 03:48:18.196818 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:48:18 crc kubenswrapper[4827]: I0131 03:48:18.196860 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:48:18 crc kubenswrapper[4827]: I0131 03:48:18.196877 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:48:18 crc kubenswrapper[4827]: I0131 03:48:18.196937 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:48:18 crc kubenswrapper[4827]: I0131 03:48:18.196954 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:48:18Z","lastTransitionTime":"2026-01-31T03:48:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 03:48:18 crc kubenswrapper[4827]: I0131 03:48:18.204645 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-2shng" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cf80ec31-1f83-4ed6-84e3-055cf9c88bff\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:24Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:24Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hjlkt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hjlkt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T03:47:24Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-2shng\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:48:18Z is after 2025-08-24T17:21:41Z" Jan 31 03:48:18 crc 
kubenswrapper[4827]: I0131 03:48:18.226774 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"64294dab-2843-4893-8b19-59f3ae404e02\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:46:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:46:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:46:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ff49646239dea28804978f1b3b721dcc9454572ff2301d0c92e146dacb2a3e5a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:46:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fb9a8fcde75a532b1906ceea4704ddfecbffeb17456377272e5792f200ee67d5\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:46:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://73a7c7196fc77ac970ea3114b593db0c9bef943c760526721ab5492975884237\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:46:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ce197f7aa8a7bebdecdbd760e9029786331bd9497f724b0a57f57278a5b1b04c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17
ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:46:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T03:46:48Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:48:18Z is after 2025-08-24T17:21:41Z" Jan 31 03:48:18 crc kubenswrapper[4827]: I0131 03:48:18.250037 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:08Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:48:18Z is after 2025-08-24T17:21:41Z" Jan 31 03:48:18 crc kubenswrapper[4827]: I0131 03:48:18.268008 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a0fbdd57c20e0d5bda95b44f22f117b07306ef48d75d372d958559048474af9b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:47:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-31T03:48:18Z is after 2025-08-24T17:21:41Z" Jan 31 03:48:18 crc kubenswrapper[4827]: I0131 03:48:18.286857 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:11Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:11Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://955a3f09f24522656e31cfabe08613a5129fe3d1d0693756697e3ef0469e062a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:47:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\
\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:48:18Z is after 2025-08-24T17:21:41Z" Jan 31 03:48:18 crc kubenswrapper[4827]: I0131 03:48:18.300035 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:48:18 crc kubenswrapper[4827]: I0131 03:48:18.300165 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:48:18 crc kubenswrapper[4827]: I0131 03:48:18.300199 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:48:18 crc kubenswrapper[4827]: I0131 03:48:18.300230 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:48:18 crc kubenswrapper[4827]: I0131 03:48:18.300253 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:48:18Z","lastTransitionTime":"2026-01-31T03:48:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 03:48:18 crc kubenswrapper[4827]: I0131 03:48:18.308346 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://65119775a6ab2e8f81a0443d8093e0570c2c49a1b081b7b694780d07eba7ef4d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:47:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0fdf29d063e0940532d696d183771e56ce37b7b75f82548bf7ff5165ab20ccff\\\",\\\
"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:47:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:48:18Z is after 2025-08-24T17:21:41Z" Jan 31 03:48:18 crc kubenswrapper[4827]: I0131 03:48:18.339977 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hj2zw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"da9e7773-a24b-4e8d-b479-97e2594db0d4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:09Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:09Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bb9ced0bc1beedf1c5779410e90fa61227f5df3de7fa950ab69e1ef44b4a4918\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:47:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mt4z5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2485bd5d17713b9b9184c6b56b1dfda19a229476eabd904d3eda350c30892753\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:47:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mt4z5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://70200918240e4bc9d8f7fcb1d95be3721faa9394058a1a9671da44d9b7a915e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:47:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mt4z5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://32e69930c4476bd3a2f767f012fb4a2e00de7d46da936c6f0d31029994108e48\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:47:12Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mt4z5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ae8d3d61d2579c0afddd6c9728935c463fec8760a9fc830473c905de5f40c44\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:47:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mt4z5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ed1f4882b03bd25a6073ed3ddeb73d5a0da61c8a8af3ebfab5c90a83ce6af80f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:47:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mt4z5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c4e52d1c48e767ecfcf769c9cb24d896cb851fb3a791c2f80e30c3746e971ad4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c4e52d1c48e767ecfcf769c9cb24d896cb851fb3a791c2f80e30c3746e971ad4\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-31T03:48:09Z\\\",\\\"message\\\":\\\"ns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:960d98b2-dc64-4e93-a4b6-9b19847af71e}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {c02bd945-d57b-49ff-9cd3-202ed3574b26}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:} {Op:update Table:NAT 
Row:map[external_ip:192.168.126.11 logical_ip:10.217.0.59 options:{GoMap:map[stateless:false]} type:snat] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {dce28c51-c9f1-478b-97c8-7e209d6e7cbe}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:} {Op:mutate Table:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:nat Mutator:insert Value:{GoSet:[{GoUUID:dce28c51-c9f1-478b-97c8-7e209d6e7cbe}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {e3c4661a-36a6-47f0-a6c0-a4ee741f2224}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0131 03:48:09.179127 6900 model_client.go:398] Mutate operations generated as: [{Op:mutate Table:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:nat Mutator:insert Value:{GoSet:[{GoUUID:73135118-cf1b-4568-bd31-2f50308bf69d}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {e3c4661a-36a6-47f0-a6c0-a4ee741f2224}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0131 03:48:09.179620 6900 model_client.go:382] Update operations generated as: [{Op:update Table\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T03:48:08Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=ovnkube-controller 
pod=ovnkube-node-hj2zw_openshift-ovn-kubernetes(da9e7773-a24b-4e8d-b479-97e2594db0d4)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mt4z5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2a0cc10519a103d1d49acd3e239636f7ac9b3a9daa533ce1f485b26ccfffad3d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:47:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mt4z5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://96ea399662528dfc8c4bac4c2b710685cb5897e67739704548ca01353d72672c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://96ea399662528dfc8c
4bac4c2b710685cb5897e67739704548ca01353d72672c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T03:47:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T03:47:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mt4z5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T03:47:09Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-hj2zw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:48:18Z is after 2025-08-24T17:21:41Z" Jan 31 03:48:18 crc kubenswrapper[4827]: I0131 03:48:18.356296 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-cl9c5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"77746487-d08f-4da6-82a3-bc7d8845841a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5098b5f6a44c89953722806e420dc66eaad66f11faf1c5b526bfc8ebf87364d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:47:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5xwfq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T03:47:11Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-cl9c5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:48:18Z is after 2025-08-24T17:21:41Z" Jan 31 03:48:18 crc kubenswrapper[4827]: I0131 03:48:18.375049 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4273172d-ec24-4540-85cb-efc58aed3421\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:46:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:46:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:46:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4146d39a1a819389ddaee9e11df66783f6b1b98ee5fb2a244d8ba9ff2ecc88f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791f
d90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:46:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6b86ba80bd25344b29df1636aa3d8cc4cecb0e5f9c0005b4896d3ee86cd80626\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:46:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8ae08b52cf4a0a6cc4589bbf0f0fbc8527ebac3455b90185928f6fa0b6c52874\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:46:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\"
:\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://125b31c980495c37837eedbbbd38b795fd6e568a429da5c2914799306e7b3a46\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cfc1fe8805b7b20bc4e718ed74d4c2f265e69ed3fdc328c0f1da81450bb78bde\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-31T03:47:02Z\\\",\\\"message\\\":\\\"W0131 03:46:51.337568 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0131 03:46:51.338014 1 crypto.go:601] Generating new CA for check-endpoints-signer@1769831211 cert, and key in /tmp/serving-cert-4099090867/serving-signer.crt, /tmp/serving-cert-4099090867/serving-signer.key\\\\nI0131 03:46:51.617433 1 observer_polling.go:159] Starting file observer\\\\nW0131 03:46:51.621205 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0131 03:46:51.621653 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0131 03:46:51.625463 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-4099090867/tls.crt::/tmp/serving-cert-4099090867/tls.key\\\\\\\"\\\\nF0131 03:47:02.085590 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T03:46:51Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:47:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e3b2757893ba910c36404f09734ea47eee3ac5a8cfcc4c06ee4ed2ada48937b1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:46:51Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f40eed75b4eac164f95a4e1a719fe3e1da60abf85d533da313ceeb6f4fb07efd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f40eed75b4eac164f95a4e1a719fe3e1da
60abf85d533da313ceeb6f4fb07efd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T03:46:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T03:46:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T03:46:48Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:48:18Z is after 2025-08-24T17:21:41Z" Jan 31 03:48:18 crc kubenswrapper[4827]: I0131 03:48:18.394612 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"350d4093-0635-4787-bdf3-9b099c5a772b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:46:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:46:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:46:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://19a6d75598a153bb0745bb06aa53961f13fa3ff1fd4addb84b3ecdc66d07ee4e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:46:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3d0070a036855a06f83caec5b6ee636f1a7a1e7f3246b779e0abbcf6aebf259a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:46:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7b5f30db796321372897f564fda3a031f0714999315931e2a77b21fde674f4fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:46:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6f17307ab85479e60722f1f8484fb415c3609584ed9a20ebf17677a86558dc07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://6f17307ab85479e60722f1f8484fb415c3609584ed9a20ebf17677a86558dc07\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T03:46:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T03:46:49Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T03:46:48Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:48:18Z is after 2025-08-24T17:21:41Z" Jan 31 03:48:18 crc kubenswrapper[4827]: I0131 03:48:18.403515 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:48:18 crc kubenswrapper[4827]: I0131 03:48:18.403570 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:48:18 crc kubenswrapper[4827]: I0131 03:48:18.403589 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:48:18 crc kubenswrapper[4827]: I0131 03:48:18.403615 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:48:18 crc kubenswrapper[4827]: I0131 03:48:18.403634 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:48:18Z","lastTransitionTime":"2026-01-31T03:48:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 03:48:18 crc kubenswrapper[4827]: I0131 03:48:18.412382 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d2ec0c36-2ace-4167-bc42-99fbd4ab34fd\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:46:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:46:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:46:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:46:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:46:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d529d76d3e5d8974e4c40a38910f933f23fcac20fc41884ff762a482e5d2ea4f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:46:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11
\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e35cf3948fcca6a410837ad274e5f47621a1287617f9f126b0638fc41634b722\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e35cf3948fcca6a410837ad274e5f47621a1287617f9f126b0638fc41634b722\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T03:46:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T03:46:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T03:46:48Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:48:18Z is after 2025-08-24T17:21:41Z" Jan 31 03:48:18 crc kubenswrapper[4827]: I0131 03:48:18.430856 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-jxh94" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e63dbb73-e1a2-4796-83c5-2a88e55566b5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://65bf7844256848a99420ff2a52724a942bd9ee07e0e587b818429ac544865232\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:47:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-94gkn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://00b8856a5712a7470f8bc58fd07644c58113fde5
a273faa89586f36381942c50\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:47:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-94gkn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T03:47:09Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-jxh94\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:48:18Z is after 2025-08-24T17:21:41Z" Jan 31 03:48:18 crc kubenswrapper[4827]: I0131 03:48:18.454099 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-q9q8q" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a696063c-4553-4032-8038-9900f09d4031\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b6b5307b128e69f814b56eb826859eb4b02d6645d1665c1f5d205492590135ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3b899def12de10c70999d453a62d09ed0eec7e6f059a55a7e3f8e4b3ef248205\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-31T03:47:56Z\\\",\\\"message\\\":\\\"2026-01-31T03:47:11+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_ed6be960-9431-4aca-83c0-58d0cb3d0931\\\\n2026-01-31T03:47:11+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_ed6be960-9431-4aca-83c0-58d0cb3d0931 to /host/opt/cni/bin/\\\\n2026-01-31T03:47:11Z [verbose] multus-daemon started\\\\n2026-01-31T03:47:11Z [verbose] 
Readiness Indicator file check\\\\n2026-01-31T03:47:56Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T03:47:10Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:47:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/
var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ckwx4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T03:47:09Z\\\"}}\" for pod \"openshift-multus\"/\"multus-q9q8q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:48:18Z is after 2025-08-24T17:21:41Z" Jan 31 03:48:18 crc kubenswrapper[4827]: I0131 03:48:18.478489 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-gjc5t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b5dbff7a-4ed0-4c17-bd01-1888199225b3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cf2
e9dfa02cc1d2d5e655654c4d731363a32dec4d67423160996c24260e04a79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:47:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fs4bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c615ec47f98d56124c11d46f6a3875b5718974234f752615411e14b5d8913dcc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c615ec47f98d56124c11d46f6a3875b5718974234f752615411e14b5d8913dcc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T03:47:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T03:47:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly
\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fs4bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c9d2a6506fd6b67003ff94a906cb0c74a7eefa9f389d5fe3929fcb743b877f8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c9d2a6506fd6b67003ff94a906cb0c74a7eefa9f389d5fe3929fcb743b877f8a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T03:47:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T03:47:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fs4bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f5b9fd1e8570701b512508eb10490fa6c4e5fdc63b7fdb0c4810896a46be65a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203b
b2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f5b9fd1e8570701b512508eb10490fa6c4e5fdc63b7fdb0c4810896a46be65a7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T03:47:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T03:47:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fs4bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b944c8ab4e6079ae5a7a5c52f9b336fc71aa3273ed087499ffa1cc5396053660\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b944c8ab4e6079ae5a7a5c52f9b336fc71aa3273ed087499ffa1cc5396053660\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T03:47:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T03:47:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-rel
ease\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fs4bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aac07d8c3c396081f637968cd1c64525967cd198273719be0af7bee01d35fb39\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aac07d8c3c396081f637968cd1c64525967cd198273719be0af7bee01d35fb39\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T03:47:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T03:47:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fs4bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://76e43eb316b4b459d9232b72620b1892bf11a896f007d5c879381d8349aa1178\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-c
ni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://76e43eb316b4b459d9232b72620b1892bf11a896f007d5c879381d8349aa1178\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T03:47:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T03:47:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fs4bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T03:47:09Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-gjc5t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:48:18Z is after 2025-08-24T17:21:41Z" Jan 31 03:48:18 crc kubenswrapper[4827]: I0131 03:48:18.507223 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:48:18 crc kubenswrapper[4827]: I0131 03:48:18.507315 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:48:18 crc kubenswrapper[4827]: I0131 03:48:18.507334 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:48:18 crc kubenswrapper[4827]: I0131 03:48:18.507361 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:48:18 crc kubenswrapper[4827]: 
I0131 03:48:18.507378 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:48:18Z","lastTransitionTime":"2026-01-31T03:48:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 03:48:18 crc kubenswrapper[4827]: I0131 03:48:18.613450 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:48:18 crc kubenswrapper[4827]: I0131 03:48:18.613856 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:48:18 crc kubenswrapper[4827]: I0131 03:48:18.613869 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:48:18 crc kubenswrapper[4827]: I0131 03:48:18.613906 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:48:18 crc kubenswrapper[4827]: I0131 03:48:18.614574 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:48:18Z","lastTransitionTime":"2026-01-31T03:48:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 03:48:18 crc kubenswrapper[4827]: I0131 03:48:18.717765 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:48:18 crc kubenswrapper[4827]: I0131 03:48:18.717817 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:48:18 crc kubenswrapper[4827]: I0131 03:48:18.717829 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:48:18 crc kubenswrapper[4827]: I0131 03:48:18.717847 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:48:18 crc kubenswrapper[4827]: I0131 03:48:18.717893 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:48:18Z","lastTransitionTime":"2026-01-31T03:48:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 03:48:18 crc kubenswrapper[4827]: I0131 03:48:18.820127 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:48:18 crc kubenswrapper[4827]: I0131 03:48:18.820157 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:48:18 crc kubenswrapper[4827]: I0131 03:48:18.820166 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:48:18 crc kubenswrapper[4827]: I0131 03:48:18.820180 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:48:18 crc kubenswrapper[4827]: I0131 03:48:18.820188 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:48:18Z","lastTransitionTime":"2026-01-31T03:48:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 03:48:18 crc kubenswrapper[4827]: I0131 03:48:18.923849 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:48:18 crc kubenswrapper[4827]: I0131 03:48:18.923901 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:48:18 crc kubenswrapper[4827]: I0131 03:48:18.923917 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:48:18 crc kubenswrapper[4827]: I0131 03:48:18.923933 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:48:18 crc kubenswrapper[4827]: I0131 03:48:18.923944 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:48:18Z","lastTransitionTime":"2026-01-31T03:48:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 03:48:19 crc kubenswrapper[4827]: I0131 03:48:19.027258 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:48:19 crc kubenswrapper[4827]: I0131 03:48:19.027296 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:48:19 crc kubenswrapper[4827]: I0131 03:48:19.027308 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:48:19 crc kubenswrapper[4827]: I0131 03:48:19.027324 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:48:19 crc kubenswrapper[4827]: I0131 03:48:19.027336 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:48:19Z","lastTransitionTime":"2026-01-31T03:48:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 03:48:19 crc kubenswrapper[4827]: I0131 03:48:19.109341 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 31 03:48:19 crc kubenswrapper[4827]: E0131 03:48:19.109468 4827 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 31 03:48:19 crc kubenswrapper[4827]: I0131 03:48:19.112696 4827 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-12 10:33:01.855930084 +0000 UTC Jan 31 03:48:19 crc kubenswrapper[4827]: I0131 03:48:19.130543 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:48:19 crc kubenswrapper[4827]: I0131 03:48:19.130583 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:48:19 crc kubenswrapper[4827]: I0131 03:48:19.130598 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:48:19 crc kubenswrapper[4827]: I0131 03:48:19.130616 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:48:19 crc kubenswrapper[4827]: I0131 03:48:19.130629 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:48:19Z","lastTransitionTime":"2026-01-31T03:48:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 03:48:19 crc kubenswrapper[4827]: I0131 03:48:19.234163 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:48:19 crc kubenswrapper[4827]: I0131 03:48:19.234210 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:48:19 crc kubenswrapper[4827]: I0131 03:48:19.234222 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:48:19 crc kubenswrapper[4827]: I0131 03:48:19.234240 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:48:19 crc kubenswrapper[4827]: I0131 03:48:19.234253 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:48:19Z","lastTransitionTime":"2026-01-31T03:48:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 03:48:19 crc kubenswrapper[4827]: I0131 03:48:19.336693 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:48:19 crc kubenswrapper[4827]: I0131 03:48:19.336741 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:48:19 crc kubenswrapper[4827]: I0131 03:48:19.336759 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:48:19 crc kubenswrapper[4827]: I0131 03:48:19.336782 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:48:19 crc kubenswrapper[4827]: I0131 03:48:19.336798 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:48:19Z","lastTransitionTime":"2026-01-31T03:48:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 03:48:19 crc kubenswrapper[4827]: I0131 03:48:19.438852 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:48:19 crc kubenswrapper[4827]: I0131 03:48:19.438952 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:48:19 crc kubenswrapper[4827]: I0131 03:48:19.438971 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:48:19 crc kubenswrapper[4827]: I0131 03:48:19.438999 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:48:19 crc kubenswrapper[4827]: I0131 03:48:19.439017 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:48:19Z","lastTransitionTime":"2026-01-31T03:48:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 03:48:19 crc kubenswrapper[4827]: I0131 03:48:19.541326 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:48:19 crc kubenswrapper[4827]: I0131 03:48:19.541387 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:48:19 crc kubenswrapper[4827]: I0131 03:48:19.541403 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:48:19 crc kubenswrapper[4827]: I0131 03:48:19.541428 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:48:19 crc kubenswrapper[4827]: I0131 03:48:19.541444 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:48:19Z","lastTransitionTime":"2026-01-31T03:48:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 03:48:19 crc kubenswrapper[4827]: I0131 03:48:19.644454 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:48:19 crc kubenswrapper[4827]: I0131 03:48:19.644520 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:48:19 crc kubenswrapper[4827]: I0131 03:48:19.644540 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:48:19 crc kubenswrapper[4827]: I0131 03:48:19.644566 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:48:19 crc kubenswrapper[4827]: I0131 03:48:19.644583 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:48:19Z","lastTransitionTime":"2026-01-31T03:48:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 03:48:19 crc kubenswrapper[4827]: I0131 03:48:19.747858 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:48:19 crc kubenswrapper[4827]: I0131 03:48:19.748378 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:48:19 crc kubenswrapper[4827]: I0131 03:48:19.748567 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:48:19 crc kubenswrapper[4827]: I0131 03:48:19.748742 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:48:19 crc kubenswrapper[4827]: I0131 03:48:19.748944 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:48:19Z","lastTransitionTime":"2026-01-31T03:48:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 03:48:19 crc kubenswrapper[4827]: I0131 03:48:19.851724 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:48:19 crc kubenswrapper[4827]: I0131 03:48:19.851800 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:48:19 crc kubenswrapper[4827]: I0131 03:48:19.851821 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:48:19 crc kubenswrapper[4827]: I0131 03:48:19.851845 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:48:19 crc kubenswrapper[4827]: I0131 03:48:19.851865 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:48:19Z","lastTransitionTime":"2026-01-31T03:48:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 03:48:19 crc kubenswrapper[4827]: I0131 03:48:19.954763 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:48:19 crc kubenswrapper[4827]: I0131 03:48:19.954825 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:48:19 crc kubenswrapper[4827]: I0131 03:48:19.954843 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:48:19 crc kubenswrapper[4827]: I0131 03:48:19.954868 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:48:19 crc kubenswrapper[4827]: I0131 03:48:19.954921 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:48:19Z","lastTransitionTime":"2026-01-31T03:48:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 03:48:20 crc kubenswrapper[4827]: I0131 03:48:20.057996 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:48:20 crc kubenswrapper[4827]: I0131 03:48:20.058079 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:48:20 crc kubenswrapper[4827]: I0131 03:48:20.058096 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:48:20 crc kubenswrapper[4827]: I0131 03:48:20.058120 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:48:20 crc kubenswrapper[4827]: I0131 03:48:20.058137 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:48:20Z","lastTransitionTime":"2026-01-31T03:48:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 03:48:20 crc kubenswrapper[4827]: I0131 03:48:20.109791 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 03:48:20 crc kubenswrapper[4827]: E0131 03:48:20.110087 4827 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 31 03:48:20 crc kubenswrapper[4827]: I0131 03:48:20.110184 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-2shng" Jan 31 03:48:20 crc kubenswrapper[4827]: I0131 03:48:20.110363 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 31 03:48:20 crc kubenswrapper[4827]: E0131 03:48:20.110595 4827 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 31 03:48:20 crc kubenswrapper[4827]: E0131 03:48:20.110390 4827 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-2shng" podUID="cf80ec31-1f83-4ed6-84e3-055cf9c88bff" Jan 31 03:48:20 crc kubenswrapper[4827]: I0131 03:48:20.113994 4827 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-19 12:44:54.880968434 +0000 UTC Jan 31 03:48:20 crc kubenswrapper[4827]: I0131 03:48:20.161130 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:48:20 crc kubenswrapper[4827]: I0131 03:48:20.161172 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:48:20 crc kubenswrapper[4827]: I0131 03:48:20.161187 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:48:20 crc kubenswrapper[4827]: I0131 03:48:20.161204 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:48:20 crc kubenswrapper[4827]: I0131 03:48:20.161219 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:48:20Z","lastTransitionTime":"2026-01-31T03:48:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 03:48:20 crc kubenswrapper[4827]: I0131 03:48:20.264387 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:48:20 crc kubenswrapper[4827]: I0131 03:48:20.264448 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:48:20 crc kubenswrapper[4827]: I0131 03:48:20.264465 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:48:20 crc kubenswrapper[4827]: I0131 03:48:20.264493 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:48:20 crc kubenswrapper[4827]: I0131 03:48:20.264550 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:48:20Z","lastTransitionTime":"2026-01-31T03:48:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 03:48:20 crc kubenswrapper[4827]: I0131 03:48:20.367207 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:48:20 crc kubenswrapper[4827]: I0131 03:48:20.367251 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:48:20 crc kubenswrapper[4827]: I0131 03:48:20.367270 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:48:20 crc kubenswrapper[4827]: I0131 03:48:20.367291 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:48:20 crc kubenswrapper[4827]: I0131 03:48:20.367308 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:48:20Z","lastTransitionTime":"2026-01-31T03:48:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 03:48:20 crc kubenswrapper[4827]: I0131 03:48:20.469965 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:48:20 crc kubenswrapper[4827]: I0131 03:48:20.470040 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:48:20 crc kubenswrapper[4827]: I0131 03:48:20.470065 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:48:20 crc kubenswrapper[4827]: I0131 03:48:20.470090 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:48:20 crc kubenswrapper[4827]: I0131 03:48:20.470111 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:48:20Z","lastTransitionTime":"2026-01-31T03:48:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 03:48:20 crc kubenswrapper[4827]: I0131 03:48:20.572503 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:48:20 crc kubenswrapper[4827]: I0131 03:48:20.572568 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:48:20 crc kubenswrapper[4827]: I0131 03:48:20.572589 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:48:20 crc kubenswrapper[4827]: I0131 03:48:20.572616 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:48:20 crc kubenswrapper[4827]: I0131 03:48:20.572636 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:48:20Z","lastTransitionTime":"2026-01-31T03:48:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 03:48:20 crc kubenswrapper[4827]: I0131 03:48:20.675958 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:48:20 crc kubenswrapper[4827]: I0131 03:48:20.676026 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:48:20 crc kubenswrapper[4827]: I0131 03:48:20.676043 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:48:20 crc kubenswrapper[4827]: I0131 03:48:20.676065 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:48:20 crc kubenswrapper[4827]: I0131 03:48:20.676081 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:48:20Z","lastTransitionTime":"2026-01-31T03:48:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 03:48:20 crc kubenswrapper[4827]: I0131 03:48:20.779157 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:48:20 crc kubenswrapper[4827]: I0131 03:48:20.779317 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:48:20 crc kubenswrapper[4827]: I0131 03:48:20.779344 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:48:20 crc kubenswrapper[4827]: I0131 03:48:20.779374 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:48:20 crc kubenswrapper[4827]: I0131 03:48:20.779395 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:48:20Z","lastTransitionTime":"2026-01-31T03:48:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 03:48:20 crc kubenswrapper[4827]: I0131 03:48:20.882819 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:48:20 crc kubenswrapper[4827]: I0131 03:48:20.882922 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:48:20 crc kubenswrapper[4827]: I0131 03:48:20.882942 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:48:20 crc kubenswrapper[4827]: I0131 03:48:20.882967 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:48:20 crc kubenswrapper[4827]: I0131 03:48:20.882987 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:48:20Z","lastTransitionTime":"2026-01-31T03:48:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 03:48:20 crc kubenswrapper[4827]: I0131 03:48:20.986266 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:48:20 crc kubenswrapper[4827]: I0131 03:48:20.986703 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:48:20 crc kubenswrapper[4827]: I0131 03:48:20.986957 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:48:20 crc kubenswrapper[4827]: I0131 03:48:20.987194 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:48:20 crc kubenswrapper[4827]: I0131 03:48:20.987403 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:48:20Z","lastTransitionTime":"2026-01-31T03:48:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 03:48:21 crc kubenswrapper[4827]: I0131 03:48:21.093339 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:48:21 crc kubenswrapper[4827]: I0131 03:48:21.093410 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:48:21 crc kubenswrapper[4827]: I0131 03:48:21.093428 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:48:21 crc kubenswrapper[4827]: I0131 03:48:21.093457 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:48:21 crc kubenswrapper[4827]: I0131 03:48:21.093476 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:48:21Z","lastTransitionTime":"2026-01-31T03:48:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 03:48:21 crc kubenswrapper[4827]: I0131 03:48:21.109727 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 31 03:48:21 crc kubenswrapper[4827]: E0131 03:48:21.109945 4827 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 31 03:48:21 crc kubenswrapper[4827]: I0131 03:48:21.114965 4827 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-23 13:49:00.260381859 +0000 UTC Jan 31 03:48:21 crc kubenswrapper[4827]: I0131 03:48:21.196430 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:48:21 crc kubenswrapper[4827]: I0131 03:48:21.196503 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:48:21 crc kubenswrapper[4827]: I0131 03:48:21.196529 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:48:21 crc kubenswrapper[4827]: I0131 03:48:21.196563 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:48:21 crc kubenswrapper[4827]: I0131 03:48:21.196592 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:48:21Z","lastTransitionTime":"2026-01-31T03:48:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 03:48:21 crc kubenswrapper[4827]: I0131 03:48:21.299862 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:48:21 crc kubenswrapper[4827]: I0131 03:48:21.299968 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:48:21 crc kubenswrapper[4827]: I0131 03:48:21.299989 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:48:21 crc kubenswrapper[4827]: I0131 03:48:21.300011 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:48:21 crc kubenswrapper[4827]: I0131 03:48:21.300031 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:48:21Z","lastTransitionTime":"2026-01-31T03:48:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 03:48:21 crc kubenswrapper[4827]: I0131 03:48:21.403459 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:48:21 crc kubenswrapper[4827]: I0131 03:48:21.404068 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:48:21 crc kubenswrapper[4827]: I0131 03:48:21.404246 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:48:21 crc kubenswrapper[4827]: I0131 03:48:21.404428 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:48:21 crc kubenswrapper[4827]: I0131 03:48:21.404559 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:48:21Z","lastTransitionTime":"2026-01-31T03:48:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 03:48:21 crc kubenswrapper[4827]: I0131 03:48:21.508937 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:48:21 crc kubenswrapper[4827]: I0131 03:48:21.509425 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:48:21 crc kubenswrapper[4827]: I0131 03:48:21.509565 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:48:21 crc kubenswrapper[4827]: I0131 03:48:21.509723 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:48:21 crc kubenswrapper[4827]: I0131 03:48:21.509925 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:48:21Z","lastTransitionTime":"2026-01-31T03:48:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 03:48:21 crc kubenswrapper[4827]: I0131 03:48:21.613359 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:48:21 crc kubenswrapper[4827]: I0131 03:48:21.613428 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:48:21 crc kubenswrapper[4827]: I0131 03:48:21.613447 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:48:21 crc kubenswrapper[4827]: I0131 03:48:21.613473 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:48:21 crc kubenswrapper[4827]: I0131 03:48:21.613492 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:48:21Z","lastTransitionTime":"2026-01-31T03:48:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 03:48:21 crc kubenswrapper[4827]: I0131 03:48:21.716228 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:48:21 crc kubenswrapper[4827]: I0131 03:48:21.716313 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:48:21 crc kubenswrapper[4827]: I0131 03:48:21.716332 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:48:21 crc kubenswrapper[4827]: I0131 03:48:21.716786 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:48:21 crc kubenswrapper[4827]: I0131 03:48:21.716838 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:48:21Z","lastTransitionTime":"2026-01-31T03:48:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 03:48:21 crc kubenswrapper[4827]: I0131 03:48:21.820011 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:48:21 crc kubenswrapper[4827]: I0131 03:48:21.820076 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:48:21 crc kubenswrapper[4827]: I0131 03:48:21.820092 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:48:21 crc kubenswrapper[4827]: I0131 03:48:21.820116 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:48:21 crc kubenswrapper[4827]: I0131 03:48:21.820133 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:48:21Z","lastTransitionTime":"2026-01-31T03:48:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 03:48:21 crc kubenswrapper[4827]: I0131 03:48:21.923654 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:48:21 crc kubenswrapper[4827]: I0131 03:48:21.923708 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:48:21 crc kubenswrapper[4827]: I0131 03:48:21.923726 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:48:21 crc kubenswrapper[4827]: I0131 03:48:21.923751 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:48:21 crc kubenswrapper[4827]: I0131 03:48:21.923771 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:48:21Z","lastTransitionTime":"2026-01-31T03:48:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 03:48:22 crc kubenswrapper[4827]: I0131 03:48:22.026687 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:48:22 crc kubenswrapper[4827]: I0131 03:48:22.026773 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:48:22 crc kubenswrapper[4827]: I0131 03:48:22.026799 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:48:22 crc kubenswrapper[4827]: I0131 03:48:22.026827 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:48:22 crc kubenswrapper[4827]: I0131 03:48:22.026852 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:48:22Z","lastTransitionTime":"2026-01-31T03:48:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 03:48:22 crc kubenswrapper[4827]: I0131 03:48:22.109432 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-2shng" Jan 31 03:48:22 crc kubenswrapper[4827]: I0131 03:48:22.109525 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 31 03:48:22 crc kubenswrapper[4827]: E0131 03:48:22.109560 4827 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-2shng" podUID="cf80ec31-1f83-4ed6-84e3-055cf9c88bff" Jan 31 03:48:22 crc kubenswrapper[4827]: E0131 03:48:22.109687 4827 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 31 03:48:22 crc kubenswrapper[4827]: I0131 03:48:22.110198 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 03:48:22 crc kubenswrapper[4827]: E0131 03:48:22.110322 4827 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 31 03:48:22 crc kubenswrapper[4827]: I0131 03:48:22.115835 4827 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-29 16:28:30.81656998 +0000 UTC Jan 31 03:48:22 crc kubenswrapper[4827]: I0131 03:48:22.129692 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:48:22 crc kubenswrapper[4827]: I0131 03:48:22.129728 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:48:22 crc kubenswrapper[4827]: I0131 03:48:22.129739 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:48:22 crc kubenswrapper[4827]: I0131 03:48:22.129754 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:48:22 crc kubenswrapper[4827]: I0131 03:48:22.129767 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:48:22Z","lastTransitionTime":"2026-01-31T03:48:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 03:48:22 crc kubenswrapper[4827]: I0131 03:48:22.130631 4827 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd/etcd-crc"] Jan 31 03:48:22 crc kubenswrapper[4827]: I0131 03:48:22.233345 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:48:22 crc kubenswrapper[4827]: I0131 03:48:22.233408 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:48:22 crc kubenswrapper[4827]: I0131 03:48:22.233428 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:48:22 crc kubenswrapper[4827]: I0131 03:48:22.233454 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:48:22 crc kubenswrapper[4827]: I0131 03:48:22.233471 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:48:22Z","lastTransitionTime":"2026-01-31T03:48:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 03:48:22 crc kubenswrapper[4827]: I0131 03:48:22.336376 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:48:22 crc kubenswrapper[4827]: I0131 03:48:22.336446 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:48:22 crc kubenswrapper[4827]: I0131 03:48:22.336464 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:48:22 crc kubenswrapper[4827]: I0131 03:48:22.336494 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:48:22 crc kubenswrapper[4827]: I0131 03:48:22.336516 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:48:22Z","lastTransitionTime":"2026-01-31T03:48:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 03:48:22 crc kubenswrapper[4827]: I0131 03:48:22.439223 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:48:22 crc kubenswrapper[4827]: I0131 03:48:22.439320 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:48:22 crc kubenswrapper[4827]: I0131 03:48:22.439339 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:48:22 crc kubenswrapper[4827]: I0131 03:48:22.439362 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:48:22 crc kubenswrapper[4827]: I0131 03:48:22.439416 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:48:22Z","lastTransitionTime":"2026-01-31T03:48:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 03:48:22 crc kubenswrapper[4827]: I0131 03:48:22.542552 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:48:22 crc kubenswrapper[4827]: I0131 03:48:22.542622 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:48:22 crc kubenswrapper[4827]: I0131 03:48:22.542650 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:48:22 crc kubenswrapper[4827]: I0131 03:48:22.542673 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:48:22 crc kubenswrapper[4827]: I0131 03:48:22.542690 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:48:22Z","lastTransitionTime":"2026-01-31T03:48:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 03:48:22 crc kubenswrapper[4827]: I0131 03:48:22.645923 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:48:22 crc kubenswrapper[4827]: I0131 03:48:22.646010 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:48:22 crc kubenswrapper[4827]: I0131 03:48:22.646032 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:48:22 crc kubenswrapper[4827]: I0131 03:48:22.646065 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:48:22 crc kubenswrapper[4827]: I0131 03:48:22.646090 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:48:22Z","lastTransitionTime":"2026-01-31T03:48:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 03:48:22 crc kubenswrapper[4827]: I0131 03:48:22.751290 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:48:22 crc kubenswrapper[4827]: I0131 03:48:22.751364 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:48:22 crc kubenswrapper[4827]: I0131 03:48:22.751387 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:48:22 crc kubenswrapper[4827]: I0131 03:48:22.751421 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:48:22 crc kubenswrapper[4827]: I0131 03:48:22.751440 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:48:22Z","lastTransitionTime":"2026-01-31T03:48:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 03:48:22 crc kubenswrapper[4827]: I0131 03:48:22.853967 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:48:22 crc kubenswrapper[4827]: I0131 03:48:22.854038 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:48:22 crc kubenswrapper[4827]: I0131 03:48:22.854055 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:48:22 crc kubenswrapper[4827]: I0131 03:48:22.854080 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:48:22 crc kubenswrapper[4827]: I0131 03:48:22.854099 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:48:22Z","lastTransitionTime":"2026-01-31T03:48:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 03:48:22 crc kubenswrapper[4827]: I0131 03:48:22.957717 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:48:22 crc kubenswrapper[4827]: I0131 03:48:22.957815 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:48:22 crc kubenswrapper[4827]: I0131 03:48:22.957844 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:48:22 crc kubenswrapper[4827]: I0131 03:48:22.957916 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:48:22 crc kubenswrapper[4827]: I0131 03:48:22.957942 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:48:22Z","lastTransitionTime":"2026-01-31T03:48:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 03:48:23 crc kubenswrapper[4827]: I0131 03:48:23.061874 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:48:23 crc kubenswrapper[4827]: I0131 03:48:23.061987 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:48:23 crc kubenswrapper[4827]: I0131 03:48:23.062008 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:48:23 crc kubenswrapper[4827]: I0131 03:48:23.062034 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:48:23 crc kubenswrapper[4827]: I0131 03:48:23.062056 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:48:23Z","lastTransitionTime":"2026-01-31T03:48:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 03:48:23 crc kubenswrapper[4827]: I0131 03:48:23.109299 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 31 03:48:23 crc kubenswrapper[4827]: E0131 03:48:23.109730 4827 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 31 03:48:23 crc kubenswrapper[4827]: I0131 03:48:23.116503 4827 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-23 14:55:56.394164137 +0000 UTC Jan 31 03:48:23 crc kubenswrapper[4827]: I0131 03:48:23.165554 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:48:23 crc kubenswrapper[4827]: I0131 03:48:23.165598 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:48:23 crc kubenswrapper[4827]: I0131 03:48:23.165614 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:48:23 crc kubenswrapper[4827]: I0131 03:48:23.165637 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:48:23 crc kubenswrapper[4827]: I0131 03:48:23.165655 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:48:23Z","lastTransitionTime":"2026-01-31T03:48:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 03:48:23 crc kubenswrapper[4827]: I0131 03:48:23.268320 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:48:23 crc kubenswrapper[4827]: I0131 03:48:23.268379 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:48:23 crc kubenswrapper[4827]: I0131 03:48:23.268396 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:48:23 crc kubenswrapper[4827]: I0131 03:48:23.268420 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:48:23 crc kubenswrapper[4827]: I0131 03:48:23.268438 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:48:23Z","lastTransitionTime":"2026-01-31T03:48:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 03:48:23 crc kubenswrapper[4827]: I0131 03:48:23.370382 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:48:23 crc kubenswrapper[4827]: I0131 03:48:23.370440 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:48:23 crc kubenswrapper[4827]: I0131 03:48:23.370451 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:48:23 crc kubenswrapper[4827]: I0131 03:48:23.370468 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:48:23 crc kubenswrapper[4827]: I0131 03:48:23.370479 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:48:23Z","lastTransitionTime":"2026-01-31T03:48:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 03:48:23 crc kubenswrapper[4827]: I0131 03:48:23.473233 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:48:23 crc kubenswrapper[4827]: I0131 03:48:23.473305 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:48:23 crc kubenswrapper[4827]: I0131 03:48:23.473325 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:48:23 crc kubenswrapper[4827]: I0131 03:48:23.473354 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:48:23 crc kubenswrapper[4827]: I0131 03:48:23.473375 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:48:23Z","lastTransitionTime":"2026-01-31T03:48:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 03:48:23 crc kubenswrapper[4827]: I0131 03:48:23.576599 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:48:23 crc kubenswrapper[4827]: I0131 03:48:23.576660 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:48:23 crc kubenswrapper[4827]: I0131 03:48:23.576675 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:48:23 crc kubenswrapper[4827]: I0131 03:48:23.576698 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:48:23 crc kubenswrapper[4827]: I0131 03:48:23.576717 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:48:23Z","lastTransitionTime":"2026-01-31T03:48:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 03:48:23 crc kubenswrapper[4827]: I0131 03:48:23.680048 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:48:23 crc kubenswrapper[4827]: I0131 03:48:23.680103 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:48:23 crc kubenswrapper[4827]: I0131 03:48:23.680119 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:48:23 crc kubenswrapper[4827]: I0131 03:48:23.680141 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:48:23 crc kubenswrapper[4827]: I0131 03:48:23.680159 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:48:23Z","lastTransitionTime":"2026-01-31T03:48:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 03:48:23 crc kubenswrapper[4827]: I0131 03:48:23.783763 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:48:23 crc kubenswrapper[4827]: I0131 03:48:23.783821 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:48:23 crc kubenswrapper[4827]: I0131 03:48:23.783840 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:48:23 crc kubenswrapper[4827]: I0131 03:48:23.783864 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:48:23 crc kubenswrapper[4827]: I0131 03:48:23.783922 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:48:23Z","lastTransitionTime":"2026-01-31T03:48:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 03:48:23 crc kubenswrapper[4827]: I0131 03:48:23.886289 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:48:23 crc kubenswrapper[4827]: I0131 03:48:23.886348 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:48:23 crc kubenswrapper[4827]: I0131 03:48:23.886365 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:48:23 crc kubenswrapper[4827]: I0131 03:48:23.886391 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:48:23 crc kubenswrapper[4827]: I0131 03:48:23.886408 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:48:23Z","lastTransitionTime":"2026-01-31T03:48:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 03:48:23 crc kubenswrapper[4827]: I0131 03:48:23.988987 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:48:23 crc kubenswrapper[4827]: I0131 03:48:23.989037 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:48:23 crc kubenswrapper[4827]: I0131 03:48:23.989054 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:48:23 crc kubenswrapper[4827]: I0131 03:48:23.989080 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:48:23 crc kubenswrapper[4827]: I0131 03:48:23.989097 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:48:23Z","lastTransitionTime":"2026-01-31T03:48:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 03:48:24 crc kubenswrapper[4827]: I0131 03:48:24.091919 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:48:24 crc kubenswrapper[4827]: I0131 03:48:24.091962 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:48:24 crc kubenswrapper[4827]: I0131 03:48:24.091978 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:48:24 crc kubenswrapper[4827]: I0131 03:48:24.092012 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:48:24 crc kubenswrapper[4827]: I0131 03:48:24.092029 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:48:24Z","lastTransitionTime":"2026-01-31T03:48:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 03:48:24 crc kubenswrapper[4827]: I0131 03:48:24.110222 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-2shng" Jan 31 03:48:24 crc kubenswrapper[4827]: I0131 03:48:24.110305 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 31 03:48:24 crc kubenswrapper[4827]: I0131 03:48:24.110345 4827 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 03:48:24 crc kubenswrapper[4827]: E0131 03:48:24.110494 4827 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-2shng" podUID="cf80ec31-1f83-4ed6-84e3-055cf9c88bff" Jan 31 03:48:24 crc kubenswrapper[4827]: E0131 03:48:24.110650 4827 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 31 03:48:24 crc kubenswrapper[4827]: E0131 03:48:24.111493 4827 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 31 03:48:24 crc kubenswrapper[4827]: I0131 03:48:24.112029 4827 scope.go:117] "RemoveContainer" containerID="c4e52d1c48e767ecfcf769c9cb24d896cb851fb3a791c2f80e30c3746e971ad4" Jan 31 03:48:24 crc kubenswrapper[4827]: E0131 03:48:24.112298 4827 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-hj2zw_openshift-ovn-kubernetes(da9e7773-a24b-4e8d-b479-97e2594db0d4)\"" pod="openshift-ovn-kubernetes/ovnkube-node-hj2zw" podUID="da9e7773-a24b-4e8d-b479-97e2594db0d4" Jan 31 03:48:24 crc kubenswrapper[4827]: I0131 03:48:24.117029 4827 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-06 09:13:50.020773321 +0000 UTC Jan 31 03:48:24 crc kubenswrapper[4827]: I0131 03:48:24.195343 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:48:24 crc kubenswrapper[4827]: I0131 03:48:24.195406 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:48:24 crc kubenswrapper[4827]: I0131 03:48:24.195424 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:48:24 crc kubenswrapper[4827]: I0131 03:48:24.195446 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:48:24 crc kubenswrapper[4827]: I0131 03:48:24.195463 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:48:24Z","lastTransitionTime":"2026-01-31T03:48:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 03:48:24 crc kubenswrapper[4827]: I0131 03:48:24.298843 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:48:24 crc kubenswrapper[4827]: I0131 03:48:24.298947 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:48:24 crc kubenswrapper[4827]: I0131 03:48:24.298969 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:48:24 crc kubenswrapper[4827]: I0131 03:48:24.298992 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:48:24 crc kubenswrapper[4827]: I0131 03:48:24.299009 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:48:24Z","lastTransitionTime":"2026-01-31T03:48:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 03:48:24 crc kubenswrapper[4827]: I0131 03:48:24.402989 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:48:24 crc kubenswrapper[4827]: I0131 03:48:24.403052 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:48:24 crc kubenswrapper[4827]: I0131 03:48:24.403077 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:48:24 crc kubenswrapper[4827]: I0131 03:48:24.403106 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:48:24 crc kubenswrapper[4827]: I0131 03:48:24.403127 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:48:24Z","lastTransitionTime":"2026-01-31T03:48:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 03:48:24 crc kubenswrapper[4827]: I0131 03:48:24.506604 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:48:24 crc kubenswrapper[4827]: I0131 03:48:24.506661 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:48:24 crc kubenswrapper[4827]: I0131 03:48:24.506683 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:48:24 crc kubenswrapper[4827]: I0131 03:48:24.506713 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:48:24 crc kubenswrapper[4827]: I0131 03:48:24.506733 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:48:24Z","lastTransitionTime":"2026-01-31T03:48:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 03:48:24 crc kubenswrapper[4827]: I0131 03:48:24.610377 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:48:24 crc kubenswrapper[4827]: I0131 03:48:24.610454 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:48:24 crc kubenswrapper[4827]: I0131 03:48:24.610481 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:48:24 crc kubenswrapper[4827]: I0131 03:48:24.610510 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:48:24 crc kubenswrapper[4827]: I0131 03:48:24.610528 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:48:24Z","lastTransitionTime":"2026-01-31T03:48:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 03:48:24 crc kubenswrapper[4827]: I0131 03:48:24.713005 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:48:24 crc kubenswrapper[4827]: I0131 03:48:24.713075 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:48:24 crc kubenswrapper[4827]: I0131 03:48:24.713095 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:48:24 crc kubenswrapper[4827]: I0131 03:48:24.713120 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:48:24 crc kubenswrapper[4827]: I0131 03:48:24.713141 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:48:24Z","lastTransitionTime":"2026-01-31T03:48:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 03:48:24 crc kubenswrapper[4827]: I0131 03:48:24.815371 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:48:24 crc kubenswrapper[4827]: I0131 03:48:24.815413 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:48:24 crc kubenswrapper[4827]: I0131 03:48:24.815424 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:48:24 crc kubenswrapper[4827]: I0131 03:48:24.815441 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:48:24 crc kubenswrapper[4827]: I0131 03:48:24.815455 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:48:24Z","lastTransitionTime":"2026-01-31T03:48:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 03:48:24 crc kubenswrapper[4827]: I0131 03:48:24.918490 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:48:24 crc kubenswrapper[4827]: I0131 03:48:24.918527 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:48:24 crc kubenswrapper[4827]: I0131 03:48:24.918538 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:48:24 crc kubenswrapper[4827]: I0131 03:48:24.918555 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:48:24 crc kubenswrapper[4827]: I0131 03:48:24.918565 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:48:24Z","lastTransitionTime":"2026-01-31T03:48:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 03:48:25 crc kubenswrapper[4827]: I0131 03:48:25.021735 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:48:25 crc kubenswrapper[4827]: I0131 03:48:25.021799 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:48:25 crc kubenswrapper[4827]: I0131 03:48:25.021820 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:48:25 crc kubenswrapper[4827]: I0131 03:48:25.021845 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:48:25 crc kubenswrapper[4827]: I0131 03:48:25.021864 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:48:25Z","lastTransitionTime":"2026-01-31T03:48:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 03:48:25 crc kubenswrapper[4827]: I0131 03:48:25.109529 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 31 03:48:25 crc kubenswrapper[4827]: E0131 03:48:25.109998 4827 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 31 03:48:25 crc kubenswrapper[4827]: I0131 03:48:25.117629 4827 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-11 00:07:53.578929242 +0000 UTC Jan 31 03:48:25 crc kubenswrapper[4827]: I0131 03:48:25.125026 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:48:25 crc kubenswrapper[4827]: I0131 03:48:25.125087 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:48:25 crc kubenswrapper[4827]: I0131 03:48:25.125106 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:48:25 crc kubenswrapper[4827]: I0131 03:48:25.125128 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:48:25 crc kubenswrapper[4827]: I0131 03:48:25.125146 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:48:25Z","lastTransitionTime":"2026-01-31T03:48:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 03:48:25 crc kubenswrapper[4827]: I0131 03:48:25.227952 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:48:25 crc kubenswrapper[4827]: I0131 03:48:25.228013 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:48:25 crc kubenswrapper[4827]: I0131 03:48:25.228036 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:48:25 crc kubenswrapper[4827]: I0131 03:48:25.228066 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:48:25 crc kubenswrapper[4827]: I0131 03:48:25.228092 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:48:25Z","lastTransitionTime":"2026-01-31T03:48:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 03:48:25 crc kubenswrapper[4827]: I0131 03:48:25.331404 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:48:25 crc kubenswrapper[4827]: I0131 03:48:25.331497 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:48:25 crc kubenswrapper[4827]: I0131 03:48:25.331515 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:48:25 crc kubenswrapper[4827]: I0131 03:48:25.331539 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:48:25 crc kubenswrapper[4827]: I0131 03:48:25.331555 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:48:25Z","lastTransitionTime":"2026-01-31T03:48:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 03:48:25 crc kubenswrapper[4827]: I0131 03:48:25.434546 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:48:25 crc kubenswrapper[4827]: I0131 03:48:25.434618 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:48:25 crc kubenswrapper[4827]: I0131 03:48:25.434636 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:48:25 crc kubenswrapper[4827]: I0131 03:48:25.434659 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:48:25 crc kubenswrapper[4827]: I0131 03:48:25.434677 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:48:25Z","lastTransitionTime":"2026-01-31T03:48:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 03:48:25 crc kubenswrapper[4827]: I0131 03:48:25.538456 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:48:25 crc kubenswrapper[4827]: I0131 03:48:25.538541 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:48:25 crc kubenswrapper[4827]: I0131 03:48:25.538561 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:48:25 crc kubenswrapper[4827]: I0131 03:48:25.538587 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:48:25 crc kubenswrapper[4827]: I0131 03:48:25.538606 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:48:25Z","lastTransitionTime":"2026-01-31T03:48:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 03:48:25 crc kubenswrapper[4827]: I0131 03:48:25.641568 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:48:25 crc kubenswrapper[4827]: I0131 03:48:25.641652 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:48:25 crc kubenswrapper[4827]: I0131 03:48:25.641678 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:48:25 crc kubenswrapper[4827]: I0131 03:48:25.641711 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:48:25 crc kubenswrapper[4827]: I0131 03:48:25.641738 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:48:25Z","lastTransitionTime":"2026-01-31T03:48:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 03:48:25 crc kubenswrapper[4827]: I0131 03:48:25.745381 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:48:25 crc kubenswrapper[4827]: I0131 03:48:25.745448 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:48:25 crc kubenswrapper[4827]: I0131 03:48:25.745466 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:48:25 crc kubenswrapper[4827]: I0131 03:48:25.745492 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:48:25 crc kubenswrapper[4827]: I0131 03:48:25.745509 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:48:25Z","lastTransitionTime":"2026-01-31T03:48:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 03:48:25 crc kubenswrapper[4827]: I0131 03:48:25.848744 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:48:25 crc kubenswrapper[4827]: I0131 03:48:25.848787 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:48:25 crc kubenswrapper[4827]: I0131 03:48:25.848800 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:48:25 crc kubenswrapper[4827]: I0131 03:48:25.848817 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:48:25 crc kubenswrapper[4827]: I0131 03:48:25.848830 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:48:25Z","lastTransitionTime":"2026-01-31T03:48:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 03:48:25 crc kubenswrapper[4827]: I0131 03:48:25.951918 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:48:25 crc kubenswrapper[4827]: I0131 03:48:25.952322 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:48:25 crc kubenswrapper[4827]: I0131 03:48:25.952513 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:48:25 crc kubenswrapper[4827]: I0131 03:48:25.952709 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:48:25 crc kubenswrapper[4827]: I0131 03:48:25.952947 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:48:25Z","lastTransitionTime":"2026-01-31T03:48:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 03:48:26 crc kubenswrapper[4827]: I0131 03:48:26.056386 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:48:26 crc kubenswrapper[4827]: I0131 03:48:26.056458 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:48:26 crc kubenswrapper[4827]: I0131 03:48:26.056483 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:48:26 crc kubenswrapper[4827]: I0131 03:48:26.056515 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:48:26 crc kubenswrapper[4827]: I0131 03:48:26.056538 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:48:26Z","lastTransitionTime":"2026-01-31T03:48:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 03:48:26 crc kubenswrapper[4827]: I0131 03:48:26.109560 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 31 03:48:26 crc kubenswrapper[4827]: I0131 03:48:26.109669 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 03:48:26 crc kubenswrapper[4827]: E0131 03:48:26.109780 4827 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 31 03:48:26 crc kubenswrapper[4827]: I0131 03:48:26.109874 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-2shng" Jan 31 03:48:26 crc kubenswrapper[4827]: E0131 03:48:26.110076 4827 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 31 03:48:26 crc kubenswrapper[4827]: E0131 03:48:26.110247 4827 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-2shng" podUID="cf80ec31-1f83-4ed6-84e3-055cf9c88bff" Jan 31 03:48:26 crc kubenswrapper[4827]: I0131 03:48:26.118133 4827 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-29 15:50:21.665775445 +0000 UTC Jan 31 03:48:26 crc kubenswrapper[4827]: I0131 03:48:26.159682 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:48:26 crc kubenswrapper[4827]: I0131 03:48:26.159738 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:48:26 crc kubenswrapper[4827]: I0131 03:48:26.159755 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:48:26 crc kubenswrapper[4827]: I0131 03:48:26.159777 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:48:26 crc kubenswrapper[4827]: I0131 03:48:26.159795 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:48:26Z","lastTransitionTime":"2026-01-31T03:48:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 03:48:26 crc kubenswrapper[4827]: I0131 03:48:26.262685 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:48:26 crc kubenswrapper[4827]: I0131 03:48:26.262747 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:48:26 crc kubenswrapper[4827]: I0131 03:48:26.262769 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:48:26 crc kubenswrapper[4827]: I0131 03:48:26.262798 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:48:26 crc kubenswrapper[4827]: I0131 03:48:26.262819 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:48:26Z","lastTransitionTime":"2026-01-31T03:48:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 03:48:26 crc kubenswrapper[4827]: I0131 03:48:26.365991 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:48:26 crc kubenswrapper[4827]: I0131 03:48:26.366058 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:48:26 crc kubenswrapper[4827]: I0131 03:48:26.366081 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:48:26 crc kubenswrapper[4827]: I0131 03:48:26.366108 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:48:26 crc kubenswrapper[4827]: I0131 03:48:26.366125 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:48:26Z","lastTransitionTime":"2026-01-31T03:48:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 03:48:26 crc kubenswrapper[4827]: I0131 03:48:26.379311 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:48:26 crc kubenswrapper[4827]: I0131 03:48:26.379451 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:48:26 crc kubenswrapper[4827]: I0131 03:48:26.379473 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:48:26 crc kubenswrapper[4827]: I0131 03:48:26.379498 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:48:26 crc kubenswrapper[4827]: I0131 03:48:26.379518 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:48:26Z","lastTransitionTime":"2026-01-31T03:48:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 03:48:26 crc kubenswrapper[4827]: E0131 03:48:26.400566 4827 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404564Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865364Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T03:48:26Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:26Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T03:48:26Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:26Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T03:48:26Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:26Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T03:48:26Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:26Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"0f4c9cc6-4be7-45e8-ab30-471f307c1c16\\\",\\\"systemUUID\\\":\\\"9b087aa6-4510-46ee-bc39-2317e4ea4d1d\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:48:26Z is after 2025-08-24T17:21:41Z" Jan 31 03:48:26 crc kubenswrapper[4827]: I0131 03:48:26.406567 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:48:26 crc kubenswrapper[4827]: I0131 03:48:26.407249 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:48:26 crc kubenswrapper[4827]: I0131 03:48:26.407283 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:48:26 crc kubenswrapper[4827]: I0131 03:48:26.407311 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:48:26 crc kubenswrapper[4827]: I0131 03:48:26.407327 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:48:26Z","lastTransitionTime":"2026-01-31T03:48:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 03:48:26 crc kubenswrapper[4827]: E0131 03:48:26.424772 4827 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404564Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865364Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T03:48:26Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:26Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T03:48:26Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:26Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T03:48:26Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:26Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T03:48:26Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:26Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"0f4c9cc6-4be7-45e8-ab30-471f307c1c16\\\",\\\"systemUUID\\\":\\\"9b087aa6-4510-46ee-bc39-2317e4ea4d1d\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:48:26Z is after 2025-08-24T17:21:41Z" Jan 31 03:48:26 crc kubenswrapper[4827]: I0131 03:48:26.430014 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:48:26 crc kubenswrapper[4827]: I0131 03:48:26.430083 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:48:26 crc kubenswrapper[4827]: I0131 03:48:26.430098 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:48:26 crc kubenswrapper[4827]: I0131 03:48:26.430116 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:48:26 crc kubenswrapper[4827]: I0131 03:48:26.430128 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:48:26Z","lastTransitionTime":"2026-01-31T03:48:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 03:48:26 crc kubenswrapper[4827]: E0131 03:48:26.445541 4827 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404564Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865364Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T03:48:26Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:26Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T03:48:26Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:26Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T03:48:26Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:26Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T03:48:26Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:26Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"0f4c9cc6-4be7-45e8-ab30-471f307c1c16\\\",\\\"systemUUID\\\":\\\"9b087aa6-4510-46ee-bc39-2317e4ea4d1d\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:48:26Z is after 2025-08-24T17:21:41Z" Jan 31 03:48:26 crc kubenswrapper[4827]: I0131 03:48:26.450680 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:48:26 crc kubenswrapper[4827]: I0131 03:48:26.450735 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:48:26 crc kubenswrapper[4827]: I0131 03:48:26.450752 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:48:26 crc kubenswrapper[4827]: I0131 03:48:26.450777 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:48:26 crc kubenswrapper[4827]: I0131 03:48:26.450795 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:48:26Z","lastTransitionTime":"2026-01-31T03:48:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 03:48:26 crc kubenswrapper[4827]: E0131 03:48:26.464716 4827 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404564Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865364Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T03:48:26Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:26Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T03:48:26Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:26Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T03:48:26Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:26Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T03:48:26Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:26Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"0f4c9cc6-4be7-45e8-ab30-471f307c1c16\\\",\\\"systemUUID\\\":\\\"9b087aa6-4510-46ee-bc39-2317e4ea4d1d\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:48:26Z is after 2025-08-24T17:21:41Z" Jan 31 03:48:26 crc kubenswrapper[4827]: I0131 03:48:26.470268 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:48:26 crc kubenswrapper[4827]: I0131 03:48:26.470331 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:48:26 crc kubenswrapper[4827]: I0131 03:48:26.470350 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:48:26 crc kubenswrapper[4827]: I0131 03:48:26.470376 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:48:26 crc kubenswrapper[4827]: I0131 03:48:26.470394 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:48:26Z","lastTransitionTime":"2026-01-31T03:48:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 03:48:26 crc kubenswrapper[4827]: E0131 03:48:26.489860 4827 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404564Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865364Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T03:48:26Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:26Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T03:48:26Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:26Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T03:48:26Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:26Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T03:48:26Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:26Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"0f4c9cc6-4be7-45e8-ab30-471f307c1c16\\\",\\\"systemUUID\\\":\\\"9b087aa6-4510-46ee-bc39-2317e4ea4d1d\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:48:26Z is after 2025-08-24T17:21:41Z" Jan 31 03:48:26 crc kubenswrapper[4827]: E0131 03:48:26.490091 4827 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 31 03:48:26 crc kubenswrapper[4827]: I0131 03:48:26.491822 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:48:26 crc kubenswrapper[4827]: I0131 03:48:26.491867 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:48:26 crc kubenswrapper[4827]: I0131 03:48:26.491905 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:48:26 crc kubenswrapper[4827]: I0131 03:48:26.491923 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:48:26 crc kubenswrapper[4827]: I0131 03:48:26.491940 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:48:26Z","lastTransitionTime":"2026-01-31T03:48:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 03:48:26 crc kubenswrapper[4827]: I0131 03:48:26.595874 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:48:26 crc kubenswrapper[4827]: I0131 03:48:26.595960 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:48:26 crc kubenswrapper[4827]: I0131 03:48:26.595977 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:48:26 crc kubenswrapper[4827]: I0131 03:48:26.596004 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:48:26 crc kubenswrapper[4827]: I0131 03:48:26.596023 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:48:26Z","lastTransitionTime":"2026-01-31T03:48:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 03:48:26 crc kubenswrapper[4827]: I0131 03:48:26.699477 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:48:26 crc kubenswrapper[4827]: I0131 03:48:26.699554 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:48:26 crc kubenswrapper[4827]: I0131 03:48:26.699573 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:48:26 crc kubenswrapper[4827]: I0131 03:48:26.699605 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:48:26 crc kubenswrapper[4827]: I0131 03:48:26.699625 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:48:26Z","lastTransitionTime":"2026-01-31T03:48:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 03:48:26 crc kubenswrapper[4827]: I0131 03:48:26.803296 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:48:26 crc kubenswrapper[4827]: I0131 03:48:26.803361 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:48:26 crc kubenswrapper[4827]: I0131 03:48:26.803378 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:48:26 crc kubenswrapper[4827]: I0131 03:48:26.803406 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:48:26 crc kubenswrapper[4827]: I0131 03:48:26.803424 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:48:26Z","lastTransitionTime":"2026-01-31T03:48:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 03:48:26 crc kubenswrapper[4827]: I0131 03:48:26.906352 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:48:26 crc kubenswrapper[4827]: I0131 03:48:26.906418 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:48:26 crc kubenswrapper[4827]: I0131 03:48:26.906436 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:48:26 crc kubenswrapper[4827]: I0131 03:48:26.906462 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:48:26 crc kubenswrapper[4827]: I0131 03:48:26.906482 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:48:26Z","lastTransitionTime":"2026-01-31T03:48:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 03:48:27 crc kubenswrapper[4827]: I0131 03:48:27.009358 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:48:27 crc kubenswrapper[4827]: I0131 03:48:27.009399 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:48:27 crc kubenswrapper[4827]: I0131 03:48:27.009407 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:48:27 crc kubenswrapper[4827]: I0131 03:48:27.009423 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:48:27 crc kubenswrapper[4827]: I0131 03:48:27.009432 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:48:27Z","lastTransitionTime":"2026-01-31T03:48:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 03:48:27 crc kubenswrapper[4827]: I0131 03:48:27.109711 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 31 03:48:27 crc kubenswrapper[4827]: E0131 03:48:27.109916 4827 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 31 03:48:27 crc kubenswrapper[4827]: I0131 03:48:27.111774 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:48:27 crc kubenswrapper[4827]: I0131 03:48:27.111847 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:48:27 crc kubenswrapper[4827]: I0131 03:48:27.111871 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:48:27 crc kubenswrapper[4827]: I0131 03:48:27.111929 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:48:27 crc kubenswrapper[4827]: I0131 03:48:27.111951 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:48:27Z","lastTransitionTime":"2026-01-31T03:48:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 03:48:27 crc kubenswrapper[4827]: I0131 03:48:27.119036 4827 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-11 08:10:17.248866574 +0000 UTC Jan 31 03:48:27 crc kubenswrapper[4827]: I0131 03:48:27.215421 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:48:27 crc kubenswrapper[4827]: I0131 03:48:27.215504 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:48:27 crc kubenswrapper[4827]: I0131 03:48:27.215528 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:48:27 crc kubenswrapper[4827]: I0131 03:48:27.215556 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:48:27 crc kubenswrapper[4827]: I0131 03:48:27.215576 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:48:27Z","lastTransitionTime":"2026-01-31T03:48:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 03:48:27 crc kubenswrapper[4827]: I0131 03:48:27.319646 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:48:27 crc kubenswrapper[4827]: I0131 03:48:27.319702 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:48:27 crc kubenswrapper[4827]: I0131 03:48:27.319720 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:48:27 crc kubenswrapper[4827]: I0131 03:48:27.319743 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:48:27 crc kubenswrapper[4827]: I0131 03:48:27.319762 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:48:27Z","lastTransitionTime":"2026-01-31T03:48:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 03:48:27 crc kubenswrapper[4827]: I0131 03:48:27.422706 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:48:27 crc kubenswrapper[4827]: I0131 03:48:27.422761 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:48:27 crc kubenswrapper[4827]: I0131 03:48:27.422772 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:48:27 crc kubenswrapper[4827]: I0131 03:48:27.422794 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:48:27 crc kubenswrapper[4827]: I0131 03:48:27.422804 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:48:27Z","lastTransitionTime":"2026-01-31T03:48:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 03:48:27 crc kubenswrapper[4827]: I0131 03:48:27.526657 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:48:27 crc kubenswrapper[4827]: I0131 03:48:27.526705 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:48:27 crc kubenswrapper[4827]: I0131 03:48:27.526715 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:48:27 crc kubenswrapper[4827]: I0131 03:48:27.526737 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:48:27 crc kubenswrapper[4827]: I0131 03:48:27.526749 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:48:27Z","lastTransitionTime":"2026-01-31T03:48:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 03:48:27 crc kubenswrapper[4827]: I0131 03:48:27.629849 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:48:27 crc kubenswrapper[4827]: I0131 03:48:27.629981 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:48:27 crc kubenswrapper[4827]: I0131 03:48:27.630007 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:48:27 crc kubenswrapper[4827]: I0131 03:48:27.630046 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:48:27 crc kubenswrapper[4827]: I0131 03:48:27.630068 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:48:27Z","lastTransitionTime":"2026-01-31T03:48:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 03:48:27 crc kubenswrapper[4827]: I0131 03:48:27.733361 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:48:27 crc kubenswrapper[4827]: I0131 03:48:27.733430 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:48:27 crc kubenswrapper[4827]: I0131 03:48:27.733449 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:48:27 crc kubenswrapper[4827]: I0131 03:48:27.733476 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:48:27 crc kubenswrapper[4827]: I0131 03:48:27.733494 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:48:27Z","lastTransitionTime":"2026-01-31T03:48:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 03:48:27 crc kubenswrapper[4827]: I0131 03:48:27.837184 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:48:27 crc kubenswrapper[4827]: I0131 03:48:27.837259 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:48:27 crc kubenswrapper[4827]: I0131 03:48:27.837282 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:48:27 crc kubenswrapper[4827]: I0131 03:48:27.837312 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:48:27 crc kubenswrapper[4827]: I0131 03:48:27.837331 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:48:27Z","lastTransitionTime":"2026-01-31T03:48:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 03:48:27 crc kubenswrapper[4827]: I0131 03:48:27.940624 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:48:27 crc kubenswrapper[4827]: I0131 03:48:27.940691 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:48:27 crc kubenswrapper[4827]: I0131 03:48:27.940709 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:48:27 crc kubenswrapper[4827]: I0131 03:48:27.940736 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:48:27 crc kubenswrapper[4827]: I0131 03:48:27.940758 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:48:27Z","lastTransitionTime":"2026-01-31T03:48:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 03:48:28 crc kubenswrapper[4827]: I0131 03:48:28.045230 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:48:28 crc kubenswrapper[4827]: I0131 03:48:28.045297 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:48:28 crc kubenswrapper[4827]: I0131 03:48:28.045317 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:48:28 crc kubenswrapper[4827]: I0131 03:48:28.045347 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:48:28 crc kubenswrapper[4827]: I0131 03:48:28.045365 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:48:28Z","lastTransitionTime":"2026-01-31T03:48:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 03:48:28 crc kubenswrapper[4827]: I0131 03:48:28.069178 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/cf80ec31-1f83-4ed6-84e3-055cf9c88bff-metrics-certs\") pod \"network-metrics-daemon-2shng\" (UID: \"cf80ec31-1f83-4ed6-84e3-055cf9c88bff\") " pod="openshift-multus/network-metrics-daemon-2shng" Jan 31 03:48:28 crc kubenswrapper[4827]: E0131 03:48:28.069379 4827 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 31 03:48:28 crc kubenswrapper[4827]: E0131 03:48:28.069495 4827 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/cf80ec31-1f83-4ed6-84e3-055cf9c88bff-metrics-certs podName:cf80ec31-1f83-4ed6-84e3-055cf9c88bff nodeName:}" failed. No retries permitted until 2026-01-31 03:49:32.069468038 +0000 UTC m=+164.756548527 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/cf80ec31-1f83-4ed6-84e3-055cf9c88bff-metrics-certs") pod "network-metrics-daemon-2shng" (UID: "cf80ec31-1f83-4ed6-84e3-055cf9c88bff") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 31 03:48:28 crc kubenswrapper[4827]: I0131 03:48:28.109979 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 03:48:28 crc kubenswrapper[4827]: I0131 03:48:28.110190 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-2shng" Jan 31 03:48:28 crc kubenswrapper[4827]: I0131 03:48:28.110231 4827 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 31 03:48:28 crc kubenswrapper[4827]: E0131 03:48:28.110392 4827 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 31 03:48:28 crc kubenswrapper[4827]: E0131 03:48:28.110581 4827 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-2shng" podUID="cf80ec31-1f83-4ed6-84e3-055cf9c88bff" Jan 31 03:48:28 crc kubenswrapper[4827]: E0131 03:48:28.110752 4827 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 31 03:48:28 crc kubenswrapper[4827]: I0131 03:48:28.119462 4827 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-26 00:55:21.671928814 +0000 UTC Jan 31 03:48:28 crc kubenswrapper[4827]: I0131 03:48:28.135509 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://65119775a6ab2e8f81a0443d8093e0570c2c49a1b081b7b694780d07eba7ef4d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:47:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\
\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0fdf29d063e0940532d696d183771e56ce37b7b75f82548bf7ff5165ab20ccff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:47:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:48:28Z is after 2025-08-24T17:21:41Z" Jan 31 03:48:28 crc kubenswrapper[4827]: I0131 03:48:28.149210 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:48:28 crc kubenswrapper[4827]: I0131 03:48:28.149270 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 
03:48:28 crc kubenswrapper[4827]: I0131 03:48:28.149289 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:48:28 crc kubenswrapper[4827]: I0131 03:48:28.149315 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:48:28 crc kubenswrapper[4827]: I0131 03:48:28.149333 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:48:28Z","lastTransitionTime":"2026-01-31T03:48:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 03:48:28 crc kubenswrapper[4827]: I0131 03:48:28.166506 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hj2zw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"da9e7773-a24b-4e8d-b479-97e2594db0d4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:09Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:09Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bb9ced0bc1beedf1c5779410e90fa61227f5df3de7fa950ab69e1ef44b4a4918\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:47:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mt4z5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2485bd5d17713b9b9184c6b56b1dfda19a229476eabd904d3eda350c30892753\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:47:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mt4z5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://70200918240e4bc9d8f7fcb1d95be3721faa9394058a1a9671da44d9b7a915e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:47:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mt4z5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://32e69930c4476bd3a2f767f012fb4a2e00de7d46da936c6f0d31029994108e48\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d20994829
19d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:47:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mt4z5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ae8d3d61d2579c0afddd6c9728935c463fec8760a9fc830473c905de5f40c44\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:47:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mt4z5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ed1f4882b03bd25a6073ed3ddeb73d5a0da61c8a8af3ebfab5c90a83ce6af80f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cd
d47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:47:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mt4z5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c4e52d1c48e767ecfcf769c9cb24d896cb851fb3a791c2f80e30c3746e971ad4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c4e52d1c48e767ecfcf769c9cb24d896cb851fb3a791c2f80e30c3746e971ad4\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-31T03:48:09Z\\\",\\\"message\\\":\\\"ns:[] Mutations:[{Column:ports Mutator:insert 
Value:{GoSet:[{GoUUID:960d98b2-dc64-4e93-a4b6-9b19847af71e}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {c02bd945-d57b-49ff-9cd3-202ed3574b26}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:} {Op:update Table:NAT Row:map[external_ip:192.168.126.11 logical_ip:10.217.0.59 options:{GoMap:map[stateless:false]} type:snat] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {dce28c51-c9f1-478b-97c8-7e209d6e7cbe}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:} {Op:mutate Table:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:nat Mutator:insert Value:{GoSet:[{GoUUID:dce28c51-c9f1-478b-97c8-7e209d6e7cbe}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {e3c4661a-36a6-47f0-a6c0-a4ee741f2224}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0131 03:48:09.179127 6900 model_client.go:398] Mutate operations generated as: [{Op:mutate Table:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:nat Mutator:insert Value:{GoSet:[{GoUUID:73135118-cf1b-4568-bd31-2f50308bf69d}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {e3c4661a-36a6-47f0-a6c0-a4ee741f2224}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0131 03:48:09.179620 6900 model_client.go:382] Update operations generated as: [{Op:update Table\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T03:48:08Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=ovnkube-controller 
pod=ovnkube-node-hj2zw_openshift-ovn-kubernetes(da9e7773-a24b-4e8d-b479-97e2594db0d4)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mt4z5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2a0cc10519a103d1d49acd3e239636f7ac9b3a9daa533ce1f485b26ccfffad3d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:47:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mt4z5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://96ea399662528dfc8c4bac4c2b710685cb5897e67739704548ca01353d72672c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://96ea399662528dfc8c
4bac4c2b710685cb5897e67739704548ca01353d72672c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T03:47:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T03:47:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mt4z5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T03:47:09Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-hj2zw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:48:28Z is after 2025-08-24T17:21:41Z" Jan 31 03:48:28 crc kubenswrapper[4827]: I0131 03:48:28.185079 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-cl9c5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"77746487-d08f-4da6-82a3-bc7d8845841a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5098b5f6a44c89953722806e420dc66eaad66f11faf1c5b526bfc8ebf87364d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:47:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5xwfq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T03:47:11Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-cl9c5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:48:28Z is after 2025-08-24T17:21:41Z" Jan 31 03:48:28 crc kubenswrapper[4827]: I0131 03:48:28.202565 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d2ec0c36-2ace-4167-bc42-99fbd4ab34fd\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:46:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:46:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:46:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:46:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:46:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d529d76d3e5d8974e4c40a38910f933f23fcac20fc41884ff762a482e5d2ea4f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a
42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:46:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e35cf3948fcca6a410837ad274e5f47621a1287617f9f126b0638fc41634b722\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e35cf3948fcca6a410837ad274e5f47621a1287617f9f126b0638fc41634b722\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T03:46:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T03:46:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T03:46:48Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:48:28Z is after 2025-08-24T17:21:41Z" Jan 31 03:48:28 crc 
kubenswrapper[4827]: I0131 03:48:28.218111 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-jxh94" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e63dbb73-e1a2-4796-83c5-2a88e55566b5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://65bf7844256848a99420ff2a52724a942bd9ee07e0e587b818429ac544865232\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:47:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/se
rviceaccount\\\",\\\"name\\\":\\\"kube-api-access-94gkn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://00b8856a5712a7470f8bc58fd07644c58113fde5a273faa89586f36381942c50\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:47:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-94gkn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T03:47:09Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-jxh94\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:48:28Z is after 2025-08-24T17:21:41Z" Jan 31 03:48:28 crc kubenswrapper[4827]: I0131 03:48:28.236527 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-q9q8q" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a696063c-4553-4032-8038-9900f09d4031\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b6b5307b128e69f814b56eb826859eb4b02d6645d1665c1f5d205492590135ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3b899def12de10c70999d453a62d09ed0eec7e6f059a55a7e3f8e4b3ef248205\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-31T03:47:56Z\\\",\\\"message\\\":\\\"2026-01-31T03:47:11+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_ed6be960-9431-4aca-83c0-58d0cb3d0931\\\\n2026-01-31T03:47:11+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_ed6be960-9431-4aca-83c0-58d0cb3d0931 to /host/opt/cni/bin/\\\\n2026-01-31T03:47:11Z [verbose] multus-daemon started\\\\n2026-01-31T03:47:11Z [verbose] 
Readiness Indicator file check\\\\n2026-01-31T03:47:56Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T03:47:10Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:47:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/
var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ckwx4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T03:47:09Z\\\"}}\" for pod \"openshift-multus\"/\"multus-q9q8q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:48:28Z is after 2025-08-24T17:21:41Z" Jan 31 03:48:28 crc kubenswrapper[4827]: I0131 03:48:28.252142 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:48:28 crc kubenswrapper[4827]: I0131 03:48:28.252183 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:48:28 crc kubenswrapper[4827]: I0131 03:48:28.252193 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:48:28 crc kubenswrapper[4827]: I0131 03:48:28.252212 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:48:28 crc kubenswrapper[4827]: I0131 03:48:28.252224 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:48:28Z","lastTransitionTime":"2026-01-31T03:48:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 03:48:28 crc kubenswrapper[4827]: I0131 03:48:28.262124 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-gjc5t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b5dbff7a-4ed0-4c17-bd01-1888199225b3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cf2e9dfa02cc1d2d5e655654c4d731363a32dec4d67423160996c24260e04a79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:47:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fs4bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOn
ly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c615ec47f98d56124c11d46f6a3875b5718974234f752615411e14b5d8913dcc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c615ec47f98d56124c11d46f6a3875b5718974234f752615411e14b5d8913dcc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T03:47:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T03:47:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fs4bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c9d2a6506fd6b67003ff94a906cb0c74a7eefa9f389d5fe3929fcb743b877f8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"
containerID\\\":\\\"cri-o://c9d2a6506fd6b67003ff94a906cb0c74a7eefa9f389d5fe3929fcb743b877f8a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T03:47:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T03:47:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fs4bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f5b9fd1e8570701b512508eb10490fa6c4e5fdc63b7fdb0c4810896a46be65a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f5b9fd1e8570701b512508eb10490fa6c4e5fdc63b7fdb0c4810896a46be65a7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T03:47:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T03:47:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveRead
Only\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fs4bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b944c8ab4e6079ae5a7a5c52f9b336fc71aa3273ed087499ffa1cc5396053660\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b944c8ab4e6079ae5a7a5c52f9b336fc71aa3273ed087499ffa1cc5396053660\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T03:47:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T03:47:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fs4bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aac07d8c3c396081f637968cd1c64525967cd198273719be0af7bee01d35fb39\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\"
:0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aac07d8c3c396081f637968cd1c64525967cd198273719be0af7bee01d35fb39\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T03:47:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T03:47:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fs4bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://76e43eb316b4b459d9232b72620b1892bf11a896f007d5c879381d8349aa1178\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://76e43eb316b4b459d9232b72620b1892bf11a896f007d5c879381d8349aa1178\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T03:47:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T03:47:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fs4bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\"
,\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T03:47:09Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-gjc5t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:48:28Z is after 2025-08-24T17:21:41Z" Jan 31 03:48:28 crc kubenswrapper[4827]: I0131 03:48:28.290558 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1eb7ba1d-78ae-48cd-93b8-6bb6fb283726\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:46:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:46:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:46:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a6326f7e45db29abaa4d868250cbaee4f7c0253cf1d97a1fa52342c889e7cce9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\"
:{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:46:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://753f15aab7eadd5e66325daed9c1b711867547bb0e7f7a66a625333020771c36\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:46:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2c4c0d8f3e2e4e5aa0d4856dadd352d9d7492f5d95bb15f65752c0c47ff66fff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:46:53Z
\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://92df2693729c1dd3423580206df4937a7e955514f7403a844700f88c6522457e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:46:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e6eb640d651ffa7c4ea6ebf12f3e5ed25c8c3ed6a266bd7d00ced163351fdc0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:46:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.1
26.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://93d9cbfbbaa1a57bc880c1eadef877e52c66b8645127830242a33ad4fe9eecad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://93d9cbfbbaa1a57bc880c1eadef877e52c66b8645127830242a33ad4fe9eecad\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T03:46:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T03:46:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://34a0f079ed201b286f30a3febf53506456336b1b3393890ca39917e49adca571\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://34a0f079ed201b286f30a3febf53506456336b1b3393890ca39917e49adca571\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T03:46:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T03:46:50Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://0b40907772b53aa24e77d0da09972ea1ebf4bfb3451bd294d7071e477878b8c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift
-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0b40907772b53aa24e77d0da09972ea1ebf4bfb3451bd294d7071e477878b8c6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T03:46:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T03:46:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T03:46:48Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:48:28Z is after 2025-08-24T17:21:41Z" Jan 31 03:48:28 crc kubenswrapper[4827]: I0131 03:48:28.315265 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4273172d-ec24-4540-85cb-efc58aed3421\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:46:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:46:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:46:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4146d39a1a819389ddaee9e11df66783f6b1b98ee5fb2a244d8ba9ff2ecc88f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:46:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6b86ba80bd25344b29df1636aa3d8cc4cecb0e5f9c0005b4896d3ee86cd80626\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:46:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8ae08b52cf4a0a6cc4589bbf0f0fbc8527ebac3455b90185928f6fa0b6c52874\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:46:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://125b31c980495c37837eedbbbd38b795fd6e568a429da5c2914799306e7b3a46\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cfc1fe8805b7b20bc4e718ed74d4c2f265e69ed3fdc328c0f1da81450bb78bde\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-31T03:47:02Z\\\"
,\\\"message\\\":\\\"W0131 03:46:51.337568 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0131 03:46:51.338014 1 crypto.go:601] Generating new CA for check-endpoints-signer@1769831211 cert, and key in /tmp/serving-cert-4099090867/serving-signer.crt, /tmp/serving-cert-4099090867/serving-signer.key\\\\nI0131 03:46:51.617433 1 observer_polling.go:159] Starting file observer\\\\nW0131 03:46:51.621205 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0131 03:46:51.621653 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0131 03:46:51.625463 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-4099090867/tls.crt::/tmp/serving-cert-4099090867/tls.key\\\\\\\"\\\\nF0131 03:47:02.085590 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake 
timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T03:46:51Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:47:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e3b2757893ba910c36404f09734ea47eee3ac5a8cfcc4c06ee4ed2ada48937b1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:46:51Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f40eed75b4eac164f95a4e1a719fe3e1da60abf85d533da313ceeb6f4fb07efd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f40eed75b4eac164f95a4e1a719fe3e1da60abf85d533da313ceeb6f4fb07efd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T03:46:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"s
tartedAt\\\":\\\"2026-01-31T03:46:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T03:46:48Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:48:28Z is after 2025-08-24T17:21:41Z" Jan 31 03:48:28 crc kubenswrapper[4827]: I0131 03:48:28.336159 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"350d4093-0635-4787-bdf3-9b099c5a772b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:46:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:46:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:46:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://19a6d75598a153bb0745bb06aa53961f13fa3ff1fd4addb84b3ecdc66d07ee4e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d3472024
3b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:46:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3d0070a036855a06f83caec5b6ee636f1a7a1e7f3246b779e0abbcf6aebf259a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:46:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7b5f30db796321372897f564fda3a031f0714999315931e2a77b21fde674f4fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01
-31T03:46:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6f17307ab85479e60722f1f8484fb415c3609584ed9a20ebf17677a86558dc07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6f17307ab85479e60722f1f8484fb415c3609584ed9a20ebf17677a86558dc07\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T03:46:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T03:46:49Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T03:46:48Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:48:28Z is after 2025-08-24T17:21:41Z" Jan 31 03:48:28 crc kubenswrapper[4827]: I0131 03:48:28.357530 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-l5njv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f81e67c9-6345-48e1-91e3-794421cb3fdd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e761c28f2d53c1a50d6ce7467012664d9d9d717222b6fc648c642ad97057113f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:47:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rblt7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b967e5b38d5efa02b8cda05cf383b6aa9f24b
819d055b86d12afe5290d6d5847\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:47:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rblt7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T03:47:22Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-l5njv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:48:28Z is after 2025-08-24T17:21:41Z" Jan 31 03:48:28 crc kubenswrapper[4827]: I0131 03:48:28.357763 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:48:28 crc kubenswrapper[4827]: I0131 03:48:28.357804 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:48:28 crc kubenswrapper[4827]: I0131 03:48:28.357822 4827 kubelet_node_status.go:724] "Recording 
event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:48:28 crc kubenswrapper[4827]: I0131 03:48:28.357849 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:48:28 crc kubenswrapper[4827]: I0131 03:48:28.357868 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:48:28Z","lastTransitionTime":"2026-01-31T03:48:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 03:48:28 crc kubenswrapper[4827]: I0131 03:48:28.374372 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-2shng" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cf80ec31-1f83-4ed6-84e3-055cf9c88bff\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:24Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:24Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hjlkt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hjlkt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T03:47:24Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-2shng\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:48:28Z is after 2025-08-24T17:21:41Z" Jan 
31 03:48:28 crc kubenswrapper[4827]: I0131 03:48:28.394928 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:08Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:48:28Z is after 2025-08-24T17:21:41Z" Jan 31 03:48:28 crc kubenswrapper[4827]: I0131 03:48:28.414300 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:08Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:48:28Z is after 2025-08-24T17:21:41Z" Jan 31 03:48:28 crc kubenswrapper[4827]: I0131 03:48:28.428844 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-w7v8l" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c10db775-d306-4f15-97dd-b1dfed7c89e5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cbc16c20edbd99491eea6e34116e6014fd587c40800c87bff9643e9c0cb6e185\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:47:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4ch9g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T03:47:09Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-w7v8l\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:48:28Z is after 2025-08-24T17:21:41Z" Jan 31 03:48:28 crc kubenswrapper[4827]: I0131 03:48:28.443219 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:11Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:11Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://955a3f09f24522656e31cfabe08613a5129fe3d1d0693756697e3ef0469e062a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:47:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-al
erter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:48:28Z is after 2025-08-24T17:21:41Z" Jan 31 03:48:28 crc kubenswrapper[4827]: I0131 03:48:28.462459 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:48:28 crc kubenswrapper[4827]: I0131 03:48:28.462553 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:48:28 crc kubenswrapper[4827]: I0131 03:48:28.462578 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:48:28 crc kubenswrapper[4827]: I0131 03:48:28.462613 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:48:28 crc kubenswrapper[4827]: I0131 03:48:28.462640 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:48:28Z","lastTransitionTime":"2026-01-31T03:48:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 03:48:28 crc kubenswrapper[4827]: I0131 03:48:28.466190 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"64294dab-2843-4893-8b19-59f3ae404e02\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:46:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:46:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:46:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ff49646239dea28804978f1b3b721dcc9454572ff2301d0c92e146dacb2a3e5a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:46:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fb9a8fcde75
a532b1906ceea4704ddfecbffeb17456377272e5792f200ee67d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:46:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://73a7c7196fc77ac970ea3114b593db0c9bef943c760526721ab5492975884237\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:46:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ce197f7aa8a7bebdecdbd760e9029786331bd9497f724b0a57f57278a5b1b04c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:850
6ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:46:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T03:46:48Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:48:28Z is after 2025-08-24T17:21:41Z" Jan 31 03:48:28 crc kubenswrapper[4827]: I0131 03:48:28.487628 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:08Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:48:28Z is after 2025-08-24T17:21:41Z" Jan 31 03:48:28 crc kubenswrapper[4827]: I0131 03:48:28.508466 4827 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a0fbdd57c20e0d5bda95b44f22f117b07306ef48d75d372d958559048474af9b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:47:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-31T03:48:28Z is after 2025-08-24T17:21:41Z" Jan 31 03:48:28 crc kubenswrapper[4827]: I0131 03:48:28.565873 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:48:28 crc kubenswrapper[4827]: I0131 03:48:28.566037 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:48:28 crc kubenswrapper[4827]: I0131 03:48:28.566064 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:48:28 crc kubenswrapper[4827]: I0131 03:48:28.566091 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:48:28 crc kubenswrapper[4827]: I0131 03:48:28.566110 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:48:28Z","lastTransitionTime":"2026-01-31T03:48:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 03:48:28 crc kubenswrapper[4827]: I0131 03:48:28.670011 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:48:28 crc kubenswrapper[4827]: I0131 03:48:28.670087 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:48:28 crc kubenswrapper[4827]: I0131 03:48:28.670111 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:48:28 crc kubenswrapper[4827]: I0131 03:48:28.670145 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:48:28 crc kubenswrapper[4827]: I0131 03:48:28.670172 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:48:28Z","lastTransitionTime":"2026-01-31T03:48:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 03:48:28 crc kubenswrapper[4827]: I0131 03:48:28.773832 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:48:28 crc kubenswrapper[4827]: I0131 03:48:28.774393 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:48:28 crc kubenswrapper[4827]: I0131 03:48:28.774566 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:48:28 crc kubenswrapper[4827]: I0131 03:48:28.774742 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:48:28 crc kubenswrapper[4827]: I0131 03:48:28.774967 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:48:28Z","lastTransitionTime":"2026-01-31T03:48:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 03:48:28 crc kubenswrapper[4827]: I0131 03:48:28.878293 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:48:28 crc kubenswrapper[4827]: I0131 03:48:28.878388 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:48:28 crc kubenswrapper[4827]: I0131 03:48:28.878456 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:48:28 crc kubenswrapper[4827]: I0131 03:48:28.878498 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:48:28 crc kubenswrapper[4827]: I0131 03:48:28.878537 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:48:28Z","lastTransitionTime":"2026-01-31T03:48:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 03:48:28 crc kubenswrapper[4827]: I0131 03:48:28.982906 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:48:28 crc kubenswrapper[4827]: I0131 03:48:28.982993 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:48:28 crc kubenswrapper[4827]: I0131 03:48:28.983015 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:48:28 crc kubenswrapper[4827]: I0131 03:48:28.983048 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:48:28 crc kubenswrapper[4827]: I0131 03:48:28.983069 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:48:28Z","lastTransitionTime":"2026-01-31T03:48:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 03:48:29 crc kubenswrapper[4827]: I0131 03:48:29.086602 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:48:29 crc kubenswrapper[4827]: I0131 03:48:29.086657 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:48:29 crc kubenswrapper[4827]: I0131 03:48:29.086675 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:48:29 crc kubenswrapper[4827]: I0131 03:48:29.086701 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:48:29 crc kubenswrapper[4827]: I0131 03:48:29.086719 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:48:29Z","lastTransitionTime":"2026-01-31T03:48:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 03:48:29 crc kubenswrapper[4827]: I0131 03:48:29.109448 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 31 03:48:29 crc kubenswrapper[4827]: E0131 03:48:29.109611 4827 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 31 03:48:29 crc kubenswrapper[4827]: I0131 03:48:29.120590 4827 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-06 10:51:27.969562295 +0000 UTC Jan 31 03:48:29 crc kubenswrapper[4827]: I0131 03:48:29.190057 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:48:29 crc kubenswrapper[4827]: I0131 03:48:29.190139 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:48:29 crc kubenswrapper[4827]: I0131 03:48:29.190165 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:48:29 crc kubenswrapper[4827]: I0131 03:48:29.190201 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:48:29 crc kubenswrapper[4827]: I0131 03:48:29.190225 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:48:29Z","lastTransitionTime":"2026-01-31T03:48:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 03:48:29 crc kubenswrapper[4827]: I0131 03:48:29.293391 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:48:29 crc kubenswrapper[4827]: I0131 03:48:29.293453 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:48:29 crc kubenswrapper[4827]: I0131 03:48:29.293471 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:48:29 crc kubenswrapper[4827]: I0131 03:48:29.293498 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:48:29 crc kubenswrapper[4827]: I0131 03:48:29.293522 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:48:29Z","lastTransitionTime":"2026-01-31T03:48:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 03:48:29 crc kubenswrapper[4827]: I0131 03:48:29.396428 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:48:29 crc kubenswrapper[4827]: I0131 03:48:29.396472 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:48:29 crc kubenswrapper[4827]: I0131 03:48:29.396486 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:48:29 crc kubenswrapper[4827]: I0131 03:48:29.396502 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:48:29 crc kubenswrapper[4827]: I0131 03:48:29.396513 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:48:29Z","lastTransitionTime":"2026-01-31T03:48:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 03:48:29 crc kubenswrapper[4827]: I0131 03:48:29.499587 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:48:29 crc kubenswrapper[4827]: I0131 03:48:29.499635 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:48:29 crc kubenswrapper[4827]: I0131 03:48:29.499647 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:48:29 crc kubenswrapper[4827]: I0131 03:48:29.499664 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:48:29 crc kubenswrapper[4827]: I0131 03:48:29.499675 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:48:29Z","lastTransitionTime":"2026-01-31T03:48:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 03:48:29 crc kubenswrapper[4827]: I0131 03:48:29.602181 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:48:29 crc kubenswrapper[4827]: I0131 03:48:29.602224 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:48:29 crc kubenswrapper[4827]: I0131 03:48:29.602237 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:48:29 crc kubenswrapper[4827]: I0131 03:48:29.602254 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:48:29 crc kubenswrapper[4827]: I0131 03:48:29.602265 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:48:29Z","lastTransitionTime":"2026-01-31T03:48:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 03:48:29 crc kubenswrapper[4827]: I0131 03:48:29.705125 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:48:29 crc kubenswrapper[4827]: I0131 03:48:29.705170 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:48:29 crc kubenswrapper[4827]: I0131 03:48:29.705181 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:48:29 crc kubenswrapper[4827]: I0131 03:48:29.705200 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:48:29 crc kubenswrapper[4827]: I0131 03:48:29.705214 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:48:29Z","lastTransitionTime":"2026-01-31T03:48:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 03:48:29 crc kubenswrapper[4827]: I0131 03:48:29.807397 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:48:29 crc kubenswrapper[4827]: I0131 03:48:29.807456 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:48:29 crc kubenswrapper[4827]: I0131 03:48:29.807472 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:48:29 crc kubenswrapper[4827]: I0131 03:48:29.807497 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:48:29 crc kubenswrapper[4827]: I0131 03:48:29.807513 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:48:29Z","lastTransitionTime":"2026-01-31T03:48:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 03:48:29 crc kubenswrapper[4827]: I0131 03:48:29.909914 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:48:29 crc kubenswrapper[4827]: I0131 03:48:29.909972 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:48:29 crc kubenswrapper[4827]: I0131 03:48:29.909989 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:48:29 crc kubenswrapper[4827]: I0131 03:48:29.910012 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:48:29 crc kubenswrapper[4827]: I0131 03:48:29.910031 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:48:29Z","lastTransitionTime":"2026-01-31T03:48:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 03:48:30 crc kubenswrapper[4827]: I0131 03:48:30.012939 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:48:30 crc kubenswrapper[4827]: I0131 03:48:30.013008 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:48:30 crc kubenswrapper[4827]: I0131 03:48:30.013025 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:48:30 crc kubenswrapper[4827]: I0131 03:48:30.013048 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:48:30 crc kubenswrapper[4827]: I0131 03:48:30.013066 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:48:30Z","lastTransitionTime":"2026-01-31T03:48:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 03:48:30 crc kubenswrapper[4827]: I0131 03:48:30.110011 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 31 03:48:30 crc kubenswrapper[4827]: I0131 03:48:30.110147 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 03:48:30 crc kubenswrapper[4827]: E0131 03:48:30.110218 4827 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 31 03:48:30 crc kubenswrapper[4827]: I0131 03:48:30.110360 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-2shng" Jan 31 03:48:30 crc kubenswrapper[4827]: E0131 03:48:30.110391 4827 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 31 03:48:30 crc kubenswrapper[4827]: E0131 03:48:30.110623 4827 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-2shng" podUID="cf80ec31-1f83-4ed6-84e3-055cf9c88bff" Jan 31 03:48:30 crc kubenswrapper[4827]: I0131 03:48:30.115726 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:48:30 crc kubenswrapper[4827]: I0131 03:48:30.115775 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:48:30 crc kubenswrapper[4827]: I0131 03:48:30.115793 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:48:30 crc kubenswrapper[4827]: I0131 03:48:30.115820 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:48:30 crc kubenswrapper[4827]: I0131 03:48:30.115839 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:48:30Z","lastTransitionTime":"2026-01-31T03:48:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 03:48:30 crc kubenswrapper[4827]: I0131 03:48:30.120814 4827 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-08 19:27:09.306761729 +0000 UTC Jan 31 03:48:30 crc kubenswrapper[4827]: I0131 03:48:30.219231 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:48:30 crc kubenswrapper[4827]: I0131 03:48:30.219307 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:48:30 crc kubenswrapper[4827]: I0131 03:48:30.219326 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:48:30 crc kubenswrapper[4827]: I0131 03:48:30.219359 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:48:30 crc kubenswrapper[4827]: I0131 03:48:30.219379 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:48:30Z","lastTransitionTime":"2026-01-31T03:48:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 03:48:30 crc kubenswrapper[4827]: I0131 03:48:30.322629 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:48:30 crc kubenswrapper[4827]: I0131 03:48:30.322709 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:48:30 crc kubenswrapper[4827]: I0131 03:48:30.322733 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:48:30 crc kubenswrapper[4827]: I0131 03:48:30.322769 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:48:30 crc kubenswrapper[4827]: I0131 03:48:30.322793 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:48:30Z","lastTransitionTime":"2026-01-31T03:48:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 03:48:30 crc kubenswrapper[4827]: I0131 03:48:30.426079 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:48:30 crc kubenswrapper[4827]: I0131 03:48:30.426130 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:48:30 crc kubenswrapper[4827]: I0131 03:48:30.426144 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:48:30 crc kubenswrapper[4827]: I0131 03:48:30.426162 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:48:30 crc kubenswrapper[4827]: I0131 03:48:30.426175 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:48:30Z","lastTransitionTime":"2026-01-31T03:48:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 03:48:30 crc kubenswrapper[4827]: I0131 03:48:30.529440 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:48:30 crc kubenswrapper[4827]: I0131 03:48:30.529512 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:48:30 crc kubenswrapper[4827]: I0131 03:48:30.529536 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:48:30 crc kubenswrapper[4827]: I0131 03:48:30.529563 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:48:30 crc kubenswrapper[4827]: I0131 03:48:30.529582 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:48:30Z","lastTransitionTime":"2026-01-31T03:48:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 03:48:30 crc kubenswrapper[4827]: I0131 03:48:30.633269 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:48:30 crc kubenswrapper[4827]: I0131 03:48:30.633332 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:48:30 crc kubenswrapper[4827]: I0131 03:48:30.633346 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:48:30 crc kubenswrapper[4827]: I0131 03:48:30.633369 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:48:30 crc kubenswrapper[4827]: I0131 03:48:30.633389 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:48:30Z","lastTransitionTime":"2026-01-31T03:48:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 03:48:30 crc kubenswrapper[4827]: I0131 03:48:30.735518 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:48:30 crc kubenswrapper[4827]: I0131 03:48:30.735593 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:48:30 crc kubenswrapper[4827]: I0131 03:48:30.735612 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:48:30 crc kubenswrapper[4827]: I0131 03:48:30.735639 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:48:30 crc kubenswrapper[4827]: I0131 03:48:30.735663 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:48:30Z","lastTransitionTime":"2026-01-31T03:48:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 03:48:30 crc kubenswrapper[4827]: I0131 03:48:30.838682 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:48:30 crc kubenswrapper[4827]: I0131 03:48:30.838739 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:48:30 crc kubenswrapper[4827]: I0131 03:48:30.838755 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:48:30 crc kubenswrapper[4827]: I0131 03:48:30.838778 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:48:30 crc kubenswrapper[4827]: I0131 03:48:30.838795 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:48:30Z","lastTransitionTime":"2026-01-31T03:48:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 03:48:30 crc kubenswrapper[4827]: I0131 03:48:30.942368 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:48:30 crc kubenswrapper[4827]: I0131 03:48:30.942436 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:48:30 crc kubenswrapper[4827]: I0131 03:48:30.942453 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:48:30 crc kubenswrapper[4827]: I0131 03:48:30.942490 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:48:30 crc kubenswrapper[4827]: I0131 03:48:30.942509 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:48:30Z","lastTransitionTime":"2026-01-31T03:48:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 03:48:31 crc kubenswrapper[4827]: I0131 03:48:31.046138 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:48:31 crc kubenswrapper[4827]: I0131 03:48:31.046235 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:48:31 crc kubenswrapper[4827]: I0131 03:48:31.046258 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:48:31 crc kubenswrapper[4827]: I0131 03:48:31.046283 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:48:31 crc kubenswrapper[4827]: I0131 03:48:31.046303 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:48:31Z","lastTransitionTime":"2026-01-31T03:48:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 03:48:31 crc kubenswrapper[4827]: I0131 03:48:31.109235 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 31 03:48:31 crc kubenswrapper[4827]: E0131 03:48:31.109451 4827 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 31 03:48:31 crc kubenswrapper[4827]: I0131 03:48:31.121852 4827 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-06 04:31:57.274511481 +0000 UTC Jan 31 03:48:31 crc kubenswrapper[4827]: I0131 03:48:31.150110 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:48:31 crc kubenswrapper[4827]: I0131 03:48:31.150183 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:48:31 crc kubenswrapper[4827]: I0131 03:48:31.150200 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:48:31 crc kubenswrapper[4827]: I0131 03:48:31.150235 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:48:31 crc kubenswrapper[4827]: I0131 03:48:31.150255 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:48:31Z","lastTransitionTime":"2026-01-31T03:48:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 03:48:31 crc kubenswrapper[4827]: I0131 03:48:31.253847 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:48:31 crc kubenswrapper[4827]: I0131 03:48:31.253945 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:48:31 crc kubenswrapper[4827]: I0131 03:48:31.253963 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:48:31 crc kubenswrapper[4827]: I0131 03:48:31.254025 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:48:31 crc kubenswrapper[4827]: I0131 03:48:31.254044 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:48:31Z","lastTransitionTime":"2026-01-31T03:48:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 03:48:31 crc kubenswrapper[4827]: I0131 03:48:31.356822 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:48:31 crc kubenswrapper[4827]: I0131 03:48:31.356928 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:48:31 crc kubenswrapper[4827]: I0131 03:48:31.356951 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:48:31 crc kubenswrapper[4827]: I0131 03:48:31.356980 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:48:31 crc kubenswrapper[4827]: I0131 03:48:31.356999 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:48:31Z","lastTransitionTime":"2026-01-31T03:48:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 03:48:31 crc kubenswrapper[4827]: I0131 03:48:31.460719 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:48:31 crc kubenswrapper[4827]: I0131 03:48:31.460804 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:48:31 crc kubenswrapper[4827]: I0131 03:48:31.460838 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:48:31 crc kubenswrapper[4827]: I0131 03:48:31.460921 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:48:31 crc kubenswrapper[4827]: I0131 03:48:31.460969 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:48:31Z","lastTransitionTime":"2026-01-31T03:48:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 03:48:31 crc kubenswrapper[4827]: I0131 03:48:31.564430 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:48:31 crc kubenswrapper[4827]: I0131 03:48:31.564505 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:48:31 crc kubenswrapper[4827]: I0131 03:48:31.564525 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:48:31 crc kubenswrapper[4827]: I0131 03:48:31.564552 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:48:31 crc kubenswrapper[4827]: I0131 03:48:31.564570 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:48:31Z","lastTransitionTime":"2026-01-31T03:48:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 03:48:31 crc kubenswrapper[4827]: I0131 03:48:31.668415 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:48:31 crc kubenswrapper[4827]: I0131 03:48:31.668528 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:48:31 crc kubenswrapper[4827]: I0131 03:48:31.668547 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:48:31 crc kubenswrapper[4827]: I0131 03:48:31.668628 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:48:31 crc kubenswrapper[4827]: I0131 03:48:31.668713 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:48:31Z","lastTransitionTime":"2026-01-31T03:48:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 03:48:31 crc kubenswrapper[4827]: I0131 03:48:31.771473 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:48:31 crc kubenswrapper[4827]: I0131 03:48:31.771531 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:48:31 crc kubenswrapper[4827]: I0131 03:48:31.771549 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:48:31 crc kubenswrapper[4827]: I0131 03:48:31.771572 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:48:31 crc kubenswrapper[4827]: I0131 03:48:31.771591 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:48:31Z","lastTransitionTime":"2026-01-31T03:48:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 03:48:31 crc kubenswrapper[4827]: I0131 03:48:31.874732 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:48:31 crc kubenswrapper[4827]: I0131 03:48:31.874824 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:48:31 crc kubenswrapper[4827]: I0131 03:48:31.874852 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:48:31 crc kubenswrapper[4827]: I0131 03:48:31.874923 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:48:31 crc kubenswrapper[4827]: I0131 03:48:31.874956 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:48:31Z","lastTransitionTime":"2026-01-31T03:48:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 03:48:31 crc kubenswrapper[4827]: I0131 03:48:31.979287 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:48:31 crc kubenswrapper[4827]: I0131 03:48:31.979389 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:48:31 crc kubenswrapper[4827]: I0131 03:48:31.979416 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:48:31 crc kubenswrapper[4827]: I0131 03:48:31.979458 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:48:31 crc kubenswrapper[4827]: I0131 03:48:31.979485 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:48:31Z","lastTransitionTime":"2026-01-31T03:48:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 03:48:32 crc kubenswrapper[4827]: I0131 03:48:32.083052 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:48:32 crc kubenswrapper[4827]: I0131 03:48:32.083176 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:48:32 crc kubenswrapper[4827]: I0131 03:48:32.083202 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:48:32 crc kubenswrapper[4827]: I0131 03:48:32.083235 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:48:32 crc kubenswrapper[4827]: I0131 03:48:32.083256 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:48:32Z","lastTransitionTime":"2026-01-31T03:48:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 03:48:32 crc kubenswrapper[4827]: I0131 03:48:32.109098 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 03:48:32 crc kubenswrapper[4827]: I0131 03:48:32.109099 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 31 03:48:32 crc kubenswrapper[4827]: E0131 03:48:32.109277 4827 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 31 03:48:32 crc kubenswrapper[4827]: I0131 03:48:32.109336 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-2shng" Jan 31 03:48:32 crc kubenswrapper[4827]: E0131 03:48:32.109607 4827 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 31 03:48:32 crc kubenswrapper[4827]: E0131 03:48:32.109694 4827 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-2shng" podUID="cf80ec31-1f83-4ed6-84e3-055cf9c88bff" Jan 31 03:48:32 crc kubenswrapper[4827]: I0131 03:48:32.122345 4827 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-14 20:19:30.242606658 +0000 UTC Jan 31 03:48:32 crc kubenswrapper[4827]: I0131 03:48:32.186056 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:48:32 crc kubenswrapper[4827]: I0131 03:48:32.186129 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:48:32 crc kubenswrapper[4827]: I0131 03:48:32.186147 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:48:32 crc kubenswrapper[4827]: I0131 03:48:32.186172 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:48:32 crc kubenswrapper[4827]: I0131 03:48:32.186193 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:48:32Z","lastTransitionTime":"2026-01-31T03:48:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 03:48:32 crc kubenswrapper[4827]: I0131 03:48:32.290409 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:48:32 crc kubenswrapper[4827]: I0131 03:48:32.290526 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:48:32 crc kubenswrapper[4827]: I0131 03:48:32.290557 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:48:32 crc kubenswrapper[4827]: I0131 03:48:32.290601 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:48:32 crc kubenswrapper[4827]: I0131 03:48:32.290626 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:48:32Z","lastTransitionTime":"2026-01-31T03:48:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 03:48:32 crc kubenswrapper[4827]: I0131 03:48:32.394641 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:48:32 crc kubenswrapper[4827]: I0131 03:48:32.394725 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:48:32 crc kubenswrapper[4827]: I0131 03:48:32.394745 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:48:32 crc kubenswrapper[4827]: I0131 03:48:32.394774 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:48:32 crc kubenswrapper[4827]: I0131 03:48:32.394797 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:48:32Z","lastTransitionTime":"2026-01-31T03:48:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 03:48:32 crc kubenswrapper[4827]: I0131 03:48:32.498112 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:48:32 crc kubenswrapper[4827]: I0131 03:48:32.498195 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:48:32 crc kubenswrapper[4827]: I0131 03:48:32.498211 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:48:32 crc kubenswrapper[4827]: I0131 03:48:32.498229 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:48:32 crc kubenswrapper[4827]: I0131 03:48:32.498241 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:48:32Z","lastTransitionTime":"2026-01-31T03:48:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 03:48:32 crc kubenswrapper[4827]: I0131 03:48:32.603106 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:48:32 crc kubenswrapper[4827]: I0131 03:48:32.603196 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:48:32 crc kubenswrapper[4827]: I0131 03:48:32.603224 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:48:32 crc kubenswrapper[4827]: I0131 03:48:32.603258 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:48:32 crc kubenswrapper[4827]: I0131 03:48:32.603284 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:48:32Z","lastTransitionTime":"2026-01-31T03:48:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 03:48:32 crc kubenswrapper[4827]: I0131 03:48:32.706752 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:48:32 crc kubenswrapper[4827]: I0131 03:48:32.706819 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:48:32 crc kubenswrapper[4827]: I0131 03:48:32.706844 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:48:32 crc kubenswrapper[4827]: I0131 03:48:32.706872 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:48:32 crc kubenswrapper[4827]: I0131 03:48:32.706919 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:48:32Z","lastTransitionTime":"2026-01-31T03:48:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 03:48:32 crc kubenswrapper[4827]: I0131 03:48:32.810318 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:48:32 crc kubenswrapper[4827]: I0131 03:48:32.810419 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:48:32 crc kubenswrapper[4827]: I0131 03:48:32.810439 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:48:32 crc kubenswrapper[4827]: I0131 03:48:32.810478 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:48:32 crc kubenswrapper[4827]: I0131 03:48:32.810504 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:48:32Z","lastTransitionTime":"2026-01-31T03:48:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 03:48:32 crc kubenswrapper[4827]: I0131 03:48:32.913451 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:48:32 crc kubenswrapper[4827]: I0131 03:48:32.913501 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:48:32 crc kubenswrapper[4827]: I0131 03:48:32.913511 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:48:32 crc kubenswrapper[4827]: I0131 03:48:32.913525 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:48:32 crc kubenswrapper[4827]: I0131 03:48:32.913538 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:48:32Z","lastTransitionTime":"2026-01-31T03:48:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 03:48:33 crc kubenswrapper[4827]: I0131 03:48:33.017273 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:48:33 crc kubenswrapper[4827]: I0131 03:48:33.017346 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:48:33 crc kubenswrapper[4827]: I0131 03:48:33.017373 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:48:33 crc kubenswrapper[4827]: I0131 03:48:33.017410 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:48:33 crc kubenswrapper[4827]: I0131 03:48:33.017441 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:48:33Z","lastTransitionTime":"2026-01-31T03:48:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 03:48:33 crc kubenswrapper[4827]: I0131 03:48:33.109212 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 31 03:48:33 crc kubenswrapper[4827]: E0131 03:48:33.109432 4827 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 31 03:48:33 crc kubenswrapper[4827]: I0131 03:48:33.120968 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:48:33 crc kubenswrapper[4827]: I0131 03:48:33.121050 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:48:33 crc kubenswrapper[4827]: I0131 03:48:33.121081 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:48:33 crc kubenswrapper[4827]: I0131 03:48:33.121121 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:48:33 crc kubenswrapper[4827]: I0131 03:48:33.121150 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:48:33Z","lastTransitionTime":"2026-01-31T03:48:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 03:48:33 crc kubenswrapper[4827]: I0131 03:48:33.123177 4827 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-31 13:35:51.253498367 +0000 UTC Jan 31 03:48:33 crc kubenswrapper[4827]: I0131 03:48:33.224386 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:48:33 crc kubenswrapper[4827]: I0131 03:48:33.224456 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:48:33 crc kubenswrapper[4827]: I0131 03:48:33.224478 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:48:33 crc kubenswrapper[4827]: I0131 03:48:33.224505 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:48:33 crc kubenswrapper[4827]: I0131 03:48:33.224524 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:48:33Z","lastTransitionTime":"2026-01-31T03:48:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 03:48:33 crc kubenswrapper[4827]: I0131 03:48:33.328162 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:48:33 crc kubenswrapper[4827]: I0131 03:48:33.328219 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:48:33 crc kubenswrapper[4827]: I0131 03:48:33.328238 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:48:33 crc kubenswrapper[4827]: I0131 03:48:33.328265 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:48:33 crc kubenswrapper[4827]: I0131 03:48:33.328284 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:48:33Z","lastTransitionTime":"2026-01-31T03:48:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 03:48:33 crc kubenswrapper[4827]: I0131 03:48:33.432182 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:48:33 crc kubenswrapper[4827]: I0131 03:48:33.432258 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:48:33 crc kubenswrapper[4827]: I0131 03:48:33.432285 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:48:33 crc kubenswrapper[4827]: I0131 03:48:33.432320 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:48:33 crc kubenswrapper[4827]: I0131 03:48:33.432344 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:48:33Z","lastTransitionTime":"2026-01-31T03:48:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 03:48:33 crc kubenswrapper[4827]: I0131 03:48:33.535874 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:48:33 crc kubenswrapper[4827]: I0131 03:48:33.535986 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:48:33 crc kubenswrapper[4827]: I0131 03:48:33.536007 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:48:33 crc kubenswrapper[4827]: I0131 03:48:33.536035 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:48:33 crc kubenswrapper[4827]: I0131 03:48:33.536053 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:48:33Z","lastTransitionTime":"2026-01-31T03:48:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 03:48:33 crc kubenswrapper[4827]: I0131 03:48:33.639256 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:48:33 crc kubenswrapper[4827]: I0131 03:48:33.639331 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:48:33 crc kubenswrapper[4827]: I0131 03:48:33.639349 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:48:33 crc kubenswrapper[4827]: I0131 03:48:33.639375 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:48:33 crc kubenswrapper[4827]: I0131 03:48:33.639396 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:48:33Z","lastTransitionTime":"2026-01-31T03:48:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 03:48:33 crc kubenswrapper[4827]: I0131 03:48:33.742190 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:48:33 crc kubenswrapper[4827]: I0131 03:48:33.742245 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:48:33 crc kubenswrapper[4827]: I0131 03:48:33.742262 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:48:33 crc kubenswrapper[4827]: I0131 03:48:33.742287 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:48:33 crc kubenswrapper[4827]: I0131 03:48:33.742307 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:48:33Z","lastTransitionTime":"2026-01-31T03:48:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 03:48:33 crc kubenswrapper[4827]: I0131 03:48:33.845645 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:48:33 crc kubenswrapper[4827]: I0131 03:48:33.845734 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:48:33 crc kubenswrapper[4827]: I0131 03:48:33.845763 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:48:33 crc kubenswrapper[4827]: I0131 03:48:33.845801 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:48:33 crc kubenswrapper[4827]: I0131 03:48:33.845826 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:48:33Z","lastTransitionTime":"2026-01-31T03:48:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 03:48:33 crc kubenswrapper[4827]: I0131 03:48:33.951240 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:48:33 crc kubenswrapper[4827]: I0131 03:48:33.951747 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:48:33 crc kubenswrapper[4827]: I0131 03:48:33.951955 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:48:33 crc kubenswrapper[4827]: I0131 03:48:33.952123 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:48:33 crc kubenswrapper[4827]: I0131 03:48:33.952288 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:48:33Z","lastTransitionTime":"2026-01-31T03:48:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 03:48:34 crc kubenswrapper[4827]: I0131 03:48:34.056143 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:48:34 crc kubenswrapper[4827]: I0131 03:48:34.057008 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:48:34 crc kubenswrapper[4827]: I0131 03:48:34.057213 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:48:34 crc kubenswrapper[4827]: I0131 03:48:34.057412 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:48:34 crc kubenswrapper[4827]: I0131 03:48:34.057558 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:48:34Z","lastTransitionTime":"2026-01-31T03:48:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 03:48:34 crc kubenswrapper[4827]: I0131 03:48:34.109689 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 31 03:48:34 crc kubenswrapper[4827]: E0131 03:48:34.110041 4827 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 31 03:48:34 crc kubenswrapper[4827]: I0131 03:48:34.109689 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-2shng" Jan 31 03:48:34 crc kubenswrapper[4827]: E0131 03:48:34.110525 4827 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-2shng" podUID="cf80ec31-1f83-4ed6-84e3-055cf9c88bff" Jan 31 03:48:34 crc kubenswrapper[4827]: I0131 03:48:34.111158 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 03:48:34 crc kubenswrapper[4827]: E0131 03:48:34.111468 4827 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 31 03:48:34 crc kubenswrapper[4827]: I0131 03:48:34.123948 4827 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-26 05:02:49.818214803 +0000 UTC Jan 31 03:48:34 crc kubenswrapper[4827]: I0131 03:48:34.161082 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:48:34 crc kubenswrapper[4827]: I0131 03:48:34.161161 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:48:34 crc kubenswrapper[4827]: I0131 03:48:34.161184 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:48:34 crc kubenswrapper[4827]: I0131 03:48:34.161246 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:48:34 crc kubenswrapper[4827]: I0131 03:48:34.161266 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:48:34Z","lastTransitionTime":"2026-01-31T03:48:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 03:48:34 crc kubenswrapper[4827]: I0131 03:48:34.266122 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:48:34 crc kubenswrapper[4827]: I0131 03:48:34.266182 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:48:34 crc kubenswrapper[4827]: I0131 03:48:34.266196 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:48:34 crc kubenswrapper[4827]: I0131 03:48:34.266218 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:48:34 crc kubenswrapper[4827]: I0131 03:48:34.266234 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:48:34Z","lastTransitionTime":"2026-01-31T03:48:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 03:48:34 crc kubenswrapper[4827]: I0131 03:48:34.369517 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:48:34 crc kubenswrapper[4827]: I0131 03:48:34.369589 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:48:34 crc kubenswrapper[4827]: I0131 03:48:34.369608 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:48:34 crc kubenswrapper[4827]: I0131 03:48:34.369639 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:48:34 crc kubenswrapper[4827]: I0131 03:48:34.369660 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:48:34Z","lastTransitionTime":"2026-01-31T03:48:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 03:48:34 crc kubenswrapper[4827]: I0131 03:48:34.473536 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:48:34 crc kubenswrapper[4827]: I0131 03:48:34.473625 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:48:34 crc kubenswrapper[4827]: I0131 03:48:34.473654 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:48:34 crc kubenswrapper[4827]: I0131 03:48:34.473691 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:48:34 crc kubenswrapper[4827]: I0131 03:48:34.473721 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:48:34Z","lastTransitionTime":"2026-01-31T03:48:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 03:48:34 crc kubenswrapper[4827]: I0131 03:48:34.577529 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:48:34 crc kubenswrapper[4827]: I0131 03:48:34.577614 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:48:34 crc kubenswrapper[4827]: I0131 03:48:34.577642 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:48:34 crc kubenswrapper[4827]: I0131 03:48:34.577682 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:48:34 crc kubenswrapper[4827]: I0131 03:48:34.577708 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:48:34Z","lastTransitionTime":"2026-01-31T03:48:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 03:48:34 crc kubenswrapper[4827]: I0131 03:48:34.681321 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:48:34 crc kubenswrapper[4827]: I0131 03:48:34.681389 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:48:34 crc kubenswrapper[4827]: I0131 03:48:34.681413 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:48:34 crc kubenswrapper[4827]: I0131 03:48:34.681449 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:48:34 crc kubenswrapper[4827]: I0131 03:48:34.681472 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:48:34Z","lastTransitionTime":"2026-01-31T03:48:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 03:48:34 crc kubenswrapper[4827]: I0131 03:48:34.785125 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:48:34 crc kubenswrapper[4827]: I0131 03:48:34.785193 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:48:34 crc kubenswrapper[4827]: I0131 03:48:34.785213 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:48:34 crc kubenswrapper[4827]: I0131 03:48:34.785240 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:48:34 crc kubenswrapper[4827]: I0131 03:48:34.785258 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:48:34Z","lastTransitionTime":"2026-01-31T03:48:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 03:48:34 crc kubenswrapper[4827]: I0131 03:48:34.888101 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:48:34 crc kubenswrapper[4827]: I0131 03:48:34.888203 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:48:34 crc kubenswrapper[4827]: I0131 03:48:34.888221 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:48:34 crc kubenswrapper[4827]: I0131 03:48:34.888247 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:48:34 crc kubenswrapper[4827]: I0131 03:48:34.888265 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:48:34Z","lastTransitionTime":"2026-01-31T03:48:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 03:48:34 crc kubenswrapper[4827]: I0131 03:48:34.991608 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:48:34 crc kubenswrapper[4827]: I0131 03:48:34.991684 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:48:34 crc kubenswrapper[4827]: I0131 03:48:34.991707 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:48:34 crc kubenswrapper[4827]: I0131 03:48:34.991736 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:48:34 crc kubenswrapper[4827]: I0131 03:48:34.991754 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:48:34Z","lastTransitionTime":"2026-01-31T03:48:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 03:48:35 crc kubenswrapper[4827]: I0131 03:48:35.095439 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:48:35 crc kubenswrapper[4827]: I0131 03:48:35.095524 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:48:35 crc kubenswrapper[4827]: I0131 03:48:35.095549 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:48:35 crc kubenswrapper[4827]: I0131 03:48:35.095647 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:48:35 crc kubenswrapper[4827]: I0131 03:48:35.095673 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:48:35Z","lastTransitionTime":"2026-01-31T03:48:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 03:48:35 crc kubenswrapper[4827]: I0131 03:48:35.109202 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 31 03:48:35 crc kubenswrapper[4827]: E0131 03:48:35.110145 4827 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 31 03:48:35 crc kubenswrapper[4827]: I0131 03:48:35.110577 4827 scope.go:117] "RemoveContainer" containerID="c4e52d1c48e767ecfcf769c9cb24d896cb851fb3a791c2f80e30c3746e971ad4" Jan 31 03:48:35 crc kubenswrapper[4827]: E0131 03:48:35.110926 4827 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-hj2zw_openshift-ovn-kubernetes(da9e7773-a24b-4e8d-b479-97e2594db0d4)\"" pod="openshift-ovn-kubernetes/ovnkube-node-hj2zw" podUID="da9e7773-a24b-4e8d-b479-97e2594db0d4" Jan 31 03:48:35 crc kubenswrapper[4827]: I0131 03:48:35.124782 4827 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-08 05:21:03.3460648 +0000 UTC Jan 31 03:48:35 crc kubenswrapper[4827]: I0131 03:48:35.199260 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:48:35 crc kubenswrapper[4827]: I0131 03:48:35.199339 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:48:35 crc kubenswrapper[4827]: I0131 03:48:35.199361 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:48:35 crc kubenswrapper[4827]: I0131 03:48:35.199388 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:48:35 crc kubenswrapper[4827]: I0131 03:48:35.199408 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:48:35Z","lastTransitionTime":"2026-01-31T03:48:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 03:48:35 crc kubenswrapper[4827]: I0131 03:48:35.302733 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:48:35 crc kubenswrapper[4827]: I0131 03:48:35.302817 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:48:35 crc kubenswrapper[4827]: I0131 03:48:35.302842 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:48:35 crc kubenswrapper[4827]: I0131 03:48:35.302876 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:48:35 crc kubenswrapper[4827]: I0131 03:48:35.302942 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:48:35Z","lastTransitionTime":"2026-01-31T03:48:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 03:48:35 crc kubenswrapper[4827]: I0131 03:48:35.407320 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:48:35 crc kubenswrapper[4827]: I0131 03:48:35.407423 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:48:35 crc kubenswrapper[4827]: I0131 03:48:35.407443 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:48:35 crc kubenswrapper[4827]: I0131 03:48:35.407467 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:48:35 crc kubenswrapper[4827]: I0131 03:48:35.407486 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:48:35Z","lastTransitionTime":"2026-01-31T03:48:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 03:48:35 crc kubenswrapper[4827]: I0131 03:48:35.511790 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:48:35 crc kubenswrapper[4827]: I0131 03:48:35.511857 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:48:35 crc kubenswrapper[4827]: I0131 03:48:35.511875 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:48:35 crc kubenswrapper[4827]: I0131 03:48:35.511933 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:48:35 crc kubenswrapper[4827]: I0131 03:48:35.511952 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:48:35Z","lastTransitionTime":"2026-01-31T03:48:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 03:48:35 crc kubenswrapper[4827]: I0131 03:48:35.615554 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:48:35 crc kubenswrapper[4827]: I0131 03:48:35.615620 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:48:35 crc kubenswrapper[4827]: I0131 03:48:35.615638 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:48:35 crc kubenswrapper[4827]: I0131 03:48:35.615662 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:48:35 crc kubenswrapper[4827]: I0131 03:48:35.615679 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:48:35Z","lastTransitionTime":"2026-01-31T03:48:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 03:48:35 crc kubenswrapper[4827]: I0131 03:48:35.718578 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:48:35 crc kubenswrapper[4827]: I0131 03:48:35.718650 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:48:35 crc kubenswrapper[4827]: I0131 03:48:35.718673 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:48:35 crc kubenswrapper[4827]: I0131 03:48:35.718704 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:48:35 crc kubenswrapper[4827]: I0131 03:48:35.718727 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:48:35Z","lastTransitionTime":"2026-01-31T03:48:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 03:48:35 crc kubenswrapper[4827]: I0131 03:48:35.821417 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:48:35 crc kubenswrapper[4827]: I0131 03:48:35.821446 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:48:35 crc kubenswrapper[4827]: I0131 03:48:35.821458 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:48:35 crc kubenswrapper[4827]: I0131 03:48:35.821471 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:48:35 crc kubenswrapper[4827]: I0131 03:48:35.821481 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:48:35Z","lastTransitionTime":"2026-01-31T03:48:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 03:48:35 crc kubenswrapper[4827]: I0131 03:48:35.924737 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:48:35 crc kubenswrapper[4827]: I0131 03:48:35.924835 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:48:35 crc kubenswrapper[4827]: I0131 03:48:35.924855 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:48:35 crc kubenswrapper[4827]: I0131 03:48:35.924927 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:48:35 crc kubenswrapper[4827]: I0131 03:48:35.924951 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:48:35Z","lastTransitionTime":"2026-01-31T03:48:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 03:48:36 crc kubenswrapper[4827]: I0131 03:48:36.028654 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:48:36 crc kubenswrapper[4827]: I0131 03:48:36.028734 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:48:36 crc kubenswrapper[4827]: I0131 03:48:36.028756 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:48:36 crc kubenswrapper[4827]: I0131 03:48:36.028795 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:48:36 crc kubenswrapper[4827]: I0131 03:48:36.028815 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:48:36Z","lastTransitionTime":"2026-01-31T03:48:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 03:48:36 crc kubenswrapper[4827]: I0131 03:48:36.109049 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 31 03:48:36 crc kubenswrapper[4827]: I0131 03:48:36.109196 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 03:48:36 crc kubenswrapper[4827]: E0131 03:48:36.109257 4827 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 31 03:48:36 crc kubenswrapper[4827]: I0131 03:48:36.109264 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-2shng" Jan 31 03:48:36 crc kubenswrapper[4827]: E0131 03:48:36.109405 4827 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 31 03:48:36 crc kubenswrapper[4827]: E0131 03:48:36.109591 4827 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-2shng" podUID="cf80ec31-1f83-4ed6-84e3-055cf9c88bff" Jan 31 03:48:36 crc kubenswrapper[4827]: I0131 03:48:36.125833 4827 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-08 02:55:24.727989698 +0000 UTC Jan 31 03:48:36 crc kubenswrapper[4827]: I0131 03:48:36.133050 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:48:36 crc kubenswrapper[4827]: I0131 03:48:36.133111 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:48:36 crc kubenswrapper[4827]: I0131 03:48:36.133131 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:48:36 crc kubenswrapper[4827]: I0131 03:48:36.133154 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:48:36 crc kubenswrapper[4827]: I0131 03:48:36.133176 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:48:36Z","lastTransitionTime":"2026-01-31T03:48:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 03:48:36 crc kubenswrapper[4827]: I0131 03:48:36.236512 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:48:36 crc kubenswrapper[4827]: I0131 03:48:36.236577 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:48:36 crc kubenswrapper[4827]: I0131 03:48:36.236596 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:48:36 crc kubenswrapper[4827]: I0131 03:48:36.236625 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:48:36 crc kubenswrapper[4827]: I0131 03:48:36.236648 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:48:36Z","lastTransitionTime":"2026-01-31T03:48:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 03:48:36 crc kubenswrapper[4827]: I0131 03:48:36.339821 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:48:36 crc kubenswrapper[4827]: I0131 03:48:36.339871 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:48:36 crc kubenswrapper[4827]: I0131 03:48:36.339901 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:48:36 crc kubenswrapper[4827]: I0131 03:48:36.339919 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:48:36 crc kubenswrapper[4827]: I0131 03:48:36.339933 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:48:36Z","lastTransitionTime":"2026-01-31T03:48:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 03:48:36 crc kubenswrapper[4827]: I0131 03:48:36.442685 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:48:36 crc kubenswrapper[4827]: I0131 03:48:36.442750 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:48:36 crc kubenswrapper[4827]: I0131 03:48:36.442769 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:48:36 crc kubenswrapper[4827]: I0131 03:48:36.442797 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:48:36 crc kubenswrapper[4827]: I0131 03:48:36.442816 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:48:36Z","lastTransitionTime":"2026-01-31T03:48:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 03:48:36 crc kubenswrapper[4827]: I0131 03:48:36.496558 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:48:36 crc kubenswrapper[4827]: I0131 03:48:36.496761 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:48:36 crc kubenswrapper[4827]: I0131 03:48:36.496780 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:48:36 crc kubenswrapper[4827]: I0131 03:48:36.496806 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:48:36 crc kubenswrapper[4827]: I0131 03:48:36.496828 4827 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:48:36Z","lastTransitionTime":"2026-01-31T03:48:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 03:48:36 crc kubenswrapper[4827]: I0131 03:48:36.578834 4827 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-version/cluster-version-operator-5c965bbfc6-65c7t"] Jan 31 03:48:36 crc kubenswrapper[4827]: I0131 03:48:36.579529 4827 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-65c7t" Jan 31 03:48:36 crc kubenswrapper[4827]: I0131 03:48:36.585171 4827 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt" Jan 31 03:48:36 crc kubenswrapper[4827]: I0131 03:48:36.585486 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"default-dockercfg-gxtc4" Jan 31 03:48:36 crc kubenswrapper[4827]: I0131 03:48:36.585840 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert" Jan 31 03:48:36 crc kubenswrapper[4827]: I0131 03:48:36.586711 4827 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt" Jan 31 03:48:36 crc kubenswrapper[4827]: I0131 03:48:36.679767 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/14f6d36f-cf1d-4051-8358-5e265fe1d258-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-65c7t\" (UID: \"14f6d36f-cf1d-4051-8358-5e265fe1d258\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-65c7t" Jan 31 03:48:36 crc kubenswrapper[4827]: I0131 03:48:36.679832 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/14f6d36f-cf1d-4051-8358-5e265fe1d258-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-65c7t\" (UID: \"14f6d36f-cf1d-4051-8358-5e265fe1d258\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-65c7t" Jan 31 03:48:36 crc kubenswrapper[4827]: I0131 03:48:36.679920 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: 
\"kubernetes.io/configmap/14f6d36f-cf1d-4051-8358-5e265fe1d258-service-ca\") pod \"cluster-version-operator-5c965bbfc6-65c7t\" (UID: \"14f6d36f-cf1d-4051-8358-5e265fe1d258\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-65c7t" Jan 31 03:48:36 crc kubenswrapper[4827]: I0131 03:48:36.679958 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/14f6d36f-cf1d-4051-8358-5e265fe1d258-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-65c7t\" (UID: \"14f6d36f-cf1d-4051-8358-5e265fe1d258\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-65c7t" Jan 31 03:48:36 crc kubenswrapper[4827]: I0131 03:48:36.679988 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/14f6d36f-cf1d-4051-8358-5e265fe1d258-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-65c7t\" (UID: \"14f6d36f-cf1d-4051-8358-5e265fe1d258\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-65c7t" Jan 31 03:48:36 crc kubenswrapper[4827]: I0131 03:48:36.696636 4827 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/node-ca-cl9c5" podStartSLOduration=88.696613667 podStartE2EDuration="1m28.696613667s" podCreationTimestamp="2026-01-31 03:47:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 03:48:36.673236595 +0000 UTC m=+109.360317074" watchObservedRunningTime="2026-01-31 03:48:36.696613667 +0000 UTC m=+109.383694126" Jan 31 03:48:36 crc kubenswrapper[4827]: I0131 03:48:36.696785 4827 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-q9q8q" podStartSLOduration=88.696780732 podStartE2EDuration="1m28.696780732s" podCreationTimestamp="2026-01-31 
03:47:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 03:48:36.695813439 +0000 UTC m=+109.382893908" watchObservedRunningTime="2026-01-31 03:48:36.696780732 +0000 UTC m=+109.383861191" Jan 31 03:48:36 crc kubenswrapper[4827]: I0131 03:48:36.722926 4827 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-additional-cni-plugins-gjc5t" podStartSLOduration=87.722872325 podStartE2EDuration="1m27.722872325s" podCreationTimestamp="2026-01-31 03:47:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 03:48:36.721937553 +0000 UTC m=+109.409018052" watchObservedRunningTime="2026-01-31 03:48:36.722872325 +0000 UTC m=+109.409952814" Jan 31 03:48:36 crc kubenswrapper[4827]: I0131 03:48:36.771715 4827 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd/etcd-crc" podStartSLOduration=14.771694086 podStartE2EDuration="14.771694086s" podCreationTimestamp="2026-01-31 03:48:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 03:48:36.77122709 +0000 UTC m=+109.458307619" watchObservedRunningTime="2026-01-31 03:48:36.771694086 +0000 UTC m=+109.458774555" Jan 31 03:48:36 crc kubenswrapper[4827]: I0131 03:48:36.780668 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/14f6d36f-cf1d-4051-8358-5e265fe1d258-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-65c7t\" (UID: \"14f6d36f-cf1d-4051-8358-5e265fe1d258\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-65c7t" Jan 31 03:48:36 crc kubenswrapper[4827]: I0131 03:48:36.780737 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/14f6d36f-cf1d-4051-8358-5e265fe1d258-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-65c7t\" (UID: \"14f6d36f-cf1d-4051-8358-5e265fe1d258\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-65c7t" Jan 31 03:48:36 crc kubenswrapper[4827]: I0131 03:48:36.780815 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/14f6d36f-cf1d-4051-8358-5e265fe1d258-service-ca\") pod \"cluster-version-operator-5c965bbfc6-65c7t\" (UID: \"14f6d36f-cf1d-4051-8358-5e265fe1d258\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-65c7t" Jan 31 03:48:36 crc kubenswrapper[4827]: I0131 03:48:36.780849 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/14f6d36f-cf1d-4051-8358-5e265fe1d258-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-65c7t\" (UID: \"14f6d36f-cf1d-4051-8358-5e265fe1d258\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-65c7t" Jan 31 03:48:36 crc kubenswrapper[4827]: I0131 03:48:36.780932 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/14f6d36f-cf1d-4051-8358-5e265fe1d258-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-65c7t\" (UID: \"14f6d36f-cf1d-4051-8358-5e265fe1d258\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-65c7t" Jan 31 03:48:36 crc kubenswrapper[4827]: I0131 03:48:36.781406 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/14f6d36f-cf1d-4051-8358-5e265fe1d258-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-65c7t\" (UID: \"14f6d36f-cf1d-4051-8358-5e265fe1d258\") " 
pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-65c7t" Jan 31 03:48:36 crc kubenswrapper[4827]: I0131 03:48:36.781723 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/14f6d36f-cf1d-4051-8358-5e265fe1d258-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-65c7t\" (UID: \"14f6d36f-cf1d-4051-8358-5e265fe1d258\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-65c7t" Jan 31 03:48:36 crc kubenswrapper[4827]: I0131 03:48:36.782834 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/14f6d36f-cf1d-4051-8358-5e265fe1d258-service-ca\") pod \"cluster-version-operator-5c965bbfc6-65c7t\" (UID: \"14f6d36f-cf1d-4051-8358-5e265fe1d258\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-65c7t" Jan 31 03:48:36 crc kubenswrapper[4827]: I0131 03:48:36.789421 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/14f6d36f-cf1d-4051-8358-5e265fe1d258-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-65c7t\" (UID: \"14f6d36f-cf1d-4051-8358-5e265fe1d258\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-65c7t" Jan 31 03:48:36 crc kubenswrapper[4827]: I0131 03:48:36.798374 4827 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-crc" podStartSLOduration=88.798355778 podStartE2EDuration="1m28.798355778s" podCreationTimestamp="2026-01-31 03:47:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 03:48:36.797909273 +0000 UTC m=+109.484989742" watchObservedRunningTime="2026-01-31 03:48:36.798355778 +0000 UTC m=+109.485436237" Jan 31 03:48:36 crc kubenswrapper[4827]: I0131 03:48:36.803856 4827 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/14f6d36f-cf1d-4051-8358-5e265fe1d258-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-65c7t\" (UID: \"14f6d36f-cf1d-4051-8358-5e265fe1d258\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-65c7t" Jan 31 03:48:36 crc kubenswrapper[4827]: I0131 03:48:36.830077 4827 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" podStartSLOduration=25.830052797 podStartE2EDuration="25.830052797s" podCreationTimestamp="2026-01-31 03:48:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 03:48:36.829185289 +0000 UTC m=+109.516265798" watchObservedRunningTime="2026-01-31 03:48:36.830052797 +0000 UTC m=+109.517133266" Jan 31 03:48:36 crc kubenswrapper[4827]: I0131 03:48:36.830557 4827 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" podStartSLOduration=58.830549224 podStartE2EDuration="58.830549224s" podCreationTimestamp="2026-01-31 03:47:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 03:48:36.814742116 +0000 UTC m=+109.501822565" watchObservedRunningTime="2026-01-31 03:48:36.830549224 +0000 UTC m=+109.517629683" Jan 31 03:48:36 crc kubenswrapper[4827]: I0131 03:48:36.849608 4827 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-daemon-jxh94" podStartSLOduration=88.84958955 podStartE2EDuration="1m28.84958955s" podCreationTimestamp="2026-01-31 03:47:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 03:48:36.849027202 +0000 UTC 
m=+109.536107721" watchObservedRunningTime="2026-01-31 03:48:36.84958955 +0000 UTC m=+109.536670009" Jan 31 03:48:36 crc kubenswrapper[4827]: I0131 03:48:36.898492 4827 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/node-resolver-w7v8l" podStartSLOduration=88.898471745 podStartE2EDuration="1m28.898471745s" podCreationTimestamp="2026-01-31 03:47:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 03:48:36.898214946 +0000 UTC m=+109.585295485" watchObservedRunningTime="2026-01-31 03:48:36.898471745 +0000 UTC m=+109.585552204" Jan 31 03:48:36 crc kubenswrapper[4827]: I0131 03:48:36.907266 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-65c7t" Jan 31 03:48:36 crc kubenswrapper[4827]: I0131 03:48:36.916962 4827 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-l5njv" podStartSLOduration=87.916939182 podStartE2EDuration="1m27.916939182s" podCreationTimestamp="2026-01-31 03:47:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 03:48:36.916670523 +0000 UTC m=+109.603750982" watchObservedRunningTime="2026-01-31 03:48:36.916939182 +0000 UTC m=+109.604019641" Jan 31 03:48:36 crc kubenswrapper[4827]: I0131 03:48:36.975807 4827 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podStartSLOduration=83.975778519 podStartE2EDuration="1m23.975778519s" podCreationTimestamp="2026-01-31 03:47:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 03:48:36.971855788 +0000 UTC m=+109.658936247" 
watchObservedRunningTime="2026-01-31 03:48:36.975778519 +0000 UTC m=+109.662859018" Jan 31 03:48:37 crc kubenswrapper[4827]: I0131 03:48:37.109017 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 31 03:48:37 crc kubenswrapper[4827]: E0131 03:48:37.110063 4827 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 31 03:48:37 crc kubenswrapper[4827]: I0131 03:48:37.127082 4827 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-24 20:57:53.20654419 +0000 UTC Jan 31 03:48:37 crc kubenswrapper[4827]: I0131 03:48:37.127178 4827 certificate_manager.go:356] kubernetes.io/kubelet-serving: Rotating certificates Jan 31 03:48:37 crc kubenswrapper[4827]: I0131 03:48:37.136265 4827 reflector.go:368] Caches populated for *v1.CertificateSigningRequest from k8s.io/client-go/tools/watch/informerwatcher.go:146 Jan 31 03:48:37 crc kubenswrapper[4827]: I0131 03:48:37.749641 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-65c7t" event={"ID":"14f6d36f-cf1d-4051-8358-5e265fe1d258","Type":"ContainerStarted","Data":"c60be9793bf0f50cb7dd77b821238717ce2ab6b078a6027e2e2f551963eb6240"} Jan 31 03:48:37 crc kubenswrapper[4827]: I0131 03:48:37.750548 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-65c7t" 
event={"ID":"14f6d36f-cf1d-4051-8358-5e265fe1d258","Type":"ContainerStarted","Data":"534af9d006b7d41fdd093db1c3d93a9abca5b8569a48baeaa92160d4c1420d08"} Jan 31 03:48:37 crc kubenswrapper[4827]: I0131 03:48:37.775475 4827 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-65c7t" podStartSLOduration=89.775432891 podStartE2EDuration="1m29.775432891s" podCreationTimestamp="2026-01-31 03:47:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 03:48:37.773582509 +0000 UTC m=+110.460662998" watchObservedRunningTime="2026-01-31 03:48:37.775432891 +0000 UTC m=+110.462513380" Jan 31 03:48:38 crc kubenswrapper[4827]: I0131 03:48:38.109752 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 03:48:38 crc kubenswrapper[4827]: I0131 03:48:38.109752 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 31 03:48:38 crc kubenswrapper[4827]: I0131 03:48:38.110337 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-2shng" Jan 31 03:48:38 crc kubenswrapper[4827]: E0131 03:48:38.111552 4827 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 31 03:48:38 crc kubenswrapper[4827]: E0131 03:48:38.112073 4827 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 31 03:48:38 crc kubenswrapper[4827]: E0131 03:48:38.112295 4827 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-2shng" podUID="cf80ec31-1f83-4ed6-84e3-055cf9c88bff" Jan 31 03:48:39 crc kubenswrapper[4827]: I0131 03:48:39.109359 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 31 03:48:39 crc kubenswrapper[4827]: E0131 03:48:39.109496 4827 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 31 03:48:40 crc kubenswrapper[4827]: I0131 03:48:40.109848 4827 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 03:48:40 crc kubenswrapper[4827]: I0131 03:48:40.109940 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 31 03:48:40 crc kubenswrapper[4827]: I0131 03:48:40.110115 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-2shng" Jan 31 03:48:40 crc kubenswrapper[4827]: E0131 03:48:40.110249 4827 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 31 03:48:40 crc kubenswrapper[4827]: E0131 03:48:40.110450 4827 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-2shng" podUID="cf80ec31-1f83-4ed6-84e3-055cf9c88bff" Jan 31 03:48:40 crc kubenswrapper[4827]: E0131 03:48:40.111060 4827 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 31 03:48:41 crc kubenswrapper[4827]: I0131 03:48:41.109714 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 31 03:48:41 crc kubenswrapper[4827]: E0131 03:48:41.109890 4827 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 31 03:48:42 crc kubenswrapper[4827]: I0131 03:48:42.109386 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 31 03:48:42 crc kubenswrapper[4827]: I0131 03:48:42.109435 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 03:48:42 crc kubenswrapper[4827]: E0131 03:48:42.109601 4827 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 31 03:48:42 crc kubenswrapper[4827]: I0131 03:48:42.109666 4827 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-2shng" Jan 31 03:48:42 crc kubenswrapper[4827]: E0131 03:48:42.109832 4827 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 31 03:48:42 crc kubenswrapper[4827]: E0131 03:48:42.109990 4827 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-2shng" podUID="cf80ec31-1f83-4ed6-84e3-055cf9c88bff" Jan 31 03:48:43 crc kubenswrapper[4827]: I0131 03:48:43.110118 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 31 03:48:43 crc kubenswrapper[4827]: E0131 03:48:43.110276 4827 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 31 03:48:43 crc kubenswrapper[4827]: I0131 03:48:43.777582 4827 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-q9q8q_a696063c-4553-4032-8038-9900f09d4031/kube-multus/1.log" Jan 31 03:48:43 crc kubenswrapper[4827]: I0131 03:48:43.778556 4827 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-q9q8q_a696063c-4553-4032-8038-9900f09d4031/kube-multus/0.log" Jan 31 03:48:43 crc kubenswrapper[4827]: I0131 03:48:43.778655 4827 generic.go:334] "Generic (PLEG): container finished" podID="a696063c-4553-4032-8038-9900f09d4031" containerID="b6b5307b128e69f814b56eb826859eb4b02d6645d1665c1f5d205492590135ef" exitCode=1 Jan 31 03:48:43 crc kubenswrapper[4827]: I0131 03:48:43.778705 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-q9q8q" event={"ID":"a696063c-4553-4032-8038-9900f09d4031","Type":"ContainerDied","Data":"b6b5307b128e69f814b56eb826859eb4b02d6645d1665c1f5d205492590135ef"} Jan 31 03:48:43 crc kubenswrapper[4827]: I0131 03:48:43.778770 4827 scope.go:117] "RemoveContainer" containerID="3b899def12de10c70999d453a62d09ed0eec7e6f059a55a7e3f8e4b3ef248205" Jan 31 03:48:43 crc kubenswrapper[4827]: I0131 03:48:43.779362 4827 scope.go:117] "RemoveContainer" containerID="b6b5307b128e69f814b56eb826859eb4b02d6645d1665c1f5d205492590135ef" Jan 31 03:48:43 crc kubenswrapper[4827]: E0131 03:48:43.779608 4827 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-multus pod=multus-q9q8q_openshift-multus(a696063c-4553-4032-8038-9900f09d4031)\"" pod="openshift-multus/multus-q9q8q" podUID="a696063c-4553-4032-8038-9900f09d4031" Jan 31 03:48:44 crc kubenswrapper[4827]: I0131 03:48:44.109371 4827 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 31 03:48:44 crc kubenswrapper[4827]: I0131 03:48:44.109410 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 03:48:44 crc kubenswrapper[4827]: I0131 03:48:44.109372 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-2shng" Jan 31 03:48:44 crc kubenswrapper[4827]: E0131 03:48:44.109536 4827 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 31 03:48:44 crc kubenswrapper[4827]: E0131 03:48:44.109787 4827 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 31 03:48:44 crc kubenswrapper[4827]: E0131 03:48:44.110135 4827 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-2shng" podUID="cf80ec31-1f83-4ed6-84e3-055cf9c88bff" Jan 31 03:48:44 crc kubenswrapper[4827]: I0131 03:48:44.784246 4827 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-q9q8q_a696063c-4553-4032-8038-9900f09d4031/kube-multus/1.log" Jan 31 03:48:45 crc kubenswrapper[4827]: I0131 03:48:45.109329 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 31 03:48:45 crc kubenswrapper[4827]: E0131 03:48:45.109547 4827 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 31 03:48:46 crc kubenswrapper[4827]: I0131 03:48:46.109721 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 31 03:48:46 crc kubenswrapper[4827]: E0131 03:48:46.109863 4827 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 31 03:48:46 crc kubenswrapper[4827]: I0131 03:48:46.109949 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-2shng" Jan 31 03:48:46 crc kubenswrapper[4827]: I0131 03:48:46.110038 4827 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 03:48:46 crc kubenswrapper[4827]: E0131 03:48:46.110281 4827 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-2shng" podUID="cf80ec31-1f83-4ed6-84e3-055cf9c88bff" Jan 31 03:48:46 crc kubenswrapper[4827]: E0131 03:48:46.110499 4827 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 31 03:48:47 crc kubenswrapper[4827]: I0131 03:48:47.109215 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 31 03:48:47 crc kubenswrapper[4827]: E0131 03:48:47.109560 4827 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 31 03:48:47 crc kubenswrapper[4827]: I0131 03:48:47.110638 4827 scope.go:117] "RemoveContainer" containerID="c4e52d1c48e767ecfcf769c9cb24d896cb851fb3a791c2f80e30c3746e971ad4" Jan 31 03:48:47 crc kubenswrapper[4827]: E0131 03:48:47.110914 4827 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-hj2zw_openshift-ovn-kubernetes(da9e7773-a24b-4e8d-b479-97e2594db0d4)\"" pod="openshift-ovn-kubernetes/ovnkube-node-hj2zw" podUID="da9e7773-a24b-4e8d-b479-97e2594db0d4" Jan 31 03:48:48 crc kubenswrapper[4827]: I0131 03:48:48.109447 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 31 03:48:48 crc kubenswrapper[4827]: I0131 03:48:48.109579 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 03:48:48 crc kubenswrapper[4827]: E0131 03:48:48.109623 4827 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 31 03:48:48 crc kubenswrapper[4827]: E0131 03:48:48.112996 4827 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 31 03:48:48 crc kubenswrapper[4827]: I0131 03:48:48.113082 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-2shng" Jan 31 03:48:48 crc kubenswrapper[4827]: E0131 03:48:48.113174 4827 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-2shng" podUID="cf80ec31-1f83-4ed6-84e3-055cf9c88bff" Jan 31 03:48:48 crc kubenswrapper[4827]: E0131 03:48:48.138922 4827 kubelet_node_status.go:497] "Node not becoming ready in time after startup" Jan 31 03:48:48 crc kubenswrapper[4827]: E0131 03:48:48.217908 4827 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Jan 31 03:48:49 crc kubenswrapper[4827]: I0131 03:48:49.109307 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 31 03:48:49 crc kubenswrapper[4827]: E0131 03:48:49.109493 4827 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 31 03:48:50 crc kubenswrapper[4827]: I0131 03:48:50.109805 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 03:48:50 crc kubenswrapper[4827]: I0131 03:48:50.109923 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-2shng" Jan 31 03:48:50 crc kubenswrapper[4827]: I0131 03:48:50.109805 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 31 03:48:50 crc kubenswrapper[4827]: E0131 03:48:50.110129 4827 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 31 03:48:50 crc kubenswrapper[4827]: E0131 03:48:50.110297 4827 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 31 03:48:50 crc kubenswrapper[4827]: E0131 03:48:50.110534 4827 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-2shng" podUID="cf80ec31-1f83-4ed6-84e3-055cf9c88bff" Jan 31 03:48:51 crc kubenswrapper[4827]: I0131 03:48:51.109487 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 31 03:48:51 crc kubenswrapper[4827]: E0131 03:48:51.109996 4827 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 31 03:48:52 crc kubenswrapper[4827]: I0131 03:48:52.109953 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 31 03:48:52 crc kubenswrapper[4827]: I0131 03:48:52.109978 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-2shng" Jan 31 03:48:52 crc kubenswrapper[4827]: I0131 03:48:52.110031 4827 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 03:48:52 crc kubenswrapper[4827]: E0131 03:48:52.110616 4827 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-2shng" podUID="cf80ec31-1f83-4ed6-84e3-055cf9c88bff" Jan 31 03:48:52 crc kubenswrapper[4827]: E0131 03:48:52.110705 4827 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 31 03:48:52 crc kubenswrapper[4827]: E0131 03:48:52.111061 4827 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 31 03:48:53 crc kubenswrapper[4827]: I0131 03:48:53.109770 4827 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 31 03:48:53 crc kubenswrapper[4827]: E0131 03:48:53.110052 4827 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 31 03:48:53 crc kubenswrapper[4827]: E0131 03:48:53.219049 4827 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Jan 31 03:48:54 crc kubenswrapper[4827]: I0131 03:48:54.110079 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 31 03:48:54 crc kubenswrapper[4827]: I0131 03:48:54.110190 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 03:48:54 crc kubenswrapper[4827]: E0131 03:48:54.110218 4827 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 31 03:48:54 crc kubenswrapper[4827]: I0131 03:48:54.110231 4827 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-2shng" Jan 31 03:48:54 crc kubenswrapper[4827]: E0131 03:48:54.110378 4827 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 31 03:48:54 crc kubenswrapper[4827]: E0131 03:48:54.110440 4827 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-2shng" podUID="cf80ec31-1f83-4ed6-84e3-055cf9c88bff" Jan 31 03:48:55 crc kubenswrapper[4827]: I0131 03:48:55.109215 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 31 03:48:55 crc kubenswrapper[4827]: E0131 03:48:55.109397 4827 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 31 03:48:56 crc kubenswrapper[4827]: I0131 03:48:56.109714 4827 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-2shng" Jan 31 03:48:56 crc kubenswrapper[4827]: I0131 03:48:56.109956 4827 scope.go:117] "RemoveContainer" containerID="b6b5307b128e69f814b56eb826859eb4b02d6645d1665c1f5d205492590135ef" Jan 31 03:48:56 crc kubenswrapper[4827]: I0131 03:48:56.109817 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 03:48:56 crc kubenswrapper[4827]: I0131 03:48:56.109875 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 31 03:48:56 crc kubenswrapper[4827]: E0131 03:48:56.110278 4827 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-2shng" podUID="cf80ec31-1f83-4ed6-84e3-055cf9c88bff" Jan 31 03:48:56 crc kubenswrapper[4827]: E0131 03:48:56.110342 4827 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 31 03:48:56 crc kubenswrapper[4827]: E0131 03:48:56.110400 4827 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 31 03:48:56 crc kubenswrapper[4827]: I0131 03:48:56.829001 4827 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-q9q8q_a696063c-4553-4032-8038-9900f09d4031/kube-multus/1.log" Jan 31 03:48:56 crc kubenswrapper[4827]: I0131 03:48:56.829083 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-q9q8q" event={"ID":"a696063c-4553-4032-8038-9900f09d4031","Type":"ContainerStarted","Data":"ece84892ef77f9e6974ebeca6ed9ded8a17232182f0b1775f9230aea9422d6c1"} Jan 31 03:48:57 crc kubenswrapper[4827]: I0131 03:48:57.109386 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 31 03:48:57 crc kubenswrapper[4827]: E0131 03:48:57.109543 4827 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 31 03:48:58 crc kubenswrapper[4827]: I0131 03:48:58.109644 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 31 03:48:58 crc kubenswrapper[4827]: I0131 03:48:58.109759 4827 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 03:48:58 crc kubenswrapper[4827]: E0131 03:48:58.112229 4827 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 31 03:48:58 crc kubenswrapper[4827]: I0131 03:48:58.112288 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-2shng" Jan 31 03:48:58 crc kubenswrapper[4827]: E0131 03:48:58.112476 4827 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 31 03:48:58 crc kubenswrapper[4827]: E0131 03:48:58.112637 4827 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-2shng" podUID="cf80ec31-1f83-4ed6-84e3-055cf9c88bff" Jan 31 03:48:58 crc kubenswrapper[4827]: E0131 03:48:58.220330 4827 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" Jan 31 03:48:59 crc kubenswrapper[4827]: I0131 03:48:59.109695 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 31 03:48:59 crc kubenswrapper[4827]: E0131 03:48:59.109968 4827 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 31 03:49:00 crc kubenswrapper[4827]: I0131 03:49:00.109129 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 03:49:00 crc kubenswrapper[4827]: I0131 03:49:00.109262 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 31 03:49:00 crc kubenswrapper[4827]: I0131 03:49:00.109322 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-2shng" Jan 31 03:49:00 crc kubenswrapper[4827]: E0131 03:49:00.109924 4827 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 31 03:49:00 crc kubenswrapper[4827]: I0131 03:49:00.110392 4827 scope.go:117] "RemoveContainer" containerID="c4e52d1c48e767ecfcf769c9cb24d896cb851fb3a791c2f80e30c3746e971ad4" Jan 31 03:49:00 crc kubenswrapper[4827]: E0131 03:49:00.110767 4827 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-2shng" podUID="cf80ec31-1f83-4ed6-84e3-055cf9c88bff" Jan 31 03:49:00 crc kubenswrapper[4827]: E0131 03:49:00.112564 4827 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 31 03:49:00 crc kubenswrapper[4827]: I0131 03:49:00.844279 4827 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-hj2zw_da9e7773-a24b-4e8d-b479-97e2594db0d4/ovnkube-controller/3.log" Jan 31 03:49:00 crc kubenswrapper[4827]: I0131 03:49:00.847083 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hj2zw" event={"ID":"da9e7773-a24b-4e8d-b479-97e2594db0d4","Type":"ContainerStarted","Data":"cea8767bf7ec05e1a8900fe5b7ec188cd3109073dc3bbf6571d53b501934370c"} Jan 31 03:49:00 crc kubenswrapper[4827]: I0131 03:49:00.847589 4827 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-hj2zw" Jan 31 03:49:00 crc kubenswrapper[4827]: I0131 03:49:00.881163 4827 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-hj2zw" podStartSLOduration=111.881145202 podStartE2EDuration="1m51.881145202s" podCreationTimestamp="2026-01-31 03:47:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 03:49:00.876650952 +0000 UTC m=+133.563731421" watchObservedRunningTime="2026-01-31 03:49:00.881145202 +0000 UTC m=+133.568225681" Jan 31 03:49:01 crc kubenswrapper[4827]: I0131 03:49:01.038304 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-2shng"] Jan 31 03:49:01 crc kubenswrapper[4827]: I0131 03:49:01.038418 4827 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-2shng" Jan 31 03:49:01 crc kubenswrapper[4827]: E0131 03:49:01.038517 4827 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-2shng" podUID="cf80ec31-1f83-4ed6-84e3-055cf9c88bff" Jan 31 03:49:01 crc kubenswrapper[4827]: I0131 03:49:01.109602 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 31 03:49:01 crc kubenswrapper[4827]: E0131 03:49:01.109776 4827 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 31 03:49:02 crc kubenswrapper[4827]: I0131 03:49:02.108976 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 31 03:49:02 crc kubenswrapper[4827]: E0131 03:49:02.109157 4827 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 31 03:49:02 crc kubenswrapper[4827]: I0131 03:49:02.108981 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 03:49:02 crc kubenswrapper[4827]: E0131 03:49:02.109412 4827 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 31 03:49:03 crc kubenswrapper[4827]: I0131 03:49:03.109311 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-2shng" Jan 31 03:49:03 crc kubenswrapper[4827]: I0131 03:49:03.109345 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 31 03:49:03 crc kubenswrapper[4827]: E0131 03:49:03.109508 4827 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-2shng" podUID="cf80ec31-1f83-4ed6-84e3-055cf9c88bff" Jan 31 03:49:03 crc kubenswrapper[4827]: E0131 03:49:03.109686 4827 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 31 03:49:04 crc kubenswrapper[4827]: I0131 03:49:04.109050 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 31 03:49:04 crc kubenswrapper[4827]: I0131 03:49:04.109064 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 03:49:04 crc kubenswrapper[4827]: I0131 03:49:04.112949 4827 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt" Jan 31 03:49:04 crc kubenswrapper[4827]: I0131 03:49:04.113408 4827 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt" Jan 31 03:49:04 crc kubenswrapper[4827]: I0131 03:49:04.114138 4827 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-console"/"networking-console-plugin" Jan 31 03:49:04 crc kubenswrapper[4827]: I0131 03:49:04.114406 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-console"/"networking-console-plugin-cert" Jan 31 03:49:05 crc kubenswrapper[4827]: I0131 03:49:05.109855 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 31 03:49:05 crc kubenswrapper[4827]: I0131 03:49:05.110024 4827 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-2shng" Jan 31 03:49:05 crc kubenswrapper[4827]: I0131 03:49:05.114009 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret" Jan 31 03:49:05 crc kubenswrapper[4827]: I0131 03:49:05.115408 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-sa-dockercfg-d427c" Jan 31 03:49:07 crc kubenswrapper[4827]: I0131 03:49:07.148773 4827 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeReady" Jan 31 03:49:07 crc kubenswrapper[4827]: I0131 03:49:07.205634 4827 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-h2fxz"] Jan 31 03:49:07 crc kubenswrapper[4827]: I0131 03:49:07.209088 4827 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-x2j9j"] Jan 31 03:49:07 crc kubenswrapper[4827]: I0131 03:49:07.213036 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-5694c8668f-h2fxz" Jan 31 03:49:07 crc kubenswrapper[4827]: I0131 03:49:07.221637 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-76f77b778f-x2j9j" Jan 31 03:49:07 crc kubenswrapper[4827]: I0131 03:49:07.223487 4827 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-58p4m"] Jan 31 03:49:07 crc kubenswrapper[4827]: I0131 03:49:07.224035 4827 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-58p4m" Jan 31 03:49:07 crc kubenswrapper[4827]: I0131 03:49:07.225135 4827 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"machine-api-operator-images" Jan 31 03:49:07 crc kubenswrapper[4827]: I0131 03:49:07.225316 4827 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-machine-approver/machine-approver-56656f9798-fm67q"] Jan 31 03:49:07 crc kubenswrapper[4827]: I0131 03:49:07.225296 4827 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config" Jan 31 03:49:07 crc kubenswrapper[4827]: I0131 03:49:07.225952 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-fm67q" Jan 31 03:49:07 crc kubenswrapper[4827]: I0131 03:49:07.229916 4827 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-z6phb"] Jan 31 03:49:07 crc kubenswrapper[4827]: I0131 03:49:07.230706 4827 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-lm4fq"] Jan 31 03:49:07 crc kubenswrapper[4827]: I0131 03:49:07.231129 4827 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-s4zhc"] Jan 31 03:49:07 crc kubenswrapper[4827]: I0131 03:49:07.231655 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-s4zhc" Jan 31 03:49:07 crc kubenswrapper[4827]: I0131 03:49:07.232357 4827 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/downloads-7954f5f757-sgzkm"] Jan 31 03:49:07 crc kubenswrapper[4827]: I0131 03:49:07.232394 4827 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication-operator/authentication-operator-69f744f599-z6phb" Jan 31 03:49:07 crc kubenswrapper[4827]: I0131 03:49:07.232409 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-lm4fq" Jan 31 03:49:07 crc kubenswrapper[4827]: I0131 03:49:07.233093 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-7954f5f757-sgzkm" Jan 31 03:49:07 crc kubenswrapper[4827]: I0131 03:49:07.234596 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client" Jan 31 03:49:07 crc kubenswrapper[4827]: I0131 03:49:07.236473 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b3544285-7727-4eac-b8ed-2e00b26823c7-config\") pod \"openshift-apiserver-operator-796bbdcf4f-s4zhc\" (UID: \"b3544285-7727-4eac-b8ed-2e00b26823c7\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-s4zhc" Jan 31 03:49:07 crc kubenswrapper[4827]: I0131 03:49:07.236526 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e9108c46-95f6-4d7e-879e-a4354473f51f-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-lm4fq\" (UID: \"e9108c46-95f6-4d7e-879e-a4354473f51f\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-lm4fq" Jan 31 03:49:07 crc kubenswrapper[4827]: I0131 03:49:07.236564 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k9zcb\" (UniqueName: \"kubernetes.io/projected/b3544285-7727-4eac-b8ed-2e00b26823c7-kube-api-access-k9zcb\") pod \"openshift-apiserver-operator-796bbdcf4f-s4zhc\" (UID: 
\"b3544285-7727-4eac-b8ed-2e00b26823c7\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-s4zhc" Jan 31 03:49:07 crc kubenswrapper[4827]: I0131 03:49:07.236588 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/e0e07dc1-cd63-49ad-8a43-7a15027a1e74-etcd-client\") pod \"apiserver-76f77b778f-x2j9j\" (UID: \"e0e07dc1-cd63-49ad-8a43-7a15027a1e74\") " pod="openshift-apiserver/apiserver-76f77b778f-x2j9j" Jan 31 03:49:07 crc kubenswrapper[4827]: I0131 03:49:07.236610 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/e0e07dc1-cd63-49ad-8a43-7a15027a1e74-audit-dir\") pod \"apiserver-76f77b778f-x2j9j\" (UID: \"e0e07dc1-cd63-49ad-8a43-7a15027a1e74\") " pod="openshift-apiserver/apiserver-76f77b778f-x2j9j" Jan 31 03:49:07 crc kubenswrapper[4827]: I0131 03:49:07.236631 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/654e373c-b142-475e-8ab8-7644f9b0d73c-serving-cert\") pod \"authentication-operator-69f744f599-z6phb\" (UID: \"654e373c-b142-475e-8ab8-7644f9b0d73c\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-z6phb" Jan 31 03:49:07 crc kubenswrapper[4827]: I0131 03:49:07.236655 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e0e07dc1-cd63-49ad-8a43-7a15027a1e74-config\") pod \"apiserver-76f77b778f-x2j9j\" (UID: \"e0e07dc1-cd63-49ad-8a43-7a15027a1e74\") " pod="openshift-apiserver/apiserver-76f77b778f-x2j9j" Jan 31 03:49:07 crc kubenswrapper[4827]: I0131 03:49:07.236675 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-api-operator-tls\" (UniqueName: 
\"kubernetes.io/secret/899b03ec-0d91-4793-a5a2-d3aca48e5309-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-h2fxz\" (UID: \"899b03ec-0d91-4793-a5a2-d3aca48e5309\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-h2fxz" Jan 31 03:49:07 crc kubenswrapper[4827]: I0131 03:49:07.236697 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/654e373c-b142-475e-8ab8-7644f9b0d73c-config\") pod \"authentication-operator-69f744f599-z6phb\" (UID: \"654e373c-b142-475e-8ab8-7644f9b0d73c\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-z6phb" Jan 31 03:49:07 crc kubenswrapper[4827]: I0131 03:49:07.236814 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d25vn\" (UniqueName: \"kubernetes.io/projected/e0e07dc1-cd63-49ad-8a43-7a15027a1e74-kube-api-access-d25vn\") pod \"apiserver-76f77b778f-x2j9j\" (UID: \"e0e07dc1-cd63-49ad-8a43-7a15027a1e74\") " pod="openshift-apiserver/apiserver-76f77b778f-x2j9j" Jan 31 03:49:07 crc kubenswrapper[4827]: I0131 03:49:07.236841 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/05a0bcfb-cbdb-4f6a-bd21-89c876dc7635-config\") pod \"machine-approver-56656f9798-fm67q\" (UID: \"05a0bcfb-cbdb-4f6a-bd21-89c876dc7635\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-fm67q" Jan 31 03:49:07 crc kubenswrapper[4827]: I0131 03:49:07.236862 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/05a0bcfb-cbdb-4f6a-bd21-89c876dc7635-auth-proxy-config\") pod \"machine-approver-56656f9798-fm67q\" (UID: \"05a0bcfb-cbdb-4f6a-bd21-89c876dc7635\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-fm67q" Jan 
31 03:49:07 crc kubenswrapper[4827]: I0131 03:49:07.236898 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/e0e07dc1-cd63-49ad-8a43-7a15027a1e74-audit\") pod \"apiserver-76f77b778f-x2j9j\" (UID: \"e0e07dc1-cd63-49ad-8a43-7a15027a1e74\") " pod="openshift-apiserver/apiserver-76f77b778f-x2j9j" Jan 31 03:49:07 crc kubenswrapper[4827]: I0131 03:49:07.236924 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/05a0bcfb-cbdb-4f6a-bd21-89c876dc7635-machine-approver-tls\") pod \"machine-approver-56656f9798-fm67q\" (UID: \"05a0bcfb-cbdb-4f6a-bd21-89c876dc7635\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-fm67q" Jan 31 03:49:07 crc kubenswrapper[4827]: I0131 03:49:07.236946 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e0e07dc1-cd63-49ad-8a43-7a15027a1e74-serving-cert\") pod \"apiserver-76f77b778f-x2j9j\" (UID: \"e0e07dc1-cd63-49ad-8a43-7a15027a1e74\") " pod="openshift-apiserver/apiserver-76f77b778f-x2j9j" Jan 31 03:49:07 crc kubenswrapper[4827]: I0131 03:49:07.236966 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e0e07dc1-cd63-49ad-8a43-7a15027a1e74-trusted-ca-bundle\") pod \"apiserver-76f77b778f-x2j9j\" (UID: \"e0e07dc1-cd63-49ad-8a43-7a15027a1e74\") " pod="openshift-apiserver/apiserver-76f77b778f-x2j9j" Jan 31 03:49:07 crc kubenswrapper[4827]: I0131 03:49:07.236990 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qgjfp\" (UniqueName: \"kubernetes.io/projected/6e659a59-19ab-4c91-98ec-db3042ac1d4b-kube-api-access-qgjfp\") pod \"downloads-7954f5f757-sgzkm\" 
(UID: \"6e659a59-19ab-4c91-98ec-db3042ac1d4b\") " pod="openshift-console/downloads-7954f5f757-sgzkm" Jan 31 03:49:07 crc kubenswrapper[4827]: I0131 03:49:07.237014 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/654e373c-b142-475e-8ab8-7644f9b0d73c-service-ca-bundle\") pod \"authentication-operator-69f744f599-z6phb\" (UID: \"654e373c-b142-475e-8ab8-7644f9b0d73c\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-z6phb" Jan 31 03:49:07 crc kubenswrapper[4827]: I0131 03:49:07.237037 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/893053b3-df21-4683-a51a-bf12b3bed27d-client-ca\") pod \"route-controller-manager-6576b87f9c-58p4m\" (UID: \"893053b3-df21-4683-a51a-bf12b3bed27d\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-58p4m" Jan 31 03:49:07 crc kubenswrapper[4827]: I0131 03:49:07.237059 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/e0e07dc1-cd63-49ad-8a43-7a15027a1e74-image-import-ca\") pod \"apiserver-76f77b778f-x2j9j\" (UID: \"e0e07dc1-cd63-49ad-8a43-7a15027a1e74\") " pod="openshift-apiserver/apiserver-76f77b778f-x2j9j" Jan 31 03:49:07 crc kubenswrapper[4827]: I0131 03:49:07.237082 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/893053b3-df21-4683-a51a-bf12b3bed27d-config\") pod \"route-controller-manager-6576b87f9c-58p4m\" (UID: \"893053b3-df21-4683-a51a-bf12b3bed27d\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-58p4m" Jan 31 03:49:07 crc kubenswrapper[4827]: I0131 03:49:07.237103 4827 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zgk44\" (UniqueName: \"kubernetes.io/projected/05a0bcfb-cbdb-4f6a-bd21-89c876dc7635-kube-api-access-zgk44\") pod \"machine-approver-56656f9798-fm67q\" (UID: \"05a0bcfb-cbdb-4f6a-bd21-89c876dc7635\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-fm67q" Jan 31 03:49:07 crc kubenswrapper[4827]: I0131 03:49:07.237125 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hn4jn\" (UniqueName: \"kubernetes.io/projected/899b03ec-0d91-4793-a5a2-d3aca48e5309-kube-api-access-hn4jn\") pod \"machine-api-operator-5694c8668f-h2fxz\" (UID: \"899b03ec-0d91-4793-a5a2-d3aca48e5309\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-h2fxz" Jan 31 03:49:07 crc kubenswrapper[4827]: I0131 03:49:07.237147 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/893053b3-df21-4683-a51a-bf12b3bed27d-serving-cert\") pod \"route-controller-manager-6576b87f9c-58p4m\" (UID: \"893053b3-df21-4683-a51a-bf12b3bed27d\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-58p4m" Jan 31 03:49:07 crc kubenswrapper[4827]: I0131 03:49:07.237169 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/e0e07dc1-cd63-49ad-8a43-7a15027a1e74-encryption-config\") pod \"apiserver-76f77b778f-x2j9j\" (UID: \"e0e07dc1-cd63-49ad-8a43-7a15027a1e74\") " pod="openshift-apiserver/apiserver-76f77b778f-x2j9j" Jan 31 03:49:07 crc kubenswrapper[4827]: I0131 03:49:07.237189 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/e0e07dc1-cd63-49ad-8a43-7a15027a1e74-node-pullsecrets\") pod \"apiserver-76f77b778f-x2j9j\" 
(UID: \"e0e07dc1-cd63-49ad-8a43-7a15027a1e74\") " pod="openshift-apiserver/apiserver-76f77b778f-x2j9j" Jan 31 03:49:07 crc kubenswrapper[4827]: I0131 03:49:07.237212 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b3544285-7727-4eac-b8ed-2e00b26823c7-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-s4zhc\" (UID: \"b3544285-7727-4eac-b8ed-2e00b26823c7\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-s4zhc" Jan 31 03:49:07 crc kubenswrapper[4827]: I0131 03:49:07.237235 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/899b03ec-0d91-4793-a5a2-d3aca48e5309-images\") pod \"machine-api-operator-5694c8668f-h2fxz\" (UID: \"899b03ec-0d91-4793-a5a2-d3aca48e5309\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-h2fxz" Jan 31 03:49:07 crc kubenswrapper[4827]: I0131 03:49:07.237260 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g88f5\" (UniqueName: \"kubernetes.io/projected/654e373c-b142-475e-8ab8-7644f9b0d73c-kube-api-access-g88f5\") pod \"authentication-operator-69f744f599-z6phb\" (UID: \"654e373c-b142-475e-8ab8-7644f9b0d73c\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-z6phb" Jan 31 03:49:07 crc kubenswrapper[4827]: I0131 03:49:07.237279 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e9108c46-95f6-4d7e-879e-a4354473f51f-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-lm4fq\" (UID: \"e9108c46-95f6-4d7e-879e-a4354473f51f\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-lm4fq" Jan 31 03:49:07 crc kubenswrapper[4827]: I0131 03:49:07.237311 4827 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z9868\" (UniqueName: \"kubernetes.io/projected/e9108c46-95f6-4d7e-879e-a4354473f51f-kube-api-access-z9868\") pod \"openshift-controller-manager-operator-756b6f6bc6-lm4fq\" (UID: \"e9108c46-95f6-4d7e-879e-a4354473f51f\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-lm4fq" Jan 31 03:49:07 crc kubenswrapper[4827]: I0131 03:49:07.237333 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z2gdp\" (UniqueName: \"kubernetes.io/projected/893053b3-df21-4683-a51a-bf12b3bed27d-kube-api-access-z2gdp\") pod \"route-controller-manager-6576b87f9c-58p4m\" (UID: \"893053b3-df21-4683-a51a-bf12b3bed27d\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-58p4m" Jan 31 03:49:07 crc kubenswrapper[4827]: I0131 03:49:07.237355 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/e0e07dc1-cd63-49ad-8a43-7a15027a1e74-etcd-serving-ca\") pod \"apiserver-76f77b778f-x2j9j\" (UID: \"e0e07dc1-cd63-49ad-8a43-7a15027a1e74\") " pod="openshift-apiserver/apiserver-76f77b778f-x2j9j" Jan 31 03:49:07 crc kubenswrapper[4827]: I0131 03:49:07.237376 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/899b03ec-0d91-4793-a5a2-d3aca48e5309-config\") pod \"machine-api-operator-5694c8668f-h2fxz\" (UID: \"899b03ec-0d91-4793-a5a2-d3aca48e5309\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-h2fxz" Jan 31 03:49:07 crc kubenswrapper[4827]: I0131 03:49:07.237398 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/654e373c-b142-475e-8ab8-7644f9b0d73c-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-z6phb\" (UID: \"654e373c-b142-475e-8ab8-7644f9b0d73c\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-z6phb" Jan 31 03:49:07 crc kubenswrapper[4827]: I0131 03:49:07.237834 4827 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy" Jan 31 03:49:07 crc kubenswrapper[4827]: I0131 03:49:07.237985 4827 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt" Jan 31 03:49:07 crc kubenswrapper[4827]: I0131 03:49:07.238080 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-1" Jan 31 03:49:07 crc kubenswrapper[4827]: I0131 03:49:07.238166 4827 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config" Jan 31 03:49:07 crc kubenswrapper[4827]: I0131 03:49:07.238271 4827 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca" Jan 31 03:49:07 crc kubenswrapper[4827]: I0131 03:49:07.238464 4827 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt" Jan 31 03:49:07 crc kubenswrapper[4827]: I0131 03:49:07.238473 4827 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-1" Jan 31 03:49:07 crc kubenswrapper[4827]: I0131 03:49:07.238634 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-tls" Jan 31 03:49:07 crc kubenswrapper[4827]: I0131 03:49:07.238670 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-dockercfg-mfbb7" Jan 31 03:49:07 crc kubenswrapper[4827]: I0131 03:49:07.238914 4827 reflector.go:368] Caches 
populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-nl2j4" Jan 31 03:49:07 crc kubenswrapper[4827]: I0131 03:49:07.239232 4827 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-dqndk"] Jan 31 03:49:07 crc kubenswrapper[4827]: I0131 03:49:07.240523 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-7777fb866f-dqndk" Jan 31 03:49:07 crc kubenswrapper[4827]: I0131 03:49:07.240987 4827 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt" Jan 31 03:49:07 crc kubenswrapper[4827]: I0131 03:49:07.241318 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert" Jan 31 03:49:07 crc kubenswrapper[4827]: I0131 03:49:07.241530 4827 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Jan 31 03:49:07 crc kubenswrapper[4827]: I0131 03:49:07.241827 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"openshift-apiserver-sa-dockercfg-djjff" Jan 31 03:49:07 crc kubenswrapper[4827]: I0131 03:49:07.242017 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Jan 31 03:49:07 crc kubenswrapper[4827]: I0131 03:49:07.242162 4827 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Jan 31 03:49:07 crc kubenswrapper[4827]: I0131 03:49:07.242302 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Jan 31 03:49:07 crc kubenswrapper[4827]: I0131 03:49:07.242472 4827 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-route-controller-manager"/"config" Jan 31 03:49:07 crc kubenswrapper[4827]: I0131 03:49:07.243106 4827 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-root-ca.crt" Jan 31 03:49:07 crc kubenswrapper[4827]: I0131 03:49:07.244658 4827 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt" Jan 31 03:49:07 crc kubenswrapper[4827]: I0131 03:49:07.245150 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls" Jan 31 03:49:07 crc kubenswrapper[4827]: I0131 03:49:07.245326 4827 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca" Jan 31 03:49:07 crc kubenswrapper[4827]: I0131 03:49:07.245491 4827 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt" Jan 31 03:49:07 crc kubenswrapper[4827]: I0131 03:49:07.246350 4827 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config" Jan 31 03:49:07 crc kubenswrapper[4827]: I0131 03:49:07.246517 4827 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy" Jan 31 03:49:07 crc kubenswrapper[4827]: I0131 03:49:07.246841 4827 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"openshift-service-ca.crt" Jan 31 03:49:07 crc kubenswrapper[4827]: I0131 03:49:07.247499 4827 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt" Jan 31 03:49:07 crc kubenswrapper[4827]: I0131 03:49:07.247685 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert" Jan 31 03:49:07 crc kubenswrapper[4827]: I0131 03:49:07.247794 4827 
reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt" Jan 31 03:49:07 crc kubenswrapper[4827]: I0131 03:49:07.247930 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-dockercfg-xtcjv" Jan 31 03:49:07 crc kubenswrapper[4827]: I0131 03:49:07.248139 4827 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"openshift-service-ca.crt" Jan 31 03:49:07 crc kubenswrapper[4827]: I0131 03:49:07.248230 4827 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Jan 31 03:49:07 crc kubenswrapper[4827]: I0131 03:49:07.248475 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"default-dockercfg-chnjx" Jan 31 03:49:07 crc kubenswrapper[4827]: I0131 03:49:07.248558 4827 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt" Jan 31 03:49:07 crc kubenswrapper[4827]: I0131 03:49:07.248563 4827 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config" Jan 31 03:49:07 crc kubenswrapper[4827]: I0131 03:49:07.249432 4827 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"kube-root-ca.crt" Jan 31 03:49:07 crc kubenswrapper[4827]: I0131 03:49:07.249620 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert" Jan 31 03:49:07 crc kubenswrapper[4827]: I0131 03:49:07.249803 4827 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"authentication-operator-config" Jan 31 03:49:07 crc kubenswrapper[4827]: I0131 03:49:07.249970 4827 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-authentication-operator"/"service-ca-bundle" Jan 31 03:49:07 crc kubenswrapper[4827]: I0131 03:49:07.250187 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"authentication-operator-dockercfg-mz9bj" Jan 31 03:49:07 crc kubenswrapper[4827]: I0131 03:49:07.250352 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-dockercfg-vw8fw" Jan 31 03:49:07 crc kubenswrapper[4827]: I0131 03:49:07.251254 4827 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-5p4mg"] Jan 31 03:49:07 crc kubenswrapper[4827]: I0131 03:49:07.252174 4827 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-qtqj4"] Jan 31 03:49:07 crc kubenswrapper[4827]: I0131 03:49:07.252505 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-qtqj4" Jan 31 03:49:07 crc kubenswrapper[4827]: I0131 03:49:07.252841 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-5p4mg" Jan 31 03:49:07 crc kubenswrapper[4827]: I0131 03:49:07.254981 4827 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt" Jan 31 03:49:07 crc kubenswrapper[4827]: I0131 03:49:07.260051 4827 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console-operator/console-operator-58897d9998-57nc5"] Jan 31 03:49:07 crc kubenswrapper[4827]: I0131 03:49:07.260554 4827 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console-operator/console-operator-58897d9998-57nc5" Jan 31 03:49:07 crc kubenswrapper[4827]: I0131 03:49:07.261521 4827 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-mcs5z"] Jan 31 03:49:07 crc kubenswrapper[4827]: I0131 03:49:07.262201 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-744455d44c-mcs5z" Jan 31 03:49:07 crc kubenswrapper[4827]: I0131 03:49:07.262442 4827 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-f9d7485db-q4hqs"] Jan 31 03:49:07 crc kubenswrapper[4827]: I0131 03:49:07.262993 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-q4hqs" Jan 31 03:49:07 crc kubenswrapper[4827]: I0131 03:49:07.266230 4827 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt" Jan 31 03:49:07 crc kubenswrapper[4827]: I0131 03:49:07.284095 4827 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt" Jan 31 03:49:07 crc kubenswrapper[4827]: I0131 03:49:07.284514 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error" Jan 31 03:49:07 crc kubenswrapper[4827]: I0131 03:49:07.284906 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert" Jan 31 03:49:07 crc kubenswrapper[4827]: I0131 03:49:07.284964 4827 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt" Jan 31 03:49:07 crc kubenswrapper[4827]: I0131 03:49:07.285059 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert" Jan 31 03:49:07 crc kubenswrapper[4827]: 
I0131 03:49:07.285238 4827 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"console-operator-config" Jan 31 03:49:07 crc kubenswrapper[4827]: I0131 03:49:07.285468 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert" Jan 31 03:49:07 crc kubenswrapper[4827]: I0131 03:49:07.285619 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"openshift-config-operator-dockercfg-7pc5z" Jan 31 03:49:07 crc kubenswrapper[4827]: I0131 03:49:07.285723 4827 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"oauth-serving-cert" Jan 31 03:49:07 crc kubenswrapper[4827]: I0131 03:49:07.285750 4827 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit" Jan 31 03:49:07 crc kubenswrapper[4827]: I0131 03:49:07.285808 4827 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"service-ca" Jan 31 03:49:07 crc kubenswrapper[4827]: I0131 03:49:07.285869 4827 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt" Jan 31 03:49:07 crc kubenswrapper[4827]: I0131 03:49:07.285929 4827 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt" Jan 31 03:49:07 crc kubenswrapper[4827]: I0131 03:49:07.286004 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data" Jan 31 03:49:07 crc kubenswrapper[4827]: I0131 03:49:07.286079 4827 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt" Jan 31 03:49:07 crc kubenswrapper[4827]: I0131 03:49:07.286168 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session" Jan 31 
03:49:07 crc kubenswrapper[4827]: I0131 03:49:07.286252 4827 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt" Jan 31 03:49:07 crc kubenswrapper[4827]: I0131 03:49:07.286275 4827 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"openshift-service-ca.crt" Jan 31 03:49:07 crc kubenswrapper[4827]: I0131 03:49:07.286326 4827 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig" Jan 31 03:49:07 crc kubenswrapper[4827]: I0131 03:49:07.286406 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection" Jan 31 03:49:07 crc kubenswrapper[4827]: I0131 03:49:07.286488 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls" Jan 31 03:49:07 crc kubenswrapper[4827]: I0131 03:49:07.286572 4827 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"console-config" Jan 31 03:49:07 crc kubenswrapper[4827]: I0131 03:49:07.286615 4827 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"kube-root-ca.crt" Jan 31 03:49:07 crc kubenswrapper[4827]: I0131 03:49:07.286669 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"dns-operator-dockercfg-9mqw5" Jan 31 03:49:07 crc kubenswrapper[4827]: I0131 03:49:07.286738 4827 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-lx7vs"] Jan 31 03:49:07 crc kubenswrapper[4827]: I0131 03:49:07.287329 4827 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-etcd-operator/etcd-operator-b45778765-lx7vs" Jan 31 03:49:07 crc kubenswrapper[4827]: I0131 03:49:07.286751 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"console-operator-dockercfg-4xjcr" Jan 31 03:49:07 crc kubenswrapper[4827]: I0131 03:49:07.286778 4827 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"kube-root-ca.crt" Jan 31 03:49:07 crc kubenswrapper[4827]: I0131 03:49:07.301651 4827 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca" Jan 31 03:49:07 crc kubenswrapper[4827]: I0131 03:49:07.304147 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-oauth-config" Jan 31 03:49:07 crc kubenswrapper[4827]: I0131 03:49:07.304514 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-serving-cert" Jan 31 03:49:07 crc kubenswrapper[4827]: I0131 03:49:07.304755 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs" Jan 31 03:49:07 crc kubenswrapper[4827]: I0131 03:49:07.306448 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"cluster-samples-operator-dockercfg-xpp9w" Jan 31 03:49:07 crc kubenswrapper[4827]: I0131 03:49:07.307371 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"samples-operator-tls" Jan 31 03:49:07 crc kubenswrapper[4827]: I0131 03:49:07.307768 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-dockercfg-f62pw" Jan 31 03:49:07 crc kubenswrapper[4827]: I0131 03:49:07.307895 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc" Jan 31 03:49:07 crc kubenswrapper[4827]: 
I0131 03:49:07.308368 4827 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt" Jan 31 03:49:07 crc kubenswrapper[4827]: I0131 03:49:07.308773 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"serving-cert" Jan 31 03:49:07 crc kubenswrapper[4827]: I0131 03:49:07.329182 4827 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"trusted-ca-bundle" Jan 31 03:49:07 crc kubenswrapper[4827]: I0131 03:49:07.329655 4827 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-mkhcp"] Jan 31 03:49:07 crc kubenswrapper[4827]: I0131 03:49:07.330592 4827 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle" Jan 31 03:49:07 crc kubenswrapper[4827]: I0131 03:49:07.332094 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-mkhcp" Jan 31 03:49:07 crc kubenswrapper[4827]: I0131 03:49:07.332232 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert" Jan 31 03:49:07 crc kubenswrapper[4827]: I0131 03:49:07.332848 4827 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle" Jan 31 03:49:07 crc kubenswrapper[4827]: I0131 03:49:07.333036 4827 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config" Jan 31 03:49:07 crc kubenswrapper[4827]: I0131 03:49:07.333125 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client" Jan 31 03:49:07 crc kubenswrapper[4827]: I0131 03:49:07.333969 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-dockercfg-r9srn" Jan 31 03:49:07 crc 
kubenswrapper[4827]: I0131 03:49:07.334025 4827 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle" Jan 31 03:49:07 crc kubenswrapper[4827]: I0131 03:49:07.334085 4827 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt" Jan 31 03:49:07 crc kubenswrapper[4827]: I0131 03:49:07.338017 4827 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"trusted-ca-bundle" Jan 31 03:49:07 crc kubenswrapper[4827]: I0131 03:49:07.338331 4827 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt" Jan 31 03:49:07 crc kubenswrapper[4827]: I0131 03:49:07.338610 4827 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Jan 31 03:49:07 crc kubenswrapper[4827]: I0131 03:49:07.338768 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Jan 31 03:49:07 crc kubenswrapper[4827]: I0131 03:49:07.338848 4827 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Jan 31 03:49:07 crc kubenswrapper[4827]: I0131 03:49:07.339687 4827 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-hrjxm"] Jan 31 03:49:07 crc kubenswrapper[4827]: I0131 03:49:07.340338 4827 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"trusted-ca" Jan 31 03:49:07 crc kubenswrapper[4827]: I0131 03:49:07.340454 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4a32abae-914d-4102-9e89-817922ff06ca-serving-cert\") pod \"openshift-config-operator-7777fb866f-dqndk\" (UID: 
\"4a32abae-914d-4102-9e89-817922ff06ca\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-dqndk" Jan 31 03:49:07 crc kubenswrapper[4827]: I0131 03:49:07.340495 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/48fc8b4d-5a6b-40a8-acef-785097811718-serving-cert\") pod \"etcd-operator-b45778765-lx7vs\" (UID: \"48fc8b4d-5a6b-40a8-acef-785097811718\") " pod="openshift-etcd-operator/etcd-operator-b45778765-lx7vs" Jan 31 03:49:07 crc kubenswrapper[4827]: I0131 03:49:07.340547 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b3544285-7727-4eac-b8ed-2e00b26823c7-config\") pod \"openshift-apiserver-operator-796bbdcf4f-s4zhc\" (UID: \"b3544285-7727-4eac-b8ed-2e00b26823c7\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-s4zhc" Jan 31 03:49:07 crc kubenswrapper[4827]: I0131 03:49:07.340569 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w9qhm\" (UniqueName: \"kubernetes.io/projected/81c458ce-ffe4-4613-bb4a-0b5d0809519b-kube-api-access-w9qhm\") pod \"console-operator-58897d9998-57nc5\" (UID: \"81c458ce-ffe4-4613-bb4a-0b5d0809519b\") " pod="openshift-console-operator/console-operator-58897d9998-57nc5" Jan 31 03:49:07 crc kubenswrapper[4827]: I0131 03:49:07.340592 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a2a52a00-75ce-4094-bab7-913d6fbab1dc-trusted-ca-bundle\") pod \"console-f9d7485db-q4hqs\" (UID: \"a2a52a00-75ce-4094-bab7-913d6fbab1dc\") " pod="openshift-console/console-f9d7485db-q4hqs" Jan 31 03:49:07 crc kubenswrapper[4827]: I0131 03:49:07.340634 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/a2a52a00-75ce-4094-bab7-913d6fbab1dc-console-oauth-config\") pod \"console-f9d7485db-q4hqs\" (UID: \"a2a52a00-75ce-4094-bab7-913d6fbab1dc\") " pod="openshift-console/console-f9d7485db-q4hqs" Jan 31 03:49:07 crc kubenswrapper[4827]: I0131 03:49:07.340654 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/a2a52a00-75ce-4094-bab7-913d6fbab1dc-console-config\") pod \"console-f9d7485db-q4hqs\" (UID: \"a2a52a00-75ce-4094-bab7-913d6fbab1dc\") " pod="openshift-console/console-f9d7485db-q4hqs" Jan 31 03:49:07 crc kubenswrapper[4827]: I0131 03:49:07.340676 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b9wqf\" (UniqueName: \"kubernetes.io/projected/c10be0b3-7f40-4f17-8206-ab6257d4b23b-kube-api-access-b9wqf\") pod \"oauth-openshift-558db77b4-qtqj4\" (UID: \"c10be0b3-7f40-4f17-8206-ab6257d4b23b\") " pod="openshift-authentication/oauth-openshift-558db77b4-qtqj4" Jan 31 03:49:07 crc kubenswrapper[4827]: I0131 03:49:07.340703 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e9108c46-95f6-4d7e-879e-a4354473f51f-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-lm4fq\" (UID: \"e9108c46-95f6-4d7e-879e-a4354473f51f\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-lm4fq" Jan 31 03:49:07 crc kubenswrapper[4827]: I0131 03:49:07.340726 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/68610615-718e-4fd3-a19b-9de01ef64a03-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-5p4mg\" (UID: \"68610615-718e-4fd3-a19b-9de01ef64a03\") " 
pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-5p4mg" Jan 31 03:49:07 crc kubenswrapper[4827]: I0131 03:49:07.340767 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k9zcb\" (UniqueName: \"kubernetes.io/projected/b3544285-7727-4eac-b8ed-2e00b26823c7-kube-api-access-k9zcb\") pod \"openshift-apiserver-operator-796bbdcf4f-s4zhc\" (UID: \"b3544285-7727-4eac-b8ed-2e00b26823c7\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-s4zhc" Jan 31 03:49:07 crc kubenswrapper[4827]: I0131 03:49:07.340789 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/e0e07dc1-cd63-49ad-8a43-7a15027a1e74-etcd-client\") pod \"apiserver-76f77b778f-x2j9j\" (UID: \"e0e07dc1-cd63-49ad-8a43-7a15027a1e74\") " pod="openshift-apiserver/apiserver-76f77b778f-x2j9j" Jan 31 03:49:07 crc kubenswrapper[4827]: I0131 03:49:07.340810 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/e0e07dc1-cd63-49ad-8a43-7a15027a1e74-audit-dir\") pod \"apiserver-76f77b778f-x2j9j\" (UID: \"e0e07dc1-cd63-49ad-8a43-7a15027a1e74\") " pod="openshift-apiserver/apiserver-76f77b778f-x2j9j" Jan 31 03:49:07 crc kubenswrapper[4827]: I0131 03:49:07.340835 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/899b03ec-0d91-4793-a5a2-d3aca48e5309-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-h2fxz\" (UID: \"899b03ec-0d91-4793-a5a2-d3aca48e5309\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-h2fxz" Jan 31 03:49:07 crc kubenswrapper[4827]: I0131 03:49:07.340859 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-ca\" (UniqueName: 
\"kubernetes.io/configmap/48fc8b4d-5a6b-40a8-acef-785097811718-etcd-ca\") pod \"etcd-operator-b45778765-lx7vs\" (UID: \"48fc8b4d-5a6b-40a8-acef-785097811718\") " pod="openshift-etcd-operator/etcd-operator-b45778765-lx7vs" Jan 31 03:49:07 crc kubenswrapper[4827]: I0131 03:49:07.340918 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/654e373c-b142-475e-8ab8-7644f9b0d73c-serving-cert\") pod \"authentication-operator-69f744f599-z6phb\" (UID: \"654e373c-b142-475e-8ab8-7644f9b0d73c\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-z6phb" Jan 31 03:49:07 crc kubenswrapper[4827]: I0131 03:49:07.340978 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e0e07dc1-cd63-49ad-8a43-7a15027a1e74-config\") pod \"apiserver-76f77b778f-x2j9j\" (UID: \"e0e07dc1-cd63-49ad-8a43-7a15027a1e74\") " pod="openshift-apiserver/apiserver-76f77b778f-x2j9j" Jan 31 03:49:07 crc kubenswrapper[4827]: I0131 03:49:07.342266 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/e0e07dc1-cd63-49ad-8a43-7a15027a1e74-audit-dir\") pod \"apiserver-76f77b778f-x2j9j\" (UID: \"e0e07dc1-cd63-49ad-8a43-7a15027a1e74\") " pod="openshift-apiserver/apiserver-76f77b778f-x2j9j" Jan 31 03:49:07 crc kubenswrapper[4827]: I0131 03:49:07.342330 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/654e373c-b142-475e-8ab8-7644f9b0d73c-config\") pod \"authentication-operator-69f744f599-z6phb\" (UID: \"654e373c-b142-475e-8ab8-7644f9b0d73c\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-z6phb" Jan 31 03:49:07 crc kubenswrapper[4827]: I0131 03:49:07.342362 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d25vn\" 
(UniqueName: \"kubernetes.io/projected/e0e07dc1-cd63-49ad-8a43-7a15027a1e74-kube-api-access-d25vn\") pod \"apiserver-76f77b778f-x2j9j\" (UID: \"e0e07dc1-cd63-49ad-8a43-7a15027a1e74\") " pod="openshift-apiserver/apiserver-76f77b778f-x2j9j" Jan 31 03:49:07 crc kubenswrapper[4827]: I0131 03:49:07.342391 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/c10be0b3-7f40-4f17-8206-ab6257d4b23b-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-qtqj4\" (UID: \"c10be0b3-7f40-4f17-8206-ab6257d4b23b\") " pod="openshift-authentication/oauth-openshift-558db77b4-qtqj4" Jan 31 03:49:07 crc kubenswrapper[4827]: I0131 03:49:07.342467 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/05a0bcfb-cbdb-4f6a-bd21-89c876dc7635-config\") pod \"machine-approver-56656f9798-fm67q\" (UID: \"05a0bcfb-cbdb-4f6a-bd21-89c876dc7635\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-fm67q" Jan 31 03:49:07 crc kubenswrapper[4827]: I0131 03:49:07.342491 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xlnx7\" (UniqueName: \"kubernetes.io/projected/68610615-718e-4fd3-a19b-9de01ef64a03-kube-api-access-xlnx7\") pod \"cluster-samples-operator-665b6dd947-5p4mg\" (UID: \"68610615-718e-4fd3-a19b-9de01ef64a03\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-5p4mg" Jan 31 03:49:07 crc kubenswrapper[4827]: I0131 03:49:07.342515 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/81c458ce-ffe4-4613-bb4a-0b5d0809519b-config\") pod \"console-operator-58897d9998-57nc5\" (UID: \"81c458ce-ffe4-4613-bb4a-0b5d0809519b\") " pod="openshift-console-operator/console-operator-58897d9998-57nc5" Jan 31 
03:49:07 crc kubenswrapper[4827]: I0131 03:49:07.343208 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/654e373c-b142-475e-8ab8-7644f9b0d73c-config\") pod \"authentication-operator-69f744f599-z6phb\" (UID: \"654e373c-b142-475e-8ab8-7644f9b0d73c\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-z6phb" Jan 31 03:49:07 crc kubenswrapper[4827]: I0131 03:49:07.343764 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/05a0bcfb-cbdb-4f6a-bd21-89c876dc7635-config\") pod \"machine-approver-56656f9798-fm67q\" (UID: \"05a0bcfb-cbdb-4f6a-bd21-89c876dc7635\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-fm67q" Jan 31 03:49:07 crc kubenswrapper[4827]: I0131 03:49:07.343825 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/05a0bcfb-cbdb-4f6a-bd21-89c876dc7635-auth-proxy-config\") pod \"machine-approver-56656f9798-fm67q\" (UID: \"05a0bcfb-cbdb-4f6a-bd21-89c876dc7635\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-fm67q" Jan 31 03:49:07 crc kubenswrapper[4827]: I0131 03:49:07.343855 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rj7b4\" (UniqueName: \"kubernetes.io/projected/a2a52a00-75ce-4094-bab7-913d6fbab1dc-kube-api-access-rj7b4\") pod \"console-f9d7485db-q4hqs\" (UID: \"a2a52a00-75ce-4094-bab7-913d6fbab1dc\") " pod="openshift-console/console-f9d7485db-q4hqs" Jan 31 03:49:07 crc kubenswrapper[4827]: I0131 03:49:07.343901 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/48fc8b4d-5a6b-40a8-acef-785097811718-etcd-client\") pod \"etcd-operator-b45778765-lx7vs\" (UID: 
\"48fc8b4d-5a6b-40a8-acef-785097811718\") " pod="openshift-etcd-operator/etcd-operator-b45778765-lx7vs" Jan 31 03:49:07 crc kubenswrapper[4827]: I0131 03:49:07.344285 4827 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" Jan 31 03:49:07 crc kubenswrapper[4827]: I0131 03:49:07.344392 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/05a0bcfb-cbdb-4f6a-bd21-89c876dc7635-auth-proxy-config\") pod \"machine-approver-56656f9798-fm67q\" (UID: \"05a0bcfb-cbdb-4f6a-bd21-89c876dc7635\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-fm67q" Jan 31 03:49:07 crc kubenswrapper[4827]: I0131 03:49:07.344452 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/e0e07dc1-cd63-49ad-8a43-7a15027a1e74-audit\") pod \"apiserver-76f77b778f-x2j9j\" (UID: \"e0e07dc1-cd63-49ad-8a43-7a15027a1e74\") " pod="openshift-apiserver/apiserver-76f77b778f-x2j9j" Jan 31 03:49:07 crc kubenswrapper[4827]: I0131 03:49:07.345060 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/e0e07dc1-cd63-49ad-8a43-7a15027a1e74-audit\") pod \"apiserver-76f77b778f-x2j9j\" (UID: \"e0e07dc1-cd63-49ad-8a43-7a15027a1e74\") " pod="openshift-apiserver/apiserver-76f77b778f-x2j9j" Jan 31 03:49:07 crc kubenswrapper[4827]: I0131 03:49:07.345434 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login" Jan 31 03:49:07 crc kubenswrapper[4827]: I0131 03:49:07.345813 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/48fc8b4d-5a6b-40a8-acef-785097811718-config\") pod \"etcd-operator-b45778765-lx7vs\" (UID: 
\"48fc8b4d-5a6b-40a8-acef-785097811718\") " pod="openshift-etcd-operator/etcd-operator-b45778765-lx7vs" Jan 31 03:49:07 crc kubenswrapper[4827]: I0131 03:49:07.346082 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lg2zt\" (UniqueName: \"kubernetes.io/projected/48fc8b4d-5a6b-40a8-acef-785097811718-kube-api-access-lg2zt\") pod \"etcd-operator-b45778765-lx7vs\" (UID: \"48fc8b4d-5a6b-40a8-acef-785097811718\") " pod="openshift-etcd-operator/etcd-operator-b45778765-lx7vs" Jan 31 03:49:07 crc kubenswrapper[4827]: I0131 03:49:07.347440 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/81c458ce-ffe4-4613-bb4a-0b5d0809519b-serving-cert\") pod \"console-operator-58897d9998-57nc5\" (UID: \"81c458ce-ffe4-4613-bb4a-0b5d0809519b\") " pod="openshift-console-operator/console-operator-58897d9998-57nc5" Jan 31 03:49:07 crc kubenswrapper[4827]: I0131 03:49:07.347542 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/c10be0b3-7f40-4f17-8206-ab6257d4b23b-audit-dir\") pod \"oauth-openshift-558db77b4-qtqj4\" (UID: \"c10be0b3-7f40-4f17-8206-ab6257d4b23b\") " pod="openshift-authentication/oauth-openshift-558db77b4-qtqj4" Jan 31 03:49:07 crc kubenswrapper[4827]: I0131 03:49:07.347640 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/c10be0b3-7f40-4f17-8206-ab6257d4b23b-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-qtqj4\" (UID: \"c10be0b3-7f40-4f17-8206-ab6257d4b23b\") " pod="openshift-authentication/oauth-openshift-558db77b4-qtqj4" Jan 31 03:49:07 crc kubenswrapper[4827]: I0131 03:49:07.347830 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e0e07dc1-cd63-49ad-8a43-7a15027a1e74-trusted-ca-bundle\") pod \"apiserver-76f77b778f-x2j9j\" (UID: \"e0e07dc1-cd63-49ad-8a43-7a15027a1e74\") " pod="openshift-apiserver/apiserver-76f77b778f-x2j9j" Jan 31 03:49:07 crc kubenswrapper[4827]: I0131 03:49:07.348447 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-hrjxm" Jan 31 03:49:07 crc kubenswrapper[4827]: I0131 03:49:07.348545 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e9108c46-95f6-4d7e-879e-a4354473f51f-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-lm4fq\" (UID: \"e9108c46-95f6-4d7e-879e-a4354473f51f\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-lm4fq" Jan 31 03:49:07 crc kubenswrapper[4827]: I0131 03:49:07.350250 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e0e07dc1-cd63-49ad-8a43-7a15027a1e74-trusted-ca-bundle\") pod \"apiserver-76f77b778f-x2j9j\" (UID: \"e0e07dc1-cd63-49ad-8a43-7a15027a1e74\") " pod="openshift-apiserver/apiserver-76f77b778f-x2j9j" Jan 31 03:49:07 crc kubenswrapper[4827]: I0131 03:49:07.350382 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/05a0bcfb-cbdb-4f6a-bd21-89c876dc7635-machine-approver-tls\") pod \"machine-approver-56656f9798-fm67q\" (UID: \"05a0bcfb-cbdb-4f6a-bd21-89c876dc7635\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-fm67q" Jan 31 03:49:07 crc kubenswrapper[4827]: I0131 03:49:07.350490 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e0e07dc1-cd63-49ad-8a43-7a15027a1e74-serving-cert\") pod 
\"apiserver-76f77b778f-x2j9j\" (UID: \"e0e07dc1-cd63-49ad-8a43-7a15027a1e74\") " pod="openshift-apiserver/apiserver-76f77b778f-x2j9j" Jan 31 03:49:07 crc kubenswrapper[4827]: I0131 03:49:07.350615 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e0e07dc1-cd63-49ad-8a43-7a15027a1e74-config\") pod \"apiserver-76f77b778f-x2j9j\" (UID: \"e0e07dc1-cd63-49ad-8a43-7a15027a1e74\") " pod="openshift-apiserver/apiserver-76f77b778f-x2j9j" Jan 31 03:49:07 crc kubenswrapper[4827]: I0131 03:49:07.350955 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/654e373c-b142-475e-8ab8-7644f9b0d73c-serving-cert\") pod \"authentication-operator-69f744f599-z6phb\" (UID: \"654e373c-b142-475e-8ab8-7644f9b0d73c\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-z6phb" Jan 31 03:49:07 crc kubenswrapper[4827]: I0131 03:49:07.351029 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qgjfp\" (UniqueName: \"kubernetes.io/projected/6e659a59-19ab-4c91-98ec-db3042ac1d4b-kube-api-access-qgjfp\") pod \"downloads-7954f5f757-sgzkm\" (UID: \"6e659a59-19ab-4c91-98ec-db3042ac1d4b\") " pod="openshift-console/downloads-7954f5f757-sgzkm" Jan 31 03:49:07 crc kubenswrapper[4827]: I0131 03:49:07.351093 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/81c458ce-ffe4-4613-bb4a-0b5d0809519b-trusted-ca\") pod \"console-operator-58897d9998-57nc5\" (UID: \"81c458ce-ffe4-4613-bb4a-0b5d0809519b\") " pod="openshift-console-operator/console-operator-58897d9998-57nc5" Jan 31 03:49:07 crc kubenswrapper[4827]: I0131 03:49:07.351136 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: 
\"kubernetes.io/configmap/c10be0b3-7f40-4f17-8206-ab6257d4b23b-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-qtqj4\" (UID: \"c10be0b3-7f40-4f17-8206-ab6257d4b23b\") " pod="openshift-authentication/oauth-openshift-558db77b4-qtqj4" Jan 31 03:49:07 crc kubenswrapper[4827]: I0131 03:49:07.351359 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/654e373c-b142-475e-8ab8-7644f9b0d73c-service-ca-bundle\") pod \"authentication-operator-69f744f599-z6phb\" (UID: \"654e373c-b142-475e-8ab8-7644f9b0d73c\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-z6phb" Jan 31 03:49:07 crc kubenswrapper[4827]: I0131 03:49:07.351419 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/893053b3-df21-4683-a51a-bf12b3bed27d-client-ca\") pod \"route-controller-manager-6576b87f9c-58p4m\" (UID: \"893053b3-df21-4683-a51a-bf12b3bed27d\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-58p4m" Jan 31 03:49:07 crc kubenswrapper[4827]: I0131 03:49:07.352139 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/654e373c-b142-475e-8ab8-7644f9b0d73c-service-ca-bundle\") pod \"authentication-operator-69f744f599-z6phb\" (UID: \"654e373c-b142-475e-8ab8-7644f9b0d73c\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-z6phb" Jan 31 03:49:07 crc kubenswrapper[4827]: I0131 03:49:07.352232 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7bafc4cb-e5b7-4b39-9930-b885e403dfca-client-ca\") pod \"controller-manager-879f6c89f-mkhcp\" (UID: \"7bafc4cb-e5b7-4b39-9930-b885e403dfca\") " pod="openshift-controller-manager/controller-manager-879f6c89f-mkhcp" Jan 31 
03:49:07 crc kubenswrapper[4827]: I0131 03:49:07.352301 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/e0e07dc1-cd63-49ad-8a43-7a15027a1e74-image-import-ca\") pod \"apiserver-76f77b778f-x2j9j\" (UID: \"e0e07dc1-cd63-49ad-8a43-7a15027a1e74\") " pod="openshift-apiserver/apiserver-76f77b778f-x2j9j" Jan 31 03:49:07 crc kubenswrapper[4827]: I0131 03:49:07.352479 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/893053b3-df21-4683-a51a-bf12b3bed27d-client-ca\") pod \"route-controller-manager-6576b87f9c-58p4m\" (UID: \"893053b3-df21-4683-a51a-bf12b3bed27d\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-58p4m" Jan 31 03:49:07 crc kubenswrapper[4827]: I0131 03:49:07.352639 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/893053b3-df21-4683-a51a-bf12b3bed27d-config\") pod \"route-controller-manager-6576b87f9c-58p4m\" (UID: \"893053b3-df21-4683-a51a-bf12b3bed27d\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-58p4m" Jan 31 03:49:07 crc kubenswrapper[4827]: I0131 03:49:07.352725 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/c10be0b3-7f40-4f17-8206-ab6257d4b23b-audit-policies\") pod \"oauth-openshift-558db77b4-qtqj4\" (UID: \"c10be0b3-7f40-4f17-8206-ab6257d4b23b\") " pod="openshift-authentication/oauth-openshift-558db77b4-qtqj4" Jan 31 03:49:07 crc kubenswrapper[4827]: I0131 03:49:07.352755 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/c10be0b3-7f40-4f17-8206-ab6257d4b23b-v4-0-config-user-template-error\") pod 
\"oauth-openshift-558db77b4-qtqj4\" (UID: \"c10be0b3-7f40-4f17-8206-ab6257d4b23b\") " pod="openshift-authentication/oauth-openshift-558db77b4-qtqj4" Jan 31 03:49:07 crc kubenswrapper[4827]: I0131 03:49:07.352806 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zgk44\" (UniqueName: \"kubernetes.io/projected/05a0bcfb-cbdb-4f6a-bd21-89c876dc7635-kube-api-access-zgk44\") pod \"machine-approver-56656f9798-fm67q\" (UID: \"05a0bcfb-cbdb-4f6a-bd21-89c876dc7635\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-fm67q" Jan 31 03:49:07 crc kubenswrapper[4827]: I0131 03:49:07.352834 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hn4jn\" (UniqueName: \"kubernetes.io/projected/899b03ec-0d91-4793-a5a2-d3aca48e5309-kube-api-access-hn4jn\") pod \"machine-api-operator-5694c8668f-h2fxz\" (UID: \"899b03ec-0d91-4793-a5a2-d3aca48e5309\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-h2fxz" Jan 31 03:49:07 crc kubenswrapper[4827]: I0131 03:49:07.352859 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/c10be0b3-7f40-4f17-8206-ab6257d4b23b-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-qtqj4\" (UID: \"c10be0b3-7f40-4f17-8206-ab6257d4b23b\") " pod="openshift-authentication/oauth-openshift-558db77b4-qtqj4" Jan 31 03:49:07 crc kubenswrapper[4827]: I0131 03:49:07.353404 4827 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Jan 31 03:49:07 crc kubenswrapper[4827]: I0131 03:49:07.353674 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/e0e07dc1-cd63-49ad-8a43-7a15027a1e74-image-import-ca\") pod \"apiserver-76f77b778f-x2j9j\" (UID: \"e0e07dc1-cd63-49ad-8a43-7a15027a1e74\") " 
pod="openshift-apiserver/apiserver-76f77b778f-x2j9j" Jan 31 03:49:07 crc kubenswrapper[4827]: I0131 03:49:07.354652 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Jan 31 03:49:07 crc kubenswrapper[4827]: I0131 03:49:07.354915 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/48fc8b4d-5a6b-40a8-acef-785097811718-etcd-service-ca\") pod \"etcd-operator-b45778765-lx7vs\" (UID: \"48fc8b4d-5a6b-40a8-acef-785097811718\") " pod="openshift-etcd-operator/etcd-operator-b45778765-lx7vs" Jan 31 03:49:07 crc kubenswrapper[4827]: I0131 03:49:07.354946 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/a2a52a00-75ce-4094-bab7-913d6fbab1dc-service-ca\") pod \"console-f9d7485db-q4hqs\" (UID: \"a2a52a00-75ce-4094-bab7-913d6fbab1dc\") " pod="openshift-console/console-f9d7485db-q4hqs" Jan 31 03:49:07 crc kubenswrapper[4827]: I0131 03:49:07.354972 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/c10be0b3-7f40-4f17-8206-ab6257d4b23b-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-qtqj4\" (UID: \"c10be0b3-7f40-4f17-8206-ab6257d4b23b\") " pod="openshift-authentication/oauth-openshift-558db77b4-qtqj4" Jan 31 03:49:07 crc kubenswrapper[4827]: I0131 03:49:07.355142 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b3544285-7727-4eac-b8ed-2e00b26823c7-config\") pod \"openshift-apiserver-operator-796bbdcf4f-s4zhc\" (UID: \"b3544285-7727-4eac-b8ed-2e00b26823c7\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-s4zhc" Jan 31 03:49:07 crc kubenswrapper[4827]: I0131 
03:49:07.355636 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/05a0bcfb-cbdb-4f6a-bd21-89c876dc7635-machine-approver-tls\") pod \"machine-approver-56656f9798-fm67q\" (UID: \"05a0bcfb-cbdb-4f6a-bd21-89c876dc7635\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-fm67q" Jan 31 03:49:07 crc kubenswrapper[4827]: I0131 03:49:07.356783 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/893053b3-df21-4683-a51a-bf12b3bed27d-serving-cert\") pod \"route-controller-manager-6576b87f9c-58p4m\" (UID: \"893053b3-df21-4683-a51a-bf12b3bed27d\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-58p4m" Jan 31 03:49:07 crc kubenswrapper[4827]: I0131 03:49:07.356831 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/e0e07dc1-cd63-49ad-8a43-7a15027a1e74-encryption-config\") pod \"apiserver-76f77b778f-x2j9j\" (UID: \"e0e07dc1-cd63-49ad-8a43-7a15027a1e74\") " pod="openshift-apiserver/apiserver-76f77b778f-x2j9j" Jan 31 03:49:07 crc kubenswrapper[4827]: I0131 03:49:07.356853 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7bafc4cb-e5b7-4b39-9930-b885e403dfca-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-mkhcp\" (UID: \"7bafc4cb-e5b7-4b39-9930-b885e403dfca\") " pod="openshift-controller-manager/controller-manager-879f6c89f-mkhcp" Jan 31 03:49:07 crc kubenswrapper[4827]: I0131 03:49:07.357664 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/e0e07dc1-cd63-49ad-8a43-7a15027a1e74-node-pullsecrets\") pod \"apiserver-76f77b778f-x2j9j\" (UID: \"e0e07dc1-cd63-49ad-8a43-7a15027a1e74\") " 
pod="openshift-apiserver/apiserver-76f77b778f-x2j9j" Jan 31 03:49:07 crc kubenswrapper[4827]: I0131 03:49:07.357789 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/4a32abae-914d-4102-9e89-817922ff06ca-available-featuregates\") pod \"openshift-config-operator-7777fb866f-dqndk\" (UID: \"4a32abae-914d-4102-9e89-817922ff06ca\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-dqndk" Jan 31 03:49:07 crc kubenswrapper[4827]: I0131 03:49:07.357812 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/755d86cd-4d90-41eb-8c62-b130143346aa-metrics-tls\") pod \"dns-operator-744455d44c-mcs5z\" (UID: \"755d86cd-4d90-41eb-8c62-b130143346aa\") " pod="openshift-dns-operator/dns-operator-744455d44c-mcs5z" Jan 31 03:49:07 crc kubenswrapper[4827]: I0131 03:49:07.357831 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b3544285-7727-4eac-b8ed-2e00b26823c7-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-s4zhc\" (UID: \"b3544285-7727-4eac-b8ed-2e00b26823c7\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-s4zhc" Jan 31 03:49:07 crc kubenswrapper[4827]: I0131 03:49:07.357858 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/c10be0b3-7f40-4f17-8206-ab6257d4b23b-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-qtqj4\" (UID: \"c10be0b3-7f40-4f17-8206-ab6257d4b23b\") " pod="openshift-authentication/oauth-openshift-558db77b4-qtqj4" Jan 31 03:49:07 crc kubenswrapper[4827]: I0131 03:49:07.357913 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" 
(UniqueName: \"kubernetes.io/host-path/e0e07dc1-cd63-49ad-8a43-7a15027a1e74-node-pullsecrets\") pod \"apiserver-76f77b778f-x2j9j\" (UID: \"e0e07dc1-cd63-49ad-8a43-7a15027a1e74\") " pod="openshift-apiserver/apiserver-76f77b778f-x2j9j" Jan 31 03:49:07 crc kubenswrapper[4827]: I0131 03:49:07.359035 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e0e07dc1-cd63-49ad-8a43-7a15027a1e74-serving-cert\") pod \"apiserver-76f77b778f-x2j9j\" (UID: \"e0e07dc1-cd63-49ad-8a43-7a15027a1e74\") " pod="openshift-apiserver/apiserver-76f77b778f-x2j9j" Jan 31 03:49:07 crc kubenswrapper[4827]: I0131 03:49:07.359676 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vvfqr\" (UniqueName: \"kubernetes.io/projected/755d86cd-4d90-41eb-8c62-b130143346aa-kube-api-access-vvfqr\") pod \"dns-operator-744455d44c-mcs5z\" (UID: \"755d86cd-4d90-41eb-8c62-b130143346aa\") " pod="openshift-dns-operator/dns-operator-744455d44c-mcs5z" Jan 31 03:49:07 crc kubenswrapper[4827]: I0131 03:49:07.359766 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/899b03ec-0d91-4793-a5a2-d3aca48e5309-images\") pod \"machine-api-operator-5694c8668f-h2fxz\" (UID: \"899b03ec-0d91-4793-a5a2-d3aca48e5309\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-h2fxz" Jan 31 03:49:07 crc kubenswrapper[4827]: I0131 03:49:07.359838 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/a2a52a00-75ce-4094-bab7-913d6fbab1dc-oauth-serving-cert\") pod \"console-f9d7485db-q4hqs\" (UID: \"a2a52a00-75ce-4094-bab7-913d6fbab1dc\") " pod="openshift-console/console-f9d7485db-q4hqs" Jan 31 03:49:07 crc kubenswrapper[4827]: I0131 03:49:07.359902 4827 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/c10be0b3-7f40-4f17-8206-ab6257d4b23b-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-qtqj4\" (UID: \"c10be0b3-7f40-4f17-8206-ab6257d4b23b\") " pod="openshift-authentication/oauth-openshift-558db77b4-qtqj4" Jan 31 03:49:07 crc kubenswrapper[4827]: I0131 03:49:07.360215 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g88f5\" (UniqueName: \"kubernetes.io/projected/654e373c-b142-475e-8ab8-7644f9b0d73c-kube-api-access-g88f5\") pod \"authentication-operator-69f744f599-z6phb\" (UID: \"654e373c-b142-475e-8ab8-7644f9b0d73c\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-z6phb" Jan 31 03:49:07 crc kubenswrapper[4827]: I0131 03:49:07.360597 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/e0e07dc1-cd63-49ad-8a43-7a15027a1e74-etcd-client\") pod \"apiserver-76f77b778f-x2j9j\" (UID: \"e0e07dc1-cd63-49ad-8a43-7a15027a1e74\") " pod="openshift-apiserver/apiserver-76f77b778f-x2j9j" Jan 31 03:49:07 crc kubenswrapper[4827]: I0131 03:49:07.360635 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e9108c46-95f6-4d7e-879e-a4354473f51f-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-lm4fq\" (UID: \"e9108c46-95f6-4d7e-879e-a4354473f51f\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-lm4fq" Jan 31 03:49:07 crc kubenswrapper[4827]: I0131 03:49:07.360787 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z9868\" (UniqueName: \"kubernetes.io/projected/e9108c46-95f6-4d7e-879e-a4354473f51f-kube-api-access-z9868\") pod \"openshift-controller-manager-operator-756b6f6bc6-lm4fq\" (UID: 
\"e9108c46-95f6-4d7e-879e-a4354473f51f\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-lm4fq" Jan 31 03:49:07 crc kubenswrapper[4827]: I0131 03:49:07.361008 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z2gdp\" (UniqueName: \"kubernetes.io/projected/893053b3-df21-4683-a51a-bf12b3bed27d-kube-api-access-z2gdp\") pod \"route-controller-manager-6576b87f9c-58p4m\" (UID: \"893053b3-df21-4683-a51a-bf12b3bed27d\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-58p4m" Jan 31 03:49:07 crc kubenswrapper[4827]: I0131 03:49:07.361073 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/e0e07dc1-cd63-49ad-8a43-7a15027a1e74-etcd-serving-ca\") pod \"apiserver-76f77b778f-x2j9j\" (UID: \"e0e07dc1-cd63-49ad-8a43-7a15027a1e74\") " pod="openshift-apiserver/apiserver-76f77b778f-x2j9j" Jan 31 03:49:07 crc kubenswrapper[4827]: I0131 03:49:07.361150 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/899b03ec-0d91-4793-a5a2-d3aca48e5309-config\") pod \"machine-api-operator-5694c8668f-h2fxz\" (UID: \"899b03ec-0d91-4793-a5a2-d3aca48e5309\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-h2fxz" Jan 31 03:49:07 crc kubenswrapper[4827]: I0131 03:49:07.361156 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/899b03ec-0d91-4793-a5a2-d3aca48e5309-images\") pod \"machine-api-operator-5694c8668f-h2fxz\" (UID: \"899b03ec-0d91-4793-a5a2-d3aca48e5309\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-h2fxz" Jan 31 03:49:07 crc kubenswrapper[4827]: I0131 03:49:07.361187 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" 
(UniqueName: \"kubernetes.io/configmap/c10be0b3-7f40-4f17-8206-ab6257d4b23b-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-qtqj4\" (UID: \"c10be0b3-7f40-4f17-8206-ab6257d4b23b\") " pod="openshift-authentication/oauth-openshift-558db77b4-qtqj4" Jan 31 03:49:07 crc kubenswrapper[4827]: I0131 03:49:07.361267 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/654e373c-b142-475e-8ab8-7644f9b0d73c-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-z6phb\" (UID: \"654e373c-b142-475e-8ab8-7644f9b0d73c\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-z6phb" Jan 31 03:49:07 crc kubenswrapper[4827]: I0131 03:49:07.361306 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/c10be0b3-7f40-4f17-8206-ab6257d4b23b-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-qtqj4\" (UID: \"c10be0b3-7f40-4f17-8206-ab6257d4b23b\") " pod="openshift-authentication/oauth-openshift-558db77b4-qtqj4" Jan 31 03:49:07 crc kubenswrapper[4827]: I0131 03:49:07.361352 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7bafc4cb-e5b7-4b39-9930-b885e403dfca-config\") pod \"controller-manager-879f6c89f-mkhcp\" (UID: \"7bafc4cb-e5b7-4b39-9930-b885e403dfca\") " pod="openshift-controller-manager/controller-manager-879f6c89f-mkhcp" Jan 31 03:49:07 crc kubenswrapper[4827]: I0131 03:49:07.361377 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7bafc4cb-e5b7-4b39-9930-b885e403dfca-serving-cert\") pod \"controller-manager-879f6c89f-mkhcp\" (UID: \"7bafc4cb-e5b7-4b39-9930-b885e403dfca\") " 
pod="openshift-controller-manager/controller-manager-879f6c89f-mkhcp" Jan 31 03:49:07 crc kubenswrapper[4827]: I0131 03:49:07.361406 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x6x8k\" (UniqueName: \"kubernetes.io/projected/7bafc4cb-e5b7-4b39-9930-b885e403dfca-kube-api-access-x6x8k\") pod \"controller-manager-879f6c89f-mkhcp\" (UID: \"7bafc4cb-e5b7-4b39-9930-b885e403dfca\") " pod="openshift-controller-manager/controller-manager-879f6c89f-mkhcp" Jan 31 03:49:07 crc kubenswrapper[4827]: I0131 03:49:07.361510 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xkqq9\" (UniqueName: \"kubernetes.io/projected/4a32abae-914d-4102-9e89-817922ff06ca-kube-api-access-xkqq9\") pod \"openshift-config-operator-7777fb866f-dqndk\" (UID: \"4a32abae-914d-4102-9e89-817922ff06ca\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-dqndk" Jan 31 03:49:07 crc kubenswrapper[4827]: I0131 03:49:07.361578 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/c10be0b3-7f40-4f17-8206-ab6257d4b23b-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-qtqj4\" (UID: \"c10be0b3-7f40-4f17-8206-ab6257d4b23b\") " pod="openshift-authentication/oauth-openshift-558db77b4-qtqj4" Jan 31 03:49:07 crc kubenswrapper[4827]: I0131 03:49:07.361662 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/a2a52a00-75ce-4094-bab7-913d6fbab1dc-console-serving-cert\") pod \"console-f9d7485db-q4hqs\" (UID: \"a2a52a00-75ce-4094-bab7-913d6fbab1dc\") " pod="openshift-console/console-f9d7485db-q4hqs" Jan 31 03:49:07 crc kubenswrapper[4827]: I0131 03:49:07.361845 4827 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/899b03ec-0d91-4793-a5a2-d3aca48e5309-config\") pod \"machine-api-operator-5694c8668f-h2fxz\" (UID: \"899b03ec-0d91-4793-a5a2-d3aca48e5309\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-h2fxz" Jan 31 03:49:07 crc kubenswrapper[4827]: I0131 03:49:07.361867 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/e0e07dc1-cd63-49ad-8a43-7a15027a1e74-etcd-serving-ca\") pod \"apiserver-76f77b778f-x2j9j\" (UID: \"e0e07dc1-cd63-49ad-8a43-7a15027a1e74\") " pod="openshift-apiserver/apiserver-76f77b778f-x2j9j" Jan 31 03:49:07 crc kubenswrapper[4827]: I0131 03:49:07.361960 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-x2j9j"] Jan 31 03:49:07 crc kubenswrapper[4827]: I0131 03:49:07.362350 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/654e373c-b142-475e-8ab8-7644f9b0d73c-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-z6phb\" (UID: \"654e373c-b142-475e-8ab8-7644f9b0d73c\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-z6phb" Jan 31 03:49:07 crc kubenswrapper[4827]: I0131 03:49:07.363449 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b3544285-7727-4eac-b8ed-2e00b26823c7-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-s4zhc\" (UID: \"b3544285-7727-4eac-b8ed-2e00b26823c7\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-s4zhc" Jan 31 03:49:07 crc kubenswrapper[4827]: I0131 03:49:07.363674 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/893053b3-df21-4683-a51a-bf12b3bed27d-serving-cert\") pod 
\"route-controller-manager-6576b87f9c-58p4m\" (UID: \"893053b3-df21-4683-a51a-bf12b3bed27d\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-58p4m" Jan 31 03:49:07 crc kubenswrapper[4827]: I0131 03:49:07.364179 4827 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Jan 31 03:49:07 crc kubenswrapper[4827]: I0131 03:49:07.364214 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e9108c46-95f6-4d7e-879e-a4354473f51f-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-lm4fq\" (UID: \"e9108c46-95f6-4d7e-879e-a4354473f51f\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-lm4fq" Jan 31 03:49:07 crc kubenswrapper[4827]: I0131 03:49:07.364189 4827 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Jan 31 03:49:07 crc kubenswrapper[4827]: I0131 03:49:07.366049 4827 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-xrt7q"] Jan 31 03:49:07 crc kubenswrapper[4827]: I0131 03:49:07.367202 4827 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-tbsqf"] Jan 31 03:49:07 crc kubenswrapper[4827]: I0131 03:49:07.367811 4827 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-sw4rn"] Jan 31 03:49:07 crc kubenswrapper[4827]: I0131 03:49:07.368395 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template" Jan 31 03:49:07 crc kubenswrapper[4827]: I0131 03:49:07.368750 4827 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-xrt7q" Jan 31 03:49:07 crc kubenswrapper[4827]: I0131 03:49:07.369168 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-tbsqf" Jan 31 03:49:07 crc kubenswrapper[4827]: I0131 03:49:07.369189 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-857f4d67dd-sw4rn" Jan 31 03:49:07 crc kubenswrapper[4827]: I0131 03:49:07.379126 4827 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-s7psb"] Jan 31 03:49:07 crc kubenswrapper[4827]: I0131 03:49:07.380014 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/899b03ec-0d91-4793-a5a2-d3aca48e5309-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-h2fxz\" (UID: \"899b03ec-0d91-4793-a5a2-d3aca48e5309\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-h2fxz" Jan 31 03:49:07 crc kubenswrapper[4827]: I0131 03:49:07.380736 4827 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-g4946"] Jan 31 03:49:07 crc kubenswrapper[4827]: I0131 03:49:07.382928 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-g4946" Jan 31 03:49:07 crc kubenswrapper[4827]: I0131 03:49:07.383098 4827 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-s7psb" Jan 31 03:49:07 crc kubenswrapper[4827]: I0131 03:49:07.387939 4827 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-txptg"] Jan 31 03:49:07 crc kubenswrapper[4827]: I0131 03:49:07.389215 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-txptg" Jan 31 03:49:07 crc kubenswrapper[4827]: I0131 03:49:07.394994 4827 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-lrw5m"] Jan 31 03:49:07 crc kubenswrapper[4827]: I0131 03:49:07.396755 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-lrw5m" Jan 31 03:49:07 crc kubenswrapper[4827]: I0131 03:49:07.397925 4827 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-cxgr4"] Jan 31 03:49:07 crc kubenswrapper[4827]: I0131 03:49:07.398114 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/893053b3-df21-4683-a51a-bf12b3bed27d-config\") pod \"route-controller-manager-6576b87f9c-58p4m\" (UID: \"893053b3-df21-4683-a51a-bf12b3bed27d\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-58p4m" Jan 31 03:49:07 crc kubenswrapper[4827]: I0131 03:49:07.398442 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/e0e07dc1-cd63-49ad-8a43-7a15027a1e74-encryption-config\") pod \"apiserver-76f77b778f-x2j9j\" (UID: \"e0e07dc1-cd63-49ad-8a43-7a15027a1e74\") " pod="openshift-apiserver/apiserver-76f77b778f-x2j9j" Jan 31 03:49:07 crc kubenswrapper[4827]: I0131 
03:49:07.400061 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-cxgr4" Jan 31 03:49:07 crc kubenswrapper[4827]: I0131 03:49:07.411917 4827 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-g8bbs"] Jan 31 03:49:07 crc kubenswrapper[4827]: I0131 03:49:07.412956 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-g8bbs" Jan 31 03:49:07 crc kubenswrapper[4827]: I0131 03:49:07.413382 4827 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-xxh4j"] Jan 31 03:49:07 crc kubenswrapper[4827]: I0131 03:49:07.413949 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-xxh4j" Jan 31 03:49:07 crc kubenswrapper[4827]: I0131 03:49:07.415957 4827 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-mrplz"] Jan 31 03:49:07 crc kubenswrapper[4827]: I0131 03:49:07.416574 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-mrplz" Jan 31 03:49:07 crc kubenswrapper[4827]: I0131 03:49:07.417685 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-h2fxz"] Jan 31 03:49:07 crc kubenswrapper[4827]: I0131 03:49:07.421717 4827 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-vtzj8"] Jan 31 03:49:07 crc kubenswrapper[4827]: I0131 03:49:07.423709 4827 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-vtzj8" Jan 31 03:49:07 crc kubenswrapper[4827]: I0131 03:49:07.424033 4827 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-4rp2t"] Jan 31 03:49:07 crc kubenswrapper[4827]: I0131 03:49:07.426422 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-4rp2t" Jan 31 03:49:07 crc kubenswrapper[4827]: I0131 03:49:07.426709 4827 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-bbd4n"] Jan 31 03:49:07 crc kubenswrapper[4827]: I0131 03:49:07.427660 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-bbd4n" Jan 31 03:49:07 crc kubenswrapper[4827]: I0131 03:49:07.428547 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k9zcb\" (UniqueName: \"kubernetes.io/projected/b3544285-7727-4eac-b8ed-2e00b26823c7-kube-api-access-k9zcb\") pod \"openshift-apiserver-operator-796bbdcf4f-s4zhc\" (UID: \"b3544285-7727-4eac-b8ed-2e00b26823c7\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-s4zhc" Jan 31 03:49:07 crc kubenswrapper[4827]: I0131 03:49:07.429721 4827 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29497185-4tnfv"] Jan 31 03:49:07 crc kubenswrapper[4827]: I0131 03:49:07.430351 4827 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29497185-4tnfv" Jan 31 03:49:07 crc kubenswrapper[4827]: I0131 03:49:07.430704 4827 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-k4wl6"] Jan 31 03:49:07 crc kubenswrapper[4827]: I0131 03:49:07.431127 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-777779d784-k4wl6" Jan 31 03:49:07 crc kubenswrapper[4827]: I0131 03:49:07.432536 4827 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress/router-default-5444994796-dxfqc"] Jan 31 03:49:07 crc kubenswrapper[4827]: I0131 03:49:07.433449 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress/router-default-5444994796-dxfqc" Jan 31 03:49:07 crc kubenswrapper[4827]: I0131 03:49:07.442312 4827 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-fdwxr"] Jan 31 03:49:07 crc kubenswrapper[4827]: I0131 03:49:07.443451 4827 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-rdh9d"] Jan 31 03:49:07 crc kubenswrapper[4827]: I0131 03:49:07.444278 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-9c57cc56f-fdwxr" Jan 31 03:49:07 crc kubenswrapper[4827]: I0131 03:49:07.446360 4827 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1" Jan 31 03:49:07 crc kubenswrapper[4827]: I0131 03:49:07.447822 4827 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-m7n8p"] Jan 31 03:49:07 crc kubenswrapper[4827]: I0131 03:49:07.448099 4827 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-rdh9d" Jan 31 03:49:07 crc kubenswrapper[4827]: I0131 03:49:07.448467 4827 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-6qz7r"] Jan 31 03:49:07 crc kubenswrapper[4827]: I0131 03:49:07.448601 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-m7n8p" Jan 31 03:49:07 crc kubenswrapper[4827]: I0131 03:49:07.449335 4827 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns/dns-default-zhrf7"] Jan 31 03:49:07 crc kubenswrapper[4827]: I0131 03:49:07.449466 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-6qz7r" Jan 31 03:49:07 crc kubenswrapper[4827]: I0131 03:49:07.450448 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d25vn\" (UniqueName: \"kubernetes.io/projected/e0e07dc1-cd63-49ad-8a43-7a15027a1e74-kube-api-access-d25vn\") pod \"apiserver-76f77b778f-x2j9j\" (UID: \"e0e07dc1-cd63-49ad-8a43-7a15027a1e74\") " pod="openshift-apiserver/apiserver-76f77b778f-x2j9j" Jan 31 03:49:07 crc kubenswrapper[4827]: I0131 03:49:07.450706 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-5p4mg"] Jan 31 03:49:07 crc kubenswrapper[4827]: I0131 03:49:07.450724 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-58p4m"] Jan 31 03:49:07 crc kubenswrapper[4827]: I0131 03:49:07.451068 4827 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns/dns-default-zhrf7" Jan 31 03:49:07 crc kubenswrapper[4827]: I0131 03:49:07.453294 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-7954f5f757-sgzkm"] Jan 31 03:49:07 crc kubenswrapper[4827]: I0131 03:49:07.453761 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-58897d9998-57nc5"] Jan 31 03:49:07 crc kubenswrapper[4827]: I0131 03:49:07.454796 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-lx7vs"] Jan 31 03:49:07 crc kubenswrapper[4827]: I0131 03:49:07.456012 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-z6phb"] Jan 31 03:49:07 crc kubenswrapper[4827]: I0131 03:49:07.457244 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-s4zhc"] Jan 31 03:49:07 crc kubenswrapper[4827]: I0131 03:49:07.458589 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-lm4fq"] Jan 31 03:49:07 crc kubenswrapper[4827]: I0131 03:49:07.459861 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-qtqj4"] Jan 31 03:49:07 crc kubenswrapper[4827]: I0131 03:49:07.460849 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-mcs5z"] Jan 31 03:49:07 crc kubenswrapper[4827]: I0131 03:49:07.461811 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-tbsqf"] Jan 31 03:49:07 crc kubenswrapper[4827]: I0131 03:49:07.462697 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/4a32abae-914d-4102-9e89-817922ff06ca-serving-cert\") pod \"openshift-config-operator-7777fb866f-dqndk\" (UID: \"4a32abae-914d-4102-9e89-817922ff06ca\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-dqndk" Jan 31 03:49:07 crc kubenswrapper[4827]: I0131 03:49:07.462734 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/48fc8b4d-5a6b-40a8-acef-785097811718-serving-cert\") pod \"etcd-operator-b45778765-lx7vs\" (UID: \"48fc8b4d-5a6b-40a8-acef-785097811718\") " pod="openshift-etcd-operator/etcd-operator-b45778765-lx7vs" Jan 31 03:49:07 crc kubenswrapper[4827]: I0131 03:49:07.462761 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/bc4e2378-18bc-4624-acb0-a5010db62008-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-hrjxm\" (UID: \"bc4e2378-18bc-4624-acb0-a5010db62008\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-hrjxm" Jan 31 03:49:07 crc kubenswrapper[4827]: I0131 03:49:07.462794 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w9qhm\" (UniqueName: \"kubernetes.io/projected/81c458ce-ffe4-4613-bb4a-0b5d0809519b-kube-api-access-w9qhm\") pod \"console-operator-58897d9998-57nc5\" (UID: \"81c458ce-ffe4-4613-bb4a-0b5d0809519b\") " pod="openshift-console-operator/console-operator-58897d9998-57nc5" Jan 31 03:49:07 crc kubenswrapper[4827]: I0131 03:49:07.462817 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a2a52a00-75ce-4094-bab7-913d6fbab1dc-trusted-ca-bundle\") pod \"console-f9d7485db-q4hqs\" (UID: \"a2a52a00-75ce-4094-bab7-913d6fbab1dc\") " pod="openshift-console/console-f9d7485db-q4hqs" Jan 31 03:49:07 crc kubenswrapper[4827]: I0131 03:49:07.462856 4827 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/a2a52a00-75ce-4094-bab7-913d6fbab1dc-console-config\") pod \"console-f9d7485db-q4hqs\" (UID: \"a2a52a00-75ce-4094-bab7-913d6fbab1dc\") " pod="openshift-console/console-f9d7485db-q4hqs" Jan 31 03:49:07 crc kubenswrapper[4827]: I0131 03:49:07.462871 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b9wqf\" (UniqueName: \"kubernetes.io/projected/c10be0b3-7f40-4f17-8206-ab6257d4b23b-kube-api-access-b9wqf\") pod \"oauth-openshift-558db77b4-qtqj4\" (UID: \"c10be0b3-7f40-4f17-8206-ab6257d4b23b\") " pod="openshift-authentication/oauth-openshift-558db77b4-qtqj4" Jan 31 03:49:07 crc kubenswrapper[4827]: I0131 03:49:07.462905 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/a2a52a00-75ce-4094-bab7-913d6fbab1dc-console-oauth-config\") pod \"console-f9d7485db-q4hqs\" (UID: \"a2a52a00-75ce-4094-bab7-913d6fbab1dc\") " pod="openshift-console/console-f9d7485db-q4hqs" Jan 31 03:49:07 crc kubenswrapper[4827]: I0131 03:49:07.462939 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/68610615-718e-4fd3-a19b-9de01ef64a03-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-5p4mg\" (UID: \"68610615-718e-4fd3-a19b-9de01ef64a03\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-5p4mg" Jan 31 03:49:07 crc kubenswrapper[4827]: I0131 03:49:07.462957 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/85b31f82-0a7a-466c-aa0f-bffb46f2b04c-default-certificate\") pod \"router-default-5444994796-dxfqc\" (UID: \"85b31f82-0a7a-466c-aa0f-bffb46f2b04c\") " pod="openshift-ingress/router-default-5444994796-dxfqc" Jan 31 
03:49:07 crc kubenswrapper[4827]: I0131 03:49:07.462992 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0591695e-7fc9-4d9c-a9a2-a5f44db74caa-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-cxgr4\" (UID: \"0591695e-7fc9-4d9c-a9a2-a5f44db74caa\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-cxgr4" Jan 31 03:49:07 crc kubenswrapper[4827]: I0131 03:49:07.463011 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b13bebc0-a453-40cd-9611-43cf66b3dd53-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-xxh4j\" (UID: \"b13bebc0-a453-40cd-9611-43cf66b3dd53\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-xxh4j" Jan 31 03:49:07 crc kubenswrapper[4827]: I0131 03:49:07.463028 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zc9tz\" (UniqueName: \"kubernetes.io/projected/bc4e2378-18bc-4624-acb0-a5010db62008-kube-api-access-zc9tz\") pod \"apiserver-7bbb656c7d-hrjxm\" (UID: \"bc4e2378-18bc-4624-acb0-a5010db62008\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-hrjxm" Jan 31 03:49:07 crc kubenswrapper[4827]: I0131 03:49:07.463048 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/85b31f82-0a7a-466c-aa0f-bffb46f2b04c-metrics-certs\") pod \"router-default-5444994796-dxfqc\" (UID: \"85b31f82-0a7a-466c-aa0f-bffb46f2b04c\") " pod="openshift-ingress/router-default-5444994796-dxfqc" Jan 31 03:49:07 crc kubenswrapper[4827]: I0131 03:49:07.463069 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: 
\"kubernetes.io/secret/0591695e-7fc9-4d9c-a9a2-a5f44db74caa-proxy-tls\") pod \"machine-config-controller-84d6567774-cxgr4\" (UID: \"0591695e-7fc9-4d9c-a9a2-a5f44db74caa\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-cxgr4" Jan 31 03:49:07 crc kubenswrapper[4827]: I0131 03:49:07.463102 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/48fc8b4d-5a6b-40a8-acef-785097811718-etcd-ca\") pod \"etcd-operator-b45778765-lx7vs\" (UID: \"48fc8b4d-5a6b-40a8-acef-785097811718\") " pod="openshift-etcd-operator/etcd-operator-b45778765-lx7vs" Jan 31 03:49:07 crc kubenswrapper[4827]: I0131 03:49:07.463121 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/0e71f06b-5ae9-4606-922f-eedf9f8eefa6-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-sw4rn\" (UID: \"0e71f06b-5ae9-4606-922f-eedf9f8eefa6\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-sw4rn" Jan 31 03:49:07 crc kubenswrapper[4827]: I0131 03:49:07.463140 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/c10be0b3-7f40-4f17-8206-ab6257d4b23b-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-qtqj4\" (UID: \"c10be0b3-7f40-4f17-8206-ab6257d4b23b\") " pod="openshift-authentication/oauth-openshift-558db77b4-qtqj4" Jan 31 03:49:07 crc kubenswrapper[4827]: I0131 03:49:07.463162 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/64cfc229-d23f-4303-a604-cd7be04f0bc3-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-bbd4n\" (UID: \"64cfc229-d23f-4303-a604-cd7be04f0bc3\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-bbd4n" Jan 31 
03:49:07 crc kubenswrapper[4827]: I0131 03:49:07.463185 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/8fe7a4ad-d825-4988-8390-c04b5a1b114c-srv-cert\") pod \"olm-operator-6b444d44fb-g4946\" (UID: \"8fe7a4ad-d825-4988-8390-c04b5a1b114c\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-g4946" Jan 31 03:49:07 crc kubenswrapper[4827]: I0131 03:49:07.463217 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xlnx7\" (UniqueName: \"kubernetes.io/projected/68610615-718e-4fd3-a19b-9de01ef64a03-kube-api-access-xlnx7\") pod \"cluster-samples-operator-665b6dd947-5p4mg\" (UID: \"68610615-718e-4fd3-a19b-9de01ef64a03\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-5p4mg" Jan 31 03:49:07 crc kubenswrapper[4827]: I0131 03:49:07.463238 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/81c458ce-ffe4-4613-bb4a-0b5d0809519b-config\") pod \"console-operator-58897d9998-57nc5\" (UID: \"81c458ce-ffe4-4613-bb4a-0b5d0809519b\") " pod="openshift-console-operator/console-operator-58897d9998-57nc5" Jan 31 03:49:07 crc kubenswrapper[4827]: I0131 03:49:07.463262 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/64cfc229-d23f-4303-a604-cd7be04f0bc3-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-bbd4n\" (UID: \"64cfc229-d23f-4303-a604-cd7be04f0bc3\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-bbd4n" Jan 31 03:49:07 crc kubenswrapper[4827]: I0131 03:49:07.463296 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rj7b4\" (UniqueName: \"kubernetes.io/projected/a2a52a00-75ce-4094-bab7-913d6fbab1dc-kube-api-access-rj7b4\") 
pod \"console-f9d7485db-q4hqs\" (UID: \"a2a52a00-75ce-4094-bab7-913d6fbab1dc\") " pod="openshift-console/console-f9d7485db-q4hqs" Jan 31 03:49:07 crc kubenswrapper[4827]: I0131 03:49:07.463316 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/48fc8b4d-5a6b-40a8-acef-785097811718-etcd-client\") pod \"etcd-operator-b45778765-lx7vs\" (UID: \"48fc8b4d-5a6b-40a8-acef-785097811718\") " pod="openshift-etcd-operator/etcd-operator-b45778765-lx7vs" Jan 31 03:49:07 crc kubenswrapper[4827]: I0131 03:49:07.463337 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/85b31f82-0a7a-466c-aa0f-bffb46f2b04c-stats-auth\") pod \"router-default-5444994796-dxfqc\" (UID: \"85b31f82-0a7a-466c-aa0f-bffb46f2b04c\") " pod="openshift-ingress/router-default-5444994796-dxfqc" Jan 31 03:49:07 crc kubenswrapper[4827]: I0131 03:49:07.463362 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/48fc8b4d-5a6b-40a8-acef-785097811718-config\") pod \"etcd-operator-b45778765-lx7vs\" (UID: \"48fc8b4d-5a6b-40a8-acef-785097811718\") " pod="openshift-etcd-operator/etcd-operator-b45778765-lx7vs" Jan 31 03:49:07 crc kubenswrapper[4827]: I0131 03:49:07.463381 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/81c458ce-ffe4-4613-bb4a-0b5d0809519b-serving-cert\") pod \"console-operator-58897d9998-57nc5\" (UID: \"81c458ce-ffe4-4613-bb4a-0b5d0809519b\") " pod="openshift-console-operator/console-operator-58897d9998-57nc5" Jan 31 03:49:07 crc kubenswrapper[4827]: I0131 03:49:07.463401 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/c10be0b3-7f40-4f17-8206-ab6257d4b23b-audit-dir\") pod 
\"oauth-openshift-558db77b4-qtqj4\" (UID: \"c10be0b3-7f40-4f17-8206-ab6257d4b23b\") " pod="openshift-authentication/oauth-openshift-558db77b4-qtqj4" Jan 31 03:49:07 crc kubenswrapper[4827]: I0131 03:49:07.463420 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/c10be0b3-7f40-4f17-8206-ab6257d4b23b-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-qtqj4\" (UID: \"c10be0b3-7f40-4f17-8206-ab6257d4b23b\") " pod="openshift-authentication/oauth-openshift-558db77b4-qtqj4" Jan 31 03:49:07 crc kubenswrapper[4827]: I0131 03:49:07.463442 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lg2zt\" (UniqueName: \"kubernetes.io/projected/48fc8b4d-5a6b-40a8-acef-785097811718-kube-api-access-lg2zt\") pod \"etcd-operator-b45778765-lx7vs\" (UID: \"48fc8b4d-5a6b-40a8-acef-785097811718\") " pod="openshift-etcd-operator/etcd-operator-b45778765-lx7vs" Jan 31 03:49:07 crc kubenswrapper[4827]: I0131 03:49:07.463464 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc4e2378-18bc-4624-acb0-a5010db62008-serving-cert\") pod \"apiserver-7bbb656c7d-hrjxm\" (UID: \"bc4e2378-18bc-4624-acb0-a5010db62008\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-hrjxm" Jan 31 03:49:07 crc kubenswrapper[4827]: I0131 03:49:07.463482 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/64cfc229-d23f-4303-a604-cd7be04f0bc3-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-bbd4n\" (UID: \"64cfc229-d23f-4303-a604-cd7be04f0bc3\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-bbd4n" Jan 31 03:49:07 crc kubenswrapper[4827]: I0131 03:49:07.463501 4827 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h6vdd\" (UniqueName: \"kubernetes.io/projected/0e71f06b-5ae9-4606-922f-eedf9f8eefa6-kube-api-access-h6vdd\") pod \"multus-admission-controller-857f4d67dd-sw4rn\" (UID: \"0e71f06b-5ae9-4606-922f-eedf9f8eefa6\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-sw4rn" Jan 31 03:49:07 crc kubenswrapper[4827]: I0131 03:49:07.463519 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/bc4e2378-18bc-4624-acb0-a5010db62008-audit-dir\") pod \"apiserver-7bbb656c7d-hrjxm\" (UID: \"bc4e2378-18bc-4624-acb0-a5010db62008\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-hrjxm" Jan 31 03:49:07 crc kubenswrapper[4827]: I0131 03:49:07.463550 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cfea954e-db56-4946-a178-3376d7793b46-config\") pod \"service-ca-operator-777779d784-k4wl6\" (UID: \"cfea954e-db56-4946-a178-3376d7793b46\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-k4wl6" Jan 31 03:49:07 crc kubenswrapper[4827]: I0131 03:49:07.463568 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/85b31f82-0a7a-466c-aa0f-bffb46f2b04c-service-ca-bundle\") pod \"router-default-5444994796-dxfqc\" (UID: \"85b31f82-0a7a-466c-aa0f-bffb46f2b04c\") " pod="openshift-ingress/router-default-5444994796-dxfqc" Jan 31 03:49:07 crc kubenswrapper[4827]: I0131 03:49:07.463624 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/81c458ce-ffe4-4613-bb4a-0b5d0809519b-trusted-ca\") pod \"console-operator-58897d9998-57nc5\" (UID: \"81c458ce-ffe4-4613-bb4a-0b5d0809519b\") " 
pod="openshift-console-operator/console-operator-58897d9998-57nc5" Jan 31 03:49:07 crc kubenswrapper[4827]: I0131 03:49:07.463647 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fjcsw\" (UniqueName: \"kubernetes.io/projected/0591695e-7fc9-4d9c-a9a2-a5f44db74caa-kube-api-access-fjcsw\") pod \"machine-config-controller-84d6567774-cxgr4\" (UID: \"0591695e-7fc9-4d9c-a9a2-a5f44db74caa\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-cxgr4" Jan 31 03:49:07 crc kubenswrapper[4827]: I0131 03:49:07.463667 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/c10be0b3-7f40-4f17-8206-ab6257d4b23b-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-qtqj4\" (UID: \"c10be0b3-7f40-4f17-8206-ab6257d4b23b\") " pod="openshift-authentication/oauth-openshift-558db77b4-qtqj4" Jan 31 03:49:07 crc kubenswrapper[4827]: I0131 03:49:07.463686 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kq9h2\" (UniqueName: \"kubernetes.io/projected/cfea954e-db56-4946-a178-3376d7793b46-kube-api-access-kq9h2\") pod \"service-ca-operator-777779d784-k4wl6\" (UID: \"cfea954e-db56-4946-a178-3376d7793b46\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-k4wl6" Jan 31 03:49:07 crc kubenswrapper[4827]: I0131 03:49:07.463702 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b13bebc0-a453-40cd-9611-43cf66b3dd53-config\") pod \"kube-controller-manager-operator-78b949d7b-xxh4j\" (UID: \"b13bebc0-a453-40cd-9611-43cf66b3dd53\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-xxh4j" Jan 31 03:49:07 crc kubenswrapper[4827]: I0131 03:49:07.463720 4827 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7bafc4cb-e5b7-4b39-9930-b885e403dfca-client-ca\") pod \"controller-manager-879f6c89f-mkhcp\" (UID: \"7bafc4cb-e5b7-4b39-9930-b885e403dfca\") " pod="openshift-controller-manager/controller-manager-879f6c89f-mkhcp" Jan 31 03:49:07 crc kubenswrapper[4827]: I0131 03:49:07.463742 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sf44f\" (UniqueName: \"kubernetes.io/projected/8fe7a4ad-d825-4988-8390-c04b5a1b114c-kube-api-access-sf44f\") pod \"olm-operator-6b444d44fb-g4946\" (UID: \"8fe7a4ad-d825-4988-8390-c04b5a1b114c\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-g4946" Jan 31 03:49:07 crc kubenswrapper[4827]: I0131 03:49:07.463763 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/c10be0b3-7f40-4f17-8206-ab6257d4b23b-audit-policies\") pod \"oauth-openshift-558db77b4-qtqj4\" (UID: \"c10be0b3-7f40-4f17-8206-ab6257d4b23b\") " pod="openshift-authentication/oauth-openshift-558db77b4-qtqj4" Jan 31 03:49:07 crc kubenswrapper[4827]: I0131 03:49:07.463782 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/c10be0b3-7f40-4f17-8206-ab6257d4b23b-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-qtqj4\" (UID: \"c10be0b3-7f40-4f17-8206-ab6257d4b23b\") " pod="openshift-authentication/oauth-openshift-558db77b4-qtqj4" Jan 31 03:49:07 crc kubenswrapper[4827]: I0131 03:49:07.463801 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/b13bebc0-a453-40cd-9611-43cf66b3dd53-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-xxh4j\" (UID: 
\"b13bebc0-a453-40cd-9611-43cf66b3dd53\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-xxh4j" Jan 31 03:49:07 crc kubenswrapper[4827]: I0131 03:49:07.463819 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/bc4e2378-18bc-4624-acb0-a5010db62008-encryption-config\") pod \"apiserver-7bbb656c7d-hrjxm\" (UID: \"bc4e2378-18bc-4624-acb0-a5010db62008\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-hrjxm" Jan 31 03:49:07 crc kubenswrapper[4827]: I0131 03:49:07.463848 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/c10be0b3-7f40-4f17-8206-ab6257d4b23b-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-qtqj4\" (UID: \"c10be0b3-7f40-4f17-8206-ab6257d4b23b\") " pod="openshift-authentication/oauth-openshift-558db77b4-qtqj4" Jan 31 03:49:07 crc kubenswrapper[4827]: I0131 03:49:07.463865 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/48fc8b4d-5a6b-40a8-acef-785097811718-etcd-service-ca\") pod \"etcd-operator-b45778765-lx7vs\" (UID: \"48fc8b4d-5a6b-40a8-acef-785097811718\") " pod="openshift-etcd-operator/etcd-operator-b45778765-lx7vs" Jan 31 03:49:07 crc kubenswrapper[4827]: I0131 03:49:07.463897 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/a2a52a00-75ce-4094-bab7-913d6fbab1dc-service-ca\") pod \"console-f9d7485db-q4hqs\" (UID: \"a2a52a00-75ce-4094-bab7-913d6fbab1dc\") " pod="openshift-console/console-f9d7485db-q4hqs" Jan 31 03:49:07 crc kubenswrapper[4827]: I0131 03:49:07.463918 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: 
\"kubernetes.io/secret/c10be0b3-7f40-4f17-8206-ab6257d4b23b-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-qtqj4\" (UID: \"c10be0b3-7f40-4f17-8206-ab6257d4b23b\") " pod="openshift-authentication/oauth-openshift-558db77b4-qtqj4" Jan 31 03:49:07 crc kubenswrapper[4827]: I0131 03:49:07.463942 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/4a32abae-914d-4102-9e89-817922ff06ca-available-featuregates\") pod \"openshift-config-operator-7777fb866f-dqndk\" (UID: \"4a32abae-914d-4102-9e89-817922ff06ca\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-dqndk" Jan 31 03:49:07 crc kubenswrapper[4827]: I0131 03:49:07.463963 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/755d86cd-4d90-41eb-8c62-b130143346aa-metrics-tls\") pod \"dns-operator-744455d44c-mcs5z\" (UID: \"755d86cd-4d90-41eb-8c62-b130143346aa\") " pod="openshift-dns-operator/dns-operator-744455d44c-mcs5z" Jan 31 03:49:07 crc kubenswrapper[4827]: I0131 03:49:07.463979 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7bafc4cb-e5b7-4b39-9930-b885e403dfca-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-mkhcp\" (UID: \"7bafc4cb-e5b7-4b39-9930-b885e403dfca\") " pod="openshift-controller-manager/controller-manager-879f6c89f-mkhcp" Jan 31 03:49:07 crc kubenswrapper[4827]: I0131 03:49:07.463987 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/48fc8b4d-5a6b-40a8-acef-785097811718-etcd-ca\") pod \"etcd-operator-b45778765-lx7vs\" (UID: \"48fc8b4d-5a6b-40a8-acef-785097811718\") " pod="openshift-etcd-operator/etcd-operator-b45778765-lx7vs" Jan 31 03:49:07 crc kubenswrapper[4827]: I0131 03:49:07.464007 4827 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/c10be0b3-7f40-4f17-8206-ab6257d4b23b-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-qtqj4\" (UID: \"c10be0b3-7f40-4f17-8206-ab6257d4b23b\") " pod="openshift-authentication/oauth-openshift-558db77b4-qtqj4" Jan 31 03:49:07 crc kubenswrapper[4827]: I0131 03:49:07.464026 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vvfqr\" (UniqueName: \"kubernetes.io/projected/755d86cd-4d90-41eb-8c62-b130143346aa-kube-api-access-vvfqr\") pod \"dns-operator-744455d44c-mcs5z\" (UID: \"755d86cd-4d90-41eb-8c62-b130143346aa\") " pod="openshift-dns-operator/dns-operator-744455d44c-mcs5z" Jan 31 03:49:07 crc kubenswrapper[4827]: I0131 03:49:07.464077 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/a2a52a00-75ce-4094-bab7-913d6fbab1dc-oauth-serving-cert\") pod \"console-f9d7485db-q4hqs\" (UID: \"a2a52a00-75ce-4094-bab7-913d6fbab1dc\") " pod="openshift-console/console-f9d7485db-q4hqs" Jan 31 03:49:07 crc kubenswrapper[4827]: I0131 03:49:07.464113 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/c10be0b3-7f40-4f17-8206-ab6257d4b23b-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-qtqj4\" (UID: \"c10be0b3-7f40-4f17-8206-ab6257d4b23b\") " pod="openshift-authentication/oauth-openshift-558db77b4-qtqj4" Jan 31 03:49:07 crc kubenswrapper[4827]: I0131 03:49:07.464146 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/bc4e2378-18bc-4624-acb0-a5010db62008-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-hrjxm\" (UID: \"bc4e2378-18bc-4624-acb0-a5010db62008\") " 
pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-hrjxm" Jan 31 03:49:07 crc kubenswrapper[4827]: I0131 03:49:07.464176 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/8fe7a4ad-d825-4988-8390-c04b5a1b114c-profile-collector-cert\") pod \"olm-operator-6b444d44fb-g4946\" (UID: \"8fe7a4ad-d825-4988-8390-c04b5a1b114c\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-g4946" Jan 31 03:49:07 crc kubenswrapper[4827]: I0131 03:49:07.464265 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/cfea954e-db56-4946-a178-3376d7793b46-serving-cert\") pod \"service-ca-operator-777779d784-k4wl6\" (UID: \"cfea954e-db56-4946-a178-3376d7793b46\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-k4wl6" Jan 31 03:49:07 crc kubenswrapper[4827]: I0131 03:49:07.464307 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c10be0b3-7f40-4f17-8206-ab6257d4b23b-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-qtqj4\" (UID: \"c10be0b3-7f40-4f17-8206-ab6257d4b23b\") " pod="openshift-authentication/oauth-openshift-558db77b4-qtqj4" Jan 31 03:49:07 crc kubenswrapper[4827]: I0131 03:49:07.464335 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v967n\" (UniqueName: \"kubernetes.io/projected/85b31f82-0a7a-466c-aa0f-bffb46f2b04c-kube-api-access-v967n\") pod \"router-default-5444994796-dxfqc\" (UID: \"85b31f82-0a7a-466c-aa0f-bffb46f2b04c\") " pod="openshift-ingress/router-default-5444994796-dxfqc" Jan 31 03:49:07 crc kubenswrapper[4827]: I0131 03:49:07.464370 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/c10be0b3-7f40-4f17-8206-ab6257d4b23b-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-qtqj4\" (UID: \"c10be0b3-7f40-4f17-8206-ab6257d4b23b\") " pod="openshift-authentication/oauth-openshift-558db77b4-qtqj4" Jan 31 03:49:07 crc kubenswrapper[4827]: I0131 03:49:07.464503 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7bafc4cb-e5b7-4b39-9930-b885e403dfca-config\") pod \"controller-manager-879f6c89f-mkhcp\" (UID: \"7bafc4cb-e5b7-4b39-9930-b885e403dfca\") " pod="openshift-controller-manager/controller-manager-879f6c89f-mkhcp" Jan 31 03:49:07 crc kubenswrapper[4827]: I0131 03:49:07.464532 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7bafc4cb-e5b7-4b39-9930-b885e403dfca-serving-cert\") pod \"controller-manager-879f6c89f-mkhcp\" (UID: \"7bafc4cb-e5b7-4b39-9930-b885e403dfca\") " pod="openshift-controller-manager/controller-manager-879f6c89f-mkhcp" Jan 31 03:49:07 crc kubenswrapper[4827]: I0131 03:49:07.464555 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x6x8k\" (UniqueName: \"kubernetes.io/projected/7bafc4cb-e5b7-4b39-9930-b885e403dfca-kube-api-access-x6x8k\") pod \"controller-manager-879f6c89f-mkhcp\" (UID: \"7bafc4cb-e5b7-4b39-9930-b885e403dfca\") " pod="openshift-controller-manager/controller-manager-879f6c89f-mkhcp" Jan 31 03:49:07 crc kubenswrapper[4827]: I0131 03:49:07.464579 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/48fc8b4d-5a6b-40a8-acef-785097811718-config\") pod \"etcd-operator-b45778765-lx7vs\" (UID: \"48fc8b4d-5a6b-40a8-acef-785097811718\") " pod="openshift-etcd-operator/etcd-operator-b45778765-lx7vs" Jan 31 03:49:07 crc kubenswrapper[4827]: I0131 03:49:07.464582 4827 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xkqq9\" (UniqueName: \"kubernetes.io/projected/4a32abae-914d-4102-9e89-817922ff06ca-kube-api-access-xkqq9\") pod \"openshift-config-operator-7777fb866f-dqndk\" (UID: \"4a32abae-914d-4102-9e89-817922ff06ca\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-dqndk" Jan 31 03:49:07 crc kubenswrapper[4827]: I0131 03:49:07.464609 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"oauth-apiserver-sa-dockercfg-6r2bq" Jan 31 03:49:07 crc kubenswrapper[4827]: I0131 03:49:07.464817 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/c10be0b3-7f40-4f17-8206-ab6257d4b23b-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-qtqj4\" (UID: \"c10be0b3-7f40-4f17-8206-ab6257d4b23b\") " pod="openshift-authentication/oauth-openshift-558db77b4-qtqj4" Jan 31 03:49:07 crc kubenswrapper[4827]: I0131 03:49:07.464841 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/a2a52a00-75ce-4094-bab7-913d6fbab1dc-console-serving-cert\") pod \"console-f9d7485db-q4hqs\" (UID: \"a2a52a00-75ce-4094-bab7-913d6fbab1dc\") " pod="openshift-console/console-f9d7485db-q4hqs" Jan 31 03:49:07 crc kubenswrapper[4827]: I0131 03:49:07.464860 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/bc4e2378-18bc-4624-acb0-a5010db62008-audit-policies\") pod \"apiserver-7bbb656c7d-hrjxm\" (UID: \"bc4e2378-18bc-4624-acb0-a5010db62008\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-hrjxm" Jan 31 03:49:07 crc kubenswrapper[4827]: I0131 03:49:07.464909 4827 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/bc4e2378-18bc-4624-acb0-a5010db62008-etcd-client\") pod \"apiserver-7bbb656c7d-hrjxm\" (UID: \"bc4e2378-18bc-4624-acb0-a5010db62008\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-hrjxm" Jan 31 03:49:07 crc kubenswrapper[4827]: I0131 03:49:07.463422 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-xrt7q"] Jan 31 03:49:07 crc kubenswrapper[4827]: I0131 03:49:07.464972 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-txptg"] Jan 31 03:49:07 crc kubenswrapper[4827]: I0131 03:49:07.464992 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/a2a52a00-75ce-4094-bab7-913d6fbab1dc-console-config\") pod \"console-f9d7485db-q4hqs\" (UID: \"a2a52a00-75ce-4094-bab7-913d6fbab1dc\") " pod="openshift-console/console-f9d7485db-q4hqs" Jan 31 03:49:07 crc kubenswrapper[4827]: I0131 03:49:07.465015 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a2a52a00-75ce-4094-bab7-913d6fbab1dc-trusted-ca-bundle\") pod \"console-f9d7485db-q4hqs\" (UID: \"a2a52a00-75ce-4094-bab7-913d6fbab1dc\") " pod="openshift-console/console-f9d7485db-q4hqs" Jan 31 03:49:07 crc kubenswrapper[4827]: I0131 03:49:07.465194 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/c10be0b3-7f40-4f17-8206-ab6257d4b23b-audit-dir\") pod \"oauth-openshift-558db77b4-qtqj4\" (UID: \"c10be0b3-7f40-4f17-8206-ab6257d4b23b\") " pod="openshift-authentication/oauth-openshift-558db77b4-qtqj4" Jan 31 03:49:07 crc kubenswrapper[4827]: I0131 03:49:07.466469 4827 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7bafc4cb-e5b7-4b39-9930-b885e403dfca-config\") pod \"controller-manager-879f6c89f-mkhcp\" (UID: \"7bafc4cb-e5b7-4b39-9930-b885e403dfca\") " pod="openshift-controller-manager/controller-manager-879f6c89f-mkhcp" Jan 31 03:49:07 crc kubenswrapper[4827]: I0131 03:49:07.466533 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/48fc8b4d-5a6b-40a8-acef-785097811718-serving-cert\") pod \"etcd-operator-b45778765-lx7vs\" (UID: \"48fc8b4d-5a6b-40a8-acef-785097811718\") " pod="openshift-etcd-operator/etcd-operator-b45778765-lx7vs" Jan 31 03:49:07 crc kubenswrapper[4827]: I0131 03:49:07.467857 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7bafc4cb-e5b7-4b39-9930-b885e403dfca-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-mkhcp\" (UID: \"7bafc4cb-e5b7-4b39-9930-b885e403dfca\") " pod="openshift-controller-manager/controller-manager-879f6c89f-mkhcp" Jan 31 03:49:07 crc kubenswrapper[4827]: I0131 03:49:07.468304 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/c10be0b3-7f40-4f17-8206-ab6257d4b23b-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-qtqj4\" (UID: \"c10be0b3-7f40-4f17-8206-ab6257d4b23b\") " pod="openshift-authentication/oauth-openshift-558db77b4-qtqj4" Jan 31 03:49:07 crc kubenswrapper[4827]: I0131 03:49:07.468891 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/c10be0b3-7f40-4f17-8206-ab6257d4b23b-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-qtqj4\" (UID: \"c10be0b3-7f40-4f17-8206-ab6257d4b23b\") " pod="openshift-authentication/oauth-openshift-558db77b4-qtqj4" Jan 31 03:49:07 crc 
kubenswrapper[4827]: I0131 03:49:07.468924 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/c10be0b3-7f40-4f17-8206-ab6257d4b23b-audit-policies\") pod \"oauth-openshift-558db77b4-qtqj4\" (UID: \"c10be0b3-7f40-4f17-8206-ab6257d4b23b\") " pod="openshift-authentication/oauth-openshift-558db77b4-qtqj4" Jan 31 03:49:07 crc kubenswrapper[4827]: I0131 03:49:07.469619 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/48fc8b4d-5a6b-40a8-acef-785097811718-etcd-client\") pod \"etcd-operator-b45778765-lx7vs\" (UID: \"48fc8b4d-5a6b-40a8-acef-785097811718\") " pod="openshift-etcd-operator/etcd-operator-b45778765-lx7vs" Jan 31 03:49:07 crc kubenswrapper[4827]: I0131 03:49:07.470535 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/48fc8b4d-5a6b-40a8-acef-785097811718-etcd-service-ca\") pod \"etcd-operator-b45778765-lx7vs\" (UID: \"48fc8b4d-5a6b-40a8-acef-785097811718\") " pod="openshift-etcd-operator/etcd-operator-b45778765-lx7vs" Jan 31 03:49:07 crc kubenswrapper[4827]: I0131 03:49:07.471181 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/a2a52a00-75ce-4094-bab7-913d6fbab1dc-console-serving-cert\") pod \"console-f9d7485db-q4hqs\" (UID: \"a2a52a00-75ce-4094-bab7-913d6fbab1dc\") " pod="openshift-console/console-f9d7485db-q4hqs" Jan 31 03:49:07 crc kubenswrapper[4827]: I0131 03:49:07.471244 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-g4946"] Jan 31 03:49:07 crc kubenswrapper[4827]: I0131 03:49:07.471538 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-dqndk"] Jan 31 03:49:07 crc kubenswrapper[4827]: I0131 
03:49:07.472021 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/4a32abae-914d-4102-9e89-817922ff06ca-available-featuregates\") pod \"openshift-config-operator-7777fb866f-dqndk\" (UID: \"4a32abae-914d-4102-9e89-817922ff06ca\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-dqndk" Jan 31 03:49:07 crc kubenswrapper[4827]: I0131 03:49:07.472485 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/a2a52a00-75ce-4094-bab7-913d6fbab1dc-service-ca\") pod \"console-f9d7485db-q4hqs\" (UID: \"a2a52a00-75ce-4094-bab7-913d6fbab1dc\") " pod="openshift-console/console-f9d7485db-q4hqs" Jan 31 03:49:07 crc kubenswrapper[4827]: I0131 03:49:07.472962 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4a32abae-914d-4102-9e89-817922ff06ca-serving-cert\") pod \"openshift-config-operator-7777fb866f-dqndk\" (UID: \"4a32abae-914d-4102-9e89-817922ff06ca\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-dqndk" Jan 31 03:49:07 crc kubenswrapper[4827]: I0131 03:49:07.474775 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/81c458ce-ffe4-4613-bb4a-0b5d0809519b-config\") pod \"console-operator-58897d9998-57nc5\" (UID: \"81c458ce-ffe4-4613-bb4a-0b5d0809519b\") " pod="openshift-console-operator/console-operator-58897d9998-57nc5" Jan 31 03:49:07 crc kubenswrapper[4827]: I0131 03:49:07.474943 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/a2a52a00-75ce-4094-bab7-913d6fbab1dc-oauth-serving-cert\") pod \"console-f9d7485db-q4hqs\" (UID: \"a2a52a00-75ce-4094-bab7-913d6fbab1dc\") " pod="openshift-console/console-f9d7485db-q4hqs" Jan 31 03:49:07 crc 
kubenswrapper[4827]: I0131 03:49:07.475600 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7bafc4cb-e5b7-4b39-9930-b885e403dfca-client-ca\") pod \"controller-manager-879f6c89f-mkhcp\" (UID: \"7bafc4cb-e5b7-4b39-9930-b885e403dfca\") " pod="openshift-controller-manager/controller-manager-879f6c89f-mkhcp" Jan 31 03:49:07 crc kubenswrapper[4827]: I0131 03:49:07.476041 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/a2a52a00-75ce-4094-bab7-913d6fbab1dc-console-oauth-config\") pod \"console-f9d7485db-q4hqs\" (UID: \"a2a52a00-75ce-4094-bab7-913d6fbab1dc\") " pod="openshift-console/console-f9d7485db-q4hqs" Jan 31 03:49:07 crc kubenswrapper[4827]: I0131 03:49:07.476096 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/68610615-718e-4fd3-a19b-9de01ef64a03-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-5p4mg\" (UID: \"68610615-718e-4fd3-a19b-9de01ef64a03\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-5p4mg" Jan 31 03:49:07 crc kubenswrapper[4827]: I0131 03:49:07.476099 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-xxh4j"] Jan 31 03:49:07 crc kubenswrapper[4827]: I0131 03:49:07.476161 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29497185-4tnfv"] Jan 31 03:49:07 crc kubenswrapper[4827]: I0131 03:49:07.476218 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/c10be0b3-7f40-4f17-8206-ab6257d4b23b-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-qtqj4\" (UID: \"c10be0b3-7f40-4f17-8206-ab6257d4b23b\") " 
pod="openshift-authentication/oauth-openshift-558db77b4-qtqj4" Jan 31 03:49:07 crc kubenswrapper[4827]: I0131 03:49:07.476413 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c10be0b3-7f40-4f17-8206-ab6257d4b23b-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-qtqj4\" (UID: \"c10be0b3-7f40-4f17-8206-ab6257d4b23b\") " pod="openshift-authentication/oauth-openshift-558db77b4-qtqj4" Jan 31 03:49:07 crc kubenswrapper[4827]: I0131 03:49:07.476532 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/81c458ce-ffe4-4613-bb4a-0b5d0809519b-trusted-ca\") pod \"console-operator-58897d9998-57nc5\" (UID: \"81c458ce-ffe4-4613-bb4a-0b5d0809519b\") " pod="openshift-console-operator/console-operator-58897d9998-57nc5" Jan 31 03:49:07 crc kubenswrapper[4827]: I0131 03:49:07.476583 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/c10be0b3-7f40-4f17-8206-ab6257d4b23b-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-qtqj4\" (UID: \"c10be0b3-7f40-4f17-8206-ab6257d4b23b\") " pod="openshift-authentication/oauth-openshift-558db77b4-qtqj4" Jan 31 03:49:07 crc kubenswrapper[4827]: I0131 03:49:07.477456 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/c10be0b3-7f40-4f17-8206-ab6257d4b23b-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-qtqj4\" (UID: \"c10be0b3-7f40-4f17-8206-ab6257d4b23b\") " pod="openshift-authentication/oauth-openshift-558db77b4-qtqj4" Jan 31 03:49:07 crc kubenswrapper[4827]: I0131 03:49:07.477600 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: 
\"kubernetes.io/secret/c10be0b3-7f40-4f17-8206-ab6257d4b23b-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-qtqj4\" (UID: \"c10be0b3-7f40-4f17-8206-ab6257d4b23b\") " pod="openshift-authentication/oauth-openshift-558db77b4-qtqj4" Jan 31 03:49:07 crc kubenswrapper[4827]: I0131 03:49:07.477674 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/c10be0b3-7f40-4f17-8206-ab6257d4b23b-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-qtqj4\" (UID: \"c10be0b3-7f40-4f17-8206-ab6257d4b23b\") " pod="openshift-authentication/oauth-openshift-558db77b4-qtqj4" Jan 31 03:49:07 crc kubenswrapper[4827]: I0131 03:49:07.480626 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-vtzj8"] Jan 31 03:49:07 crc kubenswrapper[4827]: I0131 03:49:07.482187 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/c10be0b3-7f40-4f17-8206-ab6257d4b23b-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-qtqj4\" (UID: \"c10be0b3-7f40-4f17-8206-ab6257d4b23b\") " pod="openshift-authentication/oauth-openshift-558db77b4-qtqj4" Jan 31 03:49:07 crc kubenswrapper[4827]: I0131 03:49:07.482944 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client" Jan 31 03:49:07 crc kubenswrapper[4827]: I0131 03:49:07.483775 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-lrw5m"] Jan 31 03:49:07 crc kubenswrapper[4827]: I0131 03:49:07.485008 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/81c458ce-ffe4-4613-bb4a-0b5d0809519b-serving-cert\") pod \"console-operator-58897d9998-57nc5\" (UID: \"81c458ce-ffe4-4613-bb4a-0b5d0809519b\") " 
pod="openshift-console-operator/console-operator-58897d9998-57nc5" Jan 31 03:49:07 crc kubenswrapper[4827]: I0131 03:49:07.485136 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/c10be0b3-7f40-4f17-8206-ab6257d4b23b-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-qtqj4\" (UID: \"c10be0b3-7f40-4f17-8206-ab6257d4b23b\") " pod="openshift-authentication/oauth-openshift-558db77b4-qtqj4" Jan 31 03:49:07 crc kubenswrapper[4827]: I0131 03:49:07.487599 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/755d86cd-4d90-41eb-8c62-b130143346aa-metrics-tls\") pod \"dns-operator-744455d44c-mcs5z\" (UID: \"755d86cd-4d90-41eb-8c62-b130143346aa\") " pod="openshift-dns-operator/dns-operator-744455d44c-mcs5z" Jan 31 03:49:07 crc kubenswrapper[4827]: I0131 03:49:07.487915 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7bafc4cb-e5b7-4b39-9930-b885e403dfca-serving-cert\") pod \"controller-manager-879f6c89f-mkhcp\" (UID: \"7bafc4cb-e5b7-4b39-9930-b885e403dfca\") " pod="openshift-controller-manager/controller-manager-879f6c89f-mkhcp" Jan 31 03:49:07 crc kubenswrapper[4827]: I0131 03:49:07.488187 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/c10be0b3-7f40-4f17-8206-ab6257d4b23b-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-qtqj4\" (UID: \"c10be0b3-7f40-4f17-8206-ab6257d4b23b\") " pod="openshift-authentication/oauth-openshift-558db77b4-qtqj4" Jan 31 03:49:07 crc kubenswrapper[4827]: I0131 03:49:07.489322 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-fdwxr"] Jan 31 03:49:07 crc kubenswrapper[4827]: I0131 03:49:07.492995 4827 
kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-bbd4n"] Jan 31 03:49:07 crc kubenswrapper[4827]: I0131 03:49:07.495734 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-sw4rn"] Jan 31 03:49:07 crc kubenswrapper[4827]: I0131 03:49:07.500523 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-g8bbs"] Jan 31 03:49:07 crc kubenswrapper[4827]: I0131 03:49:07.502030 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-hrjxm"] Jan 31 03:49:07 crc kubenswrapper[4827]: I0131 03:49:07.503304 4827 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress-canary/ingress-canary-t79lc"] Jan 31 03:49:07 crc kubenswrapper[4827]: I0131 03:49:07.503683 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"serving-cert" Jan 31 03:49:07 crc kubenswrapper[4827]: I0131 03:49:07.504198 4827 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress-canary/ingress-canary-t79lc" Jan 31 03:49:07 crc kubenswrapper[4827]: I0131 03:49:07.505036 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-mkhcp"] Jan 31 03:49:07 crc kubenswrapper[4827]: I0131 03:49:07.506235 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-cxgr4"] Jan 31 03:49:07 crc kubenswrapper[4827]: I0131 03:49:07.507724 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-mrplz"] Jan 31 03:49:07 crc kubenswrapper[4827]: I0131 03:49:07.509275 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-f9d7485db-q4hqs"] Jan 31 03:49:07 crc kubenswrapper[4827]: I0131 03:49:07.510695 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-zhrf7"] Jan 31 03:49:07 crc kubenswrapper[4827]: I0131 03:49:07.511995 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-k4wl6"] Jan 31 03:49:07 crc kubenswrapper[4827]: I0131 03:49:07.513223 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-s7psb"] Jan 31 03:49:07 crc kubenswrapper[4827]: I0131 03:49:07.514684 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-6qz7r"] Jan 31 03:49:07 crc kubenswrapper[4827]: I0131 03:49:07.515899 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-4rp2t"] Jan 31 03:49:07 crc kubenswrapper[4827]: I0131 03:49:07.517454 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-rdh9d"] Jan 31 03:49:07 crc 
kubenswrapper[4827]: I0131 03:49:07.518573 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-t79lc"] Jan 31 03:49:07 crc kubenswrapper[4827]: I0131 03:49:07.519933 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-m7n8p"] Jan 31 03:49:07 crc kubenswrapper[4827]: I0131 03:49:07.522374 4827 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-7h44q"] Jan 31 03:49:07 crc kubenswrapper[4827]: I0131 03:49:07.523070 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"encryption-config-1" Jan 31 03:49:07 crc kubenswrapper[4827]: I0131 03:49:07.523753 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-7h44q" Jan 31 03:49:07 crc kubenswrapper[4827]: I0131 03:49:07.524134 4827 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-server-wlxzs"] Jan 31 03:49:07 crc kubenswrapper[4827]: I0131 03:49:07.525427 4827 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-wlxzs" Jan 31 03:49:07 crc kubenswrapper[4827]: I0131 03:49:07.525892 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-7h44q"] Jan 31 03:49:07 crc kubenswrapper[4827]: I0131 03:49:07.544241 4827 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"etcd-serving-ca" Jan 31 03:49:07 crc kubenswrapper[4827]: I0131 03:49:07.563570 4827 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"trusted-ca-bundle" Jan 31 03:49:07 crc kubenswrapper[4827]: I0131 03:49:07.567450 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc4e2378-18bc-4624-acb0-a5010db62008-serving-cert\") pod \"apiserver-7bbb656c7d-hrjxm\" (UID: \"bc4e2378-18bc-4624-acb0-a5010db62008\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-hrjxm" Jan 31 03:49:07 crc kubenswrapper[4827]: I0131 03:49:07.567495 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/bc4e2378-18bc-4624-acb0-a5010db62008-audit-dir\") pod \"apiserver-7bbb656c7d-hrjxm\" (UID: \"bc4e2378-18bc-4624-acb0-a5010db62008\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-hrjxm" Jan 31 03:49:07 crc kubenswrapper[4827]: I0131 03:49:07.567551 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/64cfc229-d23f-4303-a604-cd7be04f0bc3-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-bbd4n\" (UID: \"64cfc229-d23f-4303-a604-cd7be04f0bc3\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-bbd4n" Jan 31 03:49:07 crc kubenswrapper[4827]: I0131 03:49:07.567586 4827 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"kube-api-access-h6vdd\" (UniqueName: \"kubernetes.io/projected/0e71f06b-5ae9-4606-922f-eedf9f8eefa6-kube-api-access-h6vdd\") pod \"multus-admission-controller-857f4d67dd-sw4rn\" (UID: \"0e71f06b-5ae9-4606-922f-eedf9f8eefa6\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-sw4rn" Jan 31 03:49:07 crc kubenswrapper[4827]: I0131 03:49:07.567613 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cfea954e-db56-4946-a178-3376d7793b46-config\") pod \"service-ca-operator-777779d784-k4wl6\" (UID: \"cfea954e-db56-4946-a178-3376d7793b46\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-k4wl6" Jan 31 03:49:07 crc kubenswrapper[4827]: I0131 03:49:07.567642 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/85b31f82-0a7a-466c-aa0f-bffb46f2b04c-service-ca-bundle\") pod \"router-default-5444994796-dxfqc\" (UID: \"85b31f82-0a7a-466c-aa0f-bffb46f2b04c\") " pod="openshift-ingress/router-default-5444994796-dxfqc" Jan 31 03:49:07 crc kubenswrapper[4827]: I0131 03:49:07.567685 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fjcsw\" (UniqueName: \"kubernetes.io/projected/0591695e-7fc9-4d9c-a9a2-a5f44db74caa-kube-api-access-fjcsw\") pod \"machine-config-controller-84d6567774-cxgr4\" (UID: \"0591695e-7fc9-4d9c-a9a2-a5f44db74caa\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-cxgr4" Jan 31 03:49:07 crc kubenswrapper[4827]: I0131 03:49:07.567714 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kq9h2\" (UniqueName: \"kubernetes.io/projected/cfea954e-db56-4946-a178-3376d7793b46-kube-api-access-kq9h2\") pod \"service-ca-operator-777779d784-k4wl6\" (UID: \"cfea954e-db56-4946-a178-3376d7793b46\") " 
pod="openshift-service-ca-operator/service-ca-operator-777779d784-k4wl6" Jan 31 03:49:07 crc kubenswrapper[4827]: I0131 03:49:07.567743 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b13bebc0-a453-40cd-9611-43cf66b3dd53-config\") pod \"kube-controller-manager-operator-78b949d7b-xxh4j\" (UID: \"b13bebc0-a453-40cd-9611-43cf66b3dd53\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-xxh4j" Jan 31 03:49:07 crc kubenswrapper[4827]: I0131 03:49:07.567776 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sf44f\" (UniqueName: \"kubernetes.io/projected/8fe7a4ad-d825-4988-8390-c04b5a1b114c-kube-api-access-sf44f\") pod \"olm-operator-6b444d44fb-g4946\" (UID: \"8fe7a4ad-d825-4988-8390-c04b5a1b114c\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-g4946" Jan 31 03:49:07 crc kubenswrapper[4827]: I0131 03:49:07.567802 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/bc4e2378-18bc-4624-acb0-a5010db62008-encryption-config\") pod \"apiserver-7bbb656c7d-hrjxm\" (UID: \"bc4e2378-18bc-4624-acb0-a5010db62008\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-hrjxm" Jan 31 03:49:07 crc kubenswrapper[4827]: I0131 03:49:07.567828 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/b13bebc0-a453-40cd-9611-43cf66b3dd53-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-xxh4j\" (UID: \"b13bebc0-a453-40cd-9611-43cf66b3dd53\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-xxh4j" Jan 31 03:49:07 crc kubenswrapper[4827]: I0131 03:49:07.567909 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/bc4e2378-18bc-4624-acb0-a5010db62008-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-hrjxm\" (UID: \"bc4e2378-18bc-4624-acb0-a5010db62008\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-hrjxm" Jan 31 03:49:07 crc kubenswrapper[4827]: I0131 03:49:07.567935 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/8fe7a4ad-d825-4988-8390-c04b5a1b114c-profile-collector-cert\") pod \"olm-operator-6b444d44fb-g4946\" (UID: \"8fe7a4ad-d825-4988-8390-c04b5a1b114c\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-g4946" Jan 31 03:49:07 crc kubenswrapper[4827]: I0131 03:49:07.568001 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/cfea954e-db56-4946-a178-3376d7793b46-serving-cert\") pod \"service-ca-operator-777779d784-k4wl6\" (UID: \"cfea954e-db56-4946-a178-3376d7793b46\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-k4wl6" Jan 31 03:49:07 crc kubenswrapper[4827]: I0131 03:49:07.568033 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v967n\" (UniqueName: \"kubernetes.io/projected/85b31f82-0a7a-466c-aa0f-bffb46f2b04c-kube-api-access-v967n\") pod \"router-default-5444994796-dxfqc\" (UID: \"85b31f82-0a7a-466c-aa0f-bffb46f2b04c\") " pod="openshift-ingress/router-default-5444994796-dxfqc" Jan 31 03:49:07 crc kubenswrapper[4827]: I0131 03:49:07.568086 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/bc4e2378-18bc-4624-acb0-a5010db62008-audit-policies\") pod \"apiserver-7bbb656c7d-hrjxm\" (UID: \"bc4e2378-18bc-4624-acb0-a5010db62008\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-hrjxm" Jan 31 03:49:07 crc kubenswrapper[4827]: I0131 03:49:07.568116 4827 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/bc4e2378-18bc-4624-acb0-a5010db62008-etcd-client\") pod \"apiserver-7bbb656c7d-hrjxm\" (UID: \"bc4e2378-18bc-4624-acb0-a5010db62008\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-hrjxm" Jan 31 03:49:07 crc kubenswrapper[4827]: I0131 03:49:07.568148 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/bc4e2378-18bc-4624-acb0-a5010db62008-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-hrjxm\" (UID: \"bc4e2378-18bc-4624-acb0-a5010db62008\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-hrjxm" Jan 31 03:49:07 crc kubenswrapper[4827]: I0131 03:49:07.568242 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/85b31f82-0a7a-466c-aa0f-bffb46f2b04c-default-certificate\") pod \"router-default-5444994796-dxfqc\" (UID: \"85b31f82-0a7a-466c-aa0f-bffb46f2b04c\") " pod="openshift-ingress/router-default-5444994796-dxfqc" Jan 31 03:49:07 crc kubenswrapper[4827]: I0131 03:49:07.568276 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0591695e-7fc9-4d9c-a9a2-a5f44db74caa-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-cxgr4\" (UID: \"0591695e-7fc9-4d9c-a9a2-a5f44db74caa\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-cxgr4" Jan 31 03:49:07 crc kubenswrapper[4827]: I0131 03:49:07.568627 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0591695e-7fc9-4d9c-a9a2-a5f44db74caa-proxy-tls\") pod \"machine-config-controller-84d6567774-cxgr4\" (UID: \"0591695e-7fc9-4d9c-a9a2-a5f44db74caa\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-cxgr4" Jan 
31 03:49:07 crc kubenswrapper[4827]: I0131 03:49:07.569954 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b13bebc0-a453-40cd-9611-43cf66b3dd53-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-xxh4j\" (UID: \"b13bebc0-a453-40cd-9611-43cf66b3dd53\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-xxh4j" Jan 31 03:49:07 crc kubenswrapper[4827]: I0131 03:49:07.569999 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zc9tz\" (UniqueName: \"kubernetes.io/projected/bc4e2378-18bc-4624-acb0-a5010db62008-kube-api-access-zc9tz\") pod \"apiserver-7bbb656c7d-hrjxm\" (UID: \"bc4e2378-18bc-4624-acb0-a5010db62008\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-hrjxm" Jan 31 03:49:07 crc kubenswrapper[4827]: I0131 03:49:07.570034 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/85b31f82-0a7a-466c-aa0f-bffb46f2b04c-metrics-certs\") pod \"router-default-5444994796-dxfqc\" (UID: \"85b31f82-0a7a-466c-aa0f-bffb46f2b04c\") " pod="openshift-ingress/router-default-5444994796-dxfqc" Jan 31 03:49:07 crc kubenswrapper[4827]: I0131 03:49:07.570076 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/0e71f06b-5ae9-4606-922f-eedf9f8eefa6-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-sw4rn\" (UID: \"0e71f06b-5ae9-4606-922f-eedf9f8eefa6\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-sw4rn" Jan 31 03:49:07 crc kubenswrapper[4827]: I0131 03:49:07.570108 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/64cfc229-d23f-4303-a604-cd7be04f0bc3-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-bbd4n\" 
(UID: \"64cfc229-d23f-4303-a604-cd7be04f0bc3\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-bbd4n" Jan 31 03:49:07 crc kubenswrapper[4827]: I0131 03:49:07.570138 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/8fe7a4ad-d825-4988-8390-c04b5a1b114c-srv-cert\") pod \"olm-operator-6b444d44fb-g4946\" (UID: \"8fe7a4ad-d825-4988-8390-c04b5a1b114c\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-g4946" Jan 31 03:49:07 crc kubenswrapper[4827]: I0131 03:49:07.570188 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/64cfc229-d23f-4303-a604-cd7be04f0bc3-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-bbd4n\" (UID: \"64cfc229-d23f-4303-a604-cd7be04f0bc3\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-bbd4n" Jan 31 03:49:07 crc kubenswrapper[4827]: I0131 03:49:07.570248 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/85b31f82-0a7a-466c-aa0f-bffb46f2b04c-stats-auth\") pod \"router-default-5444994796-dxfqc\" (UID: \"85b31f82-0a7a-466c-aa0f-bffb46f2b04c\") " pod="openshift-ingress/router-default-5444994796-dxfqc" Jan 31 03:49:07 crc kubenswrapper[4827]: I0131 03:49:07.570323 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/bc4e2378-18bc-4624-acb0-a5010db62008-audit-policies\") pod \"apiserver-7bbb656c7d-hrjxm\" (UID: \"bc4e2378-18bc-4624-acb0-a5010db62008\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-hrjxm" Jan 31 03:49:07 crc kubenswrapper[4827]: I0131 03:49:07.568834 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: 
\"kubernetes.io/host-path/bc4e2378-18bc-4624-acb0-a5010db62008-audit-dir\") pod \"apiserver-7bbb656c7d-hrjxm\" (UID: \"bc4e2378-18bc-4624-acb0-a5010db62008\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-hrjxm" Jan 31 03:49:07 crc kubenswrapper[4827]: I0131 03:49:07.569860 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0591695e-7fc9-4d9c-a9a2-a5f44db74caa-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-cxgr4\" (UID: \"0591695e-7fc9-4d9c-a9a2-a5f44db74caa\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-cxgr4" Jan 31 03:49:07 crc kubenswrapper[4827]: I0131 03:49:07.569050 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/bc4e2378-18bc-4624-acb0-a5010db62008-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-hrjxm\" (UID: \"bc4e2378-18bc-4624-acb0-a5010db62008\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-hrjxm" Jan 31 03:49:07 crc kubenswrapper[4827]: I0131 03:49:07.569459 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/bc4e2378-18bc-4624-acb0-a5010db62008-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-hrjxm\" (UID: \"bc4e2378-18bc-4624-acb0-a5010db62008\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-hrjxm" Jan 31 03:49:07 crc kubenswrapper[4827]: I0131 03:49:07.571418 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/bc4e2378-18bc-4624-acb0-a5010db62008-encryption-config\") pod \"apiserver-7bbb656c7d-hrjxm\" (UID: \"bc4e2378-18bc-4624-acb0-a5010db62008\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-hrjxm" Jan 31 03:49:07 crc kubenswrapper[4827]: I0131 03:49:07.573892 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc4e2378-18bc-4624-acb0-a5010db62008-serving-cert\") pod \"apiserver-7bbb656c7d-hrjxm\" (UID: \"bc4e2378-18bc-4624-acb0-a5010db62008\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-hrjxm" Jan 31 03:49:07 crc kubenswrapper[4827]: I0131 03:49:07.576652 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-76f77b778f-x2j9j" Jan 31 03:49:07 crc kubenswrapper[4827]: I0131 03:49:07.584729 4827 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"kube-root-ca.crt" Jan 31 03:49:07 crc kubenswrapper[4827]: I0131 03:49:07.603599 4827 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"openshift-service-ca.crt" Jan 31 03:49:07 crc kubenswrapper[4827]: I0131 03:49:07.637673 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qgjfp\" (UniqueName: \"kubernetes.io/projected/6e659a59-19ab-4c91-98ec-db3042ac1d4b-kube-api-access-qgjfp\") pod \"downloads-7954f5f757-sgzkm\" (UID: \"6e659a59-19ab-4c91-98ec-db3042ac1d4b\") " pod="openshift-console/downloads-7954f5f757-sgzkm" Jan 31 03:49:07 crc kubenswrapper[4827]: I0131 03:49:07.661074 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hn4jn\" (UniqueName: \"kubernetes.io/projected/899b03ec-0d91-4793-a5a2-d3aca48e5309-kube-api-access-hn4jn\") pod \"machine-api-operator-5694c8668f-h2fxz\" (UID: \"899b03ec-0d91-4793-a5a2-d3aca48e5309\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-h2fxz" Jan 31 03:49:07 crc kubenswrapper[4827]: I0131 03:49:07.679612 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zgk44\" (UniqueName: \"kubernetes.io/projected/05a0bcfb-cbdb-4f6a-bd21-89c876dc7635-kube-api-access-zgk44\") pod \"machine-approver-56656f9798-fm67q\" (UID: \"05a0bcfb-cbdb-4f6a-bd21-89c876dc7635\") " 
pod="openshift-cluster-machine-approver/machine-approver-56656f9798-fm67q" Jan 31 03:49:07 crc kubenswrapper[4827]: I0131 03:49:07.688119 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-s4zhc" Jan 31 03:49:07 crc kubenswrapper[4827]: I0131 03:49:07.696108 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/bc4e2378-18bc-4624-acb0-a5010db62008-etcd-client\") pod \"apiserver-7bbb656c7d-hrjxm\" (UID: \"bc4e2378-18bc-4624-acb0-a5010db62008\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-hrjxm" Jan 31 03:49:07 crc kubenswrapper[4827]: I0131 03:49:07.702155 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g88f5\" (UniqueName: \"kubernetes.io/projected/654e373c-b142-475e-8ab8-7644f9b0d73c-kube-api-access-g88f5\") pod \"authentication-operator-69f744f599-z6phb\" (UID: \"654e373c-b142-475e-8ab8-7644f9b0d73c\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-z6phb" Jan 31 03:49:07 crc kubenswrapper[4827]: I0131 03:49:07.720143 4827 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/downloads-7954f5f757-sgzkm" Jan 31 03:49:07 crc kubenswrapper[4827]: I0131 03:49:07.733496 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z2gdp\" (UniqueName: \"kubernetes.io/projected/893053b3-df21-4683-a51a-bf12b3bed27d-kube-api-access-z2gdp\") pod \"route-controller-manager-6576b87f9c-58p4m\" (UID: \"893053b3-df21-4683-a51a-bf12b3bed27d\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-58p4m" Jan 31 03:49:07 crc kubenswrapper[4827]: I0131 03:49:07.743221 4827 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" Jan 31 03:49:07 crc kubenswrapper[4827]: I0131 03:49:07.744489 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z9868\" (UniqueName: \"kubernetes.io/projected/e9108c46-95f6-4d7e-879e-a4354473f51f-kube-api-access-z9868\") pod \"openshift-controller-manager-operator-756b6f6bc6-lm4fq\" (UID: \"e9108c46-95f6-4d7e-879e-a4354473f51f\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-lm4fq" Jan 31 03:49:07 crc kubenswrapper[4827]: I0131 03:49:07.763364 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator"/"kube-storage-version-migrator-sa-dockercfg-5xfcg" Jan 31 03:49:07 crc kubenswrapper[4827]: I0131 03:49:07.783489 4827 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt" Jan 31 03:49:07 crc kubenswrapper[4827]: I0131 03:49:07.803348 4827 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" Jan 31 03:49:07 crc kubenswrapper[4827]: I0131 03:49:07.823240 4827 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-operator-lifecycle-manager"/"olm-operator-serviceaccount-dockercfg-rq7zk" Jan 31 03:49:07 crc kubenswrapper[4827]: I0131 03:49:07.844690 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert" Jan 31 03:49:07 crc kubenswrapper[4827]: I0131 03:49:07.852205 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-5694c8668f-h2fxz" Jan 31 03:49:07 crc kubenswrapper[4827]: I0131 03:49:07.864222 4827 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt" Jan 31 03:49:07 crc kubenswrapper[4827]: I0131 03:49:07.884622 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ac-dockercfg-9lkdf" Jan 31 03:49:07 crc kubenswrapper[4827]: I0131 03:49:07.892612 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-58p4m" Jan 31 03:49:07 crc kubenswrapper[4827]: I0131 03:49:07.904398 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret" Jan 31 03:49:07 crc kubenswrapper[4827]: I0131 03:49:07.923228 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert" Jan 31 03:49:07 crc kubenswrapper[4827]: I0131 03:49:07.924217 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/0e71f06b-5ae9-4606-922f-eedf9f8eefa6-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-sw4rn\" (UID: \"0e71f06b-5ae9-4606-922f-eedf9f8eefa6\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-sw4rn" Jan 31 03:49:07 crc kubenswrapper[4827]: I0131 03:49:07.940764 4827 util.go:30] "No sandbox for pod can be 
found. Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-fm67q" Jan 31 03:49:07 crc kubenswrapper[4827]: I0131 03:49:07.943459 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/8fe7a4ad-d825-4988-8390-c04b5a1b114c-srv-cert\") pod \"olm-operator-6b444d44fb-g4946\" (UID: \"8fe7a4ad-d825-4988-8390-c04b5a1b114c\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-g4946" Jan 31 03:49:07 crc kubenswrapper[4827]: I0131 03:49:07.943820 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"pprof-cert" Jan 31 03:49:07 crc kubenswrapper[4827]: I0131 03:49:07.954133 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/8fe7a4ad-d825-4988-8390-c04b5a1b114c-profile-collector-cert\") pod \"olm-operator-6b444d44fb-g4946\" (UID: \"8fe7a4ad-d825-4988-8390-c04b5a1b114c\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-g4946" Jan 31 03:49:07 crc kubenswrapper[4827]: I0131 03:49:07.972250 4827 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca" Jan 31 03:49:07 crc kubenswrapper[4827]: I0131 03:49:07.984653 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-tls" Jan 31 03:49:07 crc kubenswrapper[4827]: I0131 03:49:07.997992 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-69f744f599-z6phb" Jan 31 03:49:08 crc kubenswrapper[4827]: I0131 03:49:08.004411 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"registry-dockercfg-kzzsd" Jan 31 03:49:08 crc kubenswrapper[4827]: I0131 03:49:08.012205 4827 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-lm4fq" Jan 31 03:49:08 crc kubenswrapper[4827]: I0131 03:49:08.023809 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"installation-pull-secrets" Jan 31 03:49:08 crc kubenswrapper[4827]: I0131 03:49:08.043749 4827 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" Jan 31 03:49:08 crc kubenswrapper[4827]: I0131 03:49:08.065488 4827 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config" Jan 31 03:49:08 crc kubenswrapper[4827]: I0131 03:49:08.084598 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-x2j9j"] Jan 31 03:49:08 crc kubenswrapper[4827]: I0131 03:49:08.087267 4827 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" Jan 31 03:49:08 crc kubenswrapper[4827]: I0131 03:49:08.104089 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"kube-storage-version-migrator-operator-dockercfg-2bh8d" Jan 31 03:49:08 crc kubenswrapper[4827]: I0131 03:49:08.124054 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"serving-cert" Jan 31 03:49:08 crc kubenswrapper[4827]: I0131 03:49:08.146487 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-dockercfg-5nsgg" Jan 31 03:49:08 crc kubenswrapper[4827]: I0131 03:49:08.164697 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics" Jan 31 03:49:08 crc kubenswrapper[4827]: I0131 03:49:08.176122 4827 kubelet.go:2428] "SyncLoop 
UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-h2fxz"] Jan 31 03:49:08 crc kubenswrapper[4827]: I0131 03:49:08.183545 4827 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt" Jan 31 03:49:08 crc kubenswrapper[4827]: W0131 03:49:08.186624 4827 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod899b03ec_0d91_4793_a5a2_d3aca48e5309.slice/crio-5b093de056a611afc33b3986cefb816994489849ad72183130b10be8e61e0640 WatchSource:0}: Error finding container 5b093de056a611afc33b3986cefb816994489849ad72183130b10be8e61e0640: Status 404 returned error can't find the container with id 5b093de056a611afc33b3986cefb816994489849ad72183130b10be8e61e0640 Jan 31 03:49:08 crc kubenswrapper[4827]: I0131 03:49:08.196690 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-58p4m"] Jan 31 03:49:08 crc kubenswrapper[4827]: W0131 03:49:08.203536 4827 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod893053b3_df21_4683_a51a_bf12b3bed27d.slice/crio-d09b5b734c9b5f01cb62f338caa13724323ef31c7cfb1303acb26bda5f0afe2a WatchSource:0}: Error finding container d09b5b734c9b5f01cb62f338caa13724323ef31c7cfb1303acb26bda5f0afe2a: Status 404 returned error can't find the container with id d09b5b734c9b5f01cb62f338caa13724323ef31c7cfb1303acb26bda5f0afe2a Jan 31 03:49:08 crc kubenswrapper[4827]: I0131 03:49:08.212274 4827 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca" Jan 31 03:49:08 crc kubenswrapper[4827]: I0131 03:49:08.222931 4827 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt" Jan 31 03:49:08 crc kubenswrapper[4827]: I0131 03:49:08.242744 4827 reflector.go:368] 
Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mcc-proxy-tls" Jan 31 03:49:08 crc kubenswrapper[4827]: I0131 03:49:08.248435 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-lm4fq"] Jan 31 03:49:08 crc kubenswrapper[4827]: I0131 03:49:08.253595 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0591695e-7fc9-4d9c-a9a2-a5f44db74caa-proxy-tls\") pod \"machine-config-controller-84d6567774-cxgr4\" (UID: \"0591695e-7fc9-4d9c-a9a2-a5f44db74caa\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-cxgr4" Jan 31 03:49:08 crc kubenswrapper[4827]: W0131 03:49:08.258088 4827 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode9108c46_95f6_4d7e_879e_a4354473f51f.slice/crio-728bd2ee3eaf979f658e041023f74e13e7c08c562d211c0637b74b2c0506db38 WatchSource:0}: Error finding container 728bd2ee3eaf979f658e041023f74e13e7c08c562d211c0637b74b2c0506db38: Status 404 returned error can't find the container with id 728bd2ee3eaf979f658e041023f74e13e7c08c562d211c0637b74b2c0506db38 Jan 31 03:49:08 crc kubenswrapper[4827]: I0131 03:49:08.263958 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-controller-dockercfg-c2lfx" Jan 31 03:49:08 crc kubenswrapper[4827]: I0131 03:49:08.298200 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-s4zhc"] Jan 31 03:49:08 crc kubenswrapper[4827]: I0131 03:49:08.299183 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-7954f5f757-sgzkm"] Jan 31 03:49:08 crc kubenswrapper[4827]: I0131 03:49:08.303058 4827 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert" Jan 31 03:49:08 crc kubenswrapper[4827]: W0131 03:49:08.306717 4827 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6e659a59_19ab_4c91_98ec_db3042ac1d4b.slice/crio-af2a4f4c95540bf56ff942618a22fc6fac61b8f0719d05938d579285d1ee1d79 WatchSource:0}: Error finding container af2a4f4c95540bf56ff942618a22fc6fac61b8f0719d05938d579285d1ee1d79: Status 404 returned error can't find the container with id af2a4f4c95540bf56ff942618a22fc6fac61b8f0719d05938d579285d1ee1d79 Jan 31 03:49:08 crc kubenswrapper[4827]: W0131 03:49:08.308984 4827 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb3544285_7727_4eac_b8ed_2e00b26823c7.slice/crio-a26337a605c5d8ad94ae0b5e2795c5ec1e6e9a0bfa5a49d33a14aae90878e2b2 WatchSource:0}: Error finding container a26337a605c5d8ad94ae0b5e2795c5ec1e6e9a0bfa5a49d33a14aae90878e2b2: Status 404 returned error can't find the container with id a26337a605c5d8ad94ae0b5e2795c5ec1e6e9a0bfa5a49d33a14aae90878e2b2 Jan 31 03:49:08 crc kubenswrapper[4827]: I0131 03:49:08.324324 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert" Jan 31 03:49:08 crc kubenswrapper[4827]: I0131 03:49:08.337717 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b13bebc0-a453-40cd-9611-43cf66b3dd53-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-xxh4j\" (UID: \"b13bebc0-a453-40cd-9611-43cf66b3dd53\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-xxh4j" Jan 31 03:49:08 crc kubenswrapper[4827]: I0131 03:49:08.342637 4827 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-dockercfg-gkqpw" Jan 31 03:49:08 crc kubenswrapper[4827]: I0131 03:49:08.363258 4827 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config" Jan 31 03:49:08 crc kubenswrapper[4827]: I0131 03:49:08.371769 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b13bebc0-a453-40cd-9611-43cf66b3dd53-config\") pod \"kube-controller-manager-operator-78b949d7b-xxh4j\" (UID: \"b13bebc0-a453-40cd-9611-43cf66b3dd53\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-xxh4j" Jan 31 03:49:08 crc kubenswrapper[4827]: I0131 03:49:08.383933 4827 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt" Jan 31 03:49:08 crc kubenswrapper[4827]: I0131 03:49:08.402616 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-dockercfg-k9rxt" Jan 31 03:49:08 crc kubenswrapper[4827]: I0131 03:49:08.428978 4827 request.go:700] Waited for 1.012147092s due to client-side throttling, not priority and fairness, request: GET:https://api-int.crc.testing:6443/api/v1/namespaces/openshift-machine-api/secrets?fieldSelector=metadata.name%3Dcontrol-plane-machine-set-operator-tls&limit=500&resourceVersion=0 Jan 31 03:49:08 crc kubenswrapper[4827]: I0131 03:49:08.430692 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-tls" Jan 31 03:49:08 crc kubenswrapper[4827]: I0131 03:49:08.442744 4827 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt" Jan 31 03:49:08 crc kubenswrapper[4827]: I0131 03:49:08.459774 4827 kubelet.go:2428] 
"SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-z6phb"] Jan 31 03:49:08 crc kubenswrapper[4827]: I0131 03:49:08.464967 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"ingress-operator-dockercfg-7lnqk" Jan 31 03:49:08 crc kubenswrapper[4827]: I0131 03:49:08.483529 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"metrics-tls" Jan 31 03:49:08 crc kubenswrapper[4827]: W0131 03:49:08.491422 4827 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod654e373c_b142_475e_8ab8_7644f9b0d73c.slice/crio-2d5bc3d802437cc0acf30e044829baa28d2522eee69c7362e5665c6d67097b17 WatchSource:0}: Error finding container 2d5bc3d802437cc0acf30e044829baa28d2522eee69c7362e5665c6d67097b17: Status 404 returned error can't find the container with id 2d5bc3d802437cc0acf30e044829baa28d2522eee69c7362e5665c6d67097b17 Jan 31 03:49:08 crc kubenswrapper[4827]: I0131 03:49:08.510736 4827 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca" Jan 31 03:49:08 crc kubenswrapper[4827]: I0131 03:49:08.523955 4827 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt" Jan 31 03:49:08 crc kubenswrapper[4827]: I0131 03:49:08.544369 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-operator-dockercfg-98p87" Jan 31 03:49:08 crc kubenswrapper[4827]: I0131 03:49:08.564940 4827 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"machine-config-operator-images" Jan 31 03:49:08 crc kubenswrapper[4827]: E0131 03:49:08.568699 4827 secret.go:188] Couldn't get secret openshift-ingress/router-certs-default: failed to sync secret cache: timed out waiting for the condition Jan 
31 03:49:08 crc kubenswrapper[4827]: E0131 03:49:08.568773 4827 secret.go:188] Couldn't get secret openshift-service-ca-operator/serving-cert: failed to sync secret cache: timed out waiting for the condition Jan 31 03:49:08 crc kubenswrapper[4827]: E0131 03:49:08.568798 4827 configmap.go:193] Couldn't get configMap openshift-ingress/service-ca-bundle: failed to sync configmap cache: timed out waiting for the condition Jan 31 03:49:08 crc kubenswrapper[4827]: E0131 03:49:08.568807 4827 configmap.go:193] Couldn't get configMap openshift-service-ca-operator/service-ca-operator-config: failed to sync configmap cache: timed out waiting for the condition Jan 31 03:49:08 crc kubenswrapper[4827]: E0131 03:49:08.568828 4827 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/85b31f82-0a7a-466c-aa0f-bffb46f2b04c-default-certificate podName:85b31f82-0a7a-466c-aa0f-bffb46f2b04c nodeName:}" failed. No retries permitted until 2026-01-31 03:49:09.068785763 +0000 UTC m=+141.755866212 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "default-certificate" (UniqueName: "kubernetes.io/secret/85b31f82-0a7a-466c-aa0f-bffb46f2b04c-default-certificate") pod "router-default-5444994796-dxfqc" (UID: "85b31f82-0a7a-466c-aa0f-bffb46f2b04c") : failed to sync secret cache: timed out waiting for the condition Jan 31 03:49:08 crc kubenswrapper[4827]: E0131 03:49:08.568929 4827 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/cfea954e-db56-4946-a178-3376d7793b46-serving-cert podName:cfea954e-db56-4946-a178-3376d7793b46 nodeName:}" failed. No retries permitted until 2026-01-31 03:49:09.068916917 +0000 UTC m=+141.755997366 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/cfea954e-db56-4946-a178-3376d7793b46-serving-cert") pod "service-ca-operator-777779d784-k4wl6" (UID: "cfea954e-db56-4946-a178-3376d7793b46") : failed to sync secret cache: timed out waiting for the condition Jan 31 03:49:08 crc kubenswrapper[4827]: E0131 03:49:08.568943 4827 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/85b31f82-0a7a-466c-aa0f-bffb46f2b04c-service-ca-bundle podName:85b31f82-0a7a-466c-aa0f-bffb46f2b04c nodeName:}" failed. No retries permitted until 2026-01-31 03:49:09.068936618 +0000 UTC m=+141.756017067 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "service-ca-bundle" (UniqueName: "kubernetes.io/configmap/85b31f82-0a7a-466c-aa0f-bffb46f2b04c-service-ca-bundle") pod "router-default-5444994796-dxfqc" (UID: "85b31f82-0a7a-466c-aa0f-bffb46f2b04c") : failed to sync configmap cache: timed out waiting for the condition Jan 31 03:49:08 crc kubenswrapper[4827]: E0131 03:49:08.568955 4827 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/cfea954e-db56-4946-a178-3376d7793b46-config podName:cfea954e-db56-4946-a178-3376d7793b46 nodeName:}" failed. No retries permitted until 2026-01-31 03:49:09.068950408 +0000 UTC m=+141.756030857 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/cfea954e-db56-4946-a178-3376d7793b46-config") pod "service-ca-operator-777779d784-k4wl6" (UID: "cfea954e-db56-4946-a178-3376d7793b46") : failed to sync configmap cache: timed out waiting for the condition Jan 31 03:49:08 crc kubenswrapper[4827]: E0131 03:49:08.570985 4827 configmap.go:193] Couldn't get configMap openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-config: failed to sync configmap cache: timed out waiting for the condition Jan 31 03:49:08 crc kubenswrapper[4827]: E0131 03:49:08.571007 4827 secret.go:188] Couldn't get secret openshift-kube-scheduler-operator/kube-scheduler-operator-serving-cert: failed to sync secret cache: timed out waiting for the condition Jan 31 03:49:08 crc kubenswrapper[4827]: E0131 03:49:08.571023 4827 secret.go:188] Couldn't get secret openshift-ingress/router-stats-default: failed to sync secret cache: timed out waiting for the condition Jan 31 03:49:08 crc kubenswrapper[4827]: E0131 03:49:08.571076 4827 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/64cfc229-d23f-4303-a604-cd7be04f0bc3-serving-cert podName:64cfc229-d23f-4303-a604-cd7be04f0bc3 nodeName:}" failed. No retries permitted until 2026-01-31 03:49:09.071040718 +0000 UTC m=+141.758121177 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/64cfc229-d23f-4303-a604-cd7be04f0bc3-serving-cert") pod "openshift-kube-scheduler-operator-5fdd9b5758-bbd4n" (UID: "64cfc229-d23f-4303-a604-cd7be04f0bc3") : failed to sync secret cache: timed out waiting for the condition Jan 31 03:49:08 crc kubenswrapper[4827]: E0131 03:49:08.571088 4827 secret.go:188] Couldn't get secret openshift-ingress/router-metrics-certs-default: failed to sync secret cache: timed out waiting for the condition Jan 31 03:49:08 crc kubenswrapper[4827]: E0131 03:49:08.571098 4827 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/64cfc229-d23f-4303-a604-cd7be04f0bc3-config podName:64cfc229-d23f-4303-a604-cd7be04f0bc3 nodeName:}" failed. No retries permitted until 2026-01-31 03:49:09.07108902 +0000 UTC m=+141.758169529 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/64cfc229-d23f-4303-a604-cd7be04f0bc3-config") pod "openshift-kube-scheduler-operator-5fdd9b5758-bbd4n" (UID: "64cfc229-d23f-4303-a604-cd7be04f0bc3") : failed to sync configmap cache: timed out waiting for the condition Jan 31 03:49:08 crc kubenswrapper[4827]: E0131 03:49:08.571117 4827 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/85b31f82-0a7a-466c-aa0f-bffb46f2b04c-stats-auth podName:85b31f82-0a7a-466c-aa0f-bffb46f2b04c nodeName:}" failed. No retries permitted until 2026-01-31 03:49:09.071108591 +0000 UTC m=+141.758189150 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "stats-auth" (UniqueName: "kubernetes.io/secret/85b31f82-0a7a-466c-aa0f-bffb46f2b04c-stats-auth") pod "router-default-5444994796-dxfqc" (UID: "85b31f82-0a7a-466c-aa0f-bffb46f2b04c") : failed to sync secret cache: timed out waiting for the condition Jan 31 03:49:08 crc kubenswrapper[4827]: E0131 03:49:08.571164 4827 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/85b31f82-0a7a-466c-aa0f-bffb46f2b04c-metrics-certs podName:85b31f82-0a7a-466c-aa0f-bffb46f2b04c nodeName:}" failed. No retries permitted until 2026-01-31 03:49:09.071152692 +0000 UTC m=+141.758233171 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/85b31f82-0a7a-466c-aa0f-bffb46f2b04c-metrics-certs") pod "router-default-5444994796-dxfqc" (UID: "85b31f82-0a7a-466c-aa0f-bffb46f2b04c") : failed to sync secret cache: timed out waiting for the condition Jan 31 03:49:08 crc kubenswrapper[4827]: I0131 03:49:08.586748 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mco-proxy-tls" Jan 31 03:49:08 crc kubenswrapper[4827]: I0131 03:49:08.603622 4827 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt" Jan 31 03:49:08 crc kubenswrapper[4827]: I0131 03:49:08.629014 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-dockercfg-qt55r" Jan 31 03:49:08 crc kubenswrapper[4827]: I0131 03:49:08.643230 4827 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config" Jan 31 03:49:08 crc kubenswrapper[4827]: I0131 03:49:08.666411 4827 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Jan 31 03:49:08 crc 
kubenswrapper[4827]: I0131 03:49:08.682728 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Jan 31 03:49:08 crc kubenswrapper[4827]: I0131 03:49:08.703434 4827 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"openshift-service-ca.crt" Jan 31 03:49:08 crc kubenswrapper[4827]: I0131 03:49:08.724103 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert" Jan 31 03:49:08 crc kubenswrapper[4827]: I0131 03:49:08.744584 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"service-ca-operator-dockercfg-rg9jl" Jan 31 03:49:08 crc kubenswrapper[4827]: I0131 03:49:08.764527 4827 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config" Jan 31 03:49:08 crc kubenswrapper[4827]: I0131 03:49:08.783672 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert" Jan 31 03:49:08 crc kubenswrapper[4827]: I0131 03:49:08.804446 4827 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt" Jan 31 03:49:08 crc kubenswrapper[4827]: I0131 03:49:08.823527 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-certs-default" Jan 31 03:49:08 crc kubenswrapper[4827]: I0131 03:49:08.844161 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-metrics-certs-default" Jan 31 03:49:08 crc kubenswrapper[4827]: I0131 03:49:08.863794 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-dockercfg-zdk86" Jan 31 03:49:08 crc kubenswrapper[4827]: I0131 03:49:08.884523 4827 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-ingress"/"service-ca-bundle" Jan 31 03:49:08 crc kubenswrapper[4827]: I0131 03:49:08.900062 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-fm67q" event={"ID":"05a0bcfb-cbdb-4f6a-bd21-89c876dc7635","Type":"ContainerStarted","Data":"65d2eebb19075acd212bb8878c6a3fcfeef5aa880e35672d61fdf38cc45a3c64"} Jan 31 03:49:08 crc kubenswrapper[4827]: I0131 03:49:08.900127 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-fm67q" event={"ID":"05a0bcfb-cbdb-4f6a-bd21-89c876dc7635","Type":"ContainerStarted","Data":"a878ffd76719b5a23b8fc6a0e11528ba424f43b751a18e4fdbfd15119c68fd9e"} Jan 31 03:49:08 crc kubenswrapper[4827]: I0131 03:49:08.900146 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-fm67q" event={"ID":"05a0bcfb-cbdb-4f6a-bd21-89c876dc7635","Type":"ContainerStarted","Data":"fa983b2f6033a26b68fad8855481d5878f07dff553320c8359724bf1ad6585cb"} Jan 31 03:49:08 crc kubenswrapper[4827]: I0131 03:49:08.903500 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-h2fxz" event={"ID":"899b03ec-0d91-4793-a5a2-d3aca48e5309","Type":"ContainerStarted","Data":"89c8913d765dab6852c458ca0379e5439b9b3cb80cb534b49306a86c1835c286"} Jan 31 03:49:08 crc kubenswrapper[4827]: I0131 03:49:08.903540 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-h2fxz" event={"ID":"899b03ec-0d91-4793-a5a2-d3aca48e5309","Type":"ContainerStarted","Data":"e7c7c4e856cb8c31440f5280b6c672001676c8eaa1231e62506ba2e393e7afdb"} Jan 31 03:49:08 crc kubenswrapper[4827]: I0131 03:49:08.903555 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-h2fxz" 
event={"ID":"899b03ec-0d91-4793-a5a2-d3aca48e5309","Type":"ContainerStarted","Data":"5b093de056a611afc33b3986cefb816994489849ad72183130b10be8e61e0640"} Jan 31 03:49:08 crc kubenswrapper[4827]: I0131 03:49:08.903963 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-stats-default" Jan 31 03:49:08 crc kubenswrapper[4827]: I0131 03:49:08.905789 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-lm4fq" event={"ID":"e9108c46-95f6-4d7e-879e-a4354473f51f","Type":"ContainerStarted","Data":"e85f259f3096bf532097a57a62ab236e5172b6acce1911900efe66779e807d83"} Jan 31 03:49:08 crc kubenswrapper[4827]: I0131 03:49:08.905831 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-lm4fq" event={"ID":"e9108c46-95f6-4d7e-879e-a4354473f51f","Type":"ContainerStarted","Data":"728bd2ee3eaf979f658e041023f74e13e7c08c562d211c0637b74b2c0506db38"} Jan 31 03:49:08 crc kubenswrapper[4827]: I0131 03:49:08.908257 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-69f744f599-z6phb" event={"ID":"654e373c-b142-475e-8ab8-7644f9b0d73c","Type":"ContainerStarted","Data":"6293d363d502a6696b573a907263423da3813ae3d78b4e21325e8e3ef6faafd4"} Jan 31 03:49:08 crc kubenswrapper[4827]: I0131 03:49:08.908299 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-69f744f599-z6phb" event={"ID":"654e373c-b142-475e-8ab8-7644f9b0d73c","Type":"ContainerStarted","Data":"2d5bc3d802437cc0acf30e044829baa28d2522eee69c7362e5665c6d67097b17"} Jan 31 03:49:08 crc kubenswrapper[4827]: I0131 03:49:08.911016 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-58p4m" 
event={"ID":"893053b3-df21-4683-a51a-bf12b3bed27d","Type":"ContainerStarted","Data":"d306765b467d8f21b503a07a80398acb56f84225397694be9390048043d6fca9"} Jan 31 03:49:08 crc kubenswrapper[4827]: I0131 03:49:08.911088 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-58p4m" event={"ID":"893053b3-df21-4683-a51a-bf12b3bed27d","Type":"ContainerStarted","Data":"d09b5b734c9b5f01cb62f338caa13724323ef31c7cfb1303acb26bda5f0afe2a"} Jan 31 03:49:08 crc kubenswrapper[4827]: I0131 03:49:08.911694 4827 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-58p4m" Jan 31 03:49:08 crc kubenswrapper[4827]: I0131 03:49:08.913987 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-sgzkm" event={"ID":"6e659a59-19ab-4c91-98ec-db3042ac1d4b","Type":"ContainerStarted","Data":"0ca47e3203274df4706f884ad4d32fda3bc312accf61c75580e5f769317077d6"} Jan 31 03:49:08 crc kubenswrapper[4827]: I0131 03:49:08.914078 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-sgzkm" event={"ID":"6e659a59-19ab-4c91-98ec-db3042ac1d4b","Type":"ContainerStarted","Data":"af2a4f4c95540bf56ff942618a22fc6fac61b8f0719d05938d579285d1ee1d79"} Jan 31 03:49:08 crc kubenswrapper[4827]: I0131 03:49:08.914381 4827 patch_prober.go:28] interesting pod/route-controller-manager-6576b87f9c-58p4m container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.7:8443/healthz\": dial tcp 10.217.0.7:8443: connect: connection refused" start-of-body= Jan 31 03:49:08 crc kubenswrapper[4827]: I0131 03:49:08.914466 4827 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-58p4m" podUID="893053b3-df21-4683-a51a-bf12b3bed27d" 
containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.7:8443/healthz\": dial tcp 10.217.0.7:8443: connect: connection refused" Jan 31 03:49:08 crc kubenswrapper[4827]: I0131 03:49:08.914699 4827 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/downloads-7954f5f757-sgzkm" Jan 31 03:49:08 crc kubenswrapper[4827]: I0131 03:49:08.916678 4827 generic.go:334] "Generic (PLEG): container finished" podID="e0e07dc1-cd63-49ad-8a43-7a15027a1e74" containerID="50fe1fe0eac05997e8d00416acdde7771b5c1dc41417183cacada187818443ad" exitCode=0 Jan 31 03:49:08 crc kubenswrapper[4827]: I0131 03:49:08.916789 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-x2j9j" event={"ID":"e0e07dc1-cd63-49ad-8a43-7a15027a1e74","Type":"ContainerDied","Data":"50fe1fe0eac05997e8d00416acdde7771b5c1dc41417183cacada187818443ad"} Jan 31 03:49:08 crc kubenswrapper[4827]: I0131 03:49:08.916826 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-x2j9j" event={"ID":"e0e07dc1-cd63-49ad-8a43-7a15027a1e74","Type":"ContainerStarted","Data":"8dbe87cf67e25e17de6f0bcac799aec4b86fc1991d6585795c051f0a6532d6da"} Jan 31 03:49:08 crc kubenswrapper[4827]: I0131 03:49:08.916694 4827 patch_prober.go:28] interesting pod/downloads-7954f5f757-sgzkm container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.9:8080/\": dial tcp 10.217.0.9:8080: connect: connection refused" start-of-body= Jan 31 03:49:08 crc kubenswrapper[4827]: I0131 03:49:08.916922 4827 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-sgzkm" podUID="6e659a59-19ab-4c91-98ec-db3042ac1d4b" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.9:8080/\": dial tcp 10.217.0.9:8080: connect: connection refused" Jan 31 03:49:08 crc kubenswrapper[4827]: I0131 
03:49:08.919217 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-s4zhc" event={"ID":"b3544285-7727-4eac-b8ed-2e00b26823c7","Type":"ContainerStarted","Data":"ebb7dd6516f02bedb55c426a6cc0d7c2bfb40880a0181356b25cf3f46ebf51ae"} Jan 31 03:49:08 crc kubenswrapper[4827]: I0131 03:49:08.919280 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-s4zhc" event={"ID":"b3544285-7727-4eac-b8ed-2e00b26823c7","Type":"ContainerStarted","Data":"a26337a605c5d8ad94ae0b5e2795c5ec1e6e9a0bfa5a49d33a14aae90878e2b2"} Jan 31 03:49:08 crc kubenswrapper[4827]: I0131 03:49:08.925192 4827 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"kube-root-ca.crt" Jan 31 03:49:08 crc kubenswrapper[4827]: I0131 03:49:08.943557 4827 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"openshift-service-ca.crt" Jan 31 03:49:08 crc kubenswrapper[4827]: I0131 03:49:08.984093 4827 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"signing-cabundle" Jan 31 03:49:09 crc kubenswrapper[4827]: I0131 03:49:09.003801 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"service-ca-dockercfg-pn86c" Jan 31 03:49:09 crc kubenswrapper[4827]: I0131 03:49:09.022833 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"signing-key" Jan 31 03:49:09 crc kubenswrapper[4827]: I0131 03:49:09.044034 4827 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"openshift-service-ca.crt" Jan 31 03:49:09 crc kubenswrapper[4827]: I0131 03:49:09.063865 4827 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"kube-root-ca.crt" Jan 31 03:49:09 crc kubenswrapper[4827]: I0131 03:49:09.083653 4827 reflector.go:368] Caches populated for 
*v1.Secret from object-"openshift-image-registry"/"cluster-image-registry-operator-dockercfg-m4qtx" Jan 31 03:49:09 crc kubenswrapper[4827]: I0131 03:49:09.093761 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/cfea954e-db56-4946-a178-3376d7793b46-serving-cert\") pod \"service-ca-operator-777779d784-k4wl6\" (UID: \"cfea954e-db56-4946-a178-3376d7793b46\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-k4wl6" Jan 31 03:49:09 crc kubenswrapper[4827]: I0131 03:49:09.093943 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/85b31f82-0a7a-466c-aa0f-bffb46f2b04c-default-certificate\") pod \"router-default-5444994796-dxfqc\" (UID: \"85b31f82-0a7a-466c-aa0f-bffb46f2b04c\") " pod="openshift-ingress/router-default-5444994796-dxfqc" Jan 31 03:49:09 crc kubenswrapper[4827]: I0131 03:49:09.094006 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/85b31f82-0a7a-466c-aa0f-bffb46f2b04c-metrics-certs\") pod \"router-default-5444994796-dxfqc\" (UID: \"85b31f82-0a7a-466c-aa0f-bffb46f2b04c\") " pod="openshift-ingress/router-default-5444994796-dxfqc" Jan 31 03:49:09 crc kubenswrapper[4827]: I0131 03:49:09.094037 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/64cfc229-d23f-4303-a604-cd7be04f0bc3-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-bbd4n\" (UID: \"64cfc229-d23f-4303-a604-cd7be04f0bc3\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-bbd4n" Jan 31 03:49:09 crc kubenswrapper[4827]: I0131 03:49:09.094072 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/64cfc229-d23f-4303-a604-cd7be04f0bc3-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-bbd4n\" (UID: \"64cfc229-d23f-4303-a604-cd7be04f0bc3\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-bbd4n" Jan 31 03:49:09 crc kubenswrapper[4827]: I0131 03:49:09.094134 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/85b31f82-0a7a-466c-aa0f-bffb46f2b04c-stats-auth\") pod \"router-default-5444994796-dxfqc\" (UID: \"85b31f82-0a7a-466c-aa0f-bffb46f2b04c\") " pod="openshift-ingress/router-default-5444994796-dxfqc" Jan 31 03:49:09 crc kubenswrapper[4827]: I0131 03:49:09.095068 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cfea954e-db56-4946-a178-3376d7793b46-config\") pod \"service-ca-operator-777779d784-k4wl6\" (UID: \"cfea954e-db56-4946-a178-3376d7793b46\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-k4wl6" Jan 31 03:49:09 crc kubenswrapper[4827]: I0131 03:49:09.095142 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/85b31f82-0a7a-466c-aa0f-bffb46f2b04c-service-ca-bundle\") pod \"router-default-5444994796-dxfqc\" (UID: \"85b31f82-0a7a-466c-aa0f-bffb46f2b04c\") " pod="openshift-ingress/router-default-5444994796-dxfqc" Jan 31 03:49:09 crc kubenswrapper[4827]: I0131 03:49:09.096439 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/85b31f82-0a7a-466c-aa0f-bffb46f2b04c-service-ca-bundle\") pod \"router-default-5444994796-dxfqc\" (UID: \"85b31f82-0a7a-466c-aa0f-bffb46f2b04c\") " pod="openshift-ingress/router-default-5444994796-dxfqc" Jan 31 03:49:09 crc kubenswrapper[4827]: I0131 03:49:09.096583 4827 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cfea954e-db56-4946-a178-3376d7793b46-config\") pod \"service-ca-operator-777779d784-k4wl6\" (UID: \"cfea954e-db56-4946-a178-3376d7793b46\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-k4wl6" Jan 31 03:49:09 crc kubenswrapper[4827]: I0131 03:49:09.097058 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/64cfc229-d23f-4303-a604-cd7be04f0bc3-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-bbd4n\" (UID: \"64cfc229-d23f-4303-a604-cd7be04f0bc3\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-bbd4n" Jan 31 03:49:09 crc kubenswrapper[4827]: I0131 03:49:09.100770 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/cfea954e-db56-4946-a178-3376d7793b46-serving-cert\") pod \"service-ca-operator-777779d784-k4wl6\" (UID: \"cfea954e-db56-4946-a178-3376d7793b46\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-k4wl6" Jan 31 03:49:09 crc kubenswrapper[4827]: I0131 03:49:09.100979 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/85b31f82-0a7a-466c-aa0f-bffb46f2b04c-stats-auth\") pod \"router-default-5444994796-dxfqc\" (UID: \"85b31f82-0a7a-466c-aa0f-bffb46f2b04c\") " pod="openshift-ingress/router-default-5444994796-dxfqc" Jan 31 03:49:09 crc kubenswrapper[4827]: I0131 03:49:09.102059 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/85b31f82-0a7a-466c-aa0f-bffb46f2b04c-default-certificate\") pod \"router-default-5444994796-dxfqc\" (UID: \"85b31f82-0a7a-466c-aa0f-bffb46f2b04c\") " pod="openshift-ingress/router-default-5444994796-dxfqc" Jan 31 03:49:09 crc kubenswrapper[4827]: I0131 03:49:09.102313 4827 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/85b31f82-0a7a-466c-aa0f-bffb46f2b04c-metrics-certs\") pod \"router-default-5444994796-dxfqc\" (UID: \"85b31f82-0a7a-466c-aa0f-bffb46f2b04c\") " pod="openshift-ingress/router-default-5444994796-dxfqc" Jan 31 03:49:09 crc kubenswrapper[4827]: I0131 03:49:09.103399 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/64cfc229-d23f-4303-a604-cd7be04f0bc3-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-bbd4n\" (UID: \"64cfc229-d23f-4303-a604-cd7be04f0bc3\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-bbd4n" Jan 31 03:49:09 crc kubenswrapper[4827]: I0131 03:49:09.105363 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls" Jan 31 03:49:09 crc kubenswrapper[4827]: I0131 03:49:09.125313 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert" Jan 31 03:49:09 crc kubenswrapper[4827]: I0131 03:49:09.144255 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert" Jan 31 03:49:09 crc kubenswrapper[4827]: I0131 03:49:09.163280 4827 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config" Jan 31 03:49:09 crc kubenswrapper[4827]: I0131 03:49:09.184736 4827 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt" Jan 31 03:49:09 crc kubenswrapper[4827]: I0131 03:49:09.202724 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-dockercfg-x57mr" Jan 31 03:49:09 crc kubenswrapper[4827]: I0131 
03:49:09.224404 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-dockercfg-jwfmh" Jan 31 03:49:09 crc kubenswrapper[4827]: I0131 03:49:09.244452 4827 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"dns-default" Jan 31 03:49:09 crc kubenswrapper[4827]: I0131 03:49:09.264801 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-default-metrics-tls" Jan 31 03:49:09 crc kubenswrapper[4827]: I0131 03:49:09.301584 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w9qhm\" (UniqueName: \"kubernetes.io/projected/81c458ce-ffe4-4613-bb4a-0b5d0809519b-kube-api-access-w9qhm\") pod \"console-operator-58897d9998-57nc5\" (UID: \"81c458ce-ffe4-4613-bb4a-0b5d0809519b\") " pod="openshift-console-operator/console-operator-58897d9998-57nc5" Jan 31 03:49:09 crc kubenswrapper[4827]: I0131 03:49:09.320390 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b9wqf\" (UniqueName: \"kubernetes.io/projected/c10be0b3-7f40-4f17-8206-ab6257d4b23b-kube-api-access-b9wqf\") pod \"oauth-openshift-558db77b4-qtqj4\" (UID: \"c10be0b3-7f40-4f17-8206-ab6257d4b23b\") " pod="openshift-authentication/oauth-openshift-558db77b4-qtqj4" Jan 31 03:49:09 crc kubenswrapper[4827]: I0131 03:49:09.343623 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xkqq9\" (UniqueName: \"kubernetes.io/projected/4a32abae-914d-4102-9e89-817922ff06ca-kube-api-access-xkqq9\") pod \"openshift-config-operator-7777fb866f-dqndk\" (UID: \"4a32abae-914d-4102-9e89-817922ff06ca\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-dqndk" Jan 31 03:49:09 crc kubenswrapper[4827]: I0131 03:49:09.369060 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x6x8k\" (UniqueName: \"kubernetes.io/projected/7bafc4cb-e5b7-4b39-9930-b885e403dfca-kube-api-access-x6x8k\") pod 
\"controller-manager-879f6c89f-mkhcp\" (UID: \"7bafc4cb-e5b7-4b39-9930-b885e403dfca\") " pod="openshift-controller-manager/controller-manager-879f6c89f-mkhcp" Jan 31 03:49:09 crc kubenswrapper[4827]: I0131 03:49:09.379379 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xlnx7\" (UniqueName: \"kubernetes.io/projected/68610615-718e-4fd3-a19b-9de01ef64a03-kube-api-access-xlnx7\") pod \"cluster-samples-operator-665b6dd947-5p4mg\" (UID: \"68610615-718e-4fd3-a19b-9de01ef64a03\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-5p4mg" Jan 31 03:49:09 crc kubenswrapper[4827]: I0131 03:49:09.403393 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rj7b4\" (UniqueName: \"kubernetes.io/projected/a2a52a00-75ce-4094-bab7-913d6fbab1dc-kube-api-access-rj7b4\") pod \"console-f9d7485db-q4hqs\" (UID: \"a2a52a00-75ce-4094-bab7-913d6fbab1dc\") " pod="openshift-console/console-f9d7485db-q4hqs" Jan 31 03:49:09 crc kubenswrapper[4827]: I0131 03:49:09.433492 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vvfqr\" (UniqueName: \"kubernetes.io/projected/755d86cd-4d90-41eb-8c62-b130143346aa-kube-api-access-vvfqr\") pod \"dns-operator-744455d44c-mcs5z\" (UID: \"755d86cd-4d90-41eb-8c62-b130143346aa\") " pod="openshift-dns-operator/dns-operator-744455d44c-mcs5z" Jan 31 03:49:09 crc kubenswrapper[4827]: I0131 03:49:09.441910 4827 request.go:700] Waited for 1.937417415s due to client-side throttling, not priority and fairness, request: GET:https://api-int.crc.testing:6443/api/v1/namespaces/openshift-ingress-canary/secrets?fieldSelector=metadata.name%3Ddefault-dockercfg-2llfx&limit=500&resourceVersion=0 Jan 31 03:49:09 crc kubenswrapper[4827]: I0131 03:49:09.442527 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lg2zt\" (UniqueName: 
\"kubernetes.io/projected/48fc8b4d-5a6b-40a8-acef-785097811718-kube-api-access-lg2zt\") pod \"etcd-operator-b45778765-lx7vs\" (UID: \"48fc8b4d-5a6b-40a8-acef-785097811718\") " pod="openshift-etcd-operator/etcd-operator-b45778765-lx7vs" Jan 31 03:49:09 crc kubenswrapper[4827]: I0131 03:49:09.444318 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"default-dockercfg-2llfx" Jan 31 03:49:09 crc kubenswrapper[4827]: I0131 03:49:09.465700 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"canary-serving-cert" Jan 31 03:49:09 crc kubenswrapper[4827]: I0131 03:49:09.484009 4827 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"kube-root-ca.crt" Jan 31 03:49:09 crc kubenswrapper[4827]: I0131 03:49:09.503470 4827 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"openshift-service-ca.crt" Jan 31 03:49:09 crc kubenswrapper[4827]: I0131 03:49:09.523652 4827 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"openshift-service-ca.crt" Jan 31 03:49:09 crc kubenswrapper[4827]: I0131 03:49:09.544170 4827 reflector.go:368] Caches populated for *v1.Secret from object-"hostpath-provisioner"/"csi-hostpath-provisioner-sa-dockercfg-qd74k" Jan 31 03:49:09 crc kubenswrapper[4827]: I0131 03:49:09.546010 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-7777fb866f-dqndk" Jan 31 03:49:09 crc kubenswrapper[4827]: I0131 03:49:09.555310 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-qtqj4" Jan 31 03:49:09 crc kubenswrapper[4827]: I0131 03:49:09.564085 4827 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-5p4mg" Jan 31 03:49:09 crc kubenswrapper[4827]: I0131 03:49:09.565293 4827 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"kube-root-ca.crt" Jan 31 03:49:09 crc kubenswrapper[4827]: I0131 03:49:09.574246 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-58897d9998-57nc5" Jan 31 03:49:09 crc kubenswrapper[4827]: I0131 03:49:09.582667 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-744455d44c-mcs5z" Jan 31 03:49:09 crc kubenswrapper[4827]: I0131 03:49:09.584492 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-dockercfg-qx5rd" Jan 31 03:49:09 crc kubenswrapper[4827]: I0131 03:49:09.591132 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-q4hqs" Jan 31 03:49:09 crc kubenswrapper[4827]: I0131 03:49:09.596241 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-b45778765-lx7vs" Jan 31 03:49:09 crc kubenswrapper[4827]: I0131 03:49:09.602980 4827 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-mkhcp" Jan 31 03:49:09 crc kubenswrapper[4827]: I0131 03:49:09.604619 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-tls" Jan 31 03:49:09 crc kubenswrapper[4827]: I0131 03:49:09.627446 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"node-bootstrapper-token" Jan 31 03:49:09 crc kubenswrapper[4827]: I0131 03:49:09.669726 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kq9h2\" (UniqueName: \"kubernetes.io/projected/cfea954e-db56-4946-a178-3376d7793b46-kube-api-access-kq9h2\") pod \"service-ca-operator-777779d784-k4wl6\" (UID: \"cfea954e-db56-4946-a178-3376d7793b46\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-k4wl6" Jan 31 03:49:09 crc kubenswrapper[4827]: I0131 03:49:09.683373 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fjcsw\" (UniqueName: \"kubernetes.io/projected/0591695e-7fc9-4d9c-a9a2-a5f44db74caa-kube-api-access-fjcsw\") pod \"machine-config-controller-84d6567774-cxgr4\" (UID: \"0591695e-7fc9-4d9c-a9a2-a5f44db74caa\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-cxgr4" Jan 31 03:49:09 crc kubenswrapper[4827]: I0131 03:49:09.699264 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sf44f\" (UniqueName: \"kubernetes.io/projected/8fe7a4ad-d825-4988-8390-c04b5a1b114c-kube-api-access-sf44f\") pod \"olm-operator-6b444d44fb-g4946\" (UID: \"8fe7a4ad-d825-4988-8390-c04b5a1b114c\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-g4946" Jan 31 03:49:09 crc kubenswrapper[4827]: I0131 03:49:09.724610 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h6vdd\" (UniqueName: 
\"kubernetes.io/projected/0e71f06b-5ae9-4606-922f-eedf9f8eefa6-kube-api-access-h6vdd\") pod \"multus-admission-controller-857f4d67dd-sw4rn\" (UID: \"0e71f06b-5ae9-4606-922f-eedf9f8eefa6\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-sw4rn" Jan 31 03:49:09 crc kubenswrapper[4827]: I0131 03:49:09.738135 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-777779d784-k4wl6" Jan 31 03:49:09 crc kubenswrapper[4827]: I0131 03:49:09.740038 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/b13bebc0-a453-40cd-9611-43cf66b3dd53-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-xxh4j\" (UID: \"b13bebc0-a453-40cd-9611-43cf66b3dd53\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-xxh4j" Jan 31 03:49:09 crc kubenswrapper[4827]: I0131 03:49:09.772717 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v967n\" (UniqueName: \"kubernetes.io/projected/85b31f82-0a7a-466c-aa0f-bffb46f2b04c-kube-api-access-v967n\") pod \"router-default-5444994796-dxfqc\" (UID: \"85b31f82-0a7a-466c-aa0f-bffb46f2b04c\") " pod="openshift-ingress/router-default-5444994796-dxfqc" Jan 31 03:49:09 crc kubenswrapper[4827]: I0131 03:49:09.796101 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/64cfc229-d23f-4303-a604-cd7be04f0bc3-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-bbd4n\" (UID: \"64cfc229-d23f-4303-a604-cd7be04f0bc3\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-bbd4n" Jan 31 03:49:09 crc kubenswrapper[4827]: I0131 03:49:09.801525 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zc9tz\" (UniqueName: 
\"kubernetes.io/projected/bc4e2378-18bc-4624-acb0-a5010db62008-kube-api-access-zc9tz\") pod \"apiserver-7bbb656c7d-hrjxm\" (UID: \"bc4e2378-18bc-4624-acb0-a5010db62008\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-hrjxm" Jan 31 03:49:09 crc kubenswrapper[4827]: I0131 03:49:09.866307 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-dqndk"] Jan 31 03:49:09 crc kubenswrapper[4827]: W0131 03:49:09.905059 4827 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4a32abae_914d_4102_9e89_817922ff06ca.slice/crio-ba07adfcba0a71c184175c827f3ea62b122e84235f83800de45097a62669d22a WatchSource:0}: Error finding container ba07adfcba0a71c184175c827f3ea62b122e84235f83800de45097a62669d22a: Status 404 returned error can't find the container with id ba07adfcba0a71c184175c827f3ea62b122e84235f83800de45097a62669d22a Jan 31 03:49:09 crc kubenswrapper[4827]: I0131 03:49:09.905999 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-76bww\" (UniqueName: \"kubernetes.io/projected/cc0facf8-c192-4df4-bb9b-68f123fd7b21-kube-api-access-76bww\") pod \"marketplace-operator-79b997595-lrw5m\" (UID: \"cc0facf8-c192-4df4-bb9b-68f123fd7b21\") " pod="openshift-marketplace/marketplace-operator-79b997595-lrw5m" Jan 31 03:49:09 crc kubenswrapper[4827]: I0131 03:49:09.906093 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/04eac770-8ff7-453b-a1da-b028636b909c-bound-sa-token\") pod \"image-registry-697d97f7c8-s7psb\" (UID: \"04eac770-8ff7-453b-a1da-b028636b909c\") " pod="openshift-image-registry/image-registry-697d97f7c8-s7psb" Jan 31 03:49:09 crc kubenswrapper[4827]: I0131 03:49:09.906126 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/e9d269d7-f93e-4959-8e00-b541a0f9d9c2-bound-sa-token\") pod \"ingress-operator-5b745b69d9-vtzj8\" (UID: \"e9d269d7-f93e-4959-8e00-b541a0f9d9c2\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-vtzj8" Jan 31 03:49:09 crc kubenswrapper[4827]: I0131 03:49:09.908338 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w8q46\" (UniqueName: \"kubernetes.io/projected/6cf6cb35-7f31-44f5-ba34-be81eb109101-kube-api-access-w8q46\") pod \"package-server-manager-789f6589d5-tbsqf\" (UID: \"6cf6cb35-7f31-44f5-ba34-be81eb109101\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-tbsqf" Jan 31 03:49:09 crc kubenswrapper[4827]: I0131 03:49:09.908376 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9782bce9-dbc2-4734-929d-b6ef25dd752e-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-txptg\" (UID: \"9782bce9-dbc2-4734-929d-b6ef25dd752e\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-txptg" Jan 31 03:49:09 crc kubenswrapper[4827]: I0131 03:49:09.908400 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/04eac770-8ff7-453b-a1da-b028636b909c-trusted-ca\") pod \"image-registry-697d97f7c8-s7psb\" (UID: \"04eac770-8ff7-453b-a1da-b028636b909c\") " pod="openshift-image-registry/image-registry-697d97f7c8-s7psb" Jan 31 03:49:09 crc kubenswrapper[4827]: I0131 03:49:09.908421 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/a1eac826-76c5-435a-abd1-16f7fe35350f-auth-proxy-config\") pod \"machine-config-operator-74547568cd-4rp2t\" (UID: 
\"a1eac826-76c5-435a-abd1-16f7fe35350f\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-4rp2t" Jan 31 03:49:09 crc kubenswrapper[4827]: I0131 03:49:09.908441 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rhnj4\" (UniqueName: \"kubernetes.io/projected/63af2706-4f52-4d40-941f-7575394bfefa-kube-api-access-rhnj4\") pod \"migrator-59844c95c7-xrt7q\" (UID: \"63af2706-4f52-4d40-941f-7575394bfefa\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-xrt7q" Jan 31 03:49:09 crc kubenswrapper[4827]: I0131 03:49:09.908501 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-s7psb\" (UID: \"04eac770-8ff7-453b-a1da-b028636b909c\") " pod="openshift-image-registry/image-registry-697d97f7c8-s7psb" Jan 31 03:49:09 crc kubenswrapper[4827]: I0131 03:49:09.908521 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/480bfdcf-e687-4ecc-b22f-545d3d2a41ac-signing-cabundle\") pod \"service-ca-9c57cc56f-fdwxr\" (UID: \"480bfdcf-e687-4ecc-b22f-545d3d2a41ac\") " pod="openshift-service-ca/service-ca-9c57cc56f-fdwxr" Jan 31 03:49:09 crc kubenswrapper[4827]: I0131 03:49:09.908552 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/480bfdcf-e687-4ecc-b22f-545d3d2a41ac-signing-key\") pod \"service-ca-9c57cc56f-fdwxr\" (UID: \"480bfdcf-e687-4ecc-b22f-545d3d2a41ac\") " pod="openshift-service-ca/service-ca-9c57cc56f-fdwxr" Jan 31 03:49:09 crc kubenswrapper[4827]: I0131 03:49:09.908572 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/85e64cbb-2da9-4b05-b074-fabf16790f49-config-volume\") pod \"collect-profiles-29497185-4tnfv\" (UID: \"85e64cbb-2da9-4b05-b074-fabf16790f49\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29497185-4tnfv" Jan 31 03:49:09 crc kubenswrapper[4827]: I0131 03:49:09.908587 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/beb2ca1a-f741-4cd2-8ae8-2a61972cd841-srv-cert\") pod \"catalog-operator-68c6474976-g8bbs\" (UID: \"beb2ca1a-f741-4cd2-8ae8-2a61972cd841\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-g8bbs" Jan 31 03:49:09 crc kubenswrapper[4827]: I0131 03:49:09.908605 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7bwrl\" (UniqueName: \"kubernetes.io/projected/480bfdcf-e687-4ecc-b22f-545d3d2a41ac-kube-api-access-7bwrl\") pod \"service-ca-9c57cc56f-fdwxr\" (UID: \"480bfdcf-e687-4ecc-b22f-545d3d2a41ac\") " pod="openshift-service-ca/service-ca-9c57cc56f-fdwxr" Jan 31 03:49:09 crc kubenswrapper[4827]: I0131 03:49:09.908655 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/6cf6cb35-7f31-44f5-ba34-be81eb109101-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-tbsqf\" (UID: \"6cf6cb35-7f31-44f5-ba34-be81eb109101\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-tbsqf" Jan 31 03:49:09 crc kubenswrapper[4827]: I0131 03:49:09.908674 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/cc0facf8-c192-4df4-bb9b-68f123fd7b21-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-lrw5m\" (UID: 
\"cc0facf8-c192-4df4-bb9b-68f123fd7b21\") " pod="openshift-marketplace/marketplace-operator-79b997595-lrw5m" Jan 31 03:49:09 crc kubenswrapper[4827]: I0131 03:49:09.908698 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/a1eac826-76c5-435a-abd1-16f7fe35350f-proxy-tls\") pod \"machine-config-operator-74547568cd-4rp2t\" (UID: \"a1eac826-76c5-435a-abd1-16f7fe35350f\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-4rp2t" Jan 31 03:49:09 crc kubenswrapper[4827]: I0131 03:49:09.908730 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z69df\" (UniqueName: \"kubernetes.io/projected/a1eac826-76c5-435a-abd1-16f7fe35350f-kube-api-access-z69df\") pod \"machine-config-operator-74547568cd-4rp2t\" (UID: \"a1eac826-76c5-435a-abd1-16f7fe35350f\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-4rp2t" Jan 31 03:49:09 crc kubenswrapper[4827]: I0131 03:49:09.908763 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kjncg\" (UniqueName: \"kubernetes.io/projected/e9d269d7-f93e-4959-8e00-b541a0f9d9c2-kube-api-access-kjncg\") pod \"ingress-operator-5b745b69d9-vtzj8\" (UID: \"e9d269d7-f93e-4959-8e00-b541a0f9d9c2\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-vtzj8" Jan 31 03:49:09 crc kubenswrapper[4827]: I0131 03:49:09.908805 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/04eac770-8ff7-453b-a1da-b028636b909c-registry-certificates\") pod \"image-registry-697d97f7c8-s7psb\" (UID: \"04eac770-8ff7-453b-a1da-b028636b909c\") " pod="openshift-image-registry/image-registry-697d97f7c8-s7psb" Jan 31 03:49:09 crc kubenswrapper[4827]: I0131 03:49:09.908822 4827 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/a1eac826-76c5-435a-abd1-16f7fe35350f-images\") pod \"machine-config-operator-74547568cd-4rp2t\" (UID: \"a1eac826-76c5-435a-abd1-16f7fe35350f\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-4rp2t" Jan 31 03:49:09 crc kubenswrapper[4827]: I0131 03:49:09.908862 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/e9d269d7-f93e-4959-8e00-b541a0f9d9c2-metrics-tls\") pod \"ingress-operator-5b745b69d9-vtzj8\" (UID: \"e9d269d7-f93e-4959-8e00-b541a0f9d9c2\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-vtzj8" Jan 31 03:49:09 crc kubenswrapper[4827]: I0131 03:49:09.908931 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/04eac770-8ff7-453b-a1da-b028636b909c-ca-trust-extracted\") pod \"image-registry-697d97f7c8-s7psb\" (UID: \"04eac770-8ff7-453b-a1da-b028636b909c\") " pod="openshift-image-registry/image-registry-697d97f7c8-s7psb" Jan 31 03:49:09 crc kubenswrapper[4827]: I0131 03:49:09.908966 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lrk8h\" (UniqueName: \"kubernetes.io/projected/9782bce9-dbc2-4734-929d-b6ef25dd752e-kube-api-access-lrk8h\") pod \"kube-storage-version-migrator-operator-b67b599dd-txptg\" (UID: \"9782bce9-dbc2-4734-929d-b6ef25dd752e\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-txptg" Jan 31 03:49:09 crc kubenswrapper[4827]: I0131 03:49:09.908993 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/9782bce9-dbc2-4734-929d-b6ef25dd752e-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-txptg\" (UID: \"9782bce9-dbc2-4734-929d-b6ef25dd752e\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-txptg" Jan 31 03:49:09 crc kubenswrapper[4827]: I0131 03:49:09.909012 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wxz9b\" (UniqueName: \"kubernetes.io/projected/04eac770-8ff7-453b-a1da-b028636b909c-kube-api-access-wxz9b\") pod \"image-registry-697d97f7c8-s7psb\" (UID: \"04eac770-8ff7-453b-a1da-b028636b909c\") " pod="openshift-image-registry/image-registry-697d97f7c8-s7psb" Jan 31 03:49:09 crc kubenswrapper[4827]: I0131 03:49:09.909032 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sbbcf\" (UniqueName: \"kubernetes.io/projected/beb2ca1a-f741-4cd2-8ae8-2a61972cd841-kube-api-access-sbbcf\") pod \"catalog-operator-68c6474976-g8bbs\" (UID: \"beb2ca1a-f741-4cd2-8ae8-2a61972cd841\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-g8bbs" Jan 31 03:49:09 crc kubenswrapper[4827]: I0131 03:49:09.909049 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/7bd339a8-f5bb-4f7f-9d9d-e57deef990b8-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-mrplz\" (UID: \"7bd339a8-f5bb-4f7f-9d9d-e57deef990b8\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-mrplz" Jan 31 03:49:09 crc kubenswrapper[4827]: I0131 03:49:09.909088 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/beb2ca1a-f741-4cd2-8ae8-2a61972cd841-profile-collector-cert\") 
pod \"catalog-operator-68c6474976-g8bbs\" (UID: \"beb2ca1a-f741-4cd2-8ae8-2a61972cd841\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-g8bbs" Jan 31 03:49:09 crc kubenswrapper[4827]: I0131 03:49:09.909106 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/04eac770-8ff7-453b-a1da-b028636b909c-installation-pull-secrets\") pod \"image-registry-697d97f7c8-s7psb\" (UID: \"04eac770-8ff7-453b-a1da-b028636b909c\") " pod="openshift-image-registry/image-registry-697d97f7c8-s7psb" Jan 31 03:49:09 crc kubenswrapper[4827]: I0131 03:49:09.909124 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/e9d269d7-f93e-4959-8e00-b541a0f9d9c2-trusted-ca\") pod \"ingress-operator-5b745b69d9-vtzj8\" (UID: \"e9d269d7-f93e-4959-8e00-b541a0f9d9c2\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-vtzj8" Jan 31 03:49:09 crc kubenswrapper[4827]: I0131 03:49:09.909197 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/85e64cbb-2da9-4b05-b074-fabf16790f49-secret-volume\") pod \"collect-profiles-29497185-4tnfv\" (UID: \"85e64cbb-2da9-4b05-b074-fabf16790f49\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29497185-4tnfv" Jan 31 03:49:09 crc kubenswrapper[4827]: I0131 03:49:09.909215 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rn4lt\" (UniqueName: \"kubernetes.io/projected/85e64cbb-2da9-4b05-b074-fabf16790f49-kube-api-access-rn4lt\") pod \"collect-profiles-29497185-4tnfv\" (UID: \"85e64cbb-2da9-4b05-b074-fabf16790f49\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29497185-4tnfv" Jan 31 03:49:09 crc kubenswrapper[4827]: I0131 03:49:09.909241 
4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/cc0facf8-c192-4df4-bb9b-68f123fd7b21-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-lrw5m\" (UID: \"cc0facf8-c192-4df4-bb9b-68f123fd7b21\") " pod="openshift-marketplace/marketplace-operator-79b997595-lrw5m" Jan 31 03:49:09 crc kubenswrapper[4827]: I0131 03:49:09.909258 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/04eac770-8ff7-453b-a1da-b028636b909c-registry-tls\") pod \"image-registry-697d97f7c8-s7psb\" (UID: \"04eac770-8ff7-453b-a1da-b028636b909c\") " pod="openshift-image-registry/image-registry-697d97f7c8-s7psb" Jan 31 03:49:09 crc kubenswrapper[4827]: I0131 03:49:09.909278 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l7kcq\" (UniqueName: \"kubernetes.io/projected/7bd339a8-f5bb-4f7f-9d9d-e57deef990b8-kube-api-access-l7kcq\") pod \"control-plane-machine-set-operator-78cbb6b69f-mrplz\" (UID: \"7bd339a8-f5bb-4f7f-9d9d-e57deef990b8\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-mrplz" Jan 31 03:49:09 crc kubenswrapper[4827]: E0131 03:49:09.914798 4827 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-31 03:49:10.414736114 +0000 UTC m=+143.101816563 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-s7psb" (UID: "04eac770-8ff7-453b-a1da-b028636b909c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 03:49:09 crc kubenswrapper[4827]: I0131 03:49:09.917842 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-hrjxm" Jan 31 03:49:09 crc kubenswrapper[4827]: I0131 03:49:09.936762 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-857f4d67dd-sw4rn" Jan 31 03:49:09 crc kubenswrapper[4827]: I0131 03:49:09.949974 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-g4946" Jan 31 03:49:09 crc kubenswrapper[4827]: I0131 03:49:09.972047 4827 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-cxgr4" Jan 31 03:49:09 crc kubenswrapper[4827]: I0131 03:49:09.981616 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-x2j9j" event={"ID":"e0e07dc1-cd63-49ad-8a43-7a15027a1e74","Type":"ContainerStarted","Data":"07096993d9b9fb3080ebe91fa7d167cee7f885177a300684c7a8dd054be4b048"} Jan 31 03:49:09 crc kubenswrapper[4827]: I0131 03:49:09.981671 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-x2j9j" event={"ID":"e0e07dc1-cd63-49ad-8a43-7a15027a1e74","Type":"ContainerStarted","Data":"0462e3bffc949a74b407f00363e098dfaae555b61313b0333405f87aaf26a091"} Jan 31 03:49:09 crc kubenswrapper[4827]: I0131 03:49:09.988410 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-xxh4j" Jan 31 03:49:10 crc kubenswrapper[4827]: I0131 03:49:10.035012 4827 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-bbd4n" Jan 31 03:49:10 crc kubenswrapper[4827]: I0131 03:49:10.035861 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 31 03:49:10 crc kubenswrapper[4827]: I0131 03:49:10.036082 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9782bce9-dbc2-4734-929d-b6ef25dd752e-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-txptg\" (UID: \"9782bce9-dbc2-4734-929d-b6ef25dd752e\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-txptg" Jan 31 03:49:10 crc kubenswrapper[4827]: I0131 03:49:10.036110 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-plhd9\" (UniqueName: \"kubernetes.io/projected/775f917b-8a39-4e16-93b8-b285000c2758-kube-api-access-plhd9\") pod \"packageserver-d55dfcdfc-m7n8p\" (UID: \"775f917b-8a39-4e16-93b8-b285000c2758\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-m7n8p" Jan 31 03:49:10 crc kubenswrapper[4827]: I0131 03:49:10.036154 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wxz9b\" (UniqueName: \"kubernetes.io/projected/04eac770-8ff7-453b-a1da-b028636b909c-kube-api-access-wxz9b\") pod \"image-registry-697d97f7c8-s7psb\" (UID: \"04eac770-8ff7-453b-a1da-b028636b909c\") " pod="openshift-image-registry/image-registry-697d97f7c8-s7psb" Jan 31 03:49:10 crc kubenswrapper[4827]: I0131 03:49:10.036175 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-sbbcf\" (UniqueName: \"kubernetes.io/projected/beb2ca1a-f741-4cd2-8ae8-2a61972cd841-kube-api-access-sbbcf\") pod \"catalog-operator-68c6474976-g8bbs\" (UID: \"beb2ca1a-f741-4cd2-8ae8-2a61972cd841\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-g8bbs" Jan 31 03:49:10 crc kubenswrapper[4827]: I0131 03:49:10.036192 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/7bd339a8-f5bb-4f7f-9d9d-e57deef990b8-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-mrplz\" (UID: \"7bd339a8-f5bb-4f7f-9d9d-e57deef990b8\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-mrplz" Jan 31 03:49:10 crc kubenswrapper[4827]: I0131 03:49:10.036241 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/775f917b-8a39-4e16-93b8-b285000c2758-webhook-cert\") pod \"packageserver-d55dfcdfc-m7n8p\" (UID: \"775f917b-8a39-4e16-93b8-b285000c2758\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-m7n8p" Jan 31 03:49:10 crc kubenswrapper[4827]: I0131 03:49:10.036261 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/beb2ca1a-f741-4cd2-8ae8-2a61972cd841-profile-collector-cert\") pod \"catalog-operator-68c6474976-g8bbs\" (UID: \"beb2ca1a-f741-4cd2-8ae8-2a61972cd841\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-g8bbs" Jan 31 03:49:10 crc kubenswrapper[4827]: I0131 03:49:10.036289 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/04eac770-8ff7-453b-a1da-b028636b909c-installation-pull-secrets\") pod \"image-registry-697d97f7c8-s7psb\" (UID: 
\"04eac770-8ff7-453b-a1da-b028636b909c\") " pod="openshift-image-registry/image-registry-697d97f7c8-s7psb" Jan 31 03:49:10 crc kubenswrapper[4827]: I0131 03:49:10.036331 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/e9d269d7-f93e-4959-8e00-b541a0f9d9c2-trusted-ca\") pod \"ingress-operator-5b745b69d9-vtzj8\" (UID: \"e9d269d7-f93e-4959-8e00-b541a0f9d9c2\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-vtzj8" Jan 31 03:49:10 crc kubenswrapper[4827]: I0131 03:49:10.036349 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/9cf25edb-a1c1-4077-97dc-8f87363a8d4e-csi-data-dir\") pod \"csi-hostpathplugin-7h44q\" (UID: \"9cf25edb-a1c1-4077-97dc-8f87363a8d4e\") " pod="hostpath-provisioner/csi-hostpathplugin-7h44q" Jan 31 03:49:10 crc kubenswrapper[4827]: I0131 03:49:10.036367 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-82nm4\" (UniqueName: \"kubernetes.io/projected/9cf25edb-a1c1-4077-97dc-8f87363a8d4e-kube-api-access-82nm4\") pod \"csi-hostpathplugin-7h44q\" (UID: \"9cf25edb-a1c1-4077-97dc-8f87363a8d4e\") " pod="hostpath-provisioner/csi-hostpathplugin-7h44q" Jan 31 03:49:10 crc kubenswrapper[4827]: I0131 03:49:10.036402 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/778d0bb3-9a32-4ff4-8d55-554b42e2a847-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-rdh9d\" (UID: \"778d0bb3-9a32-4ff4-8d55-554b42e2a847\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-rdh9d" Jan 31 03:49:10 crc kubenswrapper[4827]: I0131 03:49:10.036433 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" 
(UniqueName: \"kubernetes.io/secret/85e64cbb-2da9-4b05-b074-fabf16790f49-secret-volume\") pod \"collect-profiles-29497185-4tnfv\" (UID: \"85e64cbb-2da9-4b05-b074-fabf16790f49\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29497185-4tnfv" Jan 31 03:49:10 crc kubenswrapper[4827]: I0131 03:49:10.036452 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rn4lt\" (UniqueName: \"kubernetes.io/projected/85e64cbb-2da9-4b05-b074-fabf16790f49-kube-api-access-rn4lt\") pod \"collect-profiles-29497185-4tnfv\" (UID: \"85e64cbb-2da9-4b05-b074-fabf16790f49\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29497185-4tnfv" Jan 31 03:49:10 crc kubenswrapper[4827]: I0131 03:49:10.036487 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/cc0facf8-c192-4df4-bb9b-68f123fd7b21-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-lrw5m\" (UID: \"cc0facf8-c192-4df4-bb9b-68f123fd7b21\") " pod="openshift-marketplace/marketplace-operator-79b997595-lrw5m" Jan 31 03:49:10 crc kubenswrapper[4827]: I0131 03:49:10.036508 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l7kcq\" (UniqueName: \"kubernetes.io/projected/7bd339a8-f5bb-4f7f-9d9d-e57deef990b8-kube-api-access-l7kcq\") pod \"control-plane-machine-set-operator-78cbb6b69f-mrplz\" (UID: \"7bd339a8-f5bb-4f7f-9d9d-e57deef990b8\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-mrplz" Jan 31 03:49:10 crc kubenswrapper[4827]: I0131 03:49:10.036524 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/04eac770-8ff7-453b-a1da-b028636b909c-registry-tls\") pod \"image-registry-697d97f7c8-s7psb\" (UID: \"04eac770-8ff7-453b-a1da-b028636b909c\") " pod="openshift-image-registry/image-registry-697d97f7c8-s7psb" 
Jan 31 03:49:10 crc kubenswrapper[4827]: I0131 03:49:10.036552 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/9cf25edb-a1c1-4077-97dc-8f87363a8d4e-registration-dir\") pod \"csi-hostpathplugin-7h44q\" (UID: \"9cf25edb-a1c1-4077-97dc-8f87363a8d4e\") " pod="hostpath-provisioner/csi-hostpathplugin-7h44q" Jan 31 03:49:10 crc kubenswrapper[4827]: I0131 03:49:10.036569 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/aaf5e041-e3bb-423e-a413-c5f999e04bda-cert\") pod \"ingress-canary-t79lc\" (UID: \"aaf5e041-e3bb-423e-a413-c5f999e04bda\") " pod="openshift-ingress-canary/ingress-canary-t79lc" Jan 31 03:49:10 crc kubenswrapper[4827]: I0131 03:49:10.036608 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-76bww\" (UniqueName: \"kubernetes.io/projected/cc0facf8-c192-4df4-bb9b-68f123fd7b21-kube-api-access-76bww\") pod \"marketplace-operator-79b997595-lrw5m\" (UID: \"cc0facf8-c192-4df4-bb9b-68f123fd7b21\") " pod="openshift-marketplace/marketplace-operator-79b997595-lrw5m" Jan 31 03:49:10 crc kubenswrapper[4827]: I0131 03:49:10.036672 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/05b0b4af-2e92-4831-a30e-cb951f41155c-config-volume\") pod \"dns-default-zhrf7\" (UID: \"05b0b4af-2e92-4831-a30e-cb951f41155c\") " pod="openshift-dns/dns-default-zhrf7" Jan 31 03:49:10 crc kubenswrapper[4827]: I0131 03:49:10.036742 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wns8g\" (UniqueName: \"kubernetes.io/projected/dd044aa4-c4ad-4b0c-ae2e-c7c59dbe7df3-kube-api-access-wns8g\") pod \"machine-config-server-wlxzs\" (UID: \"dd044aa4-c4ad-4b0c-ae2e-c7c59dbe7df3\") " 
pod="openshift-machine-config-operator/machine-config-server-wlxzs" Jan 31 03:49:10 crc kubenswrapper[4827]: I0131 03:49:10.036772 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/04eac770-8ff7-453b-a1da-b028636b909c-bound-sa-token\") pod \"image-registry-697d97f7c8-s7psb\" (UID: \"04eac770-8ff7-453b-a1da-b028636b909c\") " pod="openshift-image-registry/image-registry-697d97f7c8-s7psb" Jan 31 03:49:10 crc kubenswrapper[4827]: I0131 03:49:10.036810 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/e9d269d7-f93e-4959-8e00-b541a0f9d9c2-bound-sa-token\") pod \"ingress-operator-5b745b69d9-vtzj8\" (UID: \"e9d269d7-f93e-4959-8e00-b541a0f9d9c2\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-vtzj8" Jan 31 03:49:10 crc kubenswrapper[4827]: I0131 03:49:10.036859 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w8q46\" (UniqueName: \"kubernetes.io/projected/6cf6cb35-7f31-44f5-ba34-be81eb109101-kube-api-access-w8q46\") pod \"package-server-manager-789f6589d5-tbsqf\" (UID: \"6cf6cb35-7f31-44f5-ba34-be81eb109101\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-tbsqf" Jan 31 03:49:10 crc kubenswrapper[4827]: I0131 03:49:10.037181 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9782bce9-dbc2-4734-929d-b6ef25dd752e-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-txptg\" (UID: \"9782bce9-dbc2-4734-929d-b6ef25dd752e\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-txptg" Jan 31 03:49:10 crc kubenswrapper[4827]: I0131 03:49:10.037207 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access\" (UniqueName: \"kubernetes.io/projected/49856c82-939f-4f25-8b1f-60029c62d43c-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-6qz7r\" (UID: \"49856c82-939f-4f25-8b1f-60029c62d43c\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-6qz7r" Jan 31 03:49:10 crc kubenswrapper[4827]: I0131 03:49:10.037244 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/04eac770-8ff7-453b-a1da-b028636b909c-trusted-ca\") pod \"image-registry-697d97f7c8-s7psb\" (UID: \"04eac770-8ff7-453b-a1da-b028636b909c\") " pod="openshift-image-registry/image-registry-697d97f7c8-s7psb" Jan 31 03:49:10 crc kubenswrapper[4827]: I0131 03:49:10.037274 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/a1eac826-76c5-435a-abd1-16f7fe35350f-auth-proxy-config\") pod \"machine-config-operator-74547568cd-4rp2t\" (UID: \"a1eac826-76c5-435a-abd1-16f7fe35350f\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-4rp2t" Jan 31 03:49:10 crc kubenswrapper[4827]: I0131 03:49:10.037310 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rhnj4\" (UniqueName: \"kubernetes.io/projected/63af2706-4f52-4d40-941f-7575394bfefa-kube-api-access-rhnj4\") pod \"migrator-59844c95c7-xrt7q\" (UID: \"63af2706-4f52-4d40-941f-7575394bfefa\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-xrt7q" Jan 31 03:49:10 crc kubenswrapper[4827]: I0131 03:49:10.037380 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/49856c82-939f-4f25-8b1f-60029c62d43c-config\") pod \"kube-apiserver-operator-766d6c64bb-6qz7r\" (UID: \"49856c82-939f-4f25-8b1f-60029c62d43c\") " 
pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-6qz7r" Jan 31 03:49:10 crc kubenswrapper[4827]: I0131 03:49:10.037406 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/480bfdcf-e687-4ecc-b22f-545d3d2a41ac-signing-cabundle\") pod \"service-ca-9c57cc56f-fdwxr\" (UID: \"480bfdcf-e687-4ecc-b22f-545d3d2a41ac\") " pod="openshift-service-ca/service-ca-9c57cc56f-fdwxr" Jan 31 03:49:10 crc kubenswrapper[4827]: I0131 03:49:10.037461 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/778d0bb3-9a32-4ff4-8d55-554b42e2a847-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-rdh9d\" (UID: \"778d0bb3-9a32-4ff4-8d55-554b42e2a847\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-rdh9d" Jan 31 03:49:10 crc kubenswrapper[4827]: I0131 03:49:10.037515 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/480bfdcf-e687-4ecc-b22f-545d3d2a41ac-signing-key\") pod \"service-ca-9c57cc56f-fdwxr\" (UID: \"480bfdcf-e687-4ecc-b22f-545d3d2a41ac\") " pod="openshift-service-ca/service-ca-9c57cc56f-fdwxr" Jan 31 03:49:10 crc kubenswrapper[4827]: I0131 03:49:10.037532 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-44hhm\" (UniqueName: \"kubernetes.io/projected/778d0bb3-9a32-4ff4-8d55-554b42e2a847-kube-api-access-44hhm\") pod \"cluster-image-registry-operator-dc59b4c8b-rdh9d\" (UID: \"778d0bb3-9a32-4ff4-8d55-554b42e2a847\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-rdh9d" Jan 31 03:49:10 crc kubenswrapper[4827]: I0131 03:49:10.037561 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: 
\"kubernetes.io/configmap/85e64cbb-2da9-4b05-b074-fabf16790f49-config-volume\") pod \"collect-profiles-29497185-4tnfv\" (UID: \"85e64cbb-2da9-4b05-b074-fabf16790f49\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29497185-4tnfv" Jan 31 03:49:10 crc kubenswrapper[4827]: I0131 03:49:10.037580 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/beb2ca1a-f741-4cd2-8ae8-2a61972cd841-srv-cert\") pod \"catalog-operator-68c6474976-g8bbs\" (UID: \"beb2ca1a-f741-4cd2-8ae8-2a61972cd841\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-g8bbs" Jan 31 03:49:10 crc kubenswrapper[4827]: I0131 03:49:10.037610 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7bwrl\" (UniqueName: \"kubernetes.io/projected/480bfdcf-e687-4ecc-b22f-545d3d2a41ac-kube-api-access-7bwrl\") pod \"service-ca-9c57cc56f-fdwxr\" (UID: \"480bfdcf-e687-4ecc-b22f-545d3d2a41ac\") " pod="openshift-service-ca/service-ca-9c57cc56f-fdwxr" Jan 31 03:49:10 crc kubenswrapper[4827]: I0131 03:49:10.037647 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/dd044aa4-c4ad-4b0c-ae2e-c7c59dbe7df3-certs\") pod \"machine-config-server-wlxzs\" (UID: \"dd044aa4-c4ad-4b0c-ae2e-c7c59dbe7df3\") " pod="openshift-machine-config-operator/machine-config-server-wlxzs" Jan 31 03:49:10 crc kubenswrapper[4827]: I0131 03:49:10.037675 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/6cf6cb35-7f31-44f5-ba34-be81eb109101-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-tbsqf\" (UID: \"6cf6cb35-7f31-44f5-ba34-be81eb109101\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-tbsqf" Jan 31 03:49:10 crc kubenswrapper[4827]: I0131 
03:49:10.037703 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/cc0facf8-c192-4df4-bb9b-68f123fd7b21-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-lrw5m\" (UID: \"cc0facf8-c192-4df4-bb9b-68f123fd7b21\") " pod="openshift-marketplace/marketplace-operator-79b997595-lrw5m" Jan 31 03:49:10 crc kubenswrapper[4827]: I0131 03:49:10.037731 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/a1eac826-76c5-435a-abd1-16f7fe35350f-proxy-tls\") pod \"machine-config-operator-74547568cd-4rp2t\" (UID: \"a1eac826-76c5-435a-abd1-16f7fe35350f\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-4rp2t" Jan 31 03:49:10 crc kubenswrapper[4827]: I0131 03:49:10.037775 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z69df\" (UniqueName: \"kubernetes.io/projected/a1eac826-76c5-435a-abd1-16f7fe35350f-kube-api-access-z69df\") pod \"machine-config-operator-74547568cd-4rp2t\" (UID: \"a1eac826-76c5-435a-abd1-16f7fe35350f\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-4rp2t" Jan 31 03:49:10 crc kubenswrapper[4827]: I0131 03:49:10.037865 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sz6fv\" (UniqueName: \"kubernetes.io/projected/aaf5e041-e3bb-423e-a413-c5f999e04bda-kube-api-access-sz6fv\") pod \"ingress-canary-t79lc\" (UID: \"aaf5e041-e3bb-423e-a413-c5f999e04bda\") " pod="openshift-ingress-canary/ingress-canary-t79lc" Jan 31 03:49:10 crc kubenswrapper[4827]: I0131 03:49:10.037955 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kjncg\" (UniqueName: \"kubernetes.io/projected/e9d269d7-f93e-4959-8e00-b541a0f9d9c2-kube-api-access-kjncg\") pod \"ingress-operator-5b745b69d9-vtzj8\" 
(UID: \"e9d269d7-f93e-4959-8e00-b541a0f9d9c2\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-vtzj8" Jan 31 03:49:10 crc kubenswrapper[4827]: I0131 03:49:10.037976 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/9cf25edb-a1c1-4077-97dc-8f87363a8d4e-mountpoint-dir\") pod \"csi-hostpathplugin-7h44q\" (UID: \"9cf25edb-a1c1-4077-97dc-8f87363a8d4e\") " pod="hostpath-provisioner/csi-hostpathplugin-7h44q" Jan 31 03:49:10 crc kubenswrapper[4827]: I0131 03:49:10.038025 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/04eac770-8ff7-453b-a1da-b028636b909c-registry-certificates\") pod \"image-registry-697d97f7c8-s7psb\" (UID: \"04eac770-8ff7-453b-a1da-b028636b909c\") " pod="openshift-image-registry/image-registry-697d97f7c8-s7psb" Jan 31 03:49:10 crc kubenswrapper[4827]: I0131 03:49:10.038109 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/a1eac826-76c5-435a-abd1-16f7fe35350f-images\") pod \"machine-config-operator-74547568cd-4rp2t\" (UID: \"a1eac826-76c5-435a-abd1-16f7fe35350f\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-4rp2t" Jan 31 03:49:10 crc kubenswrapper[4827]: I0131 03:49:10.038176 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/775f917b-8a39-4e16-93b8-b285000c2758-tmpfs\") pod \"packageserver-d55dfcdfc-m7n8p\" (UID: \"775f917b-8a39-4e16-93b8-b285000c2758\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-m7n8p" Jan 31 03:49:10 crc kubenswrapper[4827]: I0131 03:49:10.038196 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-bootstrap-token\" (UniqueName: 
\"kubernetes.io/secret/dd044aa4-c4ad-4b0c-ae2e-c7c59dbe7df3-node-bootstrap-token\") pod \"machine-config-server-wlxzs\" (UID: \"dd044aa4-c4ad-4b0c-ae2e-c7c59dbe7df3\") " pod="openshift-machine-config-operator/machine-config-server-wlxzs" Jan 31 03:49:10 crc kubenswrapper[4827]: I0131 03:49:10.038222 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/9cf25edb-a1c1-4077-97dc-8f87363a8d4e-socket-dir\") pod \"csi-hostpathplugin-7h44q\" (UID: \"9cf25edb-a1c1-4077-97dc-8f87363a8d4e\") " pod="hostpath-provisioner/csi-hostpathplugin-7h44q" Jan 31 03:49:10 crc kubenswrapper[4827]: I0131 03:49:10.038239 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/778d0bb3-9a32-4ff4-8d55-554b42e2a847-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-rdh9d\" (UID: \"778d0bb3-9a32-4ff4-8d55-554b42e2a847\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-rdh9d" Jan 31 03:49:10 crc kubenswrapper[4827]: I0131 03:49:10.038258 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/e9d269d7-f93e-4959-8e00-b541a0f9d9c2-metrics-tls\") pod \"ingress-operator-5b745b69d9-vtzj8\" (UID: \"e9d269d7-f93e-4959-8e00-b541a0f9d9c2\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-vtzj8" Jan 31 03:49:10 crc kubenswrapper[4827]: I0131 03:49:10.038275 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rcqk6\" (UniqueName: \"kubernetes.io/projected/05b0b4af-2e92-4831-a30e-cb951f41155c-kube-api-access-rcqk6\") pod \"dns-default-zhrf7\" (UID: \"05b0b4af-2e92-4831-a30e-cb951f41155c\") " pod="openshift-dns/dns-default-zhrf7" Jan 31 03:49:10 crc kubenswrapper[4827]: I0131 03:49:10.038316 4827 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/05b0b4af-2e92-4831-a30e-cb951f41155c-metrics-tls\") pod \"dns-default-zhrf7\" (UID: \"05b0b4af-2e92-4831-a30e-cb951f41155c\") " pod="openshift-dns/dns-default-zhrf7" Jan 31 03:49:10 crc kubenswrapper[4827]: I0131 03:49:10.038365 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/49856c82-939f-4f25-8b1f-60029c62d43c-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-6qz7r\" (UID: \"49856c82-939f-4f25-8b1f-60029c62d43c\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-6qz7r" Jan 31 03:49:10 crc kubenswrapper[4827]: I0131 03:49:10.038383 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/04eac770-8ff7-453b-a1da-b028636b909c-ca-trust-extracted\") pod \"image-registry-697d97f7c8-s7psb\" (UID: \"04eac770-8ff7-453b-a1da-b028636b909c\") " pod="openshift-image-registry/image-registry-697d97f7c8-s7psb" Jan 31 03:49:10 crc kubenswrapper[4827]: I0131 03:49:10.038424 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/775f917b-8a39-4e16-93b8-b285000c2758-apiservice-cert\") pod \"packageserver-d55dfcdfc-m7n8p\" (UID: \"775f917b-8a39-4e16-93b8-b285000c2758\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-m7n8p" Jan 31 03:49:10 crc kubenswrapper[4827]: I0131 03:49:10.038440 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/9cf25edb-a1c1-4077-97dc-8f87363a8d4e-plugins-dir\") pod \"csi-hostpathplugin-7h44q\" (UID: \"9cf25edb-a1c1-4077-97dc-8f87363a8d4e\") " pod="hostpath-provisioner/csi-hostpathplugin-7h44q" Jan 
31 03:49:10 crc kubenswrapper[4827]: I0131 03:49:10.038462 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lrk8h\" (UniqueName: \"kubernetes.io/projected/9782bce9-dbc2-4734-929d-b6ef25dd752e-kube-api-access-lrk8h\") pod \"kube-storage-version-migrator-operator-b67b599dd-txptg\" (UID: \"9782bce9-dbc2-4734-929d-b6ef25dd752e\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-txptg" Jan 31 03:49:10 crc kubenswrapper[4827]: I0131 03:49:10.041700 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/cc0facf8-c192-4df4-bb9b-68f123fd7b21-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-lrw5m\" (UID: \"cc0facf8-c192-4df4-bb9b-68f123fd7b21\") " pod="openshift-marketplace/marketplace-operator-79b997595-lrw5m" Jan 31 03:49:10 crc kubenswrapper[4827]: E0131 03:49:10.044836 4827 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-31 03:49:10.544803318 +0000 UTC m=+143.231883767 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 03:49:10 crc kubenswrapper[4827]: I0131 03:49:10.046628 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/480bfdcf-e687-4ecc-b22f-545d3d2a41ac-signing-cabundle\") pod \"service-ca-9c57cc56f-fdwxr\" (UID: \"480bfdcf-e687-4ecc-b22f-545d3d2a41ac\") " pod="openshift-service-ca/service-ca-9c57cc56f-fdwxr" Jan 31 03:49:10 crc kubenswrapper[4827]: I0131 03:49:10.047181 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9782bce9-dbc2-4734-929d-b6ef25dd752e-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-txptg\" (UID: \"9782bce9-dbc2-4734-929d-b6ef25dd752e\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-txptg" Jan 31 03:49:10 crc kubenswrapper[4827]: I0131 03:49:10.048439 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/a1eac826-76c5-435a-abd1-16f7fe35350f-images\") pod \"machine-config-operator-74547568cd-4rp2t\" (UID: \"a1eac826-76c5-435a-abd1-16f7fe35350f\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-4rp2t" Jan 31 03:49:10 crc kubenswrapper[4827]: I0131 03:49:10.048654 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/85e64cbb-2da9-4b05-b074-fabf16790f49-config-volume\") pod \"collect-profiles-29497185-4tnfv\" 
(UID: \"85e64cbb-2da9-4b05-b074-fabf16790f49\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29497185-4tnfv" Jan 31 03:49:10 crc kubenswrapper[4827]: I0131 03:49:10.058149 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/04eac770-8ff7-453b-a1da-b028636b909c-ca-trust-extracted\") pod \"image-registry-697d97f7c8-s7psb\" (UID: \"04eac770-8ff7-453b-a1da-b028636b909c\") " pod="openshift-image-registry/image-registry-697d97f7c8-s7psb" Jan 31 03:49:10 crc kubenswrapper[4827]: I0131 03:49:10.058549 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress/router-default-5444994796-dxfqc" Jan 31 03:49:10 crc kubenswrapper[4827]: I0131 03:49:10.063758 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/04eac770-8ff7-453b-a1da-b028636b909c-registry-certificates\") pod \"image-registry-697d97f7c8-s7psb\" (UID: \"04eac770-8ff7-453b-a1da-b028636b909c\") " pod="openshift-image-registry/image-registry-697d97f7c8-s7psb" Jan 31 03:49:10 crc kubenswrapper[4827]: I0131 03:49:10.064820 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/a1eac826-76c5-435a-abd1-16f7fe35350f-auth-proxy-config\") pod \"machine-config-operator-74547568cd-4rp2t\" (UID: \"a1eac826-76c5-435a-abd1-16f7fe35350f\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-4rp2t" Jan 31 03:49:10 crc kubenswrapper[4827]: I0131 03:49:10.066603 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/04eac770-8ff7-453b-a1da-b028636b909c-trusted-ca\") pod \"image-registry-697d97f7c8-s7psb\" (UID: \"04eac770-8ff7-453b-a1da-b028636b909c\") " pod="openshift-image-registry/image-registry-697d97f7c8-s7psb" Jan 31 03:49:10 crc 
kubenswrapper[4827]: I0131 03:49:10.068557 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-dqndk" event={"ID":"4a32abae-914d-4102-9e89-817922ff06ca","Type":"ContainerStarted","Data":"ba07adfcba0a71c184175c827f3ea62b122e84235f83800de45097a62669d22a"}
Jan 31 03:49:10 crc kubenswrapper[4827]: I0131 03:49:10.070515 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/e9d269d7-f93e-4959-8e00-b541a0f9d9c2-metrics-tls\") pod \"ingress-operator-5b745b69d9-vtzj8\" (UID: \"e9d269d7-f93e-4959-8e00-b541a0f9d9c2\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-vtzj8"
Jan 31 03:49:10 crc kubenswrapper[4827]: I0131 03:49:10.079211 4827 patch_prober.go:28] interesting pod/downloads-7954f5f757-sgzkm container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.9:8080/\": dial tcp 10.217.0.9:8080: connect: connection refused" start-of-body=
Jan 31 03:49:10 crc kubenswrapper[4827]: I0131 03:49:10.079504 4827 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-sgzkm" podUID="6e659a59-19ab-4c91-98ec-db3042ac1d4b" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.9:8080/\": dial tcp 10.217.0.9:8080: connect: connection refused"
Jan 31 03:49:10 crc kubenswrapper[4827]: I0131 03:49:10.079967 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/e9d269d7-f93e-4959-8e00-b541a0f9d9c2-trusted-ca\") pod \"ingress-operator-5b745b69d9-vtzj8\" (UID: \"e9d269d7-f93e-4959-8e00-b541a0f9d9c2\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-vtzj8"
Jan 31 03:49:10 crc kubenswrapper[4827]: I0131 03:49:10.093736 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9782bce9-dbc2-4734-929d-b6ef25dd752e-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-txptg\" (UID: \"9782bce9-dbc2-4734-929d-b6ef25dd752e\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-txptg"
Jan 31 03:49:10 crc kubenswrapper[4827]: I0131 03:49:10.097052 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/7bd339a8-f5bb-4f7f-9d9d-e57deef990b8-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-mrplz\" (UID: \"7bd339a8-f5bb-4f7f-9d9d-e57deef990b8\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-mrplz"
Jan 31 03:49:10 crc kubenswrapper[4827]: I0131 03:49:10.098957 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/04eac770-8ff7-453b-a1da-b028636b909c-installation-pull-secrets\") pod \"image-registry-697d97f7c8-s7psb\" (UID: \"04eac770-8ff7-453b-a1da-b028636b909c\") " pod="openshift-image-registry/image-registry-697d97f7c8-s7psb"
Jan 31 03:49:10 crc kubenswrapper[4827]: I0131 03:49:10.099645 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/beb2ca1a-f741-4cd2-8ae8-2a61972cd841-profile-collector-cert\") pod \"catalog-operator-68c6474976-g8bbs\" (UID: \"beb2ca1a-f741-4cd2-8ae8-2a61972cd841\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-g8bbs"
Jan 31 03:49:10 crc kubenswrapper[4827]: I0131 03:49:10.099681 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/04eac770-8ff7-453b-a1da-b028636b909c-registry-tls\") pod \"image-registry-697d97f7c8-s7psb\" (UID: \"04eac770-8ff7-453b-a1da-b028636b909c\") " pod="openshift-image-registry/image-registry-697d97f7c8-s7psb"
Jan 31 03:49:10 crc kubenswrapper[4827]: I0131 03:49:10.101450 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/480bfdcf-e687-4ecc-b22f-545d3d2a41ac-signing-key\") pod \"service-ca-9c57cc56f-fdwxr\" (UID: \"480bfdcf-e687-4ecc-b22f-545d3d2a41ac\") " pod="openshift-service-ca/service-ca-9c57cc56f-fdwxr"
Jan 31 03:49:10 crc kubenswrapper[4827]: I0131 03:49:10.102598 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/85e64cbb-2da9-4b05-b074-fabf16790f49-secret-volume\") pod \"collect-profiles-29497185-4tnfv\" (UID: \"85e64cbb-2da9-4b05-b074-fabf16790f49\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29497185-4tnfv"
Jan 31 03:49:10 crc kubenswrapper[4827]: I0131 03:49:10.102629 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/a1eac826-76c5-435a-abd1-16f7fe35350f-proxy-tls\") pod \"machine-config-operator-74547568cd-4rp2t\" (UID: \"a1eac826-76c5-435a-abd1-16f7fe35350f\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-4rp2t"
Jan 31 03:49:10 crc kubenswrapper[4827]: I0131 03:49:10.104835 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/beb2ca1a-f741-4cd2-8ae8-2a61972cd841-srv-cert\") pod \"catalog-operator-68c6474976-g8bbs\" (UID: \"beb2ca1a-f741-4cd2-8ae8-2a61972cd841\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-g8bbs"
Jan 31 03:49:10 crc kubenswrapper[4827]: I0131 03:49:10.106707 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lrk8h\" (UniqueName: \"kubernetes.io/projected/9782bce9-dbc2-4734-929d-b6ef25dd752e-kube-api-access-lrk8h\") pod \"kube-storage-version-migrator-operator-b67b599dd-txptg\" (UID: \"9782bce9-dbc2-4734-929d-b6ef25dd752e\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-txptg"
Jan 31 03:49:10 crc kubenswrapper[4827]: I0131 03:49:10.108678 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/cc0facf8-c192-4df4-bb9b-68f123fd7b21-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-lrw5m\" (UID: \"cc0facf8-c192-4df4-bb9b-68f123fd7b21\") " pod="openshift-marketplace/marketplace-operator-79b997595-lrw5m"
Jan 31 03:49:10 crc kubenswrapper[4827]: I0131 03:49:10.114698 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/6cf6cb35-7f31-44f5-ba34-be81eb109101-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-tbsqf\" (UID: \"6cf6cb35-7f31-44f5-ba34-be81eb109101\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-tbsqf"
Jan 31 03:49:10 crc kubenswrapper[4827]: I0131 03:49:10.115961 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wxz9b\" (UniqueName: \"kubernetes.io/projected/04eac770-8ff7-453b-a1da-b028636b909c-kube-api-access-wxz9b\") pod \"image-registry-697d97f7c8-s7psb\" (UID: \"04eac770-8ff7-453b-a1da-b028636b909c\") " pod="openshift-image-registry/image-registry-697d97f7c8-s7psb"
Jan 31 03:49:10 crc kubenswrapper[4827]: I0131 03:49:10.116276 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sbbcf\" (UniqueName: \"kubernetes.io/projected/beb2ca1a-f741-4cd2-8ae8-2a61972cd841-kube-api-access-sbbcf\") pod \"catalog-operator-68c6474976-g8bbs\" (UID: \"beb2ca1a-f741-4cd2-8ae8-2a61972cd841\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-g8bbs"
Jan 31 03:49:10 crc kubenswrapper[4827]: I0131 03:49:10.118853 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z69df\" (UniqueName: \"kubernetes.io/projected/a1eac826-76c5-435a-abd1-16f7fe35350f-kube-api-access-z69df\") pod \"machine-config-operator-74547568cd-4rp2t\" (UID: \"a1eac826-76c5-435a-abd1-16f7fe35350f\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-4rp2t"
Jan 31 03:49:10 crc kubenswrapper[4827]: I0131 03:49:10.140099 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-plhd9\" (UniqueName: \"kubernetes.io/projected/775f917b-8a39-4e16-93b8-b285000c2758-kube-api-access-plhd9\") pod \"packageserver-d55dfcdfc-m7n8p\" (UID: \"775f917b-8a39-4e16-93b8-b285000c2758\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-m7n8p"
Jan 31 03:49:10 crc kubenswrapper[4827]: I0131 03:49:10.140214 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/775f917b-8a39-4e16-93b8-b285000c2758-webhook-cert\") pod \"packageserver-d55dfcdfc-m7n8p\" (UID: \"775f917b-8a39-4e16-93b8-b285000c2758\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-m7n8p"
Jan 31 03:49:10 crc kubenswrapper[4827]: I0131 03:49:10.140245 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/9cf25edb-a1c1-4077-97dc-8f87363a8d4e-csi-data-dir\") pod \"csi-hostpathplugin-7h44q\" (UID: \"9cf25edb-a1c1-4077-97dc-8f87363a8d4e\") " pod="hostpath-provisioner/csi-hostpathplugin-7h44q"
Jan 31 03:49:10 crc kubenswrapper[4827]: I0131 03:49:10.140269 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-82nm4\" (UniqueName: \"kubernetes.io/projected/9cf25edb-a1c1-4077-97dc-8f87363a8d4e-kube-api-access-82nm4\") pod \"csi-hostpathplugin-7h44q\" (UID: \"9cf25edb-a1c1-4077-97dc-8f87363a8d4e\") " pod="hostpath-provisioner/csi-hostpathplugin-7h44q"
Jan 31 03:49:10 crc kubenswrapper[4827]: I0131 03:49:10.140306 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/778d0bb3-9a32-4ff4-8d55-554b42e2a847-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-rdh9d\" (UID: \"778d0bb3-9a32-4ff4-8d55-554b42e2a847\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-rdh9d"
Jan 31 03:49:10 crc kubenswrapper[4827]: I0131 03:49:10.140349 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/aaf5e041-e3bb-423e-a413-c5f999e04bda-cert\") pod \"ingress-canary-t79lc\" (UID: \"aaf5e041-e3bb-423e-a413-c5f999e04bda\") " pod="openshift-ingress-canary/ingress-canary-t79lc"
Jan 31 03:49:10 crc kubenswrapper[4827]: I0131 03:49:10.140374 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/9cf25edb-a1c1-4077-97dc-8f87363a8d4e-registration-dir\") pod \"csi-hostpathplugin-7h44q\" (UID: \"9cf25edb-a1c1-4077-97dc-8f87363a8d4e\") " pod="hostpath-provisioner/csi-hostpathplugin-7h44q"
Jan 31 03:49:10 crc kubenswrapper[4827]: I0131 03:49:10.140417 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/05b0b4af-2e92-4831-a30e-cb951f41155c-config-volume\") pod \"dns-default-zhrf7\" (UID: \"05b0b4af-2e92-4831-a30e-cb951f41155c\") " pod="openshift-dns/dns-default-zhrf7"
Jan 31 03:49:10 crc kubenswrapper[4827]: I0131 03:49:10.140451 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wns8g\" (UniqueName: \"kubernetes.io/projected/dd044aa4-c4ad-4b0c-ae2e-c7c59dbe7df3-kube-api-access-wns8g\") pod \"machine-config-server-wlxzs\" (UID: \"dd044aa4-c4ad-4b0c-ae2e-c7c59dbe7df3\") " pod="openshift-machine-config-operator/machine-config-server-wlxzs"
Jan 31 03:49:10 crc kubenswrapper[4827]: I0131 03:49:10.140528 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/49856c82-939f-4f25-8b1f-60029c62d43c-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-6qz7r\" (UID: \"49856c82-939f-4f25-8b1f-60029c62d43c\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-6qz7r"
Jan 31 03:49:10 crc kubenswrapper[4827]: I0131 03:49:10.140572 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/49856c82-939f-4f25-8b1f-60029c62d43c-config\") pod \"kube-apiserver-operator-766d6c64bb-6qz7r\" (UID: \"49856c82-939f-4f25-8b1f-60029c62d43c\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-6qz7r"
Jan 31 03:49:10 crc kubenswrapper[4827]: I0131 03:49:10.140626 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-s7psb\" (UID: \"04eac770-8ff7-453b-a1da-b028636b909c\") " pod="openshift-image-registry/image-registry-697d97f7c8-s7psb"
Jan 31 03:49:10 crc kubenswrapper[4827]: I0131 03:49:10.140654 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/778d0bb3-9a32-4ff4-8d55-554b42e2a847-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-rdh9d\" (UID: \"778d0bb3-9a32-4ff4-8d55-554b42e2a847\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-rdh9d"
Jan 31 03:49:10 crc kubenswrapper[4827]: I0131 03:49:10.140677 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-44hhm\" (UniqueName: \"kubernetes.io/projected/778d0bb3-9a32-4ff4-8d55-554b42e2a847-kube-api-access-44hhm\") pod \"cluster-image-registry-operator-dc59b4c8b-rdh9d\" (UID: \"778d0bb3-9a32-4ff4-8d55-554b42e2a847\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-rdh9d"
Jan 31 03:49:10 crc kubenswrapper[4827]: I0131 03:49:10.140781 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/dd044aa4-c4ad-4b0c-ae2e-c7c59dbe7df3-certs\") pod \"machine-config-server-wlxzs\" (UID: \"dd044aa4-c4ad-4b0c-ae2e-c7c59dbe7df3\") " pod="openshift-machine-config-operator/machine-config-server-wlxzs"
Jan 31 03:49:10 crc kubenswrapper[4827]: I0131 03:49:10.140916 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sz6fv\" (UniqueName: \"kubernetes.io/projected/aaf5e041-e3bb-423e-a413-c5f999e04bda-kube-api-access-sz6fv\") pod \"ingress-canary-t79lc\" (UID: \"aaf5e041-e3bb-423e-a413-c5f999e04bda\") " pod="openshift-ingress-canary/ingress-canary-t79lc"
Jan 31 03:49:10 crc kubenswrapper[4827]: I0131 03:49:10.140948 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/9cf25edb-a1c1-4077-97dc-8f87363a8d4e-mountpoint-dir\") pod \"csi-hostpathplugin-7h44q\" (UID: \"9cf25edb-a1c1-4077-97dc-8f87363a8d4e\") " pod="hostpath-provisioner/csi-hostpathplugin-7h44q"
Jan 31 03:49:10 crc kubenswrapper[4827]: I0131 03:49:10.141066 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/775f917b-8a39-4e16-93b8-b285000c2758-tmpfs\") pod \"packageserver-d55dfcdfc-m7n8p\" (UID: \"775f917b-8a39-4e16-93b8-b285000c2758\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-m7n8p"
Jan 31 03:49:10 crc kubenswrapper[4827]: I0131 03:49:10.141086 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/dd044aa4-c4ad-4b0c-ae2e-c7c59dbe7df3-node-bootstrap-token\") pod \"machine-config-server-wlxzs\" (UID: \"dd044aa4-c4ad-4b0c-ae2e-c7c59dbe7df3\") " pod="openshift-machine-config-operator/machine-config-server-wlxzs"
Jan 31 03:49:10 crc kubenswrapper[4827]: I0131 03:49:10.141107 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/9cf25edb-a1c1-4077-97dc-8f87363a8d4e-socket-dir\") pod \"csi-hostpathplugin-7h44q\" (UID: \"9cf25edb-a1c1-4077-97dc-8f87363a8d4e\") " pod="hostpath-provisioner/csi-hostpathplugin-7h44q"
Jan 31 03:49:10 crc kubenswrapper[4827]: I0131 03:49:10.141161 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/778d0bb3-9a32-4ff4-8d55-554b42e2a847-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-rdh9d\" (UID: \"778d0bb3-9a32-4ff4-8d55-554b42e2a847\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-rdh9d"
Jan 31 03:49:10 crc kubenswrapper[4827]: I0131 03:49:10.141188 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rcqk6\" (UniqueName: \"kubernetes.io/projected/05b0b4af-2e92-4831-a30e-cb951f41155c-kube-api-access-rcqk6\") pod \"dns-default-zhrf7\" (UID: \"05b0b4af-2e92-4831-a30e-cb951f41155c\") " pod="openshift-dns/dns-default-zhrf7"
Jan 31 03:49:10 crc kubenswrapper[4827]: I0131 03:49:10.141235 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/05b0b4af-2e92-4831-a30e-cb951f41155c-metrics-tls\") pod \"dns-default-zhrf7\" (UID: \"05b0b4af-2e92-4831-a30e-cb951f41155c\") " pod="openshift-dns/dns-default-zhrf7"
Jan 31 03:49:10 crc kubenswrapper[4827]: I0131 03:49:10.141260 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/49856c82-939f-4f25-8b1f-60029c62d43c-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-6qz7r\" (UID: \"49856c82-939f-4f25-8b1f-60029c62d43c\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-6qz7r"
Jan 31 03:49:10 crc kubenswrapper[4827]: I0131 03:49:10.141309 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/775f917b-8a39-4e16-93b8-b285000c2758-apiservice-cert\") pod \"packageserver-d55dfcdfc-m7n8p\" (UID: \"775f917b-8a39-4e16-93b8-b285000c2758\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-m7n8p"
Jan 31 03:49:10 crc kubenswrapper[4827]: I0131 03:49:10.141327 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/9cf25edb-a1c1-4077-97dc-8f87363a8d4e-plugins-dir\") pod \"csi-hostpathplugin-7h44q\" (UID: \"9cf25edb-a1c1-4077-97dc-8f87363a8d4e\") " pod="hostpath-provisioner/csi-hostpathplugin-7h44q"
Jan 31 03:49:10 crc kubenswrapper[4827]: I0131 03:49:10.141779 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/9cf25edb-a1c1-4077-97dc-8f87363a8d4e-plugins-dir\") pod \"csi-hostpathplugin-7h44q\" (UID: \"9cf25edb-a1c1-4077-97dc-8f87363a8d4e\") " pod="hostpath-provisioner/csi-hostpathplugin-7h44q"
Jan 31 03:49:10 crc kubenswrapper[4827]: I0131 03:49:10.140524 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/9cf25edb-a1c1-4077-97dc-8f87363a8d4e-csi-data-dir\") pod \"csi-hostpathplugin-7h44q\" (UID: \"9cf25edb-a1c1-4077-97dc-8f87363a8d4e\") " pod="hostpath-provisioner/csi-hostpathplugin-7h44q"
Jan 31 03:49:10 crc kubenswrapper[4827]: E0131 03:49:10.143064 4827 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-31 03:49:10.643047754 +0000 UTC m=+143.330128203 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-s7psb" (UID: "04eac770-8ff7-453b-a1da-b028636b909c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 31 03:49:10 crc kubenswrapper[4827]: I0131 03:49:10.143529 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/49856c82-939f-4f25-8b1f-60029c62d43c-config\") pod \"kube-apiserver-operator-766d6c64bb-6qz7r\" (UID: \"49856c82-939f-4f25-8b1f-60029c62d43c\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-6qz7r"
Jan 31 03:49:10 crc kubenswrapper[4827]: I0131 03:49:10.144305 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/05b0b4af-2e92-4831-a30e-cb951f41155c-config-volume\") pod \"dns-default-zhrf7\" (UID: \"05b0b4af-2e92-4831-a30e-cb951f41155c\") " pod="openshift-dns/dns-default-zhrf7"
Jan 31 03:49:10 crc kubenswrapper[4827]: I0131 03:49:10.144413 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/9cf25edb-a1c1-4077-97dc-8f87363a8d4e-registration-dir\") pod \"csi-hostpathplugin-7h44q\" (UID: \"9cf25edb-a1c1-4077-97dc-8f87363a8d4e\") " pod="hostpath-provisioner/csi-hostpathplugin-7h44q"
Jan 31 03:49:10 crc kubenswrapper[4827]: I0131 03:49:10.144511 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/9cf25edb-a1c1-4077-97dc-8f87363a8d4e-mountpoint-dir\") pod \"csi-hostpathplugin-7h44q\" (UID: \"9cf25edb-a1c1-4077-97dc-8f87363a8d4e\") " pod="hostpath-provisioner/csi-hostpathplugin-7h44q"
Jan 31 03:49:10 crc kubenswrapper[4827]: I0131 03:49:10.144928 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/775f917b-8a39-4e16-93b8-b285000c2758-tmpfs\") pod \"packageserver-d55dfcdfc-m7n8p\" (UID: \"775f917b-8a39-4e16-93b8-b285000c2758\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-m7n8p"
Jan 31 03:49:10 crc kubenswrapper[4827]: I0131 03:49:10.145268 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/9cf25edb-a1c1-4077-97dc-8f87363a8d4e-socket-dir\") pod \"csi-hostpathplugin-7h44q\" (UID: \"9cf25edb-a1c1-4077-97dc-8f87363a8d4e\") " pod="hostpath-provisioner/csi-hostpathplugin-7h44q"
Jan 31 03:49:10 crc kubenswrapper[4827]: I0131 03:49:10.152928 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kjncg\" (UniqueName: \"kubernetes.io/projected/e9d269d7-f93e-4959-8e00-b541a0f9d9c2-kube-api-access-kjncg\") pod \"ingress-operator-5b745b69d9-vtzj8\" (UID: \"e9d269d7-f93e-4959-8e00-b541a0f9d9c2\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-vtzj8"
Jan 31 03:49:10 crc kubenswrapper[4827]: I0131 03:49:10.154021 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/778d0bb3-9a32-4ff4-8d55-554b42e2a847-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-rdh9d\" (UID: \"778d0bb3-9a32-4ff4-8d55-554b42e2a847\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-rdh9d"
Jan 31 03:49:10 crc kubenswrapper[4827]: I0131 03:49:10.154414 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"certs\" (UniqueName: \"kubernetes.io/secret/dd044aa4-c4ad-4b0c-ae2e-c7c59dbe7df3-certs\") pod \"machine-config-server-wlxzs\" (UID: \"dd044aa4-c4ad-4b0c-ae2e-c7c59dbe7df3\") " pod="openshift-machine-config-operator/machine-config-server-wlxzs"
Jan 31 03:49:10 crc kubenswrapper[4827]: I0131 03:49:10.156510 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/49856c82-939f-4f25-8b1f-60029c62d43c-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-6qz7r\" (UID: \"49856c82-939f-4f25-8b1f-60029c62d43c\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-6qz7r"
Jan 31 03:49:10 crc kubenswrapper[4827]: I0131 03:49:10.157406 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/775f917b-8a39-4e16-93b8-b285000c2758-apiservice-cert\") pod \"packageserver-d55dfcdfc-m7n8p\" (UID: \"775f917b-8a39-4e16-93b8-b285000c2758\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-m7n8p"
Jan 31 03:49:10 crc kubenswrapper[4827]: I0131 03:49:10.160704 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/04eac770-8ff7-453b-a1da-b028636b909c-bound-sa-token\") pod \"image-registry-697d97f7c8-s7psb\" (UID: \"04eac770-8ff7-453b-a1da-b028636b909c\") " pod="openshift-image-registry/image-registry-697d97f7c8-s7psb"
Jan 31 03:49:10 crc kubenswrapper[4827]: I0131 03:49:10.160817 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/775f917b-8a39-4e16-93b8-b285000c2758-webhook-cert\") pod \"packageserver-d55dfcdfc-m7n8p\" (UID: \"775f917b-8a39-4e16-93b8-b285000c2758\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-m7n8p"
Jan 31 03:49:10 crc kubenswrapper[4827]: I0131 03:49:10.163427 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/aaf5e041-e3bb-423e-a413-c5f999e04bda-cert\") pod \"ingress-canary-t79lc\" (UID: \"aaf5e041-e3bb-423e-a413-c5f999e04bda\") " pod="openshift-ingress-canary/ingress-canary-t79lc"
Jan 31 03:49:10 crc kubenswrapper[4827]: I0131 03:49:10.163579 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/778d0bb3-9a32-4ff4-8d55-554b42e2a847-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-rdh9d\" (UID: \"778d0bb3-9a32-4ff4-8d55-554b42e2a847\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-rdh9d"
Jan 31 03:49:10 crc kubenswrapper[4827]: I0131 03:49:10.164035 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/dd044aa4-c4ad-4b0c-ae2e-c7c59dbe7df3-node-bootstrap-token\") pod \"machine-config-server-wlxzs\" (UID: \"dd044aa4-c4ad-4b0c-ae2e-c7c59dbe7df3\") " pod="openshift-machine-config-operator/machine-config-server-wlxzs"
Jan 31 03:49:10 crc kubenswrapper[4827]: I0131 03:49:10.167203 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/05b0b4af-2e92-4831-a30e-cb951f41155c-metrics-tls\") pod \"dns-default-zhrf7\" (UID: \"05b0b4af-2e92-4831-a30e-cb951f41155c\") " pod="openshift-dns/dns-default-zhrf7"
Jan 31 03:49:10 crc kubenswrapper[4827]: I0131 03:49:10.169201 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-qtqj4"]
Jan 31 03:49:10 crc kubenswrapper[4827]: I0131 03:49:10.179457 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w8q46\" (UniqueName: \"kubernetes.io/projected/6cf6cb35-7f31-44f5-ba34-be81eb109101-kube-api-access-w8q46\") pod \"package-server-manager-789f6589d5-tbsqf\" (UID: \"6cf6cb35-7f31-44f5-ba34-be81eb109101\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-tbsqf"
Jan 31 03:49:10 crc kubenswrapper[4827]: I0131 03:49:10.213486 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rn4lt\" (UniqueName: \"kubernetes.io/projected/85e64cbb-2da9-4b05-b074-fabf16790f49-kube-api-access-rn4lt\") pod \"collect-profiles-29497185-4tnfv\" (UID: \"85e64cbb-2da9-4b05-b074-fabf16790f49\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29497185-4tnfv"
Jan 31 03:49:10 crc kubenswrapper[4827]: I0131 03:49:10.230480 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-tbsqf"
Jan 31 03:49:10 crc kubenswrapper[4827]: I0131 03:49:10.247131 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 31 03:49:10 crc kubenswrapper[4827]: E0131 03:49:10.247752 4827 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-31 03:49:10.747710594 +0000 UTC m=+143.434791043 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 31 03:49:10 crc kubenswrapper[4827]: I0131 03:49:10.249487 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-58897d9998-57nc5"]
Jan 31 03:49:10 crc kubenswrapper[4827]: I0131 03:49:10.250174 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-s7psb\" (UID: \"04eac770-8ff7-453b-a1da-b028636b909c\") " pod="openshift-image-registry/image-registry-697d97f7c8-s7psb"
Jan 31 03:49:10 crc kubenswrapper[4827]: E0131 03:49:10.258895 4827 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-31 03:49:10.758859405 +0000 UTC m=+143.445939854 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-s7psb" (UID: "04eac770-8ff7-453b-a1da-b028636b909c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 31 03:49:10 crc kubenswrapper[4827]: W0131 03:49:10.256935 4827 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod81c458ce_ffe4_4613_bb4a_0b5d0809519b.slice/crio-6ee07c5047c166a4fcd8fe98ae144e25cc19919b0600107e98ca0274a0c2ac05 WatchSource:0}: Error finding container 6ee07c5047c166a4fcd8fe98ae144e25cc19919b0600107e98ca0274a0c2ac05: Status 404 returned error can't find the container with id 6ee07c5047c166a4fcd8fe98ae144e25cc19919b0600107e98ca0274a0c2ac05
Jan 31 03:49:10 crc kubenswrapper[4827]: I0131 03:49:10.260749 4827 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-58p4m"
Jan 31 03:49:10 crc kubenswrapper[4827]: I0131 03:49:10.265106 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-5p4mg"]
Jan 31 03:49:10 crc kubenswrapper[4827]: I0131 03:49:10.267770 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-txptg"
Jan 31 03:49:10 crc kubenswrapper[4827]: I0131 03:49:10.282085 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7bwrl\" (UniqueName: \"kubernetes.io/projected/480bfdcf-e687-4ecc-b22f-545d3d2a41ac-kube-api-access-7bwrl\") pod \"service-ca-9c57cc56f-fdwxr\" (UID: \"480bfdcf-e687-4ecc-b22f-545d3d2a41ac\") " pod="openshift-service-ca/service-ca-9c57cc56f-fdwxr"
Jan 31 03:49:10 crc kubenswrapper[4827]: I0131 03:49:10.282518 4827 csr.go:261] certificate signing request csr-pdt9s is approved, waiting to be issued
Jan 31 03:49:10 crc kubenswrapper[4827]: I0131 03:49:10.282539 4827 csr.go:257] certificate signing request csr-pdt9s is issued
Jan 31 03:49:10 crc kubenswrapper[4827]: I0131 03:49:10.288692 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-76bww\" (UniqueName: \"kubernetes.io/projected/cc0facf8-c192-4df4-bb9b-68f123fd7b21-kube-api-access-76bww\") pod \"marketplace-operator-79b997595-lrw5m\" (UID: \"cc0facf8-c192-4df4-bb9b-68f123fd7b21\") " pod="openshift-marketplace/marketplace-operator-79b997595-lrw5m"
Jan 31 03:49:10 crc kubenswrapper[4827]: I0131 03:49:10.292494 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-g8bbs"
Jan 31 03:49:10 crc kubenswrapper[4827]: I0131 03:49:10.296998 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/e9d269d7-f93e-4959-8e00-b541a0f9d9c2-bound-sa-token\") pod \"ingress-operator-5b745b69d9-vtzj8\" (UID: \"e9d269d7-f93e-4959-8e00-b541a0f9d9c2\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-vtzj8"
Jan 31 03:49:10 crc kubenswrapper[4827]: I0131 03:49:10.305014 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-vtzj8"
Jan 31 03:49:10 crc kubenswrapper[4827]: I0131 03:49:10.312183 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-4rp2t"
Jan 31 03:49:10 crc kubenswrapper[4827]: I0131 03:49:10.316537 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rhnj4\" (UniqueName: \"kubernetes.io/projected/63af2706-4f52-4d40-941f-7575394bfefa-kube-api-access-rhnj4\") pod \"migrator-59844c95c7-xrt7q\" (UID: \"63af2706-4f52-4d40-941f-7575394bfefa\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-xrt7q"
Jan 31 03:49:10 crc kubenswrapper[4827]: I0131 03:49:10.331485 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-mkhcp"]
Jan 31 03:49:10 crc kubenswrapper[4827]: I0131 03:49:10.332431 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l7kcq\" (UniqueName: \"kubernetes.io/projected/7bd339a8-f5bb-4f7f-9d9d-e57deef990b8-kube-api-access-l7kcq\") pod \"control-plane-machine-set-operator-78cbb6b69f-mrplz\" (UID: \"7bd339a8-f5bb-4f7f-9d9d-e57deef990b8\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-mrplz"
Jan 31 03:49:10 crc kubenswrapper[4827]: I0131 03:49:10.356573 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-9c57cc56f-fdwxr"
Jan 31 03:49:10 crc kubenswrapper[4827]: I0131 03:49:10.367830 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-plhd9\" (UniqueName: \"kubernetes.io/projected/775f917b-8a39-4e16-93b8-b285000c2758-kube-api-access-plhd9\") pod \"packageserver-d55dfcdfc-m7n8p\" (UID: \"775f917b-8a39-4e16-93b8-b285000c2758\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-m7n8p"
Jan 31 03:49:10 crc kubenswrapper[4827]: I0131 03:49:10.367816 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-44hhm\" (UniqueName: \"kubernetes.io/projected/778d0bb3-9a32-4ff4-8d55-554b42e2a847-kube-api-access-44hhm\") pod \"cluster-image-registry-operator-dc59b4c8b-rdh9d\" (UID: \"778d0bb3-9a32-4ff4-8d55-554b42e2a847\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-rdh9d"
Jan 31 03:49:10 crc kubenswrapper[4827]: I0131 03:49:10.373857 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-mcs5z"]
Jan 31 03:49:10 crc kubenswrapper[4827]: I0131 03:49:10.374006 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 31 03:49:10 crc kubenswrapper[4827]: E0131 03:49:10.374379 4827 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-31 03:49:10.874352524 +0000 UTC m=+143.561432983 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 31 03:49:10 crc kubenswrapper[4827]: I0131 03:49:10.374753 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-s7psb\" (UID: \"04eac770-8ff7-453b-a1da-b028636b909c\") " pod="openshift-image-registry/image-registry-697d97f7c8-s7psb"
Jan 31 03:49:10 crc kubenswrapper[4827]: E0131 03:49:10.375274 4827 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-31 03:49:10.875264095 +0000 UTC m=+143.562344544 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-s7psb" (UID: "04eac770-8ff7-453b-a1da-b028636b909c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 31 03:49:10 crc kubenswrapper[4827]: I0131 03:49:10.375410 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-m7n8p"
Jan 31 03:49:10 crc kubenswrapper[4827]: I0131 03:49:10.397046 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29497185-4tnfv"
Jan 31 03:49:10 crc kubenswrapper[4827]: I0131 03:49:10.402387 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/49856c82-939f-4f25-8b1f-60029c62d43c-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-6qz7r\" (UID: \"49856c82-939f-4f25-8b1f-60029c62d43c\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-6qz7r"
Jan 31 03:49:10 crc kubenswrapper[4827]: I0131 03:49:10.418575 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wns8g\" (UniqueName: \"kubernetes.io/projected/dd044aa4-c4ad-4b0c-ae2e-c7c59dbe7df3-kube-api-access-wns8g\") pod \"machine-config-server-wlxzs\" (UID: \"dd044aa4-c4ad-4b0c-ae2e-c7c59dbe7df3\") " pod="openshift-machine-config-operator/machine-config-server-wlxzs"
Jan 31 03:49:10 crc kubenswrapper[4827]: I0131 03:49:10.423832 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-82nm4\" (UniqueName: \"kubernetes.io/projected/9cf25edb-a1c1-4077-97dc-8f87363a8d4e-kube-api-access-82nm4\") pod \"csi-hostpathplugin-7h44q\" (UID: \"9cf25edb-a1c1-4077-97dc-8f87363a8d4e\") " pod="hostpath-provisioner/csi-hostpathplugin-7h44q"
Jan 31 03:49:10 crc kubenswrapper[4827]: W0131 03:49:10.439734 4827 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7bafc4cb_e5b7_4b39_9930_b885e403dfca.slice/crio-3fbdacc92bd0342b626ea0ee73d5c977e739940ba6f3932740b40d8f9574529c WatchSource:0}: Error finding container 3fbdacc92bd0342b626ea0ee73d5c977e739940ba6f3932740b40d8f9574529c: Status 404 returned error
can't find the container with id 3fbdacc92bd0342b626ea0ee73d5c977e739940ba6f3932740b40d8f9574529c Jan 31 03:49:10 crc kubenswrapper[4827]: I0131 03:49:10.442532 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-7h44q" Jan 31 03:49:10 crc kubenswrapper[4827]: I0131 03:49:10.451490 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sz6fv\" (UniqueName: \"kubernetes.io/projected/aaf5e041-e3bb-423e-a413-c5f999e04bda-kube-api-access-sz6fv\") pod \"ingress-canary-t79lc\" (UID: \"aaf5e041-e3bb-423e-a413-c5f999e04bda\") " pod="openshift-ingress-canary/ingress-canary-t79lc" Jan 31 03:49:10 crc kubenswrapper[4827]: I0131 03:49:10.455276 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-wlxzs" Jan 31 03:49:10 crc kubenswrapper[4827]: I0131 03:49:10.476393 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 31 03:49:10 crc kubenswrapper[4827]: E0131 03:49:10.476551 4827 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-31 03:49:10.976536302 +0000 UTC m=+143.663616741 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 03:49:10 crc kubenswrapper[4827]: I0131 03:49:10.476711 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-s7psb\" (UID: \"04eac770-8ff7-453b-a1da-b028636b909c\") " pod="openshift-image-registry/image-registry-697d97f7c8-s7psb" Jan 31 03:49:10 crc kubenswrapper[4827]: E0131 03:49:10.477010 4827 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-31 03:49:10.977003968 +0000 UTC m=+143.664084417 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-s7psb" (UID: "04eac770-8ff7-453b-a1da-b028636b909c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 03:49:10 crc kubenswrapper[4827]: I0131 03:49:10.492790 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rcqk6\" (UniqueName: \"kubernetes.io/projected/05b0b4af-2e92-4831-a30e-cb951f41155c-kube-api-access-rcqk6\") pod \"dns-default-zhrf7\" (UID: \"05b0b4af-2e92-4831-a30e-cb951f41155c\") " pod="openshift-dns/dns-default-zhrf7" Jan 31 03:49:10 crc kubenswrapper[4827]: I0131 03:49:10.516037 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-f9d7485db-q4hqs"] Jan 31 03:49:10 crc kubenswrapper[4827]: I0131 03:49:10.521188 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-xrt7q" Jan 31 03:49:10 crc kubenswrapper[4827]: I0131 03:49:10.535594 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/778d0bb3-9a32-4ff4-8d55-554b42e2a847-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-rdh9d\" (UID: \"778d0bb3-9a32-4ff4-8d55-554b42e2a847\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-rdh9d" Jan 31 03:49:10 crc kubenswrapper[4827]: I0131 03:49:10.569512 4827 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-lrw5m" Jan 31 03:49:10 crc kubenswrapper[4827]: I0131 03:49:10.571739 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-hrjxm"] Jan 31 03:49:10 crc kubenswrapper[4827]: I0131 03:49:10.591902 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 31 03:49:10 crc kubenswrapper[4827]: E0131 03:49:10.592408 4827 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-31 03:49:11.092388945 +0000 UTC m=+143.779469404 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 03:49:10 crc kubenswrapper[4827]: I0131 03:49:10.597003 4827 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-mrplz" Jan 31 03:49:10 crc kubenswrapper[4827]: I0131 03:49:10.626771 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-g4946"] Jan 31 03:49:10 crc kubenswrapper[4827]: I0131 03:49:10.630034 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-lx7vs"] Jan 31 03:49:10 crc kubenswrapper[4827]: I0131 03:49:10.666401 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-rdh9d" Jan 31 03:49:10 crc kubenswrapper[4827]: I0131 03:49:10.668161 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-k4wl6"] Jan 31 03:49:10 crc kubenswrapper[4827]: I0131 03:49:10.683864 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-xxh4j"] Jan 31 03:49:10 crc kubenswrapper[4827]: I0131 03:49:10.687125 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-6qz7r" Jan 31 03:49:10 crc kubenswrapper[4827]: I0131 03:49:10.711222 4827 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns/dns-default-zhrf7" Jan 31 03:49:10 crc kubenswrapper[4827]: I0131 03:49:10.711458 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-s7psb\" (UID: \"04eac770-8ff7-453b-a1da-b028636b909c\") " pod="openshift-image-registry/image-registry-697d97f7c8-s7psb" Jan 31 03:49:10 crc kubenswrapper[4827]: E0131 03:49:10.711783 4827 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-31 03:49:11.211769573 +0000 UTC m=+143.898850022 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-s7psb" (UID: "04eac770-8ff7-453b-a1da-b028636b909c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 03:49:10 crc kubenswrapper[4827]: I0131 03:49:10.712215 4827 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress-canary/ingress-canary-t79lc" Jan 31 03:49:10 crc kubenswrapper[4827]: W0131 03:49:10.718729 4827 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod48fc8b4d_5a6b_40a8_acef_785097811718.slice/crio-1679896c95373d4e049c28265ff686b1585fd343ffea104fc27ad20a41a0ebf7 WatchSource:0}: Error finding container 1679896c95373d4e049c28265ff686b1585fd343ffea104fc27ad20a41a0ebf7: Status 404 returned error can't find the container with id 1679896c95373d4e049c28265ff686b1585fd343ffea104fc27ad20a41a0ebf7 Jan 31 03:49:10 crc kubenswrapper[4827]: I0131 03:49:10.789630 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-cxgr4"] Jan 31 03:49:10 crc kubenswrapper[4827]: I0131 03:49:10.813281 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 31 03:49:10 crc kubenswrapper[4827]: E0131 03:49:10.813682 4827 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-31 03:49:11.313648201 +0000 UTC m=+144.000728650 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 03:49:10 crc kubenswrapper[4827]: I0131 03:49:10.827848 4827 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver/apiserver-76f77b778f-x2j9j" podStartSLOduration=122.827831222 podStartE2EDuration="2m2.827831222s" podCreationTimestamp="2026-01-31 03:47:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 03:49:10.798038682 +0000 UTC m=+143.485119141" watchObservedRunningTime="2026-01-31 03:49:10.827831222 +0000 UTC m=+143.514911671" Jan 31 03:49:10 crc kubenswrapper[4827]: I0131 03:49:10.856936 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-sw4rn"] Jan 31 03:49:10 crc kubenswrapper[4827]: I0131 03:49:10.861894 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-tbsqf"] Jan 31 03:49:10 crc kubenswrapper[4827]: I0131 03:49:10.915182 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-s7psb\" (UID: \"04eac770-8ff7-453b-a1da-b028636b909c\") " pod="openshift-image-registry/image-registry-697d97f7c8-s7psb" Jan 31 03:49:10 crc kubenswrapper[4827]: E0131 03:49:10.916073 4827 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-31 03:49:11.416023914 +0000 UTC m=+144.103104403 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-s7psb" (UID: "04eac770-8ff7-453b-a1da-b028636b909c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 03:49:10 crc kubenswrapper[4827]: I0131 03:49:10.968768 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-bbd4n"] Jan 31 03:49:11 crc kubenswrapper[4827]: I0131 03:49:11.001912 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-g8bbs"] Jan 31 03:49:11 crc kubenswrapper[4827]: I0131 03:49:11.016563 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 31 03:49:11 crc kubenswrapper[4827]: E0131 03:49:11.017067 4827 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-31 03:49:11.517024293 +0000 UTC m=+144.204104752 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 03:49:11 crc kubenswrapper[4827]: I0131 03:49:11.017177 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-s7psb\" (UID: \"04eac770-8ff7-453b-a1da-b028636b909c\") " pod="openshift-image-registry/image-registry-697d97f7c8-s7psb" Jan 31 03:49:11 crc kubenswrapper[4827]: E0131 03:49:11.017634 4827 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-31 03:49:11.517621633 +0000 UTC m=+144.204702082 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-s7psb" (UID: "04eac770-8ff7-453b-a1da-b028636b909c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 03:49:11 crc kubenswrapper[4827]: I0131 03:49:11.094173 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-vtzj8"] Jan 31 03:49:11 crc kubenswrapper[4827]: I0131 03:49:11.119704 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 31 03:49:11 crc kubenswrapper[4827]: E0131 03:49:11.120224 4827 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-31 03:49:11.620194433 +0000 UTC m=+144.307274882 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 03:49:11 crc kubenswrapper[4827]: I0131 03:49:11.120388 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-s7psb\" (UID: \"04eac770-8ff7-453b-a1da-b028636b909c\") " pod="openshift-image-registry/image-registry-697d97f7c8-s7psb" Jan 31 03:49:11 crc kubenswrapper[4827]: E0131 03:49:11.120719 4827 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-31 03:49:11.62070653 +0000 UTC m=+144.307786979 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-s7psb" (UID: "04eac770-8ff7-453b-a1da-b028636b909c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 03:49:11 crc kubenswrapper[4827]: I0131 03:49:11.120890 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-mcs5z" event={"ID":"755d86cd-4d90-41eb-8c62-b130143346aa","Type":"ContainerStarted","Data":"0a325130adc152b307fb8579c88873de916ea33e2b5cd4fe1872ca4dd7508c2a"} Jan 31 03:49:11 crc kubenswrapper[4827]: I0131 03:49:11.125450 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-5444994796-dxfqc" event={"ID":"85b31f82-0a7a-466c-aa0f-bffb46f2b04c","Type":"ContainerStarted","Data":"37e90c1177e3e285f190bdb3e92ece6538f774b277107feac5b4b6c60066310a"} Jan 31 03:49:11 crc kubenswrapper[4827]: I0131 03:49:11.125516 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-5444994796-dxfqc" event={"ID":"85b31f82-0a7a-466c-aa0f-bffb46f2b04c","Type":"ContainerStarted","Data":"e4ef8ed2d13fd44baee495ca8581bbe454166d09b5bbac27faeb7a2f5441d988"} Jan 31 03:49:11 crc kubenswrapper[4827]: I0131 03:49:11.137107 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-58897d9998-57nc5" event={"ID":"81c458ce-ffe4-4613-bb4a-0b5d0809519b","Type":"ContainerStarted","Data":"6ee07c5047c166a4fcd8fe98ae144e25cc19919b0600107e98ca0274a0c2ac05"} Jan 31 03:49:11 crc kubenswrapper[4827]: I0131 03:49:11.194462 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-q4hqs" 
event={"ID":"a2a52a00-75ce-4094-bab7-913d6fbab1dc","Type":"ContainerStarted","Data":"6473b08890af801202ef6d128d62500bf5cf49e294b26fb8f3a068b47f11ac35"} Jan 31 03:49:11 crc kubenswrapper[4827]: W0131 03:49:11.202287 4827 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode9d269d7_f93e_4959_8e00_b541a0f9d9c2.slice/crio-a29d70c240e87a75f504bd172077a9c4491bfe9a7cb81cdcf27bc5ee7f9dd2f2 WatchSource:0}: Error finding container a29d70c240e87a75f504bd172077a9c4491bfe9a7cb81cdcf27bc5ee7f9dd2f2: Status 404 returned error can't find the container with id a29d70c240e87a75f504bd172077a9c4491bfe9a7cb81cdcf27bc5ee7f9dd2f2 Jan 31 03:49:11 crc kubenswrapper[4827]: I0131 03:49:11.209246 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-txptg"] Jan 31 03:49:11 crc kubenswrapper[4827]: I0131 03:49:11.213192 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-g4946" event={"ID":"8fe7a4ad-d825-4988-8390-c04b5a1b114c","Type":"ContainerStarted","Data":"b4a87ee47cfff9533a7b7a65daa508339ad7a08415be9faf54d54eef87a6387c"} Jan 31 03:49:11 crc kubenswrapper[4827]: I0131 03:49:11.221041 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 31 03:49:11 crc kubenswrapper[4827]: E0131 03:49:11.221802 4827 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2026-01-31 03:49:11.721784271 +0000 UTC m=+144.408864720 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 03:49:11 crc kubenswrapper[4827]: I0131 03:49:11.222687 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-s7psb\" (UID: \"04eac770-8ff7-453b-a1da-b028636b909c\") " pod="openshift-image-registry/image-registry-697d97f7c8-s7psb" Jan 31 03:49:11 crc kubenswrapper[4827]: E0131 03:49:11.224302 4827 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-31 03:49:11.724292764 +0000 UTC m=+144.411373213 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-s7psb" (UID: "04eac770-8ff7-453b-a1da-b028636b909c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 03:49:11 crc kubenswrapper[4827]: I0131 03:49:11.225619 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-777779d784-k4wl6" event={"ID":"cfea954e-db56-4946-a178-3376d7793b46","Type":"ContainerStarted","Data":"77cec65a38f2617e690dbf5f2bf251e31f2426be070d8d540cd1a4a35a5f9810"} Jan 31 03:49:11 crc kubenswrapper[4827]: I0131 03:49:11.235712 4827 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication-operator/authentication-operator-69f744f599-z6phb" podStartSLOduration=123.235689723 podStartE2EDuration="2m3.235689723s" podCreationTimestamp="2026-01-31 03:47:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 03:49:11.23078664 +0000 UTC m=+143.917867109" watchObservedRunningTime="2026-01-31 03:49:11.235689723 +0000 UTC m=+143.922770192" Jan 31 03:49:11 crc kubenswrapper[4827]: I0131 03:49:11.245733 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-mkhcp" event={"ID":"7bafc4cb-e5b7-4b39-9930-b885e403dfca","Type":"ContainerStarted","Data":"3fbdacc92bd0342b626ea0ee73d5c977e739940ba6f3932740b40d8f9574529c"} Jan 31 03:49:11 crc kubenswrapper[4827]: I0131 03:49:11.246282 4827 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-879f6c89f-mkhcp" Jan 31 03:49:11 crc kubenswrapper[4827]: I0131 
03:49:11.259456 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-xxh4j" event={"ID":"b13bebc0-a453-40cd-9611-43cf66b3dd53","Type":"ContainerStarted","Data":"60d3a2f3ab03de4e22c605d67b387933edea3452b6970605655c21030a65d431"} Jan 31 03:49:11 crc kubenswrapper[4827]: I0131 03:49:11.266961 4827 patch_prober.go:28] interesting pod/controller-manager-879f6c89f-mkhcp container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.30:8443/healthz\": dial tcp 10.217.0.30:8443: connect: connection refused" start-of-body= Jan 31 03:49:11 crc kubenswrapper[4827]: I0131 03:49:11.272858 4827 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-879f6c89f-mkhcp" podUID="7bafc4cb-e5b7-4b39-9930-b885e403dfca" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.30:8443/healthz\": dial tcp 10.217.0.30:8443: connect: connection refused" Jan 31 03:49:11 crc kubenswrapper[4827]: I0131 03:49:11.294143 4827 generic.go:334] "Generic (PLEG): container finished" podID="4a32abae-914d-4102-9e89-817922ff06ca" containerID="ad4d343eacaf7fb01cdaf7c2bed1e39b4810385107233c2a7bf969cad588c758" exitCode=0 Jan 31 03:49:11 crc kubenswrapper[4827]: I0131 03:49:11.294252 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-dqndk" event={"ID":"4a32abae-914d-4102-9e89-817922ff06ca","Type":"ContainerDied","Data":"ad4d343eacaf7fb01cdaf7c2bed1e39b4810385107233c2a7bf969cad588c758"} Jan 31 03:49:11 crc kubenswrapper[4827]: I0131 03:49:11.297619 4827 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2027-01-31 03:44:10 +0000 UTC, rotation deadline is 2026-12-11 21:32:37.12579512 +0000 UTC Jan 31 03:49:11 crc kubenswrapper[4827]: I0131 03:49:11.297646 4827 
certificate_manager.go:356] kubernetes.io/kubelet-serving: Waiting 7553h43m25.828151427s for next certificate rotation Jan 31 03:49:11 crc kubenswrapper[4827]: I0131 03:49:11.299087 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-b45778765-lx7vs" event={"ID":"48fc8b4d-5a6b-40a8-acef-785097811718","Type":"ContainerStarted","Data":"1679896c95373d4e049c28265ff686b1585fd343ffea104fc27ad20a41a0ebf7"} Jan 31 03:49:11 crc kubenswrapper[4827]: I0131 03:49:11.299869 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-hrjxm" event={"ID":"bc4e2378-18bc-4624-acb0-a5010db62008","Type":"ContainerStarted","Data":"127ad959b6cc8426fe37cc16d34ef4868f71fc8edeb39c9ea003002e8cd4e523"} Jan 31 03:49:11 crc kubenswrapper[4827]: I0131 03:49:11.301181 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-cxgr4" event={"ID":"0591695e-7fc9-4d9c-a9a2-a5f44db74caa","Type":"ContainerStarted","Data":"e087bef46fe6b88d2b3245bd7ea182d44782e9ebf335dd2cbb222b355d1bd7be"} Jan 31 03:49:11 crc kubenswrapper[4827]: W0131 03:49:11.301244 4827 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9782bce9_dbc2_4734_929d_b6ef25dd752e.slice/crio-6eb720e38eacf8f43059031acf11c7d4725db4e14f09e25e41c96f6c04d3dfdc WatchSource:0}: Error finding container 6eb720e38eacf8f43059031acf11c7d4725db4e14f09e25e41c96f6c04d3dfdc: Status 404 returned error can't find the container with id 6eb720e38eacf8f43059031acf11c7d4725db4e14f09e25e41c96f6c04d3dfdc Jan 31 03:49:11 crc kubenswrapper[4827]: I0131 03:49:11.304499 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-5p4mg" 
event={"ID":"68610615-718e-4fd3-a19b-9de01ef64a03","Type":"ContainerStarted","Data":"90abe98007a95737257c972ccf21ff51b412de50059e9ea3c16fd0ae4a6040cd"} Jan 31 03:49:11 crc kubenswrapper[4827]: I0131 03:49:11.320987 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-qtqj4" event={"ID":"c10be0b3-7f40-4f17-8206-ab6257d4b23b","Type":"ContainerStarted","Data":"7e7aaf694058c3d4697231686fea58a88f974be85b42c9d4c4d92c21a7da3f79"} Jan 31 03:49:11 crc kubenswrapper[4827]: I0131 03:49:11.321063 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-qtqj4" event={"ID":"c10be0b3-7f40-4f17-8206-ab6257d4b23b","Type":"ContainerStarted","Data":"3376d9d9bcff952f3cad22fc14f566be4dd5ab2fdb4ea1c6c02f72a3294321f1"} Jan 31 03:49:11 crc kubenswrapper[4827]: I0131 03:49:11.322214 4827 patch_prober.go:28] interesting pod/downloads-7954f5f757-sgzkm container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.9:8080/\": dial tcp 10.217.0.9:8080: connect: connection refused" start-of-body= Jan 31 03:49:11 crc kubenswrapper[4827]: I0131 03:49:11.322285 4827 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-sgzkm" podUID="6e659a59-19ab-4c91-98ec-db3042ac1d4b" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.9:8080/\": dial tcp 10.217.0.9:8080: connect: connection refused" Jan 31 03:49:11 crc kubenswrapper[4827]: I0131 03:49:11.323680 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 31 03:49:11 crc kubenswrapper[4827]: E0131 03:49:11.324276 4827 
nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-31 03:49:11.824249257 +0000 UTC m=+144.511329736 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 03:49:11 crc kubenswrapper[4827]: I0131 03:49:11.443498 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-s7psb\" (UID: \"04eac770-8ff7-453b-a1da-b028636b909c\") " pod="openshift-image-registry/image-registry-697d97f7c8-s7psb" Jan 31 03:49:11 crc kubenswrapper[4827]: E0131 03:49:11.446840 4827 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-31 03:49:11.946822622 +0000 UTC m=+144.633903161 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-s7psb" (UID: "04eac770-8ff7-453b-a1da-b028636b909c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 03:49:11 crc kubenswrapper[4827]: I0131 03:49:11.470413 4827 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-fm67q" podStartSLOduration=123.470332114 podStartE2EDuration="2m3.470332114s" podCreationTimestamp="2026-01-31 03:47:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 03:49:11.465753942 +0000 UTC m=+144.152834391" watchObservedRunningTime="2026-01-31 03:49:11.470332114 +0000 UTC m=+144.157412563" Jan 31 03:49:11 crc kubenswrapper[4827]: I0131 03:49:11.532148 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-7h44q"] Jan 31 03:49:11 crc kubenswrapper[4827]: I0131 03:49:11.546017 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 31 03:49:11 crc kubenswrapper[4827]: E0131 03:49:11.546606 4827 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2026-01-31 03:49:12.04658789 +0000 UTC m=+144.733668339 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 03:49:11 crc kubenswrapper[4827]: I0131 03:49:11.602064 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-m7n8p"] Jan 31 03:49:11 crc kubenswrapper[4827]: I0131 03:49:11.604224 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-xrt7q"] Jan 31 03:49:11 crc kubenswrapper[4827]: I0131 03:49:11.615871 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29497185-4tnfv"] Jan 31 03:49:11 crc kubenswrapper[4827]: I0131 03:49:11.626097 4827 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-lm4fq" podStartSLOduration=123.626080002 podStartE2EDuration="2m3.626080002s" podCreationTimestamp="2026-01-31 03:47:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 03:49:11.624283433 +0000 UTC m=+144.311363882" watchObservedRunningTime="2026-01-31 03:49:11.626080002 +0000 UTC m=+144.313160451" Jan 31 03:49:11 crc kubenswrapper[4827]: I0131 03:49:11.635491 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-fdwxr"] Jan 31 03:49:11 crc kubenswrapper[4827]: I0131 03:49:11.647305 
4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-s7psb\" (UID: \"04eac770-8ff7-453b-a1da-b028636b909c\") " pod="openshift-image-registry/image-registry-697d97f7c8-s7psb" Jan 31 03:49:11 crc kubenswrapper[4827]: E0131 03:49:11.647655 4827 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-31 03:49:12.14764166 +0000 UTC m=+144.834722109 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-s7psb" (UID: "04eac770-8ff7-453b-a1da-b028636b909c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 03:49:11 crc kubenswrapper[4827]: I0131 03:49:11.686145 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-4rp2t"] Jan 31 03:49:11 crc kubenswrapper[4827]: I0131 03:49:11.693276 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-t79lc"] Jan 31 03:49:11 crc kubenswrapper[4827]: I0131 03:49:11.757976 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-mrplz"] Jan 31 03:49:11 crc kubenswrapper[4827]: I0131 03:49:11.777702 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 31 03:49:11 crc kubenswrapper[4827]: I0131 03:49:11.791713 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-lrw5m"] Jan 31 03:49:11 crc kubenswrapper[4827]: E0131 03:49:11.793697 4827 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-31 03:49:12.290251121 +0000 UTC m=+144.977331570 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 03:49:11 crc kubenswrapper[4827]: I0131 03:49:11.796099 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-s7psb\" (UID: \"04eac770-8ff7-453b-a1da-b028636b909c\") " pod="openshift-image-registry/image-registry-697d97f7c8-s7psb" Jan 31 03:49:11 crc kubenswrapper[4827]: E0131 03:49:11.796618 4827 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. 
No retries permitted until 2026-01-31 03:49:12.296603782 +0000 UTC m=+144.983684231 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-s7psb" (UID: "04eac770-8ff7-453b-a1da-b028636b909c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 03:49:11 crc kubenswrapper[4827]: I0131 03:49:11.817852 4827 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/downloads-7954f5f757-sgzkm" podStartSLOduration=123.817832558 podStartE2EDuration="2m3.817832558s" podCreationTimestamp="2026-01-31 03:47:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 03:49:11.815919145 +0000 UTC m=+144.502999604" watchObservedRunningTime="2026-01-31 03:49:11.817832558 +0000 UTC m=+144.504913007" Jan 31 03:49:11 crc kubenswrapper[4827]: I0131 03:49:11.856629 4827 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-s4zhc" podStartSLOduration=123.856609187 podStartE2EDuration="2m3.856609187s" podCreationTimestamp="2026-01-31 03:47:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 03:49:11.856223014 +0000 UTC m=+144.543303483" watchObservedRunningTime="2026-01-31 03:49:11.856609187 +0000 UTC m=+144.543689636" Jan 31 03:49:11 crc kubenswrapper[4827]: I0131 03:49:11.888417 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-zhrf7"] Jan 31 03:49:11 crc kubenswrapper[4827]: I0131 03:49:11.891159 4827 kubelet.go:2428] 
"SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-6qz7r"] Jan 31 03:49:11 crc kubenswrapper[4827]: I0131 03:49:11.891783 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-rdh9d"] Jan 31 03:49:11 crc kubenswrapper[4827]: I0131 03:49:11.897428 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 31 03:49:11 crc kubenswrapper[4827]: E0131 03:49:11.897806 4827 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-31 03:49:12.397792756 +0000 UTC m=+145.084873205 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 03:49:11 crc kubenswrapper[4827]: I0131 03:49:11.897824 4827 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-58p4m" podStartSLOduration=122.897805367 podStartE2EDuration="2m2.897805367s" podCreationTimestamp="2026-01-31 03:47:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 03:49:11.89580458 +0000 UTC m=+144.582885029" watchObservedRunningTime="2026-01-31 03:49:11.897805367 +0000 UTC m=+144.584885816" Jan 31 03:49:11 crc kubenswrapper[4827]: W0131 03:49:11.899108 4827 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9cf25edb_a1c1_4077_97dc_8f87363a8d4e.slice/crio-dc3a60cb595733914c771da81cebf379ec8d704fea8e6ea533cc6a2ed0d57dd1 WatchSource:0}: Error finding container dc3a60cb595733914c771da81cebf379ec8d704fea8e6ea533cc6a2ed0d57dd1: Status 404 returned error can't find the container with id dc3a60cb595733914c771da81cebf379ec8d704fea8e6ea533cc6a2ed0d57dd1 Jan 31 03:49:11 crc kubenswrapper[4827]: W0131 03:49:11.934176 4827 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod775f917b_8a39_4e16_93b8_b285000c2758.slice/crio-d73d9afb3f6b87357129964833f2dfb469d646f66bc4182f12189a91a8bbe67f WatchSource:0}: Error finding container 
d73d9afb3f6b87357129964833f2dfb469d646f66bc4182f12189a91a8bbe67f: Status 404 returned error can't find the container with id d73d9afb3f6b87357129964833f2dfb469d646f66bc4182f12189a91a8bbe67f Jan 31 03:49:11 crc kubenswrapper[4827]: W0131 03:49:11.956312 4827 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda1eac826_76c5_435a_abd1_16f7fe35350f.slice/crio-ee2cd23ec4017cfa60c92c995c9dce5563bc36cf981d164991ef2410d2baeefa WatchSource:0}: Error finding container ee2cd23ec4017cfa60c92c995c9dce5563bc36cf981d164991ef2410d2baeefa: Status 404 returned error can't find the container with id ee2cd23ec4017cfa60c92c995c9dce5563bc36cf981d164991ef2410d2baeefa Jan 31 03:49:11 crc kubenswrapper[4827]: W0131 03:49:11.961007 4827 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod480bfdcf_e687_4ecc_b22f_545d3d2a41ac.slice/crio-3e057469ce7bc5070bc67fb7eda53a9dcf7b25aa5c2aa9b709e549707817e9c1 WatchSource:0}: Error finding container 3e057469ce7bc5070bc67fb7eda53a9dcf7b25aa5c2aa9b709e549707817e9c1: Status 404 returned error can't find the container with id 3e057469ce7bc5070bc67fb7eda53a9dcf7b25aa5c2aa9b709e549707817e9c1 Jan 31 03:49:11 crc kubenswrapper[4827]: W0131 03:49:11.966857 4827 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7bd339a8_f5bb_4f7f_9d9d_e57deef990b8.slice/crio-5d2ba935436285b5c67919e1a8f1d72f056631cceff4a77444d991426aed41e3 WatchSource:0}: Error finding container 5d2ba935436285b5c67919e1a8f1d72f056631cceff4a77444d991426aed41e3: Status 404 returned error can't find the container with id 5d2ba935436285b5c67919e1a8f1d72f056631cceff4a77444d991426aed41e3 Jan 31 03:49:11 crc kubenswrapper[4827]: I0131 03:49:11.998600 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" 
(UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-s7psb\" (UID: \"04eac770-8ff7-453b-a1da-b028636b909c\") " pod="openshift-image-registry/image-registry-697d97f7c8-s7psb" Jan 31 03:49:11 crc kubenswrapper[4827]: E0131 03:49:11.998988 4827 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-31 03:49:12.49897634 +0000 UTC m=+145.186056789 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-s7psb" (UID: "04eac770-8ff7-453b-a1da-b028636b909c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 03:49:12 crc kubenswrapper[4827]: W0131 03:49:12.026975 4827 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod05b0b4af_2e92_4831_a30e_cb951f41155c.slice/crio-6c0abb81a6558136a6a53438b204c700295f9ef80ec34f5eb10b9b7968782e99 WatchSource:0}: Error finding container 6c0abb81a6558136a6a53438b204c700295f9ef80ec34f5eb10b9b7968782e99: Status 404 returned error can't find the container with id 6c0abb81a6558136a6a53438b204c700295f9ef80ec34f5eb10b9b7968782e99 Jan 31 03:49:12 crc kubenswrapper[4827]: W0131 03:49:12.046033 4827 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod778d0bb3_9a32_4ff4_8d55_554b42e2a847.slice/crio-17a0cddc20427c3489fde3ea3a7f94fd3980ee3a3394173baa00d945a38a7808 WatchSource:0}: Error finding container 
17a0cddc20427c3489fde3ea3a7f94fd3980ee3a3394173baa00d945a38a7808: Status 404 returned error can't find the container with id 17a0cddc20427c3489fde3ea3a7f94fd3980ee3a3394173baa00d945a38a7808 Jan 31 03:49:12 crc kubenswrapper[4827]: I0131 03:49:12.077375 4827 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-ingress/router-default-5444994796-dxfqc" Jan 31 03:49:12 crc kubenswrapper[4827]: I0131 03:49:12.079779 4827 patch_prober.go:28] interesting pod/router-default-5444994796-dxfqc container/router namespace/openshift-ingress: Startup probe status=failure output="Get \"http://localhost:1936/healthz/ready\": dial tcp [::1]:1936: connect: connection refused" start-of-body= Jan 31 03:49:12 crc kubenswrapper[4827]: I0131 03:49:12.079842 4827 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-dxfqc" podUID="85b31f82-0a7a-466c-aa0f-bffb46f2b04c" containerName="router" probeResult="failure" output="Get \"http://localhost:1936/healthz/ready\": dial tcp [::1]:1936: connect: connection refused" Jan 31 03:49:12 crc kubenswrapper[4827]: I0131 03:49:12.094870 4827 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/machine-api-operator-5694c8668f-h2fxz" podStartSLOduration=123.094848668 podStartE2EDuration="2m3.094848668s" podCreationTimestamp="2026-01-31 03:47:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 03:49:12.093180732 +0000 UTC m=+144.780261181" watchObservedRunningTime="2026-01-31 03:49:12.094848668 +0000 UTC m=+144.781929127" Jan 31 03:49:12 crc kubenswrapper[4827]: I0131 03:49:12.102238 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod 
\"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 31 03:49:12 crc kubenswrapper[4827]: E0131 03:49:12.102812 4827 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-31 03:49:12.602796362 +0000 UTC m=+145.289876811 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 03:49:12 crc kubenswrapper[4827]: I0131 03:49:12.204245 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-s7psb\" (UID: \"04eac770-8ff7-453b-a1da-b028636b909c\") " pod="openshift-image-registry/image-registry-697d97f7c8-s7psb" Jan 31 03:49:12 crc kubenswrapper[4827]: E0131 03:49:12.204540 4827 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-31 03:49:12.704528765 +0000 UTC m=+145.391609214 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-s7psb" (UID: "04eac770-8ff7-453b-a1da-b028636b909c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 03:49:12 crc kubenswrapper[4827]: I0131 03:49:12.304790 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 31 03:49:12 crc kubenswrapper[4827]: E0131 03:49:12.305165 4827 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-31 03:49:12.805150401 +0000 UTC m=+145.492230850 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 03:49:12 crc kubenswrapper[4827]: I0131 03:49:12.306282 4827 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-558db77b4-qtqj4" podStartSLOduration=124.306271217 podStartE2EDuration="2m4.306271217s" podCreationTimestamp="2026-01-31 03:47:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 03:49:12.306157793 +0000 UTC m=+144.993238242" watchObservedRunningTime="2026-01-31 03:49:12.306271217 +0000 UTC m=+144.993351666" Jan 31 03:49:12 crc kubenswrapper[4827]: I0131 03:49:12.346172 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29497185-4tnfv" event={"ID":"85e64cbb-2da9-4b05-b074-fabf16790f49","Type":"ContainerStarted","Data":"f0a1f142d9597893f251cc8f4b2717cd8132bf45bbd1875c6f869eebfc7fd8bf"} Jan 31 03:49:12 crc kubenswrapper[4827]: I0131 03:49:12.347712 4827 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress/router-default-5444994796-dxfqc" podStartSLOduration=123.347700855 podStartE2EDuration="2m3.347700855s" podCreationTimestamp="2026-01-31 03:47:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 03:49:12.346377801 +0000 UTC m=+145.033458250" watchObservedRunningTime="2026-01-31 03:49:12.347700855 +0000 UTC m=+145.034781304" Jan 31 03:49:12 crc 
kubenswrapper[4827]: I0131 03:49:12.351026 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-vtzj8" event={"ID":"e9d269d7-f93e-4959-8e00-b541a0f9d9c2","Type":"ContainerStarted","Data":"a29d70c240e87a75f504bd172077a9c4491bfe9a7cb81cdcf27bc5ee7f9dd2f2"} Jan 31 03:49:12 crc kubenswrapper[4827]: I0131 03:49:12.352895 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-58897d9998-57nc5" event={"ID":"81c458ce-ffe4-4613-bb4a-0b5d0809519b","Type":"ContainerStarted","Data":"1f3360b7c80735b34bb0befbfd0b12cb333df2713946fb112e39756c784c1599"} Jan 31 03:49:12 crc kubenswrapper[4827]: I0131 03:49:12.353105 4827 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console-operator/console-operator-58897d9998-57nc5" Jan 31 03:49:12 crc kubenswrapper[4827]: I0131 03:49:12.353802 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-wlxzs" event={"ID":"dd044aa4-c4ad-4b0c-ae2e-c7c59dbe7df3","Type":"ContainerStarted","Data":"62744003915b3ade0ad7b757d072d806e8eaf3c024fdce31d2831659104c1344"} Jan 31 03:49:12 crc kubenswrapper[4827]: I0131 03:49:12.355827 4827 patch_prober.go:28] interesting pod/console-operator-58897d9998-57nc5 container/console-operator namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.217.0.11:8443/readyz\": dial tcp 10.217.0.11:8443: connect: connection refused" start-of-body= Jan 31 03:49:12 crc kubenswrapper[4827]: I0131 03:49:12.355935 4827 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-operator-58897d9998-57nc5" podUID="81c458ce-ffe4-4613-bb4a-0b5d0809519b" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.11:8443/readyz\": dial tcp 10.217.0.11:8443: connect: connection refused" Jan 31 03:49:12 crc kubenswrapper[4827]: I0131 
03:49:12.356117 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-tbsqf" event={"ID":"6cf6cb35-7f31-44f5-ba34-be81eb109101","Type":"ContainerStarted","Data":"6507ba25e20b7e68e7dfdb1ed25a4c8fdc2415418ed43e3e40bb143b7d241421"} Jan 31 03:49:12 crc kubenswrapper[4827]: I0131 03:49:12.367158 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-t79lc" event={"ID":"aaf5e041-e3bb-423e-a413-c5f999e04bda","Type":"ContainerStarted","Data":"a41a9ac7b2c550dc02cf171964b4e4a02ae38f7116d1b0e25bafae40dc9862d9"} Jan 31 03:49:12 crc kubenswrapper[4827]: I0131 03:49:12.368361 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-bbd4n" event={"ID":"64cfc229-d23f-4303-a604-cd7be04f0bc3","Type":"ContainerStarted","Data":"0caba5463ac8db38f96533085596e7442d61df079b7b737ef5aa1be866f301c4"} Jan 31 03:49:12 crc kubenswrapper[4827]: I0131 03:49:12.370202 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-rdh9d" event={"ID":"778d0bb3-9a32-4ff4-8d55-554b42e2a847","Type":"ContainerStarted","Data":"17a0cddc20427c3489fde3ea3a7f94fd3980ee3a3394173baa00d945a38a7808"} Jan 31 03:49:12 crc kubenswrapper[4827]: I0131 03:49:12.373374 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-7h44q" event={"ID":"9cf25edb-a1c1-4077-97dc-8f87363a8d4e","Type":"ContainerStarted","Data":"dc3a60cb595733914c771da81cebf379ec8d704fea8e6ea533cc6a2ed0d57dd1"} Jan 31 03:49:12 crc kubenswrapper[4827]: I0131 03:49:12.375546 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-lrw5m" event={"ID":"cc0facf8-c192-4df4-bb9b-68f123fd7b21","Type":"ContainerStarted","Data":"63222f319745a9995942369616315f9631733e9e4931e59290885351781d4d95"} Jan 
31 03:49:12 crc kubenswrapper[4827]: I0131 03:49:12.380045 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-6qz7r" event={"ID":"49856c82-939f-4f25-8b1f-60029c62d43c","Type":"ContainerStarted","Data":"eb4aaa8284b85d41cf02257b235ff3bedf4392b4778597fa20d83e862cf9a437"} Jan 31 03:49:12 crc kubenswrapper[4827]: I0131 03:49:12.389376 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-xrt7q" event={"ID":"63af2706-4f52-4d40-941f-7575394bfefa","Type":"ContainerStarted","Data":"e6bb30b7a0c2d083d9460f212dfea2e91423b703019b7cf4af58d8aa40c66721"} Jan 31 03:49:12 crc kubenswrapper[4827]: I0131 03:49:12.393574 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-9c57cc56f-fdwxr" event={"ID":"480bfdcf-e687-4ecc-b22f-545d3d2a41ac","Type":"ContainerStarted","Data":"3e057469ce7bc5070bc67fb7eda53a9dcf7b25aa5c2aa9b709e549707817e9c1"} Jan 31 03:49:12 crc kubenswrapper[4827]: I0131 03:49:12.395015 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-sw4rn" event={"ID":"0e71f06b-5ae9-4606-922f-eedf9f8eefa6","Type":"ContainerStarted","Data":"14622a312c93fdbe3edf9286253f999195c1137fd17d5d96638749b0f2874a05"} Jan 31 03:49:12 crc kubenswrapper[4827]: I0131 03:49:12.396967 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-mkhcp" event={"ID":"7bafc4cb-e5b7-4b39-9930-b885e403dfca","Type":"ContainerStarted","Data":"4347637921ce370403b003b8740cb9173f2157c3db9d3514df8976509c34a741"} Jan 31 03:49:12 crc kubenswrapper[4827]: I0131 03:49:12.398626 4827 patch_prober.go:28] interesting pod/controller-manager-879f6c89f-mkhcp container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.30:8443/healthz\": dial tcp 
10.217.0.30:8443: connect: connection refused" start-of-body= Jan 31 03:49:12 crc kubenswrapper[4827]: I0131 03:49:12.398675 4827 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-879f6c89f-mkhcp" podUID="7bafc4cb-e5b7-4b39-9930-b885e403dfca" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.30:8443/healthz\": dial tcp 10.217.0.30:8443: connect: connection refused" Jan 31 03:49:12 crc kubenswrapper[4827]: I0131 03:49:12.399405 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-5p4mg" event={"ID":"68610615-718e-4fd3-a19b-9de01ef64a03","Type":"ContainerStarted","Data":"2bc51cb92dae27bbac08bfa42d982cd389fdf8e32c3e9e3e313d22785cf56f0d"} Jan 31 03:49:12 crc kubenswrapper[4827]: I0131 03:49:12.400393 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-g8bbs" event={"ID":"beb2ca1a-f741-4cd2-8ae8-2a61972cd841","Type":"ContainerStarted","Data":"72611dc4607e0d357a88bbf60b4a8a6193b363071cedb4553d85f38b726ed4de"} Jan 31 03:49:12 crc kubenswrapper[4827]: I0131 03:49:12.401715 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-q4hqs" event={"ID":"a2a52a00-75ce-4094-bab7-913d6fbab1dc","Type":"ContainerStarted","Data":"0301e9ef087e36a3339397dfb581970f85739dafaf3b34ae619f3b8b63d4cbbf"} Jan 31 03:49:12 crc kubenswrapper[4827]: I0131 03:49:12.404605 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-m7n8p" event={"ID":"775f917b-8a39-4e16-93b8-b285000c2758","Type":"ContainerStarted","Data":"d73d9afb3f6b87357129964833f2dfb469d646f66bc4182f12189a91a8bbe67f"} Jan 31 03:49:12 crc kubenswrapper[4827]: I0131 03:49:12.406047 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-s7psb\" (UID: \"04eac770-8ff7-453b-a1da-b028636b909c\") " pod="openshift-image-registry/image-registry-697d97f7c8-s7psb" Jan 31 03:49:12 crc kubenswrapper[4827]: I0131 03:49:12.406293 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-zhrf7" event={"ID":"05b0b4af-2e92-4831-a30e-cb951f41155c","Type":"ContainerStarted","Data":"6c0abb81a6558136a6a53438b204c700295f9ef80ec34f5eb10b9b7968782e99"} Jan 31 03:49:12 crc kubenswrapper[4827]: E0131 03:49:12.407489 4827 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-31 03:49:12.907469502 +0000 UTC m=+145.594550161 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-s7psb" (UID: "04eac770-8ff7-453b-a1da-b028636b909c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 03:49:12 crc kubenswrapper[4827]: I0131 03:49:12.407824 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-mrplz" event={"ID":"7bd339a8-f5bb-4f7f-9d9d-e57deef990b8","Type":"ContainerStarted","Data":"5d2ba935436285b5c67919e1a8f1d72f056631cceff4a77444d991426aed41e3"} Jan 31 03:49:12 crc kubenswrapper[4827]: I0131 03:49:12.408488 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-txptg" event={"ID":"9782bce9-dbc2-4734-929d-b6ef25dd752e","Type":"ContainerStarted","Data":"6eb720e38eacf8f43059031acf11c7d4725db4e14f09e25e41c96f6c04d3dfdc"} Jan 31 03:49:12 crc kubenswrapper[4827]: I0131 03:49:12.409528 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-g4946" event={"ID":"8fe7a4ad-d825-4988-8390-c04b5a1b114c","Type":"ContainerStarted","Data":"5b51e8d56c8d861e66445b5eb729ec8a08d2db48b8b2b8defa03e3b1633afc9f"} Jan 31 03:49:12 crc kubenswrapper[4827]: I0131 03:49:12.411321 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-4rp2t" event={"ID":"a1eac826-76c5-435a-abd1-16f7fe35350f","Type":"ContainerStarted","Data":"ee2cd23ec4017cfa60c92c995c9dce5563bc36cf981d164991ef2410d2baeefa"} Jan 31 03:49:12 crc kubenswrapper[4827]: I0131 03:49:12.413919 4827 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-558db77b4-qtqj4" Jan 31 03:49:12 crc kubenswrapper[4827]: I0131 03:49:12.415343 4827 patch_prober.go:28] interesting pod/oauth-openshift-558db77b4-qtqj4 container/oauth-openshift namespace/openshift-authentication: Readiness probe status=failure output="Get \"https://10.217.0.13:6443/healthz\": dial tcp 10.217.0.13:6443: connect: connection refused" start-of-body= Jan 31 03:49:12 crc kubenswrapper[4827]: I0131 03:49:12.415549 4827 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-authentication/oauth-openshift-558db77b4-qtqj4" podUID="c10be0b3-7f40-4f17-8206-ab6257d4b23b" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.13:6443/healthz\": dial tcp 10.217.0.13:6443: connect: connection refused" Jan 31 03:49:12 crc kubenswrapper[4827]: I0131 03:49:12.507376 4827 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 31 03:49:12 crc kubenswrapper[4827]: E0131 03:49:12.507543 4827 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-31 03:49:13.007519098 +0000 UTC m=+145.694599547 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 03:49:12 crc kubenswrapper[4827]: I0131 03:49:12.507597 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-s7psb\" (UID: \"04eac770-8ff7-453b-a1da-b028636b909c\") " pod="openshift-image-registry/image-registry-697d97f7c8-s7psb" Jan 31 03:49:12 crc kubenswrapper[4827]: E0131 03:49:12.508062 4827 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-31 03:49:13.008042966 +0000 UTC m=+145.695123425 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-s7psb" (UID: "04eac770-8ff7-453b-a1da-b028636b909c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 03:49:12 crc kubenswrapper[4827]: I0131 03:49:12.540955 4827 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-879f6c89f-mkhcp" podStartSLOduration=123.54093644 podStartE2EDuration="2m3.54093644s" podCreationTimestamp="2026-01-31 03:47:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 03:49:12.538670894 +0000 UTC m=+145.225751363" watchObservedRunningTime="2026-01-31 03:49:12.54093644 +0000 UTC m=+145.228016889" Jan 31 03:49:12 crc kubenswrapper[4827]: I0131 03:49:12.576996 4827 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-apiserver/apiserver-76f77b778f-x2j9j" Jan 31 03:49:12 crc kubenswrapper[4827]: I0131 03:49:12.577416 4827 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-apiserver/apiserver-76f77b778f-x2j9j" Jan 31 03:49:12 crc kubenswrapper[4827]: I0131 03:49:12.580148 4827 patch_prober.go:28] interesting pod/apiserver-76f77b778f-x2j9j container/openshift-apiserver namespace/openshift-apiserver: Startup probe status=failure output="Get \"https://10.217.0.5:8443/livez\": dial tcp 10.217.0.5:8443: connect: connection refused" start-of-body= Jan 31 03:49:12 crc kubenswrapper[4827]: I0131 03:49:12.580242 4827 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-apiserver/apiserver-76f77b778f-x2j9j" podUID="e0e07dc1-cd63-49ad-8a43-7a15027a1e74" 
containerName="openshift-apiserver" probeResult="failure" output="Get \"https://10.217.0.5:8443/livez\": dial tcp 10.217.0.5:8443: connect: connection refused" Jan 31 03:49:12 crc kubenswrapper[4827]: I0131 03:49:12.610124 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 31 03:49:12 crc kubenswrapper[4827]: E0131 03:49:12.610293 4827 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-31 03:49:13.110265215 +0000 UTC m=+145.797345664 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 03:49:12 crc kubenswrapper[4827]: I0131 03:49:12.610406 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-s7psb\" (UID: \"04eac770-8ff7-453b-a1da-b028636b909c\") " pod="openshift-image-registry/image-registry-697d97f7c8-s7psb" Jan 31 03:49:12 crc kubenswrapper[4827]: E0131 03:49:12.610944 4827 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-31 03:49:13.110930777 +0000 UTC m=+145.798011226 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-s7psb" (UID: "04eac770-8ff7-453b-a1da-b028636b909c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 03:49:12 crc kubenswrapper[4827]: I0131 03:49:12.711413 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 31 03:49:12 crc kubenswrapper[4827]: E0131 03:49:12.711605 4827 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-31 03:49:13.211570953 +0000 UTC m=+145.898651402 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 03:49:12 crc kubenswrapper[4827]: I0131 03:49:12.711734 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-s7psb\" (UID: \"04eac770-8ff7-453b-a1da-b028636b909c\") " pod="openshift-image-registry/image-registry-697d97f7c8-s7psb" Jan 31 03:49:12 crc kubenswrapper[4827]: E0131 03:49:12.713747 4827 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-31 03:49:13.213732525 +0000 UTC m=+145.900812974 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-s7psb" (UID: "04eac770-8ff7-453b-a1da-b028636b909c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 03:49:12 crc kubenswrapper[4827]: I0131 03:49:12.817567 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 31 03:49:12 crc kubenswrapper[4827]: E0131 03:49:12.817740 4827 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-31 03:49:13.317705242 +0000 UTC m=+146.004785701 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 03:49:12 crc kubenswrapper[4827]: I0131 03:49:12.818236 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-s7psb\" (UID: \"04eac770-8ff7-453b-a1da-b028636b909c\") " pod="openshift-image-registry/image-registry-697d97f7c8-s7psb" Jan 31 03:49:12 crc kubenswrapper[4827]: E0131 03:49:12.818558 4827 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-31 03:49:13.318549819 +0000 UTC m=+146.005630268 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-s7psb" (UID: "04eac770-8ff7-453b-a1da-b028636b909c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 03:49:12 crc kubenswrapper[4827]: I0131 03:49:12.864629 4827 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console-operator/console-operator-58897d9998-57nc5" podStartSLOduration=124.864609391 podStartE2EDuration="2m4.864609391s" podCreationTimestamp="2026-01-31 03:47:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 03:49:12.861465647 +0000 UTC m=+145.548546106" watchObservedRunningTime="2026-01-31 03:49:12.864609391 +0000 UTC m=+145.551689850" Jan 31 03:49:12 crc kubenswrapper[4827]: I0131 03:49:12.920064 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 31 03:49:12 crc kubenswrapper[4827]: E0131 03:49:12.920376 4827 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-31 03:49:13.420248241 +0000 UTC m=+146.107328690 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 03:49:12 crc kubenswrapper[4827]: I0131 03:49:12.922420 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-s7psb\" (UID: \"04eac770-8ff7-453b-a1da-b028636b909c\") " pod="openshift-image-registry/image-registry-697d97f7c8-s7psb" Jan 31 03:49:12 crc kubenswrapper[4827]: E0131 03:49:12.922686 4827 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-31 03:49:13.422673452 +0000 UTC m=+146.109753891 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-s7psb" (UID: "04eac770-8ff7-453b-a1da-b028636b909c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 03:49:13 crc kubenswrapper[4827]: I0131 03:49:13.023535 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 31 03:49:13 crc kubenswrapper[4827]: E0131 03:49:13.023727 4827 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-31 03:49:13.52368257 +0000 UTC m=+146.210763019 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 03:49:13 crc kubenswrapper[4827]: I0131 03:49:13.023801 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-s7psb\" (UID: \"04eac770-8ff7-453b-a1da-b028636b909c\") " pod="openshift-image-registry/image-registry-697d97f7c8-s7psb" Jan 31 03:49:13 crc kubenswrapper[4827]: E0131 03:49:13.024057 4827 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-31 03:49:13.524045372 +0000 UTC m=+146.211125821 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-s7psb" (UID: "04eac770-8ff7-453b-a1da-b028636b909c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 03:49:13 crc kubenswrapper[4827]: I0131 03:49:13.083949 4827 patch_prober.go:28] interesting pod/router-default-5444994796-dxfqc container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 31 03:49:13 crc kubenswrapper[4827]: [-]has-synced failed: reason withheld Jan 31 03:49:13 crc kubenswrapper[4827]: [+]process-running ok Jan 31 03:49:13 crc kubenswrapper[4827]: healthz check failed Jan 31 03:49:13 crc kubenswrapper[4827]: I0131 03:49:13.084063 4827 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-dxfqc" podUID="85b31f82-0a7a-466c-aa0f-bffb46f2b04c" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 31 03:49:13 crc kubenswrapper[4827]: I0131 03:49:13.125078 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 31 03:49:13 crc kubenswrapper[4827]: E0131 03:49:13.125474 4827 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2026-01-31 03:49:13.625459314 +0000 UTC m=+146.312539763 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 03:49:13 crc kubenswrapper[4827]: I0131 03:49:13.226587 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-s7psb\" (UID: \"04eac770-8ff7-453b-a1da-b028636b909c\") " pod="openshift-image-registry/image-registry-697d97f7c8-s7psb" Jan 31 03:49:13 crc kubenswrapper[4827]: E0131 03:49:13.227261 4827 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-31 03:49:13.727228207 +0000 UTC m=+146.414308696 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-s7psb" (UID: "04eac770-8ff7-453b-a1da-b028636b909c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 03:49:13 crc kubenswrapper[4827]: I0131 03:49:13.327486 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 31 03:49:13 crc kubenswrapper[4827]: E0131 03:49:13.327714 4827 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-31 03:49:13.827672408 +0000 UTC m=+146.514752867 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 03:49:13 crc kubenswrapper[4827]: I0131 03:49:13.328851 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-s7psb\" (UID: \"04eac770-8ff7-453b-a1da-b028636b909c\") " pod="openshift-image-registry/image-registry-697d97f7c8-s7psb" Jan 31 03:49:13 crc kubenswrapper[4827]: E0131 03:49:13.329454 4827 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-31 03:49:13.829420845 +0000 UTC m=+146.516501324 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-s7psb" (UID: "04eac770-8ff7-453b-a1da-b028636b909c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 03:49:13 crc kubenswrapper[4827]: I0131 03:49:13.421624 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-wlxzs" event={"ID":"dd044aa4-c4ad-4b0c-ae2e-c7c59dbe7df3","Type":"ContainerStarted","Data":"c171b90f18bcaa2744f752d3a5a6ceaba209970ebf87badbfbb0ca1630be9ba5"} Jan 31 03:49:13 crc kubenswrapper[4827]: I0131 03:49:13.425167 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-cxgr4" event={"ID":"0591695e-7fc9-4d9c-a9a2-a5f44db74caa","Type":"ContainerStarted","Data":"473f8800b86a53df690c1c60ada7632cd0ab7a46d56e4a2b63df6a4a280602b4"} Jan 31 03:49:13 crc kubenswrapper[4827]: I0131 03:49:13.427622 4827 generic.go:334] "Generic (PLEG): container finished" podID="bc4e2378-18bc-4624-acb0-a5010db62008" containerID="c5408a38fae5c07f3e52c756b2d8b9f249acf8850eca1f0f5fa12f157d207419" exitCode=0 Jan 31 03:49:13 crc kubenswrapper[4827]: I0131 03:49:13.427676 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-hrjxm" event={"ID":"bc4e2378-18bc-4624-acb0-a5010db62008","Type":"ContainerDied","Data":"c5408a38fae5c07f3e52c756b2d8b9f249acf8850eca1f0f5fa12f157d207419"} Jan 31 03:49:13 crc kubenswrapper[4827]: I0131 03:49:13.429667 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-777779d784-k4wl6" 
event={"ID":"cfea954e-db56-4946-a178-3376d7793b46","Type":"ContainerStarted","Data":"cd033953bd369355e0a6654ec5969066b926ed54bfde97802a39c1e794d23788"} Jan 31 03:49:13 crc kubenswrapper[4827]: I0131 03:49:13.429816 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 31 03:49:13 crc kubenswrapper[4827]: E0131 03:49:13.430108 4827 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-31 03:49:13.930065292 +0000 UTC m=+146.617145751 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 03:49:13 crc kubenswrapper[4827]: I0131 03:49:13.430422 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-s7psb\" (UID: \"04eac770-8ff7-453b-a1da-b028636b909c\") " pod="openshift-image-registry/image-registry-697d97f7c8-s7psb" Jan 31 03:49:13 crc kubenswrapper[4827]: E0131 03:49:13.430981 4827 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-31 03:49:13.930964962 +0000 UTC m=+146.618045421 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-s7psb" (UID: "04eac770-8ff7-453b-a1da-b028636b909c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 03:49:13 crc kubenswrapper[4827]: I0131 03:49:13.431463 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-mcs5z" event={"ID":"755d86cd-4d90-41eb-8c62-b130143346aa","Type":"ContainerStarted","Data":"0c2c25f49f8d7c65d37a32fa8a49ec215d65aa0c15277f0d2e23272f05dc122f"} Jan 31 03:49:13 crc kubenswrapper[4827]: I0131 03:49:13.433013 4827 patch_prober.go:28] interesting pod/controller-manager-879f6c89f-mkhcp container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.30:8443/healthz\": dial tcp 10.217.0.30:8443: connect: connection refused" start-of-body= Jan 31 03:49:13 crc kubenswrapper[4827]: I0131 03:49:13.433042 4827 patch_prober.go:28] interesting pod/console-operator-58897d9998-57nc5 container/console-operator namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.217.0.11:8443/readyz\": dial tcp 10.217.0.11:8443: connect: connection refused" start-of-body= Jan 31 03:49:13 crc kubenswrapper[4827]: I0131 03:49:13.433108 4827 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-operator-58897d9998-57nc5" podUID="81c458ce-ffe4-4613-bb4a-0b5d0809519b" 
containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.11:8443/readyz\": dial tcp 10.217.0.11:8443: connect: connection refused" Jan 31 03:49:13 crc kubenswrapper[4827]: I0131 03:49:13.433057 4827 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-879f6c89f-mkhcp" podUID="7bafc4cb-e5b7-4b39-9930-b885e403dfca" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.30:8443/healthz\": dial tcp 10.217.0.30:8443: connect: connection refused" Jan 31 03:49:13 crc kubenswrapper[4827]: I0131 03:49:13.433021 4827 patch_prober.go:28] interesting pod/oauth-openshift-558db77b4-qtqj4 container/oauth-openshift namespace/openshift-authentication: Readiness probe status=failure output="Get \"https://10.217.0.13:6443/healthz\": dial tcp 10.217.0.13:6443: connect: connection refused" start-of-body= Jan 31 03:49:13 crc kubenswrapper[4827]: I0131 03:49:13.433401 4827 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-authentication/oauth-openshift-558db77b4-qtqj4" podUID="c10be0b3-7f40-4f17-8206-ab6257d4b23b" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.13:6443/healthz\": dial tcp 10.217.0.13:6443: connect: connection refused" Jan 31 03:49:13 crc kubenswrapper[4827]: I0131 03:49:13.454318 4827 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-f9d7485db-q4hqs" podStartSLOduration=125.454296537 podStartE2EDuration="2m5.454296537s" podCreationTimestamp="2026-01-31 03:47:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 03:49:13.454130521 +0000 UTC m=+146.141210980" watchObservedRunningTime="2026-01-31 03:49:13.454296537 +0000 UTC m=+146.141376986" Jan 31 03:49:13 crc kubenswrapper[4827]: I0131 03:49:13.532188 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume 
started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 31 03:49:13 crc kubenswrapper[4827]: E0131 03:49:13.532585 4827 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-31 03:49:14.032532308 +0000 UTC m=+146.719612767 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 03:49:13 crc kubenswrapper[4827]: I0131 03:49:13.635279 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-s7psb\" (UID: \"04eac770-8ff7-453b-a1da-b028636b909c\") " pod="openshift-image-registry/image-registry-697d97f7c8-s7psb" Jan 31 03:49:13 crc kubenswrapper[4827]: E0131 03:49:13.635926 4827 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-31 03:49:14.135903965 +0000 UTC m=+146.822984444 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-s7psb" (UID: "04eac770-8ff7-453b-a1da-b028636b909c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 03:49:13 crc kubenswrapper[4827]: I0131 03:49:13.736580 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 31 03:49:13 crc kubenswrapper[4827]: E0131 03:49:13.736712 4827 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-31 03:49:14.236685956 +0000 UTC m=+146.923766405 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 03:49:13 crc kubenswrapper[4827]: I0131 03:49:13.737710 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-s7psb\" (UID: \"04eac770-8ff7-453b-a1da-b028636b909c\") " pod="openshift-image-registry/image-registry-697d97f7c8-s7psb" Jan 31 03:49:13 crc kubenswrapper[4827]: E0131 03:49:13.738058 4827 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-31 03:49:14.238044671 +0000 UTC m=+146.925125120 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-s7psb" (UID: "04eac770-8ff7-453b-a1da-b028636b909c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 03:49:13 crc kubenswrapper[4827]: I0131 03:49:13.838954 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 31 03:49:13 crc kubenswrapper[4827]: E0131 03:49:13.839448 4827 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-31 03:49:14.339425822 +0000 UTC m=+147.026506271 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 03:49:13 crc kubenswrapper[4827]: I0131 03:49:13.940140 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-s7psb\" (UID: \"04eac770-8ff7-453b-a1da-b028636b909c\") " pod="openshift-image-registry/image-registry-697d97f7c8-s7psb" Jan 31 03:49:13 crc kubenswrapper[4827]: E0131 03:49:13.940668 4827 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-31 03:49:14.440645238 +0000 UTC m=+147.127725687 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-s7psb" (UID: "04eac770-8ff7-453b-a1da-b028636b909c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 03:49:14 crc kubenswrapper[4827]: I0131 03:49:14.041623 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 31 03:49:14 crc kubenswrapper[4827]: E0131 03:49:14.042188 4827 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-31 03:49:14.542169013 +0000 UTC m=+147.229249482 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 03:49:14 crc kubenswrapper[4827]: I0131 03:49:14.081094 4827 patch_prober.go:28] interesting pod/router-default-5444994796-dxfqc container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 31 03:49:14 crc kubenswrapper[4827]: [-]has-synced failed: reason withheld Jan 31 03:49:14 crc kubenswrapper[4827]: [+]process-running ok Jan 31 03:49:14 crc kubenswrapper[4827]: healthz check failed Jan 31 03:49:14 crc kubenswrapper[4827]: I0131 03:49:14.081162 4827 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-dxfqc" podUID="85b31f82-0a7a-466c-aa0f-bffb46f2b04c" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 31 03:49:14 crc kubenswrapper[4827]: I0131 03:49:14.143224 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-s7psb\" (UID: \"04eac770-8ff7-453b-a1da-b028636b909c\") " pod="openshift-image-registry/image-registry-697d97f7c8-s7psb" Jan 31 03:49:14 crc kubenswrapper[4827]: E0131 03:49:14.143619 4827 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. 
No retries permitted until 2026-01-31 03:49:14.643601365 +0000 UTC m=+147.330681804 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-s7psb" (UID: "04eac770-8ff7-453b-a1da-b028636b909c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 03:49:14 crc kubenswrapper[4827]: I0131 03:49:14.244665 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 31 03:49:14 crc kubenswrapper[4827]: E0131 03:49:14.244907 4827 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-31 03:49:14.744854402 +0000 UTC m=+147.431934881 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 03:49:14 crc kubenswrapper[4827]: I0131 03:49:14.245302 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-s7psb\" (UID: \"04eac770-8ff7-453b-a1da-b028636b909c\") " pod="openshift-image-registry/image-registry-697d97f7c8-s7psb" Jan 31 03:49:14 crc kubenswrapper[4827]: E0131 03:49:14.245667 4827 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-31 03:49:14.745612727 +0000 UTC m=+147.432693176 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-s7psb" (UID: "04eac770-8ff7-453b-a1da-b028636b909c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 03:49:14 crc kubenswrapper[4827]: I0131 03:49:14.346652 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 31 03:49:14 crc kubenswrapper[4827]: E0131 03:49:14.346821 4827 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-31 03:49:14.84679092 +0000 UTC m=+147.533871379 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 03:49:14 crc kubenswrapper[4827]: I0131 03:49:14.347308 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-s7psb\" (UID: \"04eac770-8ff7-453b-a1da-b028636b909c\") " pod="openshift-image-registry/image-registry-697d97f7c8-s7psb" Jan 31 03:49:14 crc kubenswrapper[4827]: E0131 03:49:14.347674 4827 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-31 03:49:14.84766308 +0000 UTC m=+147.534743549 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-s7psb" (UID: "04eac770-8ff7-453b-a1da-b028636b909c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 03:49:14 crc kubenswrapper[4827]: I0131 03:49:14.437496 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-mrplz" event={"ID":"7bd339a8-f5bb-4f7f-9d9d-e57deef990b8","Type":"ContainerStarted","Data":"b501a5d06af43acd11dc82f8adf003967ea45bd55128776733e58988d1e2ef92"} Jan 31 03:49:14 crc kubenswrapper[4827]: I0131 03:49:14.439016 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-bbd4n" event={"ID":"64cfc229-d23f-4303-a604-cd7be04f0bc3","Type":"ContainerStarted","Data":"4e1d5d42961b2ee405a16293047a42c1445c208a3fcc8c6e08106509f11a86c2"} Jan 31 03:49:14 crc kubenswrapper[4827]: I0131 03:49:14.440285 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-sw4rn" event={"ID":"0e71f06b-5ae9-4606-922f-eedf9f8eefa6","Type":"ContainerStarted","Data":"d157feb94f82a13bd33a74b1bb4491f891ff4e117c964ff92250af12beb82628"} Jan 31 03:49:14 crc kubenswrapper[4827]: I0131 03:49:14.441725 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-b45778765-lx7vs" event={"ID":"48fc8b4d-5a6b-40a8-acef-785097811718","Type":"ContainerStarted","Data":"bf8d07e56ab84bd4aac19c46a0573ee554324d72262f9aa09729f26bc3297387"} Jan 31 03:49:14 crc kubenswrapper[4827]: I0131 03:49:14.443114 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-ingress-operator/ingress-operator-5b745b69d9-vtzj8" event={"ID":"e9d269d7-f93e-4959-8e00-b541a0f9d9c2","Type":"ContainerStarted","Data":"a86ecce67245000ad3b3043add71dbbb752d9c18b88289e3ad5e4a6db5985120"} Jan 31 03:49:14 crc kubenswrapper[4827]: I0131 03:49:14.444547 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-txptg" event={"ID":"9782bce9-dbc2-4734-929d-b6ef25dd752e","Type":"ContainerStarted","Data":"abcc5d8c1006327f07e5ec41c0e67db7da5e5087f596580e114db329901a8882"} Jan 31 03:49:14 crc kubenswrapper[4827]: I0131 03:49:14.445702 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-tbsqf" event={"ID":"6cf6cb35-7f31-44f5-ba34-be81eb109101","Type":"ContainerStarted","Data":"b4a2671ac3116326adbcc7efaade2c7ec2867ca02a0b12f66db4ae6d053efe9b"} Jan 31 03:49:14 crc kubenswrapper[4827]: I0131 03:49:14.447789 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-dqndk" event={"ID":"4a32abae-914d-4102-9e89-817922ff06ca","Type":"ContainerStarted","Data":"47fb0d12c54afa320fcbce87302e3c7e1a2cfac845f266a222193876efc05dec"} Jan 31 03:49:14 crc kubenswrapper[4827]: I0131 03:49:14.447998 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 31 03:49:14 crc kubenswrapper[4827]: E0131 03:49:14.448137 4827 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2026-01-31 03:49:14.94811356 +0000 UTC m=+147.635194019 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 03:49:14 crc kubenswrapper[4827]: I0131 03:49:14.448340 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-s7psb\" (UID: \"04eac770-8ff7-453b-a1da-b028636b909c\") " pod="openshift-image-registry/image-registry-697d97f7c8-s7psb" Jan 31 03:49:14 crc kubenswrapper[4827]: E0131 03:49:14.448635 4827 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-31 03:49:14.948624937 +0000 UTC m=+147.635705396 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-s7psb" (UID: "04eac770-8ff7-453b-a1da-b028636b909c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 03:49:14 crc kubenswrapper[4827]: I0131 03:49:14.449220 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-zhrf7" event={"ID":"05b0b4af-2e92-4831-a30e-cb951f41155c","Type":"ContainerStarted","Data":"79e0230cc38aaeadf7708708607ca7fbb3198e64a7a11f13f910a37be8a82c6f"} Jan 31 03:49:14 crc kubenswrapper[4827]: I0131 03:49:14.450408 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-4rp2t" event={"ID":"a1eac826-76c5-435a-abd1-16f7fe35350f","Type":"ContainerStarted","Data":"0a79b5a0921afe5c4ad99b0eca42990f24447be7d82615898b742d877dffab05"} Jan 31 03:49:14 crc kubenswrapper[4827]: I0131 03:49:14.451618 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-xxh4j" event={"ID":"b13bebc0-a453-40cd-9611-43cf66b3dd53","Type":"ContainerStarted","Data":"2df71f5ce46f6c2e1c91dd389bc3ec82a101615ed5841621a6e5cd199fbe7d2c"} Jan 31 03:49:14 crc kubenswrapper[4827]: I0131 03:49:14.452845 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-g8bbs" event={"ID":"beb2ca1a-f741-4cd2-8ae8-2a61972cd841","Type":"ContainerStarted","Data":"4d96e2868c931d5728aed1f1fc18ac962f7345e6851d7dc20ade5e06f5840fe1"} Jan 31 03:49:14 crc kubenswrapper[4827]: I0131 03:49:14.454262 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-t79lc" 
event={"ID":"aaf5e041-e3bb-423e-a413-c5f999e04bda","Type":"ContainerStarted","Data":"183ed45631c9baf5e4e885cdb4eb8ac03eb985d1b3b0d07646738dc7bf0bc7e1"} Jan 31 03:49:14 crc kubenswrapper[4827]: I0131 03:49:14.454539 4827 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-g4946" Jan 31 03:49:14 crc kubenswrapper[4827]: I0131 03:49:14.456540 4827 patch_prober.go:28] interesting pod/olm-operator-6b444d44fb-g4946 container/olm-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.18:8443/healthz\": dial tcp 10.217.0.18:8443: connect: connection refused" start-of-body= Jan 31 03:49:14 crc kubenswrapper[4827]: I0131 03:49:14.456577 4827 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-g4946" podUID="8fe7a4ad-d825-4988-8390-c04b5a1b114c" containerName="olm-operator" probeResult="failure" output="Get \"https://10.217.0.18:8443/healthz\": dial tcp 10.217.0.18:8443: connect: connection refused" Jan 31 03:49:14 crc kubenswrapper[4827]: I0131 03:49:14.493766 4827 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-g4946" podStartSLOduration=125.493747347 podStartE2EDuration="2m5.493747347s" podCreationTimestamp="2026-01-31 03:47:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 03:49:14.492820236 +0000 UTC m=+147.179900695" watchObservedRunningTime="2026-01-31 03:49:14.493747347 +0000 UTC m=+147.180827796" Jan 31 03:49:14 crc kubenswrapper[4827]: I0131 03:49:14.495861 4827 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-server-wlxzs" podStartSLOduration=7.495854207 podStartE2EDuration="7.495854207s" 
podCreationTimestamp="2026-01-31 03:49:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 03:49:14.472874163 +0000 UTC m=+147.159954632" watchObservedRunningTime="2026-01-31 03:49:14.495854207 +0000 UTC m=+147.182934656" Jan 31 03:49:14 crc kubenswrapper[4827]: I0131 03:49:14.549495 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 31 03:49:14 crc kubenswrapper[4827]: E0131 03:49:14.549688 4827 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-31 03:49:15.049658506 +0000 UTC m=+147.736738955 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 03:49:14 crc kubenswrapper[4827]: I0131 03:49:14.550137 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-s7psb\" (UID: \"04eac770-8ff7-453b-a1da-b028636b909c\") " pod="openshift-image-registry/image-registry-697d97f7c8-s7psb" Jan 31 03:49:14 crc kubenswrapper[4827]: E0131 03:49:14.550560 4827 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-31 03:49:15.050545986 +0000 UTC m=+147.737626435 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-s7psb" (UID: "04eac770-8ff7-453b-a1da-b028636b909c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 03:49:14 crc kubenswrapper[4827]: I0131 03:49:14.651038 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 31 03:49:14 crc kubenswrapper[4827]: E0131 03:49:14.651288 4827 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-31 03:49:15.151243094 +0000 UTC m=+147.838323583 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 03:49:14 crc kubenswrapper[4827]: I0131 03:49:14.651500 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-s7psb\" (UID: \"04eac770-8ff7-453b-a1da-b028636b909c\") " pod="openshift-image-registry/image-registry-697d97f7c8-s7psb" Jan 31 03:49:14 crc kubenswrapper[4827]: E0131 03:49:14.652040 4827 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-31 03:49:15.152018619 +0000 UTC m=+147.839099108 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-s7psb" (UID: "04eac770-8ff7-453b-a1da-b028636b909c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 03:49:14 crc kubenswrapper[4827]: I0131 03:49:14.752596 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 31 03:49:14 crc kubenswrapper[4827]: E0131 03:49:14.752712 4827 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-31 03:49:15.252681426 +0000 UTC m=+147.939761875 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 03:49:14 crc kubenswrapper[4827]: I0131 03:49:14.753044 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-s7psb\" (UID: \"04eac770-8ff7-453b-a1da-b028636b909c\") " pod="openshift-image-registry/image-registry-697d97f7c8-s7psb" Jan 31 03:49:14 crc kubenswrapper[4827]: E0131 03:49:14.753349 4827 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-31 03:49:15.253341528 +0000 UTC m=+147.940421977 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-s7psb" (UID: "04eac770-8ff7-453b-a1da-b028636b909c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 03:49:14 crc kubenswrapper[4827]: I0131 03:49:14.854362 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 31 03:49:14 crc kubenswrapper[4827]: E0131 03:49:14.854536 4827 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-31 03:49:15.354512502 +0000 UTC m=+148.041592951 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 03:49:14 crc kubenswrapper[4827]: I0131 03:49:14.854939 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-s7psb\" (UID: \"04eac770-8ff7-453b-a1da-b028636b909c\") " pod="openshift-image-registry/image-registry-697d97f7c8-s7psb" Jan 31 03:49:14 crc kubenswrapper[4827]: E0131 03:49:14.855232 4827 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-31 03:49:15.355224506 +0000 UTC m=+148.042304955 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-s7psb" (UID: "04eac770-8ff7-453b-a1da-b028636b909c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 03:49:14 crc kubenswrapper[4827]: I0131 03:49:14.956562 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 31 03:49:14 crc kubenswrapper[4827]: E0131 03:49:14.956717 4827 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-31 03:49:15.456691299 +0000 UTC m=+148.143771749 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 03:49:14 crc kubenswrapper[4827]: I0131 03:49:14.957051 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-s7psb\" (UID: \"04eac770-8ff7-453b-a1da-b028636b909c\") " pod="openshift-image-registry/image-registry-697d97f7c8-s7psb" Jan 31 03:49:14 crc kubenswrapper[4827]: E0131 03:49:14.957371 4827 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-31 03:49:15.457357631 +0000 UTC m=+148.144438080 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-s7psb" (UID: "04eac770-8ff7-453b-a1da-b028636b909c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 03:49:15 crc kubenswrapper[4827]: I0131 03:49:15.057947 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 31 03:49:15 crc kubenswrapper[4827]: E0131 03:49:15.058141 4827 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-31 03:49:15.558115241 +0000 UTC m=+148.245195690 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 03:49:15 crc kubenswrapper[4827]: I0131 03:49:15.058185 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-s7psb\" (UID: \"04eac770-8ff7-453b-a1da-b028636b909c\") " pod="openshift-image-registry/image-registry-697d97f7c8-s7psb" Jan 31 03:49:15 crc kubenswrapper[4827]: E0131 03:49:15.058490 4827 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-31 03:49:15.558483523 +0000 UTC m=+148.245563972 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-s7psb" (UID: "04eac770-8ff7-453b-a1da-b028636b909c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 03:49:15 crc kubenswrapper[4827]: I0131 03:49:15.081351 4827 patch_prober.go:28] interesting pod/router-default-5444994796-dxfqc container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 31 03:49:15 crc kubenswrapper[4827]: [-]has-synced failed: reason withheld Jan 31 03:49:15 crc kubenswrapper[4827]: [+]process-running ok Jan 31 03:49:15 crc kubenswrapper[4827]: healthz check failed Jan 31 03:49:15 crc kubenswrapper[4827]: I0131 03:49:15.081415 4827 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-dxfqc" podUID="85b31f82-0a7a-466c-aa0f-bffb46f2b04c" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 31 03:49:15 crc kubenswrapper[4827]: I0131 03:49:15.159008 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 31 03:49:15 crc kubenswrapper[4827]: E0131 03:49:15.159470 4827 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2026-01-31 03:49:15.6594535 +0000 UTC m=+148.346533949 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 03:49:15 crc kubenswrapper[4827]: I0131 03:49:15.260749 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-s7psb\" (UID: \"04eac770-8ff7-453b-a1da-b028636b909c\") " pod="openshift-image-registry/image-registry-697d97f7c8-s7psb" Jan 31 03:49:15 crc kubenswrapper[4827]: E0131 03:49:15.261128 4827 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-31 03:49:15.761111691 +0000 UTC m=+148.448192140 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-s7psb" (UID: "04eac770-8ff7-453b-a1da-b028636b909c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 03:49:15 crc kubenswrapper[4827]: I0131 03:49:15.361455 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 31 03:49:15 crc kubenswrapper[4827]: E0131 03:49:15.361742 4827 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-31 03:49:15.861704315 +0000 UTC m=+148.548784764 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 03:49:15 crc kubenswrapper[4827]: I0131 03:49:15.361942 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-s7psb\" (UID: \"04eac770-8ff7-453b-a1da-b028636b909c\") " pod="openshift-image-registry/image-registry-697d97f7c8-s7psb" Jan 31 03:49:15 crc kubenswrapper[4827]: E0131 03:49:15.362609 4827 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-31 03:49:15.862597835 +0000 UTC m=+148.549678284 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-s7psb" (UID: "04eac770-8ff7-453b-a1da-b028636b909c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 03:49:15 crc kubenswrapper[4827]: I0131 03:49:15.463694 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 31 03:49:15 crc kubenswrapper[4827]: E0131 03:49:15.472235 4827 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-31 03:49:15.972191828 +0000 UTC m=+148.659272287 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 03:49:15 crc kubenswrapper[4827]: I0131 03:49:15.475654 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-mcs5z" event={"ID":"755d86cd-4d90-41eb-8c62-b130143346aa","Type":"ContainerStarted","Data":"1174f5972fcbe7d032ccc532699582f4f9f31dd5b635dd23ba3f693282ad0675"} Jan 31 03:49:15 crc kubenswrapper[4827]: I0131 03:49:15.485811 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-m7n8p" event={"ID":"775f917b-8a39-4e16-93b8-b285000c2758","Type":"ContainerStarted","Data":"188bcb6d6d320cd28d308aced2ddc62f517117f2d11968206e97ca02e8f0272b"} Jan 31 03:49:15 crc kubenswrapper[4827]: I0131 03:49:15.486543 4827 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-m7n8p" Jan 31 03:49:15 crc kubenswrapper[4827]: I0131 03:49:15.487309 4827 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-m7n8p container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.42:5443/healthz\": dial tcp 10.217.0.42:5443: connect: connection refused" start-of-body= Jan 31 03:49:15 crc kubenswrapper[4827]: I0131 03:49:15.487409 4827 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-m7n8p" podUID="775f917b-8a39-4e16-93b8-b285000c2758" containerName="packageserver" probeResult="failure" output="Get 
\"https://10.217.0.42:5443/healthz\": dial tcp 10.217.0.42:5443: connect: connection refused" Jan 31 03:49:15 crc kubenswrapper[4827]: I0131 03:49:15.488468 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-5p4mg" event={"ID":"68610615-718e-4fd3-a19b-9de01ef64a03","Type":"ContainerStarted","Data":"e1ac04f8acf87638775a7e05f7e5ffa5e2b3f7cbcbc8436e7937cd2962b665eb"} Jan 31 03:49:15 crc kubenswrapper[4827]: I0131 03:49:15.489657 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29497185-4tnfv" event={"ID":"85e64cbb-2da9-4b05-b074-fabf16790f49","Type":"ContainerStarted","Data":"1305adac46bdd213152462f696bfa7537301d512e0d4472f686e0ec77334c02d"} Jan 31 03:49:15 crc kubenswrapper[4827]: I0131 03:49:15.493043 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-cxgr4" event={"ID":"0591695e-7fc9-4d9c-a9a2-a5f44db74caa","Type":"ContainerStarted","Data":"717d15e52ff01a683c1cd87434128179ea109fcd56223cc4eea0e73a7d73e766"} Jan 31 03:49:15 crc kubenswrapper[4827]: I0131 03:49:15.513658 4827 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-m7n8p" podStartSLOduration=126.513641457 podStartE2EDuration="2m6.513641457s" podCreationTimestamp="2026-01-31 03:47:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 03:49:15.513283805 +0000 UTC m=+148.200364254" watchObservedRunningTime="2026-01-31 03:49:15.513641457 +0000 UTC m=+148.200721906" Jan 31 03:49:15 crc kubenswrapper[4827]: I0131 03:49:15.520343 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-xrt7q" 
event={"ID":"63af2706-4f52-4d40-941f-7575394bfefa","Type":"ContainerStarted","Data":"3ec3794e86fc991fb86838100feb1befc8c3c77985f56c8a9bd37991f91fc3e3"} Jan 31 03:49:15 crc kubenswrapper[4827]: I0131 03:49:15.538217 4827 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29497185-4tnfv" podStartSLOduration=127.538195273 podStartE2EDuration="2m7.538195273s" podCreationTimestamp="2026-01-31 03:47:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 03:49:15.537144328 +0000 UTC m=+148.224224777" watchObservedRunningTime="2026-01-31 03:49:15.538195273 +0000 UTC m=+148.225275722" Jan 31 03:49:15 crc kubenswrapper[4827]: I0131 03:49:15.554008 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-9c57cc56f-fdwxr" event={"ID":"480bfdcf-e687-4ecc-b22f-545d3d2a41ac","Type":"ContainerStarted","Data":"80d0ab7902b1e631681434823286bc99a450acc6002ee0f658edb29ac5d77482"} Jan 31 03:49:15 crc kubenswrapper[4827]: I0131 03:49:15.559652 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-lrw5m" event={"ID":"cc0facf8-c192-4df4-bb9b-68f123fd7b21","Type":"ContainerStarted","Data":"e293ec72b63f1393cfb46546ee0e94f82827ea624e218e5e1cdf8e73c91516d7"} Jan 31 03:49:15 crc kubenswrapper[4827]: I0131 03:49:15.560648 4827 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-lrw5m" Jan 31 03:49:15 crc kubenswrapper[4827]: I0131 03:49:15.562099 4827 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-lrw5m container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.26:8080/healthz\": dial tcp 10.217.0.26:8080: connect: connection refused" start-of-body= Jan 31 03:49:15 crc 
kubenswrapper[4827]: I0131 03:49:15.562143 4827 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-lrw5m" podUID="cc0facf8-c192-4df4-bb9b-68f123fd7b21" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.26:8080/healthz\": dial tcp 10.217.0.26:8080: connect: connection refused" Jan 31 03:49:15 crc kubenswrapper[4827]: I0131 03:49:15.562626 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-6qz7r" event={"ID":"49856c82-939f-4f25-8b1f-60029c62d43c","Type":"ContainerStarted","Data":"76b90fdef4b1221bd74be45ba320631fcc6607ebf57e59958b4e97c29b4b1164"} Jan 31 03:49:15 crc kubenswrapper[4827]: I0131 03:49:15.567357 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-rdh9d" event={"ID":"778d0bb3-9a32-4ff4-8d55-554b42e2a847","Type":"ContainerStarted","Data":"aa79310eeae55fb9018ebebc6eb936f5df25a60f25d8001e233078265dc268ec"} Jan 31 03:49:15 crc kubenswrapper[4827]: I0131 03:49:15.567655 4827 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-config-operator/openshift-config-operator-7777fb866f-dqndk" Jan 31 03:49:15 crc kubenswrapper[4827]: I0131 03:49:15.570568 4827 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-g8bbs" Jan 31 03:49:15 crc kubenswrapper[4827]: I0131 03:49:15.570638 4827 patch_prober.go:28] interesting pod/olm-operator-6b444d44fb-g4946 container/olm-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.18:8443/healthz\": dial tcp 10.217.0.18:8443: connect: connection refused" start-of-body= Jan 31 03:49:15 crc kubenswrapper[4827]: I0131 03:49:15.570689 4827 prober.go:107] "Probe failed" probeType="Readiness" 
pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-g4946" podUID="8fe7a4ad-d825-4988-8390-c04b5a1b114c" containerName="olm-operator" probeResult="failure" output="Get \"https://10.217.0.18:8443/healthz\": dial tcp 10.217.0.18:8443: connect: connection refused" Jan 31 03:49:15 crc kubenswrapper[4827]: I0131 03:49:15.571754 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-s7psb\" (UID: \"04eac770-8ff7-453b-a1da-b028636b909c\") " pod="openshift-image-registry/image-registry-697d97f7c8-s7psb" Jan 31 03:49:15 crc kubenswrapper[4827]: E0131 03:49:15.573014 4827 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-31 03:49:16.07299872 +0000 UTC m=+148.760079169 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-s7psb" (UID: "04eac770-8ff7-453b-a1da-b028636b909c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 03:49:15 crc kubenswrapper[4827]: I0131 03:49:15.574302 4827 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca/service-ca-9c57cc56f-fdwxr" podStartSLOduration=126.574290773 podStartE2EDuration="2m6.574290773s" podCreationTimestamp="2026-01-31 03:47:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 03:49:15.569147402 +0000 UTC m=+148.256227851" watchObservedRunningTime="2026-01-31 03:49:15.574290773 +0000 UTC m=+148.261371212" Jan 31 03:49:15 crc kubenswrapper[4827]: I0131 03:49:15.587100 4827 patch_prober.go:28] interesting pod/catalog-operator-68c6474976-g8bbs container/catalog-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.35:8443/healthz\": dial tcp 10.217.0.35:8443: connect: connection refused" start-of-body= Jan 31 03:49:15 crc kubenswrapper[4827]: I0131 03:49:15.587175 4827 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-g8bbs" podUID="beb2ca1a-f741-4cd2-8ae8-2a61972cd841" containerName="catalog-operator" probeResult="failure" output="Get \"https://10.217.0.35:8443/healthz\": dial tcp 10.217.0.35:8443: connect: connection refused" Jan 31 03:49:15 crc kubenswrapper[4827]: I0131 03:49:15.602161 4827 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-rdh9d" podStartSLOduration=126.602143519 podStartE2EDuration="2m6.602143519s" podCreationTimestamp="2026-01-31 03:47:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 03:49:15.58745121 +0000 UTC m=+148.274531659" watchObservedRunningTime="2026-01-31 03:49:15.602143519 +0000 UTC m=+148.289223968" Jan 31 03:49:15 crc kubenswrapper[4827]: I0131 03:49:15.605056 4827 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-config-operator/openshift-config-operator-7777fb866f-dqndk" podStartSLOduration=127.605050306 podStartE2EDuration="2m7.605050306s" podCreationTimestamp="2026-01-31 03:47:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 03:49:15.601494717 +0000 UTC m=+148.288575176" watchObservedRunningTime="2026-01-31 03:49:15.605050306 +0000 UTC m=+148.292130755" Jan 31 03:49:15 crc kubenswrapper[4827]: I0131 03:49:15.644504 4827 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-79b997595-lrw5m" podStartSLOduration=126.644483027 podStartE2EDuration="2m6.644483027s" podCreationTimestamp="2026-01-31 03:47:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 03:49:15.620669025 +0000 UTC m=+148.307749474" watchObservedRunningTime="2026-01-31 03:49:15.644483027 +0000 UTC m=+148.331563476" Jan 31 03:49:15 crc kubenswrapper[4827]: I0131 03:49:15.668138 4827 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-bbd4n" podStartSLOduration=126.668114543 podStartE2EDuration="2m6.668114543s" podCreationTimestamp="2026-01-31 
03:47:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 03:49:15.664835274 +0000 UTC m=+148.351915723" watchObservedRunningTime="2026-01-31 03:49:15.668114543 +0000 UTC m=+148.355194992" Jan 31 03:49:15 crc kubenswrapper[4827]: I0131 03:49:15.673252 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 31 03:49:15 crc kubenswrapper[4827]: E0131 03:49:15.674726 4827 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-31 03:49:16.174711362 +0000 UTC m=+148.861791811 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 03:49:15 crc kubenswrapper[4827]: I0131 03:49:15.695503 4827 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-g8bbs" podStartSLOduration=126.695485473 podStartE2EDuration="2m6.695485473s" podCreationTimestamp="2026-01-31 03:47:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 03:49:15.693460945 +0000 UTC m=+148.380541394" watchObservedRunningTime="2026-01-31 03:49:15.695485473 +0000 UTC m=+148.382565922" Jan 31 03:49:15 crc kubenswrapper[4827]: I0131 03:49:15.751980 4827 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-xxh4j" podStartSLOduration=126.7519458 podStartE2EDuration="2m6.7519458s" podCreationTimestamp="2026-01-31 03:47:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 03:49:15.71677387 +0000 UTC m=+148.403854319" watchObservedRunningTime="2026-01-31 03:49:15.7519458 +0000 UTC m=+148.439026249" Jan 31 03:49:15 crc kubenswrapper[4827]: I0131 03:49:15.752600 4827 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-6qz7r" podStartSLOduration=126.752594991 podStartE2EDuration="2m6.752594991s" podCreationTimestamp="2026-01-31 
03:47:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 03:49:15.751311469 +0000 UTC m=+148.438391918" watchObservedRunningTime="2026-01-31 03:49:15.752594991 +0000 UTC m=+148.439675440" Jan 31 03:49:15 crc kubenswrapper[4827]: I0131 03:49:15.776998 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-s7psb\" (UID: \"04eac770-8ff7-453b-a1da-b028636b909c\") " pod="openshift-image-registry/image-registry-697d97f7c8-s7psb" Jan 31 03:49:15 crc kubenswrapper[4827]: E0131 03:49:15.777384 4827 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-31 03:49:16.277371285 +0000 UTC m=+148.964451724 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-s7psb" (UID: "04eac770-8ff7-453b-a1da-b028636b909c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 03:49:15 crc kubenswrapper[4827]: I0131 03:49:15.778062 4827 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd-operator/etcd-operator-b45778765-lx7vs" podStartSLOduration=126.778042227 podStartE2EDuration="2m6.778042227s" podCreationTimestamp="2026-01-31 03:47:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 03:49:15.776266799 +0000 UTC m=+148.463347248" watchObservedRunningTime="2026-01-31 03:49:15.778042227 +0000 UTC m=+148.465122666" Jan 31 03:49:15 crc kubenswrapper[4827]: I0131 03:49:15.803947 4827 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-canary/ingress-canary-t79lc" podStartSLOduration=8.803930148 podStartE2EDuration="8.803930148s" podCreationTimestamp="2026-01-31 03:49:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 03:49:15.803261756 +0000 UTC m=+148.490342215" watchObservedRunningTime="2026-01-31 03:49:15.803930148 +0000 UTC m=+148.491010607" Jan 31 03:49:15 crc kubenswrapper[4827]: I0131 03:49:15.831811 4827 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-txptg" podStartSLOduration=126.83179384499999 podStartE2EDuration="2m6.831793845s" podCreationTimestamp="2026-01-31 03:47:09 +0000 UTC" 
firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 03:49:15.830485972 +0000 UTC m=+148.517566411" watchObservedRunningTime="2026-01-31 03:49:15.831793845 +0000 UTC m=+148.518874294" Jan 31 03:49:15 crc kubenswrapper[4827]: I0131 03:49:15.872034 4827 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca-operator/service-ca-operator-777779d784-k4wl6" podStartSLOduration=126.872018232 podStartE2EDuration="2m6.872018232s" podCreationTimestamp="2026-01-31 03:47:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 03:49:15.853767175 +0000 UTC m=+148.540847624" watchObservedRunningTime="2026-01-31 03:49:15.872018232 +0000 UTC m=+148.559098681" Jan 31 03:49:15 crc kubenswrapper[4827]: I0131 03:49:15.875936 4827 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-mrplz" podStartSLOduration=126.875921311 podStartE2EDuration="2m6.875921311s" podCreationTimestamp="2026-01-31 03:47:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 03:49:15.872124456 +0000 UTC m=+148.559204895" watchObservedRunningTime="2026-01-31 03:49:15.875921311 +0000 UTC m=+148.563001760" Jan 31 03:49:15 crc kubenswrapper[4827]: I0131 03:49:15.880652 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 31 03:49:15 crc kubenswrapper[4827]: E0131 03:49:15.880851 4827 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-31 03:49:16.380826035 +0000 UTC m=+149.067906484 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 03:49:15 crc kubenswrapper[4827]: I0131 03:49:15.881011 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-s7psb\" (UID: \"04eac770-8ff7-453b-a1da-b028636b909c\") " pod="openshift-image-registry/image-registry-697d97f7c8-s7psb" Jan 31 03:49:15 crc kubenswrapper[4827]: E0131 03:49:15.882337 4827 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-31 03:49:16.382320675 +0000 UTC m=+149.069401124 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-s7psb" (UID: "04eac770-8ff7-453b-a1da-b028636b909c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 03:49:15 crc kubenswrapper[4827]: I0131 03:49:15.985093 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 31 03:49:15 crc kubenswrapper[4827]: E0131 03:49:15.985267 4827 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-31 03:49:16.485242226 +0000 UTC m=+149.172322665 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 03:49:15 crc kubenswrapper[4827]: I0131 03:49:15.985673 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-s7psb\" (UID: \"04eac770-8ff7-453b-a1da-b028636b909c\") " pod="openshift-image-registry/image-registry-697d97f7c8-s7psb" Jan 31 03:49:15 crc kubenswrapper[4827]: I0131 03:49:15.985713 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 31 03:49:15 crc kubenswrapper[4827]: I0131 03:49:15.985737 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 31 03:49:15 crc kubenswrapper[4827]: I0131 03:49:15.985757 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: 
\"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 03:49:15 crc kubenswrapper[4827]: I0131 03:49:15.985817 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 03:49:15 crc kubenswrapper[4827]: I0131 03:49:15.986673 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 03:49:15 crc kubenswrapper[4827]: E0131 03:49:15.986944 4827 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-31 03:49:16.486932033 +0000 UTC m=+149.174012532 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-s7psb" (UID: "04eac770-8ff7-453b-a1da-b028636b909c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 03:49:15 crc kubenswrapper[4827]: I0131 03:49:15.991763 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 31 03:49:15 crc kubenswrapper[4827]: I0131 03:49:15.991933 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 31 03:49:15 crc kubenswrapper[4827]: I0131 03:49:15.992470 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 03:49:16 crc kubenswrapper[4827]: I0131 03:49:16.083205 4827 patch_prober.go:28] interesting pod/router-default-5444994796-dxfqc container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with 
statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 31 03:49:16 crc kubenswrapper[4827]: [-]has-synced failed: reason withheld Jan 31 03:49:16 crc kubenswrapper[4827]: [+]process-running ok Jan 31 03:49:16 crc kubenswrapper[4827]: healthz check failed Jan 31 03:49:16 crc kubenswrapper[4827]: I0131 03:49:16.083268 4827 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-dxfqc" podUID="85b31f82-0a7a-466c-aa0f-bffb46f2b04c" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 31 03:49:16 crc kubenswrapper[4827]: I0131 03:49:16.086715 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 31 03:49:16 crc kubenswrapper[4827]: E0131 03:49:16.086899 4827 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-31 03:49:16.586868996 +0000 UTC m=+149.273949445 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 03:49:16 crc kubenswrapper[4827]: I0131 03:49:16.087106 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-s7psb\" (UID: \"04eac770-8ff7-453b-a1da-b028636b909c\") " pod="openshift-image-registry/image-registry-697d97f7c8-s7psb" Jan 31 03:49:16 crc kubenswrapper[4827]: E0131 03:49:16.087414 4827 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-31 03:49:16.587402963 +0000 UTC m=+149.274483412 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-s7psb" (UID: "04eac770-8ff7-453b-a1da-b028636b909c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 03:49:16 crc kubenswrapper[4827]: I0131 03:49:16.128745 4827 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 31 03:49:16 crc kubenswrapper[4827]: I0131 03:49:16.138423 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 03:49:16 crc kubenswrapper[4827]: I0131 03:49:16.188238 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 31 03:49:16 crc kubenswrapper[4827]: E0131 03:49:16.188440 4827 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-31 03:49:16.688408672 +0000 UTC m=+149.375489131 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 03:49:16 crc kubenswrapper[4827]: I0131 03:49:16.188521 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-s7psb\" (UID: \"04eac770-8ff7-453b-a1da-b028636b909c\") " pod="openshift-image-registry/image-registry-697d97f7c8-s7psb" Jan 31 03:49:16 crc kubenswrapper[4827]: E0131 03:49:16.188795 4827 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-31 03:49:16.688784274 +0000 UTC m=+149.375864713 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-s7psb" (UID: "04eac770-8ff7-453b-a1da-b028636b909c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 03:49:16 crc kubenswrapper[4827]: I0131 03:49:16.232827 4827 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 31 03:49:16 crc kubenswrapper[4827]: I0131 03:49:16.292483 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 31 03:49:16 crc kubenswrapper[4827]: E0131 03:49:16.292807 4827 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-31 03:49:16.792783061 +0000 UTC m=+149.479863510 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 03:49:16 crc kubenswrapper[4827]: I0131 03:49:16.292961 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-s7psb\" (UID: \"04eac770-8ff7-453b-a1da-b028636b909c\") " pod="openshift-image-registry/image-registry-697d97f7c8-s7psb" Jan 31 03:49:16 crc kubenswrapper[4827]: E0131 03:49:16.293387 4827 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-31 03:49:16.793357591 +0000 UTC m=+149.480438050 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-s7psb" (UID: "04eac770-8ff7-453b-a1da-b028636b909c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 03:49:16 crc kubenswrapper[4827]: I0131 03:49:16.393509 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 31 03:49:16 crc kubenswrapper[4827]: E0131 03:49:16.393658 4827 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-31 03:49:16.893635205 +0000 UTC m=+149.580715654 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 03:49:16 crc kubenswrapper[4827]: I0131 03:49:16.393710 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-s7psb\" (UID: \"04eac770-8ff7-453b-a1da-b028636b909c\") " pod="openshift-image-registry/image-registry-697d97f7c8-s7psb" Jan 31 03:49:16 crc kubenswrapper[4827]: E0131 03:49:16.394010 4827 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-31 03:49:16.893998907 +0000 UTC m=+149.581079356 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-s7psb" (UID: "04eac770-8ff7-453b-a1da-b028636b909c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 03:49:16 crc kubenswrapper[4827]: I0131 03:49:16.494649 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 31 03:49:16 crc kubenswrapper[4827]: E0131 03:49:16.495409 4827 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-31 03:49:16.995393778 +0000 UTC m=+149.682474227 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 03:49:16 crc kubenswrapper[4827]: I0131 03:49:16.588434 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-4rp2t" event={"ID":"a1eac826-76c5-435a-abd1-16f7fe35350f","Type":"ContainerStarted","Data":"23cd1f9f5daef48d62e1494245b895ebde5b448775a93687ae7fda98bac02d64"} Jan 31 03:49:16 crc kubenswrapper[4827]: I0131 03:49:16.589776 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-tbsqf" event={"ID":"6cf6cb35-7f31-44f5-ba34-be81eb109101","Type":"ContainerStarted","Data":"11168604d3e5940b7978968f942bcc93a584566940062b15dfb013a1501d06b4"} Jan 31 03:49:16 crc kubenswrapper[4827]: I0131 03:49:16.591125 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-xrt7q" event={"ID":"63af2706-4f52-4d40-941f-7575394bfefa","Type":"ContainerStarted","Data":"60f853b1481cebc9fbc85ecbe6b2c9d6b1909f60410a690e1b070f29e8f98f59"} Jan 31 03:49:16 crc kubenswrapper[4827]: I0131 03:49:16.596120 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-s7psb\" (UID: \"04eac770-8ff7-453b-a1da-b028636b909c\") " pod="openshift-image-registry/image-registry-697d97f7c8-s7psb" Jan 31 03:49:16 crc kubenswrapper[4827]: E0131 
03:49:16.596423 4827 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-31 03:49:17.096405747 +0000 UTC m=+149.783486196 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-s7psb" (UID: "04eac770-8ff7-453b-a1da-b028636b909c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 03:49:16 crc kubenswrapper[4827]: I0131 03:49:16.598745 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-sw4rn" event={"ID":"0e71f06b-5ae9-4606-922f-eedf9f8eefa6","Type":"ContainerStarted","Data":"fdb48726989516e7c543f85417092b46d2db9de7761b3197d63074e9b251632f"} Jan 31 03:49:16 crc kubenswrapper[4827]: I0131 03:49:16.601219 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-vtzj8" event={"ID":"e9d269d7-f93e-4959-8e00-b541a0f9d9c2","Type":"ContainerStarted","Data":"aad5296c8286427b480ae02891f284505596e50d0b072f85bb658c1e36dec25d"} Jan 31 03:49:16 crc kubenswrapper[4827]: I0131 03:49:16.602817 4827 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-lrw5m container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.26:8080/healthz\": dial tcp 10.217.0.26:8080: connect: connection refused" start-of-body= Jan 31 03:49:16 crc kubenswrapper[4827]: I0131 03:49:16.602894 4827 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-lrw5m" 
podUID="cc0facf8-c192-4df4-bb9b-68f123fd7b21" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.26:8080/healthz\": dial tcp 10.217.0.26:8080: connect: connection refused" Jan 31 03:49:16 crc kubenswrapper[4827]: W0131 03:49:16.603305 4827 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3b6479f0_333b_4a96_9adf_2099afdc2447.slice/crio-eb0d22538a6f0b70f1ae92b2e6becad0362396105b5929dd7dfb39e1ac9d2f4c WatchSource:0}: Error finding container eb0d22538a6f0b70f1ae92b2e6becad0362396105b5929dd7dfb39e1ac9d2f4c: Status 404 returned error can't find the container with id eb0d22538a6f0b70f1ae92b2e6becad0362396105b5929dd7dfb39e1ac9d2f4c Jan 31 03:49:16 crc kubenswrapper[4827]: I0131 03:49:16.606135 4827 patch_prober.go:28] interesting pod/catalog-operator-68c6474976-g8bbs container/catalog-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.35:8443/healthz\": dial tcp 10.217.0.35:8443: connect: connection refused" start-of-body= Jan 31 03:49:16 crc kubenswrapper[4827]: I0131 03:49:16.606175 4827 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-g8bbs" podUID="beb2ca1a-f741-4cd2-8ae8-2a61972cd841" containerName="catalog-operator" probeResult="failure" output="Get \"https://10.217.0.35:8443/healthz\": dial tcp 10.217.0.35:8443: connect: connection refused" Jan 31 03:49:16 crc kubenswrapper[4827]: I0131 03:49:16.606225 4827 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-m7n8p container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.42:5443/healthz\": dial tcp 10.217.0.42:5443: connect: connection refused" start-of-body= Jan 31 03:49:16 crc kubenswrapper[4827]: I0131 03:49:16.606250 4827 prober.go:107] "Probe failed" probeType="Readiness" 
pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-m7n8p" podUID="775f917b-8a39-4e16-93b8-b285000c2758" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.42:5443/healthz\": dial tcp 10.217.0.42:5443: connect: connection refused" Jan 31 03:49:16 crc kubenswrapper[4827]: I0131 03:49:16.613718 4827 patch_prober.go:28] interesting pod/openshift-config-operator-7777fb866f-dqndk container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.217.0.21:8443/healthz\": dial tcp 10.217.0.21:8443: connect: connection refused" start-of-body= Jan 31 03:49:16 crc kubenswrapper[4827]: I0131 03:49:16.613769 4827 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-7777fb866f-dqndk" podUID="4a32abae-914d-4102-9e89-817922ff06ca" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.217.0.21:8443/healthz\": dial tcp 10.217.0.21:8443: connect: connection refused" Jan 31 03:49:16 crc kubenswrapper[4827]: I0131 03:49:16.624405 4827 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns-operator/dns-operator-744455d44c-mcs5z" podStartSLOduration=127.624388527 podStartE2EDuration="2m7.624388527s" podCreationTimestamp="2026-01-31 03:47:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 03:49:16.622451833 +0000 UTC m=+149.309532282" watchObservedRunningTime="2026-01-31 03:49:16.624388527 +0000 UTC m=+149.311468976" Jan 31 03:49:16 crc kubenswrapper[4827]: I0131 03:49:16.641650 4827 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-cxgr4" podStartSLOduration=127.641633301 podStartE2EDuration="2m7.641633301s" podCreationTimestamp="2026-01-31 03:47:09 +0000 UTC" 
firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 03:49:16.639775578 +0000 UTC m=+149.326856027" watchObservedRunningTime="2026-01-31 03:49:16.641633301 +0000 UTC m=+149.328713750" Jan 31 03:49:16 crc kubenswrapper[4827]: I0131 03:49:16.664000 4827 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-5p4mg" podStartSLOduration=128.663983424 podStartE2EDuration="2m8.663983424s" podCreationTimestamp="2026-01-31 03:47:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 03:49:16.660410115 +0000 UTC m=+149.347490564" watchObservedRunningTime="2026-01-31 03:49:16.663983424 +0000 UTC m=+149.351063873" Jan 31 03:49:16 crc kubenswrapper[4827]: I0131 03:49:16.699460 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 31 03:49:16 crc kubenswrapper[4827]: E0131 03:49:16.700852 4827 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-31 03:49:17.200837709 +0000 UTC m=+149.887918158 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 03:49:16 crc kubenswrapper[4827]: I0131 03:49:16.800647 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-s7psb\" (UID: \"04eac770-8ff7-453b-a1da-b028636b909c\") " pod="openshift-image-registry/image-registry-697d97f7c8-s7psb" Jan 31 03:49:16 crc kubenswrapper[4827]: E0131 03:49:16.801064 4827 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-31 03:49:17.301052531 +0000 UTC m=+149.988132980 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-s7psb" (UID: "04eac770-8ff7-453b-a1da-b028636b909c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 03:49:16 crc kubenswrapper[4827]: I0131 03:49:16.902015 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 31 03:49:16 crc kubenswrapper[4827]: E0131 03:49:16.902727 4827 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-31 03:49:17.402712191 +0000 UTC m=+150.089792640 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 03:49:17 crc kubenswrapper[4827]: I0131 03:49:17.003540 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-s7psb\" (UID: \"04eac770-8ff7-453b-a1da-b028636b909c\") " pod="openshift-image-registry/image-registry-697d97f7c8-s7psb" Jan 31 03:49:17 crc kubenswrapper[4827]: E0131 03:49:17.003849 4827 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-31 03:49:17.503834682 +0000 UTC m=+150.190915131 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-s7psb" (UID: "04eac770-8ff7-453b-a1da-b028636b909c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 03:49:17 crc kubenswrapper[4827]: I0131 03:49:17.080962 4827 patch_prober.go:28] interesting pod/router-default-5444994796-dxfqc container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 31 03:49:17 crc kubenswrapper[4827]: [-]has-synced failed: reason withheld Jan 31 03:49:17 crc kubenswrapper[4827]: [+]process-running ok Jan 31 03:49:17 crc kubenswrapper[4827]: healthz check failed Jan 31 03:49:17 crc kubenswrapper[4827]: I0131 03:49:17.081020 4827 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-dxfqc" podUID="85b31f82-0a7a-466c-aa0f-bffb46f2b04c" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 31 03:49:17 crc kubenswrapper[4827]: I0131 03:49:17.104640 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 31 03:49:17 crc kubenswrapper[4827]: E0131 03:49:17.104976 4827 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2026-01-31 03:49:17.604962155 +0000 UTC m=+150.292042594 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 03:49:17 crc kubenswrapper[4827]: I0131 03:49:17.205631 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-s7psb\" (UID: \"04eac770-8ff7-453b-a1da-b028636b909c\") " pod="openshift-image-registry/image-registry-697d97f7c8-s7psb" Jan 31 03:49:17 crc kubenswrapper[4827]: E0131 03:49:17.205964 4827 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-31 03:49:17.705953063 +0000 UTC m=+150.393033502 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-s7psb" (UID: "04eac770-8ff7-453b-a1da-b028636b909c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 03:49:17 crc kubenswrapper[4827]: I0131 03:49:17.307030 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 31 03:49:17 crc kubenswrapper[4827]: E0131 03:49:17.307321 4827 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-31 03:49:17.807306103 +0000 UTC m=+150.494386552 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 03:49:17 crc kubenswrapper[4827]: I0131 03:49:17.371460 4827 patch_prober.go:28] interesting pod/machine-config-daemon-jxh94 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 31 03:49:17 crc kubenswrapper[4827]: I0131 03:49:17.371532 4827 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-jxh94" podUID="e63dbb73-e1a2-4796-83c5-2a88e55566b5" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 31 03:49:17 crc kubenswrapper[4827]: I0131 03:49:17.408715 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-s7psb\" (UID: \"04eac770-8ff7-453b-a1da-b028636b909c\") " pod="openshift-image-registry/image-registry-697d97f7c8-s7psb" Jan 31 03:49:17 crc kubenswrapper[4827]: E0131 03:49:17.409169 4827 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. 
No retries permitted until 2026-01-31 03:49:17.909151649 +0000 UTC m=+150.596232088 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-s7psb" (UID: "04eac770-8ff7-453b-a1da-b028636b909c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 03:49:17 crc kubenswrapper[4827]: I0131 03:49:17.510717 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 31 03:49:17 crc kubenswrapper[4827]: E0131 03:49:17.510872 4827 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-31 03:49:18.0108453 +0000 UTC m=+150.697925749 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 03:49:17 crc kubenswrapper[4827]: I0131 03:49:17.511085 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-s7psb\" (UID: \"04eac770-8ff7-453b-a1da-b028636b909c\") " pod="openshift-image-registry/image-registry-697d97f7c8-s7psb" Jan 31 03:49:17 crc kubenswrapper[4827]: E0131 03:49:17.511389 4827 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-31 03:49:18.011381908 +0000 UTC m=+150.698462347 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-s7psb" (UID: "04eac770-8ff7-453b-a1da-b028636b909c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 03:49:17 crc kubenswrapper[4827]: I0131 03:49:17.585199 4827 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-apiserver/apiserver-76f77b778f-x2j9j" Jan 31 03:49:17 crc kubenswrapper[4827]: I0131 03:49:17.591237 4827 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-apiserver/apiserver-76f77b778f-x2j9j" Jan 31 03:49:17 crc kubenswrapper[4827]: I0131 03:49:17.606596 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" event={"ID":"3b6479f0-333b-4a96-9adf-2099afdc2447","Type":"ContainerStarted","Data":"e94ab0f39e84ff3cddf9524452ce953ea8b3a756d7038a2ccc70a8ee5afbafa6"} Jan 31 03:49:17 crc kubenswrapper[4827]: I0131 03:49:17.606648 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" event={"ID":"3b6479f0-333b-4a96-9adf-2099afdc2447","Type":"ContainerStarted","Data":"eb0d22538a6f0b70f1ae92b2e6becad0362396105b5929dd7dfb39e1ac9d2f4c"} Jan 31 03:49:17 crc kubenswrapper[4827]: I0131 03:49:17.608723 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-zhrf7" event={"ID":"05b0b4af-2e92-4831-a30e-cb951f41155c","Type":"ContainerStarted","Data":"385a31b7bcca713fca910cd154ae22757f99f06ffc9393beedc197d00d8af17b"} Jan 31 03:49:17 crc kubenswrapper[4827]: I0131 03:49:17.608840 4827 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-dns/dns-default-zhrf7" Jan 31 03:49:17 crc 
kubenswrapper[4827]: I0131 03:49:17.610948 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-7h44q" event={"ID":"9cf25edb-a1c1-4077-97dc-8f87363a8d4e","Type":"ContainerStarted","Data":"7a06346fdcdee9529ede8c28a9ccdac62291d691822636521fb17746ca92afbf"} Jan 31 03:49:17 crc kubenswrapper[4827]: I0131 03:49:17.612174 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 31 03:49:17 crc kubenswrapper[4827]: E0131 03:49:17.612338 4827 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-31 03:49:18.112314524 +0000 UTC m=+150.799394983 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 03:49:17 crc kubenswrapper[4827]: I0131 03:49:17.612482 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-s7psb\" (UID: \"04eac770-8ff7-453b-a1da-b028636b909c\") " pod="openshift-image-registry/image-registry-697d97f7c8-s7psb" Jan 31 03:49:17 crc kubenswrapper[4827]: E0131 03:49:17.612751 4827 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-31 03:49:18.112741778 +0000 UTC m=+150.799822227 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-s7psb" (UID: "04eac770-8ff7-453b-a1da-b028636b909c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 03:49:17 crc kubenswrapper[4827]: I0131 03:49:17.618687 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-hrjxm" event={"ID":"bc4e2378-18bc-4624-acb0-a5010db62008","Type":"ContainerStarted","Data":"40b94af75442c82be7bc6c94f28d543aff1d5b2d7b4029865cf6068aa08f8091"} Jan 31 03:49:17 crc kubenswrapper[4827]: I0131 03:49:17.632731 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" event={"ID":"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8","Type":"ContainerStarted","Data":"3e126108ad32a5ff2e5ebed523b10707cd087cfcdd855bde392640308ed3501d"} Jan 31 03:49:17 crc kubenswrapper[4827]: I0131 03:49:17.632774 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" event={"ID":"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8","Type":"ContainerStarted","Data":"88d448c163a409eefdb034cf6408bd9bf0f0f22b682d55609f13664f8b166b6b"} Jan 31 03:49:17 crc kubenswrapper[4827]: I0131 03:49:17.637710 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" event={"ID":"9d751cbb-f2e2-430d-9754-c882a5e924a5","Type":"ContainerStarted","Data":"2990ecdfbdea1929e59a0a2a7756c2feb492c6ff5e25c5a4fe3ee99d9ab4b4a3"} Jan 31 03:49:17 crc kubenswrapper[4827]: I0131 03:49:17.640354 4827 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/dns-default-zhrf7" 
podStartSLOduration=10.640340446 podStartE2EDuration="10.640340446s" podCreationTimestamp="2026-01-31 03:49:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 03:49:17.637316435 +0000 UTC m=+150.324396884" watchObservedRunningTime="2026-01-31 03:49:17.640340446 +0000 UTC m=+150.327420895" Jan 31 03:49:17 crc kubenswrapper[4827]: I0131 03:49:17.641011 4827 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-lrw5m container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.26:8080/healthz\": dial tcp 10.217.0.26:8080: connect: connection refused" start-of-body= Jan 31 03:49:17 crc kubenswrapper[4827]: I0131 03:49:17.641067 4827 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-lrw5m" podUID="cc0facf8-c192-4df4-bb9b-68f123fd7b21" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.26:8080/healthz\": dial tcp 10.217.0.26:8080: connect: connection refused" Jan 31 03:49:17 crc kubenswrapper[4827]: I0131 03:49:17.641011 4827 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-m7n8p container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.42:5443/healthz\": dial tcp 10.217.0.42:5443: connect: connection refused" start-of-body= Jan 31 03:49:17 crc kubenswrapper[4827]: I0131 03:49:17.641361 4827 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-m7n8p" podUID="775f917b-8a39-4e16-93b8-b285000c2758" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.42:5443/healthz\": dial tcp 10.217.0.42:5443: connect: connection refused" Jan 31 03:49:17 crc kubenswrapper[4827]: I0131 03:49:17.713680 4827 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 31 03:49:17 crc kubenswrapper[4827]: E0131 03:49:17.715000 4827 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-31 03:49:18.214986097 +0000 UTC m=+150.902066536 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 03:49:17 crc kubenswrapper[4827]: I0131 03:49:17.727040 4827 patch_prober.go:28] interesting pod/downloads-7954f5f757-sgzkm container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.9:8080/\": dial tcp 10.217.0.9:8080: connect: connection refused" start-of-body= Jan 31 03:49:17 crc kubenswrapper[4827]: I0131 03:49:17.727363 4827 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-sgzkm" podUID="6e659a59-19ab-4c91-98ec-db3042ac1d4b" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.9:8080/\": dial tcp 10.217.0.9:8080: connect: connection refused" Jan 31 03:49:17 crc kubenswrapper[4827]: I0131 03:49:17.728306 4827 patch_prober.go:28] interesting pod/downloads-7954f5f757-sgzkm container/download-server 
namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.9:8080/\": dial tcp 10.217.0.9:8080: connect: connection refused" start-of-body= Jan 31 03:49:17 crc kubenswrapper[4827]: I0131 03:49:17.728361 4827 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-sgzkm" podUID="6e659a59-19ab-4c91-98ec-db3042ac1d4b" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.9:8080/\": dial tcp 10.217.0.9:8080: connect: connection refused" Jan 31 03:49:17 crc kubenswrapper[4827]: I0131 03:49:17.817731 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-s7psb\" (UID: \"04eac770-8ff7-453b-a1da-b028636b909c\") " pod="openshift-image-registry/image-registry-697d97f7c8-s7psb" Jan 31 03:49:17 crc kubenswrapper[4827]: E0131 03:49:17.818123 4827 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-31 03:49:18.318110677 +0000 UTC m=+151.005191126 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-s7psb" (UID: "04eac770-8ff7-453b-a1da-b028636b909c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 03:49:17 crc kubenswrapper[4827]: I0131 03:49:17.918506 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 31 03:49:17 crc kubenswrapper[4827]: E0131 03:49:17.918914 4827 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-31 03:49:18.418898998 +0000 UTC m=+151.105979447 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 03:49:17 crc kubenswrapper[4827]: I0131 03:49:17.928915 4827 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-hrjxm" podStartSLOduration=128.92889896 podStartE2EDuration="2m8.92889896s" podCreationTimestamp="2026-01-31 03:47:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 03:49:17.802341432 +0000 UTC m=+150.489421881" watchObservedRunningTime="2026-01-31 03:49:17.92889896 +0000 UTC m=+150.615979409" Jan 31 03:49:17 crc kubenswrapper[4827]: I0131 03:49:17.929413 4827 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-admission-controller-857f4d67dd-sw4rn" podStartSLOduration=128.929409407 podStartE2EDuration="2m8.929409407s" podCreationTimestamp="2026-01-31 03:47:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 03:49:17.917124808 +0000 UTC m=+150.604205257" watchObservedRunningTime="2026-01-31 03:49:17.929409407 +0000 UTC m=+150.616489856" Jan 31 03:49:18 crc kubenswrapper[4827]: I0131 03:49:18.003455 4827 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-4rp2t" podStartSLOduration=129.003441098 podStartE2EDuration="2m9.003441098s" podCreationTimestamp="2026-01-31 03:47:09 +0000 UTC" firstStartedPulling="0001-01-01 
00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 03:49:17.99839368 +0000 UTC m=+150.685474129" watchObservedRunningTime="2026-01-31 03:49:18.003441098 +0000 UTC m=+150.690521547" Jan 31 03:49:18 crc kubenswrapper[4827]: I0131 03:49:18.020223 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-s7psb\" (UID: \"04eac770-8ff7-453b-a1da-b028636b909c\") " pod="openshift-image-registry/image-registry-697d97f7c8-s7psb" Jan 31 03:49:18 crc kubenswrapper[4827]: E0131 03:49:18.020650 4827 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-31 03:49:18.520629629 +0000 UTC m=+151.207710078 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-s7psb" (UID: "04eac770-8ff7-453b-a1da-b028636b909c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 03:49:18 crc kubenswrapper[4827]: I0131 03:49:18.055496 4827 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-xrt7q" podStartSLOduration=129.055478828 podStartE2EDuration="2m9.055478828s" podCreationTimestamp="2026-01-31 03:47:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 03:49:18.050685468 +0000 UTC m=+150.737765927" watchObservedRunningTime="2026-01-31 03:49:18.055478828 +0000 UTC m=+150.742559277" Jan 31 03:49:18 crc kubenswrapper[4827]: I0131 03:49:18.080389 4827 patch_prober.go:28] interesting pod/router-default-5444994796-dxfqc container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 31 03:49:18 crc kubenswrapper[4827]: [-]has-synced failed: reason withheld Jan 31 03:49:18 crc kubenswrapper[4827]: [+]process-running ok Jan 31 03:49:18 crc kubenswrapper[4827]: healthz check failed Jan 31 03:49:18 crc kubenswrapper[4827]: I0131 03:49:18.080464 4827 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-dxfqc" podUID="85b31f82-0a7a-466c-aa0f-bffb46f2b04c" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 31 03:49:18 crc kubenswrapper[4827]: I0131 03:49:18.084743 4827 pod_startup_latency_tracker.go:104] "Observed 
pod startup duration" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-vtzj8" podStartSLOduration=129.08472975 podStartE2EDuration="2m9.08472975s" podCreationTimestamp="2026-01-31 03:47:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 03:49:18.08349097 +0000 UTC m=+150.770571419" watchObservedRunningTime="2026-01-31 03:49:18.08472975 +0000 UTC m=+150.771810199" Jan 31 03:49:18 crc kubenswrapper[4827]: I0131 03:49:18.121449 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 31 03:49:18 crc kubenswrapper[4827]: E0131 03:49:18.121649 4827 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-31 03:49:18.621621627 +0000 UTC m=+151.308702076 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 03:49:18 crc kubenswrapper[4827]: I0131 03:49:18.121915 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-s7psb\" (UID: \"04eac770-8ff7-453b-a1da-b028636b909c\") " pod="openshift-image-registry/image-registry-697d97f7c8-s7psb" Jan 31 03:49:18 crc kubenswrapper[4827]: E0131 03:49:18.122238 4827 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-31 03:49:18.622230578 +0000 UTC m=+151.309311027 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-s7psb" (UID: "04eac770-8ff7-453b-a1da-b028636b909c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 03:49:18 crc kubenswrapper[4827]: I0131 03:49:18.131180 4827 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-tbsqf" podStartSLOduration=129.131158415 podStartE2EDuration="2m9.131158415s" podCreationTimestamp="2026-01-31 03:47:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 03:49:18.129960464 +0000 UTC m=+150.817040913" watchObservedRunningTime="2026-01-31 03:49:18.131158415 +0000 UTC m=+150.818238884" Jan 31 03:49:18 crc kubenswrapper[4827]: I0131 03:49:18.223273 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 31 03:49:18 crc kubenswrapper[4827]: E0131 03:49:18.223558 4827 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-31 03:49:18.723518195 +0000 UTC m=+151.410598644 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 03:49:18 crc kubenswrapper[4827]: I0131 03:49:18.223743 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-s7psb\" (UID: \"04eac770-8ff7-453b-a1da-b028636b909c\") " pod="openshift-image-registry/image-registry-697d97f7c8-s7psb" Jan 31 03:49:18 crc kubenswrapper[4827]: E0131 03:49:18.224065 4827 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-31 03:49:18.724051173 +0000 UTC m=+151.411131622 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-s7psb" (UID: "04eac770-8ff7-453b-a1da-b028636b909c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 03:49:18 crc kubenswrapper[4827]: I0131 03:49:18.324757 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 31 03:49:18 crc kubenswrapper[4827]: E0131 03:49:18.324990 4827 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-31 03:49:18.824950297 +0000 UTC m=+151.512030746 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 03:49:18 crc kubenswrapper[4827]: I0131 03:49:18.325388 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-s7psb\" (UID: \"04eac770-8ff7-453b-a1da-b028636b909c\") " pod="openshift-image-registry/image-registry-697d97f7c8-s7psb" Jan 31 03:49:18 crc kubenswrapper[4827]: E0131 03:49:18.325812 4827 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-31 03:49:18.825802986 +0000 UTC m=+151.512883435 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-s7psb" (UID: "04eac770-8ff7-453b-a1da-b028636b909c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 03:49:18 crc kubenswrapper[4827]: I0131 03:49:18.426174 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 31 03:49:18 crc kubenswrapper[4827]: E0131 03:49:18.426371 4827 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-31 03:49:18.926345548 +0000 UTC m=+151.613425997 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 03:49:18 crc kubenswrapper[4827]: I0131 03:49:18.426410 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-s7psb\" (UID: \"04eac770-8ff7-453b-a1da-b028636b909c\") " pod="openshift-image-registry/image-registry-697d97f7c8-s7psb" Jan 31 03:49:18 crc kubenswrapper[4827]: E0131 03:49:18.426780 4827 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-31 03:49:18.926766103 +0000 UTC m=+151.613846542 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-s7psb" (UID: "04eac770-8ff7-453b-a1da-b028636b909c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 03:49:18 crc kubenswrapper[4827]: I0131 03:49:18.527825 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 31 03:49:18 crc kubenswrapper[4827]: E0131 03:49:18.527970 4827 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-31 03:49:19.027937036 +0000 UTC m=+151.715017485 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 03:49:18 crc kubenswrapper[4827]: I0131 03:49:18.528395 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-s7psb\" (UID: \"04eac770-8ff7-453b-a1da-b028636b909c\") " pod="openshift-image-registry/image-registry-697d97f7c8-s7psb" Jan 31 03:49:18 crc kubenswrapper[4827]: E0131 03:49:18.528706 4827 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-31 03:49:19.028696342 +0000 UTC m=+151.715776791 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-s7psb" (UID: "04eac770-8ff7-453b-a1da-b028636b909c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 03:49:18 crc kubenswrapper[4827]: I0131 03:49:18.629322 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 31 03:49:18 crc kubenswrapper[4827]: E0131 03:49:18.629641 4827 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-31 03:49:19.129600767 +0000 UTC m=+151.816681216 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 03:49:18 crc kubenswrapper[4827]: I0131 03:49:18.629919 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-s7psb\" (UID: \"04eac770-8ff7-453b-a1da-b028636b909c\") " pod="openshift-image-registry/image-registry-697d97f7c8-s7psb" Jan 31 03:49:18 crc kubenswrapper[4827]: E0131 03:49:18.630281 4827 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-31 03:49:19.130273388 +0000 UTC m=+151.817353837 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-s7psb" (UID: "04eac770-8ff7-453b-a1da-b028636b909c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 03:49:18 crc kubenswrapper[4827]: I0131 03:49:18.643912 4827 generic.go:334] "Generic (PLEG): container finished" podID="85e64cbb-2da9-4b05-b074-fabf16790f49" containerID="1305adac46bdd213152462f696bfa7537301d512e0d4472f686e0ec77334c02d" exitCode=0 Jan 31 03:49:18 crc kubenswrapper[4827]: I0131 03:49:18.643980 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29497185-4tnfv" event={"ID":"85e64cbb-2da9-4b05-b074-fabf16790f49","Type":"ContainerDied","Data":"1305adac46bdd213152462f696bfa7537301d512e0d4472f686e0ec77334c02d"} Jan 31 03:49:18 crc kubenswrapper[4827]: I0131 03:49:18.645251 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" event={"ID":"9d751cbb-f2e2-430d-9754-c882a5e924a5","Type":"ContainerStarted","Data":"471102d96f48444044b22f911b476c3f82b69cac42ff6454655445840e19aabf"} Jan 31 03:49:18 crc kubenswrapper[4827]: I0131 03:49:18.731136 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 31 03:49:18 crc kubenswrapper[4827]: E0131 03:49:18.731378 4827 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-31 03:49:19.231337609 +0000 UTC m=+151.918418058 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 03:49:18 crc kubenswrapper[4827]: I0131 03:49:18.732043 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-s7psb\" (UID: \"04eac770-8ff7-453b-a1da-b028636b909c\") " pod="openshift-image-registry/image-registry-697d97f7c8-s7psb" Jan 31 03:49:18 crc kubenswrapper[4827]: E0131 03:49:18.732536 4827 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-31 03:49:19.232518678 +0000 UTC m=+151.919599127 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-s7psb" (UID: "04eac770-8ff7-453b-a1da-b028636b909c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 03:49:18 crc kubenswrapper[4827]: I0131 03:49:18.832909 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 31 03:49:18 crc kubenswrapper[4827]: E0131 03:49:18.833097 4827 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-31 03:49:19.333067781 +0000 UTC m=+152.020148230 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 03:49:18 crc kubenswrapper[4827]: I0131 03:49:18.833169 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-s7psb\" (UID: \"04eac770-8ff7-453b-a1da-b028636b909c\") " pod="openshift-image-registry/image-registry-697d97f7c8-s7psb" Jan 31 03:49:18 crc kubenswrapper[4827]: E0131 03:49:18.833443 4827 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-31 03:49:19.333431154 +0000 UTC m=+152.020511603 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-s7psb" (UID: "04eac770-8ff7-453b-a1da-b028636b909c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 03:49:18 crc kubenswrapper[4827]: I0131 03:49:18.934615 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 31 03:49:18 crc kubenswrapper[4827]: E0131 03:49:18.934780 4827 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-31 03:49:19.434755243 +0000 UTC m=+152.121835692 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 03:49:18 crc kubenswrapper[4827]: I0131 03:49:18.935328 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-s7psb\" (UID: \"04eac770-8ff7-453b-a1da-b028636b909c\") " pod="openshift-image-registry/image-registry-697d97f7c8-s7psb" Jan 31 03:49:18 crc kubenswrapper[4827]: E0131 03:49:18.935663 4827 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-31 03:49:19.435653312 +0000 UTC m=+152.122733761 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-s7psb" (UID: "04eac770-8ff7-453b-a1da-b028636b909c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 03:49:19 crc kubenswrapper[4827]: I0131 03:49:19.036084 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 31 03:49:19 crc kubenswrapper[4827]: E0131 03:49:19.036219 4827 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-31 03:49:19.536192605 +0000 UTC m=+152.223273054 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 03:49:19 crc kubenswrapper[4827]: I0131 03:49:19.036410 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-s7psb\" (UID: \"04eac770-8ff7-453b-a1da-b028636b909c\") " pod="openshift-image-registry/image-registry-697d97f7c8-s7psb" Jan 31 03:49:19 crc kubenswrapper[4827]: E0131 03:49:19.036692 4827 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-31 03:49:19.536678951 +0000 UTC m=+152.223759390 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-s7psb" (UID: "04eac770-8ff7-453b-a1da-b028636b909c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 03:49:19 crc kubenswrapper[4827]: I0131 03:49:19.082450 4827 patch_prober.go:28] interesting pod/router-default-5444994796-dxfqc container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 31 03:49:19 crc kubenswrapper[4827]: [-]has-synced failed: reason withheld Jan 31 03:49:19 crc kubenswrapper[4827]: [+]process-running ok Jan 31 03:49:19 crc kubenswrapper[4827]: healthz check failed Jan 31 03:49:19 crc kubenswrapper[4827]: I0131 03:49:19.082540 4827 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-dxfqc" podUID="85b31f82-0a7a-466c-aa0f-bffb46f2b04c" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 31 03:49:19 crc kubenswrapper[4827]: I0131 03:49:19.111918 4827 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-g76wz"] Jan 31 03:49:19 crc kubenswrapper[4827]: I0131 03:49:19.112914 4827 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-g76wz" Jan 31 03:49:19 crc kubenswrapper[4827]: I0131 03:49:19.118028 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Jan 31 03:49:19 crc kubenswrapper[4827]: I0131 03:49:19.128627 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-g76wz"] Jan 31 03:49:19 crc kubenswrapper[4827]: I0131 03:49:19.137752 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 31 03:49:19 crc kubenswrapper[4827]: E0131 03:49:19.137896 4827 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-31 03:49:19.637858255 +0000 UTC m=+152.324938704 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 03:49:19 crc kubenswrapper[4827]: I0131 03:49:19.138110 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-s7psb\" (UID: \"04eac770-8ff7-453b-a1da-b028636b909c\") " pod="openshift-image-registry/image-registry-697d97f7c8-s7psb" Jan 31 03:49:19 crc kubenswrapper[4827]: E0131 03:49:19.138423 4827 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-31 03:49:19.638415964 +0000 UTC m=+152.325496413 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-s7psb" (UID: "04eac770-8ff7-453b-a1da-b028636b909c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 03:49:19 crc kubenswrapper[4827]: I0131 03:49:19.239054 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 31 03:49:19 crc kubenswrapper[4827]: I0131 03:49:19.239293 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-995cj\" (UniqueName: \"kubernetes.io/projected/36f2dbb1-6370-4a38-8702-edf89c8b4668-kube-api-access-995cj\") pod \"certified-operators-g76wz\" (UID: \"36f2dbb1-6370-4a38-8702-edf89c8b4668\") " pod="openshift-marketplace/certified-operators-g76wz" Jan 31 03:49:19 crc kubenswrapper[4827]: I0131 03:49:19.239331 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/36f2dbb1-6370-4a38-8702-edf89c8b4668-catalog-content\") pod \"certified-operators-g76wz\" (UID: \"36f2dbb1-6370-4a38-8702-edf89c8b4668\") " pod="openshift-marketplace/certified-operators-g76wz" Jan 31 03:49:19 crc kubenswrapper[4827]: I0131 03:49:19.239419 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/36f2dbb1-6370-4a38-8702-edf89c8b4668-utilities\") pod \"certified-operators-g76wz\" 
(UID: \"36f2dbb1-6370-4a38-8702-edf89c8b4668\") " pod="openshift-marketplace/certified-operators-g76wz" Jan 31 03:49:19 crc kubenswrapper[4827]: E0131 03:49:19.239534 4827 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-31 03:49:19.739518475 +0000 UTC m=+152.426598924 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 03:49:19 crc kubenswrapper[4827]: I0131 03:49:19.340557 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/36f2dbb1-6370-4a38-8702-edf89c8b4668-catalog-content\") pod \"certified-operators-g76wz\" (UID: \"36f2dbb1-6370-4a38-8702-edf89c8b4668\") " pod="openshift-marketplace/certified-operators-g76wz" Jan 31 03:49:19 crc kubenswrapper[4827]: I0131 03:49:19.340617 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-s7psb\" (UID: \"04eac770-8ff7-453b-a1da-b028636b909c\") " pod="openshift-image-registry/image-registry-697d97f7c8-s7psb" Jan 31 03:49:19 crc kubenswrapper[4827]: I0131 03:49:19.340651 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/36f2dbb1-6370-4a38-8702-edf89c8b4668-utilities\") pod \"certified-operators-g76wz\" (UID: \"36f2dbb1-6370-4a38-8702-edf89c8b4668\") " pod="openshift-marketplace/certified-operators-g76wz" Jan 31 03:49:19 crc kubenswrapper[4827]: I0131 03:49:19.340692 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-995cj\" (UniqueName: \"kubernetes.io/projected/36f2dbb1-6370-4a38-8702-edf89c8b4668-kube-api-access-995cj\") pod \"certified-operators-g76wz\" (UID: \"36f2dbb1-6370-4a38-8702-edf89c8b4668\") " pod="openshift-marketplace/certified-operators-g76wz" Jan 31 03:49:19 crc kubenswrapper[4827]: I0131 03:49:19.341261 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/36f2dbb1-6370-4a38-8702-edf89c8b4668-utilities\") pod \"certified-operators-g76wz\" (UID: \"36f2dbb1-6370-4a38-8702-edf89c8b4668\") " pod="openshift-marketplace/certified-operators-g76wz" Jan 31 03:49:19 crc kubenswrapper[4827]: I0131 03:49:19.341349 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/36f2dbb1-6370-4a38-8702-edf89c8b4668-catalog-content\") pod \"certified-operators-g76wz\" (UID: \"36f2dbb1-6370-4a38-8702-edf89c8b4668\") " pod="openshift-marketplace/certified-operators-g76wz" Jan 31 03:49:19 crc kubenswrapper[4827]: E0131 03:49:19.341647 4827 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-31 03:49:19.84163411 +0000 UTC m=+152.528714559 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-s7psb" (UID: "04eac770-8ff7-453b-a1da-b028636b909c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 03:49:19 crc kubenswrapper[4827]: I0131 03:49:19.369769 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-995cj\" (UniqueName: \"kubernetes.io/projected/36f2dbb1-6370-4a38-8702-edf89c8b4668-kube-api-access-995cj\") pod \"certified-operators-g76wz\" (UID: \"36f2dbb1-6370-4a38-8702-edf89c8b4668\") " pod="openshift-marketplace/certified-operators-g76wz" Jan 31 03:49:19 crc kubenswrapper[4827]: I0131 03:49:19.436621 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-g76wz" Jan 31 03:49:19 crc kubenswrapper[4827]: I0131 03:49:19.442174 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 31 03:49:19 crc kubenswrapper[4827]: E0131 03:49:19.442295 4827 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-31 03:49:19.942272486 +0000 UTC m=+152.629352935 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 03:49:19 crc kubenswrapper[4827]: I0131 03:49:19.442483 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-s7psb\" (UID: \"04eac770-8ff7-453b-a1da-b028636b909c\") " pod="openshift-image-registry/image-registry-697d97f7c8-s7psb" Jan 31 03:49:19 crc kubenswrapper[4827]: E0131 03:49:19.442805 4827 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-31 03:49:19.942797814 +0000 UTC m=+152.629878263 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-s7psb" (UID: "04eac770-8ff7-453b-a1da-b028636b909c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 03:49:19 crc kubenswrapper[4827]: I0131 03:49:19.523772 4827 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-8rcw5"] Jan 31 03:49:19 crc kubenswrapper[4827]: I0131 03:49:19.524718 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-8rcw5" Jan 31 03:49:19 crc kubenswrapper[4827]: I0131 03:49:19.538031 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-8rcw5"] Jan 31 03:49:19 crc kubenswrapper[4827]: I0131 03:49:19.544119 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 31 03:49:19 crc kubenswrapper[4827]: E0131 03:49:19.544260 4827 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-31 03:49:20.044237567 +0000 UTC m=+152.731318016 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 03:49:19 crc kubenswrapper[4827]: I0131 03:49:19.544381 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-s7psb\" (UID: \"04eac770-8ff7-453b-a1da-b028636b909c\") " pod="openshift-image-registry/image-registry-697d97f7c8-s7psb" Jan 31 03:49:19 crc kubenswrapper[4827]: E0131 03:49:19.544704 4827 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-31 03:49:20.044692062 +0000 UTC m=+152.731772511 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-s7psb" (UID: "04eac770-8ff7-453b-a1da-b028636b909c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 03:49:19 crc kubenswrapper[4827]: I0131 03:49:19.581343 4827 patch_prober.go:28] interesting pod/openshift-config-operator-7777fb866f-dqndk container/openshift-config-operator namespace/openshift-config-operator: Liveness probe status=failure output="Get \"https://10.217.0.21:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 31 03:49:19 crc kubenswrapper[4827]: I0131 03:49:19.581411 4827 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-config-operator/openshift-config-operator-7777fb866f-dqndk" podUID="4a32abae-914d-4102-9e89-817922ff06ca" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.217.0.21:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 31 03:49:19 crc kubenswrapper[4827]: I0131 03:49:19.581528 4827 patch_prober.go:28] interesting pod/openshift-config-operator-7777fb866f-dqndk container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.217.0.21:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 31 03:49:19 crc kubenswrapper[4827]: I0131 03:49:19.581594 4827 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-7777fb866f-dqndk" podUID="4a32abae-914d-4102-9e89-817922ff06ca" 
containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.217.0.21:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 31 03:49:19 crc kubenswrapper[4827]: I0131 03:49:19.587736 4827 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-558db77b4-qtqj4" Jan 31 03:49:19 crc kubenswrapper[4827]: I0131 03:49:19.591837 4827 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-f9d7485db-q4hqs" Jan 31 03:49:19 crc kubenswrapper[4827]: I0131 03:49:19.592066 4827 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-f9d7485db-q4hqs" Jan 31 03:49:19 crc kubenswrapper[4827]: I0131 03:49:19.594226 4827 patch_prober.go:28] interesting pod/console-f9d7485db-q4hqs container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.12:8443/health\": dial tcp 10.217.0.12:8443: connect: connection refused" start-of-body= Jan 31 03:49:19 crc kubenswrapper[4827]: I0131 03:49:19.594262 4827 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-f9d7485db-q4hqs" podUID="a2a52a00-75ce-4094-bab7-913d6fbab1dc" containerName="console" probeResult="failure" output="Get \"https://10.217.0.12:8443/health\": dial tcp 10.217.0.12:8443: connect: connection refused" Jan 31 03:49:19 crc kubenswrapper[4827]: I0131 03:49:19.608976 4827 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console-operator/console-operator-58897d9998-57nc5" Jan 31 03:49:19 crc kubenswrapper[4827]: I0131 03:49:19.618384 4827 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-879f6c89f-mkhcp" Jan 31 03:49:19 crc kubenswrapper[4827]: I0131 03:49:19.645442 4827 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 31 03:49:19 crc kubenswrapper[4827]: E0131 03:49:19.645629 4827 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-31 03:49:20.145600637 +0000 UTC m=+152.832681086 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 03:49:19 crc kubenswrapper[4827]: I0131 03:49:19.645695 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/062f8208-e13f-439f-bb1f-13b9c91c5ea3-utilities\") pod \"certified-operators-8rcw5\" (UID: \"062f8208-e13f-439f-bb1f-13b9c91c5ea3\") " pod="openshift-marketplace/certified-operators-8rcw5" Jan 31 03:49:19 crc kubenswrapper[4827]: I0131 03:49:19.645757 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-s7psb\" (UID: \"04eac770-8ff7-453b-a1da-b028636b909c\") " 
pod="openshift-image-registry/image-registry-697d97f7c8-s7psb" Jan 31 03:49:19 crc kubenswrapper[4827]: I0131 03:49:19.645783 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/062f8208-e13f-439f-bb1f-13b9c91c5ea3-catalog-content\") pod \"certified-operators-8rcw5\" (UID: \"062f8208-e13f-439f-bb1f-13b9c91c5ea3\") " pod="openshift-marketplace/certified-operators-8rcw5" Jan 31 03:49:19 crc kubenswrapper[4827]: I0131 03:49:19.645823 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vkwk4\" (UniqueName: \"kubernetes.io/projected/062f8208-e13f-439f-bb1f-13b9c91c5ea3-kube-api-access-vkwk4\") pod \"certified-operators-8rcw5\" (UID: \"062f8208-e13f-439f-bb1f-13b9c91c5ea3\") " pod="openshift-marketplace/certified-operators-8rcw5" Jan 31 03:49:19 crc kubenswrapper[4827]: E0131 03:49:19.646158 4827 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-31 03:49:20.146138625 +0000 UTC m=+152.833219074 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-s7psb" (UID: "04eac770-8ff7-453b-a1da-b028636b909c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 03:49:19 crc kubenswrapper[4827]: I0131 03:49:19.708513 4827 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-h54h5"] Jan 31 03:49:19 crc kubenswrapper[4827]: I0131 03:49:19.710527 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-h54h5" Jan 31 03:49:19 crc kubenswrapper[4827]: I0131 03:49:19.728532 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Jan 31 03:49:19 crc kubenswrapper[4827]: I0131 03:49:19.733053 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-h54h5"] Jan 31 03:49:19 crc kubenswrapper[4827]: I0131 03:49:19.753726 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 31 03:49:19 crc kubenswrapper[4827]: I0131 03:49:19.754047 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/062f8208-e13f-439f-bb1f-13b9c91c5ea3-utilities\") pod \"certified-operators-8rcw5\" (UID: \"062f8208-e13f-439f-bb1f-13b9c91c5ea3\") " pod="openshift-marketplace/certified-operators-8rcw5" Jan 31 03:49:19 crc 
kubenswrapper[4827]: I0131 03:49:19.754192 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/062f8208-e13f-439f-bb1f-13b9c91c5ea3-catalog-content\") pod \"certified-operators-8rcw5\" (UID: \"062f8208-e13f-439f-bb1f-13b9c91c5ea3\") " pod="openshift-marketplace/certified-operators-8rcw5" Jan 31 03:49:19 crc kubenswrapper[4827]: I0131 03:49:19.754322 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vkwk4\" (UniqueName: \"kubernetes.io/projected/062f8208-e13f-439f-bb1f-13b9c91c5ea3-kube-api-access-vkwk4\") pod \"certified-operators-8rcw5\" (UID: \"062f8208-e13f-439f-bb1f-13b9c91c5ea3\") " pod="openshift-marketplace/certified-operators-8rcw5" Jan 31 03:49:19 crc kubenswrapper[4827]: I0131 03:49:19.757909 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/062f8208-e13f-439f-bb1f-13b9c91c5ea3-catalog-content\") pod \"certified-operators-8rcw5\" (UID: \"062f8208-e13f-439f-bb1f-13b9c91c5ea3\") " pod="openshift-marketplace/certified-operators-8rcw5" Jan 31 03:49:19 crc kubenswrapper[4827]: I0131 03:49:19.763061 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/062f8208-e13f-439f-bb1f-13b9c91c5ea3-utilities\") pod \"certified-operators-8rcw5\" (UID: \"062f8208-e13f-439f-bb1f-13b9c91c5ea3\") " pod="openshift-marketplace/certified-operators-8rcw5" Jan 31 03:49:19 crc kubenswrapper[4827]: E0131 03:49:19.769162 4827 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-31 03:49:20.269127454 +0000 UTC m=+152.956207893 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 03:49:19 crc kubenswrapper[4827]: I0131 03:49:19.810812 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vkwk4\" (UniqueName: \"kubernetes.io/projected/062f8208-e13f-439f-bb1f-13b9c91c5ea3-kube-api-access-vkwk4\") pod \"certified-operators-8rcw5\" (UID: \"062f8208-e13f-439f-bb1f-13b9c91c5ea3\") " pod="openshift-marketplace/certified-operators-8rcw5" Jan 31 03:49:19 crc kubenswrapper[4827]: I0131 03:49:19.857082 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a4c93e4f-eac3-4794-a748-51adfd8b961c-utilities\") pod \"community-operators-h54h5\" (UID: \"a4c93e4f-eac3-4794-a748-51adfd8b961c\") " pod="openshift-marketplace/community-operators-h54h5" Jan 31 03:49:19 crc kubenswrapper[4827]: I0131 03:49:19.857139 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m7lw5\" (UniqueName: \"kubernetes.io/projected/a4c93e4f-eac3-4794-a748-51adfd8b961c-kube-api-access-m7lw5\") pod \"community-operators-h54h5\" (UID: \"a4c93e4f-eac3-4794-a748-51adfd8b961c\") " pod="openshift-marketplace/community-operators-h54h5" Jan 31 03:49:19 crc kubenswrapper[4827]: I0131 03:49:19.857175 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod 
\"image-registry-697d97f7c8-s7psb\" (UID: \"04eac770-8ff7-453b-a1da-b028636b909c\") " pod="openshift-image-registry/image-registry-697d97f7c8-s7psb" Jan 31 03:49:19 crc kubenswrapper[4827]: I0131 03:49:19.857239 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a4c93e4f-eac3-4794-a748-51adfd8b961c-catalog-content\") pod \"community-operators-h54h5\" (UID: \"a4c93e4f-eac3-4794-a748-51adfd8b961c\") " pod="openshift-marketplace/community-operators-h54h5" Jan 31 03:49:19 crc kubenswrapper[4827]: E0131 03:49:19.857492 4827 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-31 03:49:20.357480201 +0000 UTC m=+153.044560650 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-s7psb" (UID: "04eac770-8ff7-453b-a1da-b028636b909c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 03:49:19 crc kubenswrapper[4827]: I0131 03:49:19.893220 4827 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-8rcw5" Jan 31 03:49:19 crc kubenswrapper[4827]: I0131 03:49:19.918172 4827 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-hrjxm" Jan 31 03:49:19 crc kubenswrapper[4827]: I0131 03:49:19.918216 4827 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-hrjxm" Jan 31 03:49:19 crc kubenswrapper[4827]: I0131 03:49:19.918230 4827 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-wr7t4"] Jan 31 03:49:19 crc kubenswrapper[4827]: I0131 03:49:19.919344 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-wr7t4" Jan 31 03:49:19 crc kubenswrapper[4827]: I0131 03:49:19.936849 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-wr7t4"] Jan 31 03:49:19 crc kubenswrapper[4827]: I0131 03:49:19.936916 4827 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Jan 31 03:49:19 crc kubenswrapper[4827]: I0131 03:49:19.937498 4827 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 31 03:49:19 crc kubenswrapper[4827]: I0131 03:49:19.939921 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager"/"installer-sa-dockercfg-kjl2n" Jan 31 03:49:19 crc kubenswrapper[4827]: I0131 03:49:19.941064 4827 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager"/"kube-root-ca.crt" Jan 31 03:49:19 crc kubenswrapper[4827]: I0131 03:49:19.955504 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Jan 31 03:49:19 crc kubenswrapper[4827]: I0131 03:49:19.958414 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 31 03:49:19 crc kubenswrapper[4827]: I0131 03:49:19.958601 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a4c93e4f-eac3-4794-a748-51adfd8b961c-catalog-content\") pod \"community-operators-h54h5\" (UID: \"a4c93e4f-eac3-4794-a748-51adfd8b961c\") " pod="openshift-marketplace/community-operators-h54h5" Jan 31 03:49:19 crc kubenswrapper[4827]: I0131 03:49:19.958693 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a4c93e4f-eac3-4794-a748-51adfd8b961c-utilities\") pod \"community-operators-h54h5\" (UID: \"a4c93e4f-eac3-4794-a748-51adfd8b961c\") " pod="openshift-marketplace/community-operators-h54h5" Jan 31 03:49:19 crc kubenswrapper[4827]: I0131 03:49:19.958715 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-m7lw5\" (UniqueName: \"kubernetes.io/projected/a4c93e4f-eac3-4794-a748-51adfd8b961c-kube-api-access-m7lw5\") pod \"community-operators-h54h5\" (UID: \"a4c93e4f-eac3-4794-a748-51adfd8b961c\") " pod="openshift-marketplace/community-operators-h54h5" Jan 31 03:49:19 crc kubenswrapper[4827]: E0131 03:49:19.959126 4827 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-31 03:49:20.45911089 +0000 UTC m=+153.146191339 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 03:49:19 crc kubenswrapper[4827]: I0131 03:49:19.959469 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a4c93e4f-eac3-4794-a748-51adfd8b961c-catalog-content\") pod \"community-operators-h54h5\" (UID: \"a4c93e4f-eac3-4794-a748-51adfd8b961c\") " pod="openshift-marketplace/community-operators-h54h5" Jan 31 03:49:19 crc kubenswrapper[4827]: I0131 03:49:19.959671 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a4c93e4f-eac3-4794-a748-51adfd8b961c-utilities\") pod \"community-operators-h54h5\" (UID: \"a4c93e4f-eac3-4794-a748-51adfd8b961c\") " pod="openshift-marketplace/community-operators-h54h5" Jan 31 03:49:19 crc kubenswrapper[4827]: I0131 03:49:19.960911 4827 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" 
status="ready" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-g4946" Jan 31 03:49:19 crc kubenswrapper[4827]: I0131 03:49:19.983628 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m7lw5\" (UniqueName: \"kubernetes.io/projected/a4c93e4f-eac3-4794-a748-51adfd8b961c-kube-api-access-m7lw5\") pod \"community-operators-h54h5\" (UID: \"a4c93e4f-eac3-4794-a748-51adfd8b961c\") " pod="openshift-marketplace/community-operators-h54h5" Jan 31 03:49:20 crc kubenswrapper[4827]: I0131 03:49:20.005324 4827 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29497185-4tnfv" Jan 31 03:49:20 crc kubenswrapper[4827]: I0131 03:49:20.061411 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/31cfbd2b-aaa8-4644-9cf5-9271189cf0bd-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"31cfbd2b-aaa8-4644-9cf5-9271189cf0bd\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 31 03:49:20 crc kubenswrapper[4827]: I0131 03:49:20.061468 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-s7psb\" (UID: \"04eac770-8ff7-453b-a1da-b028636b909c\") " pod="openshift-image-registry/image-registry-697d97f7c8-s7psb" Jan 31 03:49:20 crc kubenswrapper[4827]: I0131 03:49:20.061496 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/273683f4-0b94-44d7-83a2-b540f4d5d81d-utilities\") pod \"community-operators-wr7t4\" (UID: \"273683f4-0b94-44d7-83a2-b540f4d5d81d\") " pod="openshift-marketplace/community-operators-wr7t4" Jan 31 
03:49:20 crc kubenswrapper[4827]: I0131 03:49:20.061550 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ssk2f\" (UniqueName: \"kubernetes.io/projected/273683f4-0b94-44d7-83a2-b540f4d5d81d-kube-api-access-ssk2f\") pod \"community-operators-wr7t4\" (UID: \"273683f4-0b94-44d7-83a2-b540f4d5d81d\") " pod="openshift-marketplace/community-operators-wr7t4" Jan 31 03:49:20 crc kubenswrapper[4827]: I0131 03:49:20.061622 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/273683f4-0b94-44d7-83a2-b540f4d5d81d-catalog-content\") pod \"community-operators-wr7t4\" (UID: \"273683f4-0b94-44d7-83a2-b540f4d5d81d\") " pod="openshift-marketplace/community-operators-wr7t4" Jan 31 03:49:20 crc kubenswrapper[4827]: I0131 03:49:20.061728 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/31cfbd2b-aaa8-4644-9cf5-9271189cf0bd-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"31cfbd2b-aaa8-4644-9cf5-9271189cf0bd\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 31 03:49:20 crc kubenswrapper[4827]: E0131 03:49:20.062962 4827 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-31 03:49:20.562945723 +0000 UTC m=+153.250026382 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-s7psb" (UID: "04eac770-8ff7-453b-a1da-b028636b909c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 03:49:20 crc kubenswrapper[4827]: I0131 03:49:20.076976 4827 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ingress/router-default-5444994796-dxfqc" Jan 31 03:49:20 crc kubenswrapper[4827]: I0131 03:49:20.080718 4827 patch_prober.go:28] interesting pod/router-default-5444994796-dxfqc container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 31 03:49:20 crc kubenswrapper[4827]: [-]has-synced failed: reason withheld Jan 31 03:49:20 crc kubenswrapper[4827]: [+]process-running ok Jan 31 03:49:20 crc kubenswrapper[4827]: healthz check failed Jan 31 03:49:20 crc kubenswrapper[4827]: I0131 03:49:20.080764 4827 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-dxfqc" podUID="85b31f82-0a7a-466c-aa0f-bffb46f2b04c" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 31 03:49:20 crc kubenswrapper[4827]: I0131 03:49:20.098043 4827 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-h54h5" Jan 31 03:49:20 crc kubenswrapper[4827]: I0131 03:49:20.162477 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/85e64cbb-2da9-4b05-b074-fabf16790f49-config-volume\") pod \"85e64cbb-2da9-4b05-b074-fabf16790f49\" (UID: \"85e64cbb-2da9-4b05-b074-fabf16790f49\") " Jan 31 03:49:20 crc kubenswrapper[4827]: I0131 03:49:20.162600 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/85e64cbb-2da9-4b05-b074-fabf16790f49-secret-volume\") pod \"85e64cbb-2da9-4b05-b074-fabf16790f49\" (UID: \"85e64cbb-2da9-4b05-b074-fabf16790f49\") " Jan 31 03:49:20 crc kubenswrapper[4827]: I0131 03:49:20.162697 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 31 03:49:20 crc kubenswrapper[4827]: I0131 03:49:20.162737 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rn4lt\" (UniqueName: \"kubernetes.io/projected/85e64cbb-2da9-4b05-b074-fabf16790f49-kube-api-access-rn4lt\") pod \"85e64cbb-2da9-4b05-b074-fabf16790f49\" (UID: \"85e64cbb-2da9-4b05-b074-fabf16790f49\") " Jan 31 03:49:20 crc kubenswrapper[4827]: I0131 03:49:20.162971 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/31cfbd2b-aaa8-4644-9cf5-9271189cf0bd-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"31cfbd2b-aaa8-4644-9cf5-9271189cf0bd\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 31 03:49:20 crc kubenswrapper[4827]: I0131 03:49:20.162993 
4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/85e64cbb-2da9-4b05-b074-fabf16790f49-config-volume" (OuterVolumeSpecName: "config-volume") pod "85e64cbb-2da9-4b05-b074-fabf16790f49" (UID: "85e64cbb-2da9-4b05-b074-fabf16790f49"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 03:49:20 crc kubenswrapper[4827]: I0131 03:49:20.163019 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/31cfbd2b-aaa8-4644-9cf5-9271189cf0bd-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"31cfbd2b-aaa8-4644-9cf5-9271189cf0bd\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 31 03:49:20 crc kubenswrapper[4827]: I0131 03:49:20.163049 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/273683f4-0b94-44d7-83a2-b540f4d5d81d-utilities\") pod \"community-operators-wr7t4\" (UID: \"273683f4-0b94-44d7-83a2-b540f4d5d81d\") " pod="openshift-marketplace/community-operators-wr7t4" Jan 31 03:49:20 crc kubenswrapper[4827]: I0131 03:49:20.163072 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ssk2f\" (UniqueName: \"kubernetes.io/projected/273683f4-0b94-44d7-83a2-b540f4d5d81d-kube-api-access-ssk2f\") pod \"community-operators-wr7t4\" (UID: \"273683f4-0b94-44d7-83a2-b540f4d5d81d\") " pod="openshift-marketplace/community-operators-wr7t4" Jan 31 03:49:20 crc kubenswrapper[4827]: I0131 03:49:20.163105 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/273683f4-0b94-44d7-83a2-b540f4d5d81d-catalog-content\") pod \"community-operators-wr7t4\" (UID: \"273683f4-0b94-44d7-83a2-b540f4d5d81d\") " pod="openshift-marketplace/community-operators-wr7t4" Jan 31 03:49:20 crc 
kubenswrapper[4827]: I0131 03:49:20.163164 4827 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/85e64cbb-2da9-4b05-b074-fabf16790f49-config-volume\") on node \"crc\" DevicePath \"\"" Jan 31 03:49:20 crc kubenswrapper[4827]: E0131 03:49:20.163459 4827 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-31 03:49:20.663435354 +0000 UTC m=+153.350515803 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 03:49:20 crc kubenswrapper[4827]: I0131 03:49:20.163856 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/31cfbd2b-aaa8-4644-9cf5-9271189cf0bd-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"31cfbd2b-aaa8-4644-9cf5-9271189cf0bd\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 31 03:49:20 crc kubenswrapper[4827]: I0131 03:49:20.164364 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/273683f4-0b94-44d7-83a2-b540f4d5d81d-catalog-content\") pod \"community-operators-wr7t4\" (UID: \"273683f4-0b94-44d7-83a2-b540f4d5d81d\") " pod="openshift-marketplace/community-operators-wr7t4" Jan 31 03:49:20 crc kubenswrapper[4827]: I0131 03:49:20.165958 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/273683f4-0b94-44d7-83a2-b540f4d5d81d-utilities\") pod \"community-operators-wr7t4\" (UID: \"273683f4-0b94-44d7-83a2-b540f4d5d81d\") " pod="openshift-marketplace/community-operators-wr7t4" Jan 31 03:49:20 crc kubenswrapper[4827]: I0131 03:49:20.169857 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/85e64cbb-2da9-4b05-b074-fabf16790f49-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "85e64cbb-2da9-4b05-b074-fabf16790f49" (UID: "85e64cbb-2da9-4b05-b074-fabf16790f49"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 03:49:20 crc kubenswrapper[4827]: I0131 03:49:20.177985 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/85e64cbb-2da9-4b05-b074-fabf16790f49-kube-api-access-rn4lt" (OuterVolumeSpecName: "kube-api-access-rn4lt") pod "85e64cbb-2da9-4b05-b074-fabf16790f49" (UID: "85e64cbb-2da9-4b05-b074-fabf16790f49"). InnerVolumeSpecName "kube-api-access-rn4lt". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 03:49:20 crc kubenswrapper[4827]: I0131 03:49:20.188961 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ssk2f\" (UniqueName: \"kubernetes.io/projected/273683f4-0b94-44d7-83a2-b540f4d5d81d-kube-api-access-ssk2f\") pod \"community-operators-wr7t4\" (UID: \"273683f4-0b94-44d7-83a2-b540f4d5d81d\") " pod="openshift-marketplace/community-operators-wr7t4" Jan 31 03:49:20 crc kubenswrapper[4827]: I0131 03:49:20.208026 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/31cfbd2b-aaa8-4644-9cf5-9271189cf0bd-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"31cfbd2b-aaa8-4644-9cf5-9271189cf0bd\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 31 03:49:20 crc kubenswrapper[4827]: I0131 03:49:20.232054 4827 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-tbsqf" Jan 31 03:49:20 crc kubenswrapper[4827]: I0131 03:49:20.267555 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-8rcw5"] Jan 31 03:49:20 crc kubenswrapper[4827]: I0131 03:49:20.275376 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-s7psb\" (UID: \"04eac770-8ff7-453b-a1da-b028636b909c\") " pod="openshift-image-registry/image-registry-697d97f7c8-s7psb" Jan 31 03:49:20 crc kubenswrapper[4827]: I0131 03:49:20.275483 4827 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/85e64cbb-2da9-4b05-b074-fabf16790f49-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 31 03:49:20 crc kubenswrapper[4827]: I0131 
03:49:20.275496 4827 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rn4lt\" (UniqueName: \"kubernetes.io/projected/85e64cbb-2da9-4b05-b074-fabf16790f49-kube-api-access-rn4lt\") on node \"crc\" DevicePath \"\"" Jan 31 03:49:20 crc kubenswrapper[4827]: E0131 03:49:20.275771 4827 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-31 03:49:20.775757929 +0000 UTC m=+153.462838368 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-s7psb" (UID: "04eac770-8ff7-453b-a1da-b028636b909c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 03:49:20 crc kubenswrapper[4827]: I0131 03:49:20.276454 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-g76wz"] Jan 31 03:49:20 crc kubenswrapper[4827]: I0131 03:49:20.287525 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-wr7t4" Jan 31 03:49:20 crc kubenswrapper[4827]: I0131 03:49:20.304535 4827 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 31 03:49:20 crc kubenswrapper[4827]: I0131 03:49:20.305125 4827 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-g8bbs" Jan 31 03:49:20 crc kubenswrapper[4827]: I0131 03:49:20.355286 4827 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-hj2zw" Jan 31 03:49:20 crc kubenswrapper[4827]: I0131 03:49:20.378990 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 31 03:49:20 crc kubenswrapper[4827]: E0131 03:49:20.379242 4827 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-31 03:49:20.879213188 +0000 UTC m=+153.566293637 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 03:49:20 crc kubenswrapper[4827]: I0131 03:49:20.379307 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-s7psb\" (UID: \"04eac770-8ff7-453b-a1da-b028636b909c\") " pod="openshift-image-registry/image-registry-697d97f7c8-s7psb" Jan 31 03:49:20 crc kubenswrapper[4827]: E0131 03:49:20.379724 4827 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-31 03:49:20.879712155 +0000 UTC m=+153.566792604 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-s7psb" (UID: "04eac770-8ff7-453b-a1da-b028636b909c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 03:49:20 crc kubenswrapper[4827]: I0131 03:49:20.396666 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-h54h5"] Jan 31 03:49:20 crc kubenswrapper[4827]: I0131 03:49:20.438637 4827 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-m7n8p" Jan 31 03:49:20 crc kubenswrapper[4827]: I0131 03:49:20.481239 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 31 03:49:20 crc kubenswrapper[4827]: E0131 03:49:20.481643 4827 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-31 03:49:20.981622423 +0000 UTC m=+153.668702872 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 03:49:20 crc kubenswrapper[4827]: I0131 03:49:20.481812 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-s7psb\" (UID: \"04eac770-8ff7-453b-a1da-b028636b909c\") " pod="openshift-image-registry/image-registry-697d97f7c8-s7psb" Jan 31 03:49:20 crc kubenswrapper[4827]: E0131 03:49:20.483897 4827 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-31 03:49:20.983862277 +0000 UTC m=+153.670942726 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-s7psb" (UID: "04eac770-8ff7-453b-a1da-b028636b909c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 03:49:20 crc kubenswrapper[4827]: I0131 03:49:20.578729 4827 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-lrw5m" Jan 31 03:49:20 crc kubenswrapper[4827]: I0131 03:49:20.584519 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 31 03:49:20 crc kubenswrapper[4827]: E0131 03:49:20.584755 4827 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-31 03:49:21.084730371 +0000 UTC m=+153.771810820 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 03:49:20 crc kubenswrapper[4827]: I0131 03:49:20.584853 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-s7psb\" (UID: \"04eac770-8ff7-453b-a1da-b028636b909c\") " pod="openshift-image-registry/image-registry-697d97f7c8-s7psb" Jan 31 03:49:20 crc kubenswrapper[4827]: E0131 03:49:20.586169 4827 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-31 03:49:21.086155299 +0000 UTC m=+153.773235748 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-s7psb" (UID: "04eac770-8ff7-453b-a1da-b028636b909c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 03:49:20 crc kubenswrapper[4827]: I0131 03:49:20.615250 4827 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-hrjxm" Jan 31 03:49:20 crc kubenswrapper[4827]: I0131 03:49:20.663611 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29497185-4tnfv" event={"ID":"85e64cbb-2da9-4b05-b074-fabf16790f49","Type":"ContainerDied","Data":"f0a1f142d9597893f251cc8f4b2717cd8132bf45bbd1875c6f869eebfc7fd8bf"} Jan 31 03:49:20 crc kubenswrapper[4827]: I0131 03:49:20.663655 4827 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f0a1f142d9597893f251cc8f4b2717cd8132bf45bbd1875c6f869eebfc7fd8bf" Jan 31 03:49:20 crc kubenswrapper[4827]: I0131 03:49:20.663761 4827 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29497185-4tnfv" Jan 31 03:49:20 crc kubenswrapper[4827]: I0131 03:49:20.670333 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8rcw5" event={"ID":"062f8208-e13f-439f-bb1f-13b9c91c5ea3","Type":"ContainerStarted","Data":"e147c7c68c6fdeb1216d39a488d737fdebc6bb3e183b28c77ba1e3969dc2b118"} Jan 31 03:49:20 crc kubenswrapper[4827]: I0131 03:49:20.675012 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-g76wz" event={"ID":"36f2dbb1-6370-4a38-8702-edf89c8b4668","Type":"ContainerStarted","Data":"fd49f6fda1c3dddeb8bd8294f20d4fd7d13fae8241d815d50ed8d8a4172051f4"} Jan 31 03:49:20 crc kubenswrapper[4827]: I0131 03:49:20.684037 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-h54h5" event={"ID":"a4c93e4f-eac3-4794-a748-51adfd8b961c","Type":"ContainerStarted","Data":"758e955d86979f68ce3ffd3e195b09f7d2743170d7cfd368ceb62603ea60681c"} Jan 31 03:49:20 crc kubenswrapper[4827]: I0131 03:49:20.687316 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 31 03:49:20 crc kubenswrapper[4827]: E0131 03:49:20.688929 4827 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-31 03:49:21.188905905 +0000 UTC m=+153.875986354 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 03:49:20 crc kubenswrapper[4827]: I0131 03:49:20.700287 4827 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-hrjxm" Jan 31 03:49:20 crc kubenswrapper[4827]: I0131 03:49:20.712825 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Jan 31 03:49:20 crc kubenswrapper[4827]: W0131 03:49:20.724701 4827 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-pod31cfbd2b_aaa8_4644_9cf5_9271189cf0bd.slice/crio-674c7f3eeae9efe37a1bb06f826e68cf4b881cddfb2cbc2e911d147ab956394c WatchSource:0}: Error finding container 674c7f3eeae9efe37a1bb06f826e68cf4b881cddfb2cbc2e911d147ab956394c: Status 404 returned error can't find the container with id 674c7f3eeae9efe37a1bb06f826e68cf4b881cddfb2cbc2e911d147ab956394c Jan 31 03:49:20 crc kubenswrapper[4827]: I0131 03:49:20.789635 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-s7psb\" (UID: \"04eac770-8ff7-453b-a1da-b028636b909c\") " pod="openshift-image-registry/image-registry-697d97f7c8-s7psb" Jan 31 03:49:20 crc kubenswrapper[4827]: E0131 03:49:20.790745 4827 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-31 03:49:21.29072793 +0000 UTC m=+153.977808379 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-s7psb" (UID: "04eac770-8ff7-453b-a1da-b028636b909c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 03:49:20 crc kubenswrapper[4827]: I0131 03:49:20.820666 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-wr7t4"] Jan 31 03:49:20 crc kubenswrapper[4827]: W0131 03:49:20.830314 4827 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod273683f4_0b94_44d7_83a2_b540f4d5d81d.slice/crio-781407d424e12fe73595ad6f760eed4f0ca282f96b8a0ff6c8ff99a0816ae35c WatchSource:0}: Error finding container 781407d424e12fe73595ad6f760eed4f0ca282f96b8a0ff6c8ff99a0816ae35c: Status 404 returned error can't find the container with id 781407d424e12fe73595ad6f760eed4f0ca282f96b8a0ff6c8ff99a0816ae35c Jan 31 03:49:20 crc kubenswrapper[4827]: I0131 03:49:20.891469 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 31 03:49:20 crc kubenswrapper[4827]: E0131 03:49:20.891971 4827 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-31 03:49:21.391948896 +0000 UTC m=+154.079029345 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 03:49:20 crc kubenswrapper[4827]: I0131 03:49:20.992643 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-s7psb\" (UID: \"04eac770-8ff7-453b-a1da-b028636b909c\") " pod="openshift-image-registry/image-registry-697d97f7c8-s7psb" Jan 31 03:49:20 crc kubenswrapper[4827]: E0131 03:49:20.992944 4827 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-31 03:49:21.492931573 +0000 UTC m=+154.180012022 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-s7psb" (UID: "04eac770-8ff7-453b-a1da-b028636b909c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 03:49:21 crc kubenswrapper[4827]: I0131 03:49:21.081164 4827 patch_prober.go:28] interesting pod/router-default-5444994796-dxfqc container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 31 03:49:21 crc kubenswrapper[4827]: [+]has-synced ok Jan 31 03:49:21 crc kubenswrapper[4827]: [+]process-running ok Jan 31 03:49:21 crc kubenswrapper[4827]: healthz check failed Jan 31 03:49:21 crc kubenswrapper[4827]: I0131 03:49:21.081242 4827 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-dxfqc" podUID="85b31f82-0a7a-466c-aa0f-bffb46f2b04c" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 31 03:49:21 crc kubenswrapper[4827]: I0131 03:49:21.094362 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 31 03:49:21 crc kubenswrapper[4827]: E0131 03:49:21.094845 4827 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2026-01-31 03:49:21.594824151 +0000 UTC m=+154.281904590 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 03:49:21 crc kubenswrapper[4827]: I0131 03:49:21.196025 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-s7psb\" (UID: \"04eac770-8ff7-453b-a1da-b028636b909c\") " pod="openshift-image-registry/image-registry-697d97f7c8-s7psb" Jan 31 03:49:21 crc kubenswrapper[4827]: E0131 03:49:21.196583 4827 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-31 03:49:21.696559873 +0000 UTC m=+154.383640322 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-s7psb" (UID: "04eac770-8ff7-453b-a1da-b028636b909c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 03:49:21 crc kubenswrapper[4827]: I0131 03:49:21.296982 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 31 03:49:21 crc kubenswrapper[4827]: E0131 03:49:21.297269 4827 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-31 03:49:21.79723022 +0000 UTC m=+154.484310669 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 03:49:21 crc kubenswrapper[4827]: I0131 03:49:21.297333 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-s7psb\" (UID: \"04eac770-8ff7-453b-a1da-b028636b909c\") " pod="openshift-image-registry/image-registry-697d97f7c8-s7psb" Jan 31 03:49:21 crc kubenswrapper[4827]: E0131 03:49:21.297821 4827 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-31 03:49:21.79780166 +0000 UTC m=+154.484882109 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-s7psb" (UID: "04eac770-8ff7-453b-a1da-b028636b909c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 03:49:21 crc kubenswrapper[4827]: I0131 03:49:21.398896 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 31 03:49:21 crc kubenswrapper[4827]: E0131 03:49:21.399125 4827 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-31 03:49:21.899092757 +0000 UTC m=+154.586173206 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 03:49:21 crc kubenswrapper[4827]: I0131 03:49:21.399220 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-s7psb\" (UID: \"04eac770-8ff7-453b-a1da-b028636b909c\") " pod="openshift-image-registry/image-registry-697d97f7c8-s7psb" Jan 31 03:49:21 crc kubenswrapper[4827]: E0131 03:49:21.399517 4827 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-31 03:49:21.899507641 +0000 UTC m=+154.586588080 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-s7psb" (UID: "04eac770-8ff7-453b-a1da-b028636b909c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 03:49:21 crc kubenswrapper[4827]: I0131 03:49:21.500640 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 31 03:49:21 crc kubenswrapper[4827]: E0131 03:49:21.501131 4827 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-31 03:49:22.001110949 +0000 UTC m=+154.688191398 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 03:49:21 crc kubenswrapper[4827]: I0131 03:49:21.502363 4827 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-jpjgt"] Jan 31 03:49:21 crc kubenswrapper[4827]: E0131 03:49:21.502551 4827 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="85e64cbb-2da9-4b05-b074-fabf16790f49" containerName="collect-profiles" Jan 31 03:49:21 crc kubenswrapper[4827]: I0131 03:49:21.502569 4827 state_mem.go:107] "Deleted CPUSet assignment" podUID="85e64cbb-2da9-4b05-b074-fabf16790f49" containerName="collect-profiles" Jan 31 03:49:21 crc kubenswrapper[4827]: I0131 03:49:21.502664 4827 memory_manager.go:354] "RemoveStaleState removing state" podUID="85e64cbb-2da9-4b05-b074-fabf16790f49" containerName="collect-profiles" Jan 31 03:49:21 crc kubenswrapper[4827]: I0131 03:49:21.516608 4827 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-jpjgt" Jan 31 03:49:21 crc kubenswrapper[4827]: I0131 03:49:21.525561 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Jan 31 03:49:21 crc kubenswrapper[4827]: I0131 03:49:21.562511 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-jpjgt"] Jan 31 03:49:21 crc kubenswrapper[4827]: I0131 03:49:21.575066 4827 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-config-operator/openshift-config-operator-7777fb866f-dqndk" Jan 31 03:49:21 crc kubenswrapper[4827]: I0131 03:49:21.604679 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b53b07cf-d0d5-4774-89fe-89765537cc9b-utilities\") pod \"redhat-marketplace-jpjgt\" (UID: \"b53b07cf-d0d5-4774-89fe-89765537cc9b\") " pod="openshift-marketplace/redhat-marketplace-jpjgt" Jan 31 03:49:21 crc kubenswrapper[4827]: I0131 03:49:21.604731 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xqt67\" (UniqueName: \"kubernetes.io/projected/b53b07cf-d0d5-4774-89fe-89765537cc9b-kube-api-access-xqt67\") pod \"redhat-marketplace-jpjgt\" (UID: \"b53b07cf-d0d5-4774-89fe-89765537cc9b\") " pod="openshift-marketplace/redhat-marketplace-jpjgt" Jan 31 03:49:21 crc kubenswrapper[4827]: I0131 03:49:21.604778 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-s7psb\" (UID: \"04eac770-8ff7-453b-a1da-b028636b909c\") " pod="openshift-image-registry/image-registry-697d97f7c8-s7psb" Jan 31 03:49:21 crc kubenswrapper[4827]: I0131 03:49:21.604803 
4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b53b07cf-d0d5-4774-89fe-89765537cc9b-catalog-content\") pod \"redhat-marketplace-jpjgt\" (UID: \"b53b07cf-d0d5-4774-89fe-89765537cc9b\") " pod="openshift-marketplace/redhat-marketplace-jpjgt" Jan 31 03:49:21 crc kubenswrapper[4827]: E0131 03:49:21.605062 4827 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-31 03:49:22.105050625 +0000 UTC m=+154.792131074 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-s7psb" (UID: "04eac770-8ff7-453b-a1da-b028636b909c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 03:49:21 crc kubenswrapper[4827]: I0131 03:49:21.687732 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"31cfbd2b-aaa8-4644-9cf5-9271189cf0bd","Type":"ContainerStarted","Data":"674c7f3eeae9efe37a1bb06f826e68cf4b881cddfb2cbc2e911d147ab956394c"} Jan 31 03:49:21 crc kubenswrapper[4827]: I0131 03:49:21.688616 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-wr7t4" event={"ID":"273683f4-0b94-44d7-83a2-b540f4d5d81d","Type":"ContainerStarted","Data":"781407d424e12fe73595ad6f760eed4f0ca282f96b8a0ff6c8ff99a0816ae35c"} Jan 31 03:49:21 crc kubenswrapper[4827]: I0131 03:49:21.689739 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-h54h5" 
event={"ID":"a4c93e4f-eac3-4794-a748-51adfd8b961c","Type":"ContainerStarted","Data":"ca43bc891aac1130290b11b161fdc1b14a06c9f39b9676d52b481529a3572a03"} Jan 31 03:49:21 crc kubenswrapper[4827]: I0131 03:49:21.690987 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-7h44q" event={"ID":"9cf25edb-a1c1-4077-97dc-8f87363a8d4e","Type":"ContainerStarted","Data":"d9810f0ab694c84298266f2476b1d7bcb399542df78e6b3864361bc85ac02e85"} Jan 31 03:49:21 crc kubenswrapper[4827]: I0131 03:49:21.706009 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 31 03:49:21 crc kubenswrapper[4827]: I0131 03:49:21.706190 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b53b07cf-d0d5-4774-89fe-89765537cc9b-utilities\") pod \"redhat-marketplace-jpjgt\" (UID: \"b53b07cf-d0d5-4774-89fe-89765537cc9b\") " pod="openshift-marketplace/redhat-marketplace-jpjgt" Jan 31 03:49:21 crc kubenswrapper[4827]: I0131 03:49:21.706243 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xqt67\" (UniqueName: \"kubernetes.io/projected/b53b07cf-d0d5-4774-89fe-89765537cc9b-kube-api-access-xqt67\") pod \"redhat-marketplace-jpjgt\" (UID: \"b53b07cf-d0d5-4774-89fe-89765537cc9b\") " pod="openshift-marketplace/redhat-marketplace-jpjgt" Jan 31 03:49:21 crc kubenswrapper[4827]: I0131 03:49:21.706290 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b53b07cf-d0d5-4774-89fe-89765537cc9b-catalog-content\") pod \"redhat-marketplace-jpjgt\" (UID: \"b53b07cf-d0d5-4774-89fe-89765537cc9b\") 
" pod="openshift-marketplace/redhat-marketplace-jpjgt" Jan 31 03:49:21 crc kubenswrapper[4827]: I0131 03:49:21.706644 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b53b07cf-d0d5-4774-89fe-89765537cc9b-catalog-content\") pod \"redhat-marketplace-jpjgt\" (UID: \"b53b07cf-d0d5-4774-89fe-89765537cc9b\") " pod="openshift-marketplace/redhat-marketplace-jpjgt" Jan 31 03:49:21 crc kubenswrapper[4827]: E0131 03:49:21.706715 4827 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-31 03:49:22.206700315 +0000 UTC m=+154.893780764 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 03:49:21 crc kubenswrapper[4827]: I0131 03:49:21.706958 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b53b07cf-d0d5-4774-89fe-89765537cc9b-utilities\") pod \"redhat-marketplace-jpjgt\" (UID: \"b53b07cf-d0d5-4774-89fe-89765537cc9b\") " pod="openshift-marketplace/redhat-marketplace-jpjgt" Jan 31 03:49:21 crc kubenswrapper[4827]: I0131 03:49:21.730981 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xqt67\" (UniqueName: \"kubernetes.io/projected/b53b07cf-d0d5-4774-89fe-89765537cc9b-kube-api-access-xqt67\") pod \"redhat-marketplace-jpjgt\" (UID: 
\"b53b07cf-d0d5-4774-89fe-89765537cc9b\") " pod="openshift-marketplace/redhat-marketplace-jpjgt" Jan 31 03:49:21 crc kubenswrapper[4827]: I0131 03:49:21.808101 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-s7psb\" (UID: \"04eac770-8ff7-453b-a1da-b028636b909c\") " pod="openshift-image-registry/image-registry-697d97f7c8-s7psb" Jan 31 03:49:21 crc kubenswrapper[4827]: E0131 03:49:21.809073 4827 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-31 03:49:22.309055558 +0000 UTC m=+154.996136007 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-s7psb" (UID: "04eac770-8ff7-453b-a1da-b028636b909c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 03:49:21 crc kubenswrapper[4827]: I0131 03:49:21.863873 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-jpjgt" Jan 31 03:49:21 crc kubenswrapper[4827]: I0131 03:49:21.899487 4827 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-r4gn2"] Jan 31 03:49:21 crc kubenswrapper[4827]: I0131 03:49:21.900505 4827 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-r4gn2" Jan 31 03:49:21 crc kubenswrapper[4827]: I0131 03:49:21.909554 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 31 03:49:21 crc kubenswrapper[4827]: E0131 03:49:21.909704 4827 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-31 03:49:22.409682003 +0000 UTC m=+155.096762452 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 03:49:21 crc kubenswrapper[4827]: I0131 03:49:21.909836 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-s7psb\" (UID: \"04eac770-8ff7-453b-a1da-b028636b909c\") " pod="openshift-image-registry/image-registry-697d97f7c8-s7psb" Jan 31 03:49:21 crc kubenswrapper[4827]: E0131 03:49:21.910142 4827 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: 
nodeName:}" failed. No retries permitted until 2026-01-31 03:49:22.410129928 +0000 UTC m=+155.097210377 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-s7psb" (UID: "04eac770-8ff7-453b-a1da-b028636b909c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 03:49:21 crc kubenswrapper[4827]: I0131 03:49:21.914773 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-r4gn2"] Jan 31 03:49:22 crc kubenswrapper[4827]: I0131 03:49:22.010665 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 31 03:49:22 crc kubenswrapper[4827]: E0131 03:49:22.010913 4827 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-31 03:49:22.510866698 +0000 UTC m=+155.197947147 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 03:49:22 crc kubenswrapper[4827]: I0131 03:49:22.010980 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-s7psb\" (UID: \"04eac770-8ff7-453b-a1da-b028636b909c\") " pod="openshift-image-registry/image-registry-697d97f7c8-s7psb" Jan 31 03:49:22 crc kubenswrapper[4827]: I0131 03:49:22.011107 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/94e2d804-29e9-4233-adda-45072b493f0f-utilities\") pod \"redhat-marketplace-r4gn2\" (UID: \"94e2d804-29e9-4233-adda-45072b493f0f\") " pod="openshift-marketplace/redhat-marketplace-r4gn2" Jan 31 03:49:22 crc kubenswrapper[4827]: I0131 03:49:22.011184 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/94e2d804-29e9-4233-adda-45072b493f0f-catalog-content\") pod \"redhat-marketplace-r4gn2\" (UID: \"94e2d804-29e9-4233-adda-45072b493f0f\") " pod="openshift-marketplace/redhat-marketplace-r4gn2" Jan 31 03:49:22 crc kubenswrapper[4827]: E0131 03:49:22.011255 4827 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. 
No retries permitted until 2026-01-31 03:49:22.51124304 +0000 UTC m=+155.198323489 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-s7psb" (UID: "04eac770-8ff7-453b-a1da-b028636b909c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 03:49:22 crc kubenswrapper[4827]: I0131 03:49:22.011335 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-87x6p\" (UniqueName: \"kubernetes.io/projected/94e2d804-29e9-4233-adda-45072b493f0f-kube-api-access-87x6p\") pod \"redhat-marketplace-r4gn2\" (UID: \"94e2d804-29e9-4233-adda-45072b493f0f\") " pod="openshift-marketplace/redhat-marketplace-r4gn2" Jan 31 03:49:22 crc kubenswrapper[4827]: I0131 03:49:22.084171 4827 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-ingress/router-default-5444994796-dxfqc" Jan 31 03:49:22 crc kubenswrapper[4827]: I0131 03:49:22.086291 4827 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ingress/router-default-5444994796-dxfqc" Jan 31 03:49:22 crc kubenswrapper[4827]: I0131 03:49:22.112611 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 31 03:49:22 crc kubenswrapper[4827]: E0131 03:49:22.112779 4827 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 
podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-31 03:49:22.612753055 +0000 UTC m=+155.299833504 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 03:49:22 crc kubenswrapper[4827]: I0131 03:49:22.113034 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/94e2d804-29e9-4233-adda-45072b493f0f-catalog-content\") pod \"redhat-marketplace-r4gn2\" (UID: \"94e2d804-29e9-4233-adda-45072b493f0f\") " pod="openshift-marketplace/redhat-marketplace-r4gn2" Jan 31 03:49:22 crc kubenswrapper[4827]: I0131 03:49:22.113081 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-87x6p\" (UniqueName: \"kubernetes.io/projected/94e2d804-29e9-4233-adda-45072b493f0f-kube-api-access-87x6p\") pod \"redhat-marketplace-r4gn2\" (UID: \"94e2d804-29e9-4233-adda-45072b493f0f\") " pod="openshift-marketplace/redhat-marketplace-r4gn2" Jan 31 03:49:22 crc kubenswrapper[4827]: I0131 03:49:22.113139 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-s7psb\" (UID: \"04eac770-8ff7-453b-a1da-b028636b909c\") " pod="openshift-image-registry/image-registry-697d97f7c8-s7psb" Jan 31 03:49:22 crc kubenswrapper[4827]: I0131 03:49:22.113166 4827 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/94e2d804-29e9-4233-adda-45072b493f0f-utilities\") pod \"redhat-marketplace-r4gn2\" (UID: \"94e2d804-29e9-4233-adda-45072b493f0f\") " pod="openshift-marketplace/redhat-marketplace-r4gn2" Jan 31 03:49:22 crc kubenswrapper[4827]: E0131 03:49:22.113427 4827 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-31 03:49:22.613412517 +0000 UTC m=+155.300492966 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-s7psb" (UID: "04eac770-8ff7-453b-a1da-b028636b909c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 03:49:22 crc kubenswrapper[4827]: I0131 03:49:22.113924 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/94e2d804-29e9-4233-adda-45072b493f0f-utilities\") pod \"redhat-marketplace-r4gn2\" (UID: \"94e2d804-29e9-4233-adda-45072b493f0f\") " pod="openshift-marketplace/redhat-marketplace-r4gn2" Jan 31 03:49:22 crc kubenswrapper[4827]: I0131 03:49:22.113986 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/94e2d804-29e9-4233-adda-45072b493f0f-catalog-content\") pod \"redhat-marketplace-r4gn2\" (UID: \"94e2d804-29e9-4233-adda-45072b493f0f\") " pod="openshift-marketplace/redhat-marketplace-r4gn2" Jan 31 03:49:22 crc kubenswrapper[4827]: I0131 03:49:22.152554 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"kube-api-access-87x6p\" (UniqueName: \"kubernetes.io/projected/94e2d804-29e9-4233-adda-45072b493f0f-kube-api-access-87x6p\") pod \"redhat-marketplace-r4gn2\" (UID: \"94e2d804-29e9-4233-adda-45072b493f0f\") " pod="openshift-marketplace/redhat-marketplace-r4gn2" Jan 31 03:49:22 crc kubenswrapper[4827]: I0131 03:49:22.214170 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-r4gn2" Jan 31 03:49:22 crc kubenswrapper[4827]: I0131 03:49:22.215313 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 31 03:49:22 crc kubenswrapper[4827]: E0131 03:49:22.216069 4827 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-31 03:49:22.716040339 +0000 UTC m=+155.403120778 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 03:49:22 crc kubenswrapper[4827]: I0131 03:49:22.316603 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-s7psb\" (UID: \"04eac770-8ff7-453b-a1da-b028636b909c\") " pod="openshift-image-registry/image-registry-697d97f7c8-s7psb" Jan 31 03:49:22 crc kubenswrapper[4827]: E0131 03:49:22.316951 4827 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-31 03:49:22.816939814 +0000 UTC m=+155.504020263 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-s7psb" (UID: "04eac770-8ff7-453b-a1da-b028636b909c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 03:49:22 crc kubenswrapper[4827]: I0131 03:49:22.380812 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-jpjgt"] Jan 31 03:49:22 crc kubenswrapper[4827]: I0131 03:49:22.417400 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 31 03:49:22 crc kubenswrapper[4827]: E0131 03:49:22.417562 4827 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-31 03:49:22.917534609 +0000 UTC m=+155.604615058 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 03:49:22 crc kubenswrapper[4827]: I0131 03:49:22.417633 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-s7psb\" (UID: \"04eac770-8ff7-453b-a1da-b028636b909c\") " pod="openshift-image-registry/image-registry-697d97f7c8-s7psb" Jan 31 03:49:22 crc kubenswrapper[4827]: E0131 03:49:22.417941 4827 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-31 03:49:22.917929761 +0000 UTC m=+155.605010210 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-s7psb" (UID: "04eac770-8ff7-453b-a1da-b028636b909c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 03:49:22 crc kubenswrapper[4827]: I0131 03:49:22.499137 4827 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-8lngw"] Jan 31 03:49:22 crc kubenswrapper[4827]: I0131 03:49:22.500167 4827 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-8lngw" Jan 31 03:49:22 crc kubenswrapper[4827]: I0131 03:49:22.502520 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Jan 31 03:49:22 crc kubenswrapper[4827]: I0131 03:49:22.513798 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-8lngw"] Jan 31 03:49:22 crc kubenswrapper[4827]: I0131 03:49:22.518763 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 31 03:49:22 crc kubenswrapper[4827]: E0131 03:49:22.528587 4827 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-31 03:49:23.019381614 +0000 UTC m=+155.706462063 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 03:49:22 crc kubenswrapper[4827]: I0131 03:49:22.621064 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-s7psb\" (UID: \"04eac770-8ff7-453b-a1da-b028636b909c\") " pod="openshift-image-registry/image-registry-697d97f7c8-s7psb" Jan 31 03:49:22 crc kubenswrapper[4827]: I0131 03:49:22.621154 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e357c738-a2f2-49a3-b122-5fe5ab45b919-utilities\") pod \"redhat-operators-8lngw\" (UID: \"e357c738-a2f2-49a3-b122-5fe5ab45b919\") " pod="openshift-marketplace/redhat-operators-8lngw" Jan 31 03:49:22 crc kubenswrapper[4827]: I0131 03:49:22.621196 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wfdxt\" (UniqueName: \"kubernetes.io/projected/e357c738-a2f2-49a3-b122-5fe5ab45b919-kube-api-access-wfdxt\") pod \"redhat-operators-8lngw\" (UID: \"e357c738-a2f2-49a3-b122-5fe5ab45b919\") " pod="openshift-marketplace/redhat-operators-8lngw" Jan 31 03:49:22 crc kubenswrapper[4827]: I0131 03:49:22.621230 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e357c738-a2f2-49a3-b122-5fe5ab45b919-catalog-content\") 
pod \"redhat-operators-8lngw\" (UID: \"e357c738-a2f2-49a3-b122-5fe5ab45b919\") " pod="openshift-marketplace/redhat-operators-8lngw" Jan 31 03:49:22 crc kubenswrapper[4827]: E0131 03:49:22.621629 4827 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-31 03:49:23.121613664 +0000 UTC m=+155.808694113 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-s7psb" (UID: "04eac770-8ff7-453b-a1da-b028636b909c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 03:49:22 crc kubenswrapper[4827]: I0131 03:49:22.696193 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-jpjgt" event={"ID":"b53b07cf-d0d5-4774-89fe-89765537cc9b","Type":"ContainerStarted","Data":"f9c1c237840d075483018504809f3e35cdb3db93d8bde70dd1facf82863b0958"} Jan 31 03:49:22 crc kubenswrapper[4827]: I0131 03:49:22.697742 4827 generic.go:334] "Generic (PLEG): container finished" podID="a4c93e4f-eac3-4794-a748-51adfd8b961c" containerID="ca43bc891aac1130290b11b161fdc1b14a06c9f39b9676d52b481529a3572a03" exitCode=0 Jan 31 03:49:22 crc kubenswrapper[4827]: I0131 03:49:22.697846 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-h54h5" event={"ID":"a4c93e4f-eac3-4794-a748-51adfd8b961c","Type":"ContainerDied","Data":"ca43bc891aac1130290b11b161fdc1b14a06c9f39b9676d52b481529a3572a03"} Jan 31 03:49:22 crc kubenswrapper[4827]: I0131 03:49:22.705770 4827 generic.go:334] "Generic (PLEG): container finished" 
podID="062f8208-e13f-439f-bb1f-13b9c91c5ea3" containerID="4846de233bd7ed84754fd62623000baf8673408027d4e1acbd2cfd0821561fcf" exitCode=0 Jan 31 03:49:22 crc kubenswrapper[4827]: I0131 03:49:22.705842 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8rcw5" event={"ID":"062f8208-e13f-439f-bb1f-13b9c91c5ea3","Type":"ContainerDied","Data":"4846de233bd7ed84754fd62623000baf8673408027d4e1acbd2cfd0821561fcf"} Jan 31 03:49:22 crc kubenswrapper[4827]: I0131 03:49:22.707938 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-r4gn2"] Jan 31 03:49:22 crc kubenswrapper[4827]: I0131 03:49:22.719650 4827 generic.go:334] "Generic (PLEG): container finished" podID="36f2dbb1-6370-4a38-8702-edf89c8b4668" containerID="aab2e5746993f3c43392059b3ef93d34a7eed7e2e15d13dc0dd7c0daaae2c51d" exitCode=0 Jan 31 03:49:22 crc kubenswrapper[4827]: I0131 03:49:22.719733 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-g76wz" event={"ID":"36f2dbb1-6370-4a38-8702-edf89c8b4668","Type":"ContainerDied","Data":"aab2e5746993f3c43392059b3ef93d34a7eed7e2e15d13dc0dd7c0daaae2c51d"} Jan 31 03:49:22 crc kubenswrapper[4827]: I0131 03:49:22.722199 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 31 03:49:22 crc kubenswrapper[4827]: I0131 03:49:22.722433 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e357c738-a2f2-49a3-b122-5fe5ab45b919-utilities\") pod \"redhat-operators-8lngw\" (UID: \"e357c738-a2f2-49a3-b122-5fe5ab45b919\") " pod="openshift-marketplace/redhat-operators-8lngw" Jan 31 03:49:22 crc 
kubenswrapper[4827]: I0131 03:49:22.722478 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wfdxt\" (UniqueName: \"kubernetes.io/projected/e357c738-a2f2-49a3-b122-5fe5ab45b919-kube-api-access-wfdxt\") pod \"redhat-operators-8lngw\" (UID: \"e357c738-a2f2-49a3-b122-5fe5ab45b919\") " pod="openshift-marketplace/redhat-operators-8lngw" Jan 31 03:49:22 crc kubenswrapper[4827]: I0131 03:49:22.722505 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e357c738-a2f2-49a3-b122-5fe5ab45b919-catalog-content\") pod \"redhat-operators-8lngw\" (UID: \"e357c738-a2f2-49a3-b122-5fe5ab45b919\") " pod="openshift-marketplace/redhat-operators-8lngw" Jan 31 03:49:22 crc kubenswrapper[4827]: I0131 03:49:22.722925 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e357c738-a2f2-49a3-b122-5fe5ab45b919-catalog-content\") pod \"redhat-operators-8lngw\" (UID: \"e357c738-a2f2-49a3-b122-5fe5ab45b919\") " pod="openshift-marketplace/redhat-operators-8lngw" Jan 31 03:49:22 crc kubenswrapper[4827]: E0131 03:49:22.722996 4827 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-31 03:49:23.222981704 +0000 UTC m=+155.910062153 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 03:49:22 crc kubenswrapper[4827]: I0131 03:49:22.723187 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e357c738-a2f2-49a3-b122-5fe5ab45b919-utilities\") pod \"redhat-operators-8lngw\" (UID: \"e357c738-a2f2-49a3-b122-5fe5ab45b919\") " pod="openshift-marketplace/redhat-operators-8lngw" Jan 31 03:49:22 crc kubenswrapper[4827]: I0131 03:49:22.775563 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wfdxt\" (UniqueName: \"kubernetes.io/projected/e357c738-a2f2-49a3-b122-5fe5ab45b919-kube-api-access-wfdxt\") pod \"redhat-operators-8lngw\" (UID: \"e357c738-a2f2-49a3-b122-5fe5ab45b919\") " pod="openshift-marketplace/redhat-operators-8lngw" Jan 31 03:49:22 crc kubenswrapper[4827]: I0131 03:49:22.824706 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-s7psb\" (UID: \"04eac770-8ff7-453b-a1da-b028636b909c\") " pod="openshift-image-registry/image-registry-697d97f7c8-s7psb" Jan 31 03:49:22 crc kubenswrapper[4827]: E0131 03:49:22.825150 4827 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. 
No retries permitted until 2026-01-31 03:49:23.32513447 +0000 UTC m=+156.012214919 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-s7psb" (UID: "04eac770-8ff7-453b-a1da-b028636b909c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 03:49:22 crc kubenswrapper[4827]: I0131 03:49:22.830456 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-8lngw" Jan 31 03:49:22 crc kubenswrapper[4827]: I0131 03:49:22.882669 4827 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Jan 31 03:49:22 crc kubenswrapper[4827]: I0131 03:49:22.884092 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 31 03:49:22 crc kubenswrapper[4827]: I0131 03:49:22.886818 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n" Jan 31 03:49:22 crc kubenswrapper[4827]: I0131 03:49:22.887418 4827 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt" Jan 31 03:49:22 crc kubenswrapper[4827]: I0131 03:49:22.906001 4827 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-c2d85"] Jan 31 03:49:22 crc kubenswrapper[4827]: I0131 03:49:22.907388 4827 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-c2d85" Jan 31 03:49:22 crc kubenswrapper[4827]: I0131 03:49:22.909546 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Jan 31 03:49:22 crc kubenswrapper[4827]: I0131 03:49:22.924836 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-c2d85"] Jan 31 03:49:22 crc kubenswrapper[4827]: I0131 03:49:22.925565 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 31 03:49:22 crc kubenswrapper[4827]: E0131 03:49:22.925937 4827 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-31 03:49:23.425919821 +0000 UTC m=+156.113000270 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 03:49:23 crc kubenswrapper[4827]: I0131 03:49:23.027906 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f3ede25d-6d79-44f7-a853-88b36723eb92-utilities\") pod \"redhat-operators-c2d85\" (UID: \"f3ede25d-6d79-44f7-a853-88b36723eb92\") " pod="openshift-marketplace/redhat-operators-c2d85" Jan 31 03:49:23 crc kubenswrapper[4827]: I0131 03:49:23.027947 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ltnf6\" (UniqueName: \"kubernetes.io/projected/f3ede25d-6d79-44f7-a853-88b36723eb92-kube-api-access-ltnf6\") pod \"redhat-operators-c2d85\" (UID: \"f3ede25d-6d79-44f7-a853-88b36723eb92\") " pod="openshift-marketplace/redhat-operators-c2d85" Jan 31 03:49:23 crc kubenswrapper[4827]: I0131 03:49:23.027974 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-s7psb\" (UID: \"04eac770-8ff7-453b-a1da-b028636b909c\") " pod="openshift-image-registry/image-registry-697d97f7c8-s7psb" Jan 31 03:49:23 crc kubenswrapper[4827]: E0131 03:49:23.028247 4827 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. 
No retries permitted until 2026-01-31 03:49:23.528235693 +0000 UTC m=+156.215316142 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-s7psb" (UID: "04eac770-8ff7-453b-a1da-b028636b909c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 03:49:23 crc kubenswrapper[4827]: I0131 03:49:23.028238 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/b5b2583f-85ab-4e14-b053-3396a1c20c12-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"b5b2583f-85ab-4e14-b053-3396a1c20c12\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 31 03:49:23 crc kubenswrapper[4827]: I0131 03:49:23.028312 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f3ede25d-6d79-44f7-a853-88b36723eb92-catalog-content\") pod \"redhat-operators-c2d85\" (UID: \"f3ede25d-6d79-44f7-a853-88b36723eb92\") " pod="openshift-marketplace/redhat-operators-c2d85" Jan 31 03:49:23 crc kubenswrapper[4827]: I0131 03:49:23.028403 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/b5b2583f-85ab-4e14-b053-3396a1c20c12-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"b5b2583f-85ab-4e14-b053-3396a1c20c12\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 31 03:49:23 crc kubenswrapper[4827]: I0131 03:49:23.076074 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-8lngw"] Jan 31 03:49:23 crc 
kubenswrapper[4827]: I0131 03:49:23.129854 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 31 03:49:23 crc kubenswrapper[4827]: E0131 03:49:23.130037 4827 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-31 03:49:23.630005407 +0000 UTC m=+156.317085866 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 03:49:23 crc kubenswrapper[4827]: I0131 03:49:23.130202 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f3ede25d-6d79-44f7-a853-88b36723eb92-utilities\") pod \"redhat-operators-c2d85\" (UID: \"f3ede25d-6d79-44f7-a853-88b36723eb92\") " pod="openshift-marketplace/redhat-operators-c2d85" Jan 31 03:49:23 crc kubenswrapper[4827]: I0131 03:49:23.130236 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ltnf6\" (UniqueName: \"kubernetes.io/projected/f3ede25d-6d79-44f7-a853-88b36723eb92-kube-api-access-ltnf6\") pod \"redhat-operators-c2d85\" (UID: \"f3ede25d-6d79-44f7-a853-88b36723eb92\") " 
pod="openshift-marketplace/redhat-operators-c2d85" Jan 31 03:49:23 crc kubenswrapper[4827]: I0131 03:49:23.130262 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-s7psb\" (UID: \"04eac770-8ff7-453b-a1da-b028636b909c\") " pod="openshift-image-registry/image-registry-697d97f7c8-s7psb" Jan 31 03:49:23 crc kubenswrapper[4827]: I0131 03:49:23.130329 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/b5b2583f-85ab-4e14-b053-3396a1c20c12-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"b5b2583f-85ab-4e14-b053-3396a1c20c12\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 31 03:49:23 crc kubenswrapper[4827]: I0131 03:49:23.130350 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f3ede25d-6d79-44f7-a853-88b36723eb92-catalog-content\") pod \"redhat-operators-c2d85\" (UID: \"f3ede25d-6d79-44f7-a853-88b36723eb92\") " pod="openshift-marketplace/redhat-operators-c2d85" Jan 31 03:49:23 crc kubenswrapper[4827]: I0131 03:49:23.130372 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/b5b2583f-85ab-4e14-b053-3396a1c20c12-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"b5b2583f-85ab-4e14-b053-3396a1c20c12\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 31 03:49:23 crc kubenswrapper[4827]: I0131 03:49:23.130447 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/b5b2583f-85ab-4e14-b053-3396a1c20c12-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"b5b2583f-85ab-4e14-b053-3396a1c20c12\") " 
pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 31 03:49:23 crc kubenswrapper[4827]: I0131 03:49:23.130682 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f3ede25d-6d79-44f7-a853-88b36723eb92-utilities\") pod \"redhat-operators-c2d85\" (UID: \"f3ede25d-6d79-44f7-a853-88b36723eb92\") " pod="openshift-marketplace/redhat-operators-c2d85" Jan 31 03:49:23 crc kubenswrapper[4827]: E0131 03:49:23.130755 4827 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-31 03:49:23.630735011 +0000 UTC m=+156.317815480 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-s7psb" (UID: "04eac770-8ff7-453b-a1da-b028636b909c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 03:49:23 crc kubenswrapper[4827]: I0131 03:49:23.130813 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f3ede25d-6d79-44f7-a853-88b36723eb92-catalog-content\") pod \"redhat-operators-c2d85\" (UID: \"f3ede25d-6d79-44f7-a853-88b36723eb92\") " pod="openshift-marketplace/redhat-operators-c2d85" Jan 31 03:49:23 crc kubenswrapper[4827]: I0131 03:49:23.147559 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/b5b2583f-85ab-4e14-b053-3396a1c20c12-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"b5b2583f-85ab-4e14-b053-3396a1c20c12\") " 
pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 31 03:49:23 crc kubenswrapper[4827]: I0131 03:49:23.148079 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ltnf6\" (UniqueName: \"kubernetes.io/projected/f3ede25d-6d79-44f7-a853-88b36723eb92-kube-api-access-ltnf6\") pod \"redhat-operators-c2d85\" (UID: \"f3ede25d-6d79-44f7-a853-88b36723eb92\") " pod="openshift-marketplace/redhat-operators-c2d85" Jan 31 03:49:23 crc kubenswrapper[4827]: I0131 03:49:23.209102 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 31 03:49:23 crc kubenswrapper[4827]: I0131 03:49:23.233534 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 31 03:49:23 crc kubenswrapper[4827]: E0131 03:49:23.233770 4827 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-31 03:49:23.733738746 +0000 UTC m=+156.420819235 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 03:49:23 crc kubenswrapper[4827]: I0131 03:49:23.234079 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-s7psb\" (UID: \"04eac770-8ff7-453b-a1da-b028636b909c\") " pod="openshift-image-registry/image-registry-697d97f7c8-s7psb" Jan 31 03:49:23 crc kubenswrapper[4827]: E0131 03:49:23.234599 4827 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-31 03:49:23.734584764 +0000 UTC m=+156.421665243 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-s7psb" (UID: "04eac770-8ff7-453b-a1da-b028636b909c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 03:49:23 crc kubenswrapper[4827]: I0131 03:49:23.247296 4827 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-c2d85" Jan 31 03:49:23 crc kubenswrapper[4827]: I0131 03:49:23.334823 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 31 03:49:23 crc kubenswrapper[4827]: E0131 03:49:23.335109 4827 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-31 03:49:23.835073005 +0000 UTC m=+156.522153494 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 03:49:23 crc kubenswrapper[4827]: I0131 03:49:23.335229 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-s7psb\" (UID: \"04eac770-8ff7-453b-a1da-b028636b909c\") " pod="openshift-image-registry/image-registry-697d97f7c8-s7psb" Jan 31 03:49:23 crc kubenswrapper[4827]: E0131 03:49:23.335511 4827 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: 
nodeName:}" failed. No retries permitted until 2026-01-31 03:49:23.835496069 +0000 UTC m=+156.522576518 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-s7psb" (UID: "04eac770-8ff7-453b-a1da-b028636b909c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 03:49:23 crc kubenswrapper[4827]: I0131 03:49:23.436278 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 31 03:49:23 crc kubenswrapper[4827]: E0131 03:49:23.436694 4827 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-31 03:49:23.936669103 +0000 UTC m=+156.623749552 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 03:49:23 crc kubenswrapper[4827]: I0131 03:49:23.438156 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-s7psb\" (UID: \"04eac770-8ff7-453b-a1da-b028636b909c\") " pod="openshift-image-registry/image-registry-697d97f7c8-s7psb" Jan 31 03:49:23 crc kubenswrapper[4827]: E0131 03:49:23.439029 4827 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-31 03:49:23.939013291 +0000 UTC m=+156.626093760 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-s7psb" (UID: "04eac770-8ff7-453b-a1da-b028636b909c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 03:49:23 crc kubenswrapper[4827]: I0131 03:49:23.540012 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 31 03:49:23 crc kubenswrapper[4827]: E0131 03:49:23.541510 4827 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-31 03:49:24.041457457 +0000 UTC m=+156.728537906 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 03:49:23 crc kubenswrapper[4827]: I0131 03:49:23.553859 4827 plugin_watcher.go:194] "Adding socket path or updating timestamp to desired state cache" path="/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock" Jan 31 03:49:23 crc kubenswrapper[4827]: I0131 03:49:23.587527 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-c2d85"] Jan 31 03:49:23 crc kubenswrapper[4827]: I0131 03:49:23.642463 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-s7psb\" (UID: \"04eac770-8ff7-453b-a1da-b028636b909c\") " pod="openshift-image-registry/image-registry-697d97f7c8-s7psb" Jan 31 03:49:23 crc kubenswrapper[4827]: E0131 03:49:23.643210 4827 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-31 03:49:24.143172619 +0000 UTC m=+156.830253088 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-s7psb" (UID: "04eac770-8ff7-453b-a1da-b028636b909c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 03:49:23 crc kubenswrapper[4827]: I0131 03:49:23.725521 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"31cfbd2b-aaa8-4644-9cf5-9271189cf0bd","Type":"ContainerStarted","Data":"2dcb56db49fc726db0edeaeb3f5d213b33f8da8f0e8c367af31b4c4567431efc"} Jan 31 03:49:23 crc kubenswrapper[4827]: I0131 03:49:23.726496 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-8lngw" event={"ID":"e357c738-a2f2-49a3-b122-5fe5ab45b919","Type":"ContainerStarted","Data":"b36482728ccf239567155a89ae212178c858aa59cf05d460d873a603eb6f6e38"} Jan 31 03:49:23 crc kubenswrapper[4827]: I0131 03:49:23.728231 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-wr7t4" event={"ID":"273683f4-0b94-44d7-83a2-b540f4d5d81d","Type":"ContainerStarted","Data":"73970bda2501806f90281e64e2932d9cc9a3e1d5dd25e3e6318d80c8059be53d"} Jan 31 03:49:23 crc kubenswrapper[4827]: I0131 03:49:23.729397 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-r4gn2" event={"ID":"94e2d804-29e9-4233-adda-45072b493f0f","Type":"ContainerStarted","Data":"b01839ac4012411ed14660660a512b735d03152b7aa276dafcdcb036a45cc2f4"} Jan 31 03:49:23 crc kubenswrapper[4827]: I0131 03:49:23.730420 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-c2d85" 
event={"ID":"f3ede25d-6d79-44f7-a853-88b36723eb92","Type":"ContainerStarted","Data":"4401e7f4ca659e7c4112aabef42d73d4cb523979d1a46f20c0a8dcb04098ab63"} Jan 31 03:49:23 crc kubenswrapper[4827]: I0131 03:49:23.743242 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 31 03:49:23 crc kubenswrapper[4827]: E0131 03:49:23.743456 4827 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-31 03:49:24.243433452 +0000 UTC m=+156.930513901 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 03:49:23 crc kubenswrapper[4827]: I0131 03:49:23.743594 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-s7psb\" (UID: \"04eac770-8ff7-453b-a1da-b028636b909c\") " pod="openshift-image-registry/image-registry-697d97f7c8-s7psb" Jan 31 03:49:23 crc kubenswrapper[4827]: E0131 03:49:23.744015 4827 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-31 03:49:24.244002181 +0000 UTC m=+156.931082630 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-s7psb" (UID: "04eac770-8ff7-453b-a1da-b028636b909c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 03:49:23 crc kubenswrapper[4827]: I0131 03:49:23.747475 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Jan 31 03:49:23 crc kubenswrapper[4827]: I0131 03:49:23.844616 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 31 03:49:23 crc kubenswrapper[4827]: E0131 03:49:23.845040 4827 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-31 03:49:24.34502393 +0000 UTC m=+157.032104379 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 03:49:23 crc kubenswrapper[4827]: W0131 03:49:23.875499 4827 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-podb5b2583f_85ab_4e14_b053_3396a1c20c12.slice/crio-5069ee76ef553eca68d2f940d433671c03b13a55a1c2a4e7379d49bdcf5d4f3c WatchSource:0}: Error finding container 5069ee76ef553eca68d2f940d433671c03b13a55a1c2a4e7379d49bdcf5d4f3c: Status 404 returned error can't find the container with id 5069ee76ef553eca68d2f940d433671c03b13a55a1c2a4e7379d49bdcf5d4f3c Jan 31 03:49:23 crc kubenswrapper[4827]: I0131 03:49:23.946322 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-s7psb\" (UID: \"04eac770-8ff7-453b-a1da-b028636b909c\") " pod="openshift-image-registry/image-registry-697d97f7c8-s7psb" Jan 31 03:49:23 crc kubenswrapper[4827]: E0131 03:49:23.946864 4827 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-31 03:49:24.446840295 +0000 UTC m=+157.133920764 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-s7psb" (UID: "04eac770-8ff7-453b-a1da-b028636b909c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 03:49:24 crc kubenswrapper[4827]: I0131 03:49:24.048723 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 31 03:49:24 crc kubenswrapper[4827]: E0131 03:49:24.049230 4827 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-31 03:49:24.549208819 +0000 UTC m=+157.236289278 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 03:49:24 crc kubenswrapper[4827]: I0131 03:49:24.150522 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-s7psb\" (UID: \"04eac770-8ff7-453b-a1da-b028636b909c\") " pod="openshift-image-registry/image-registry-697d97f7c8-s7psb" Jan 31 03:49:24 crc kubenswrapper[4827]: E0131 03:49:24.151118 4827 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-31 03:49:24.651093286 +0000 UTC m=+157.338173745 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-s7psb" (UID: "04eac770-8ff7-453b-a1da-b028636b909c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 03:49:24 crc kubenswrapper[4827]: I0131 03:49:24.252716 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 31 03:49:24 crc kubenswrapper[4827]: E0131 03:49:24.252979 4827 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-31 03:49:24.752945002 +0000 UTC m=+157.440025461 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 03:49:24 crc kubenswrapper[4827]: I0131 03:49:24.253494 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-s7psb\" (UID: \"04eac770-8ff7-453b-a1da-b028636b909c\") " pod="openshift-image-registry/image-registry-697d97f7c8-s7psb" Jan 31 03:49:24 crc kubenswrapper[4827]: E0131 03:49:24.253952 4827 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-31 03:49:24.753936226 +0000 UTC m=+157.441016675 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-s7psb" (UID: "04eac770-8ff7-453b-a1da-b028636b909c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 03:49:24 crc kubenswrapper[4827]: I0131 03:49:24.355219 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 31 03:49:24 crc kubenswrapper[4827]: E0131 03:49:24.355439 4827 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-31 03:49:24.855406099 +0000 UTC m=+157.542486548 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 03:49:24 crc kubenswrapper[4827]: I0131 03:49:24.355752 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-s7psb\" (UID: \"04eac770-8ff7-453b-a1da-b028636b909c\") " pod="openshift-image-registry/image-registry-697d97f7c8-s7psb" Jan 31 03:49:24 crc kubenswrapper[4827]: E0131 03:49:24.356203 4827 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-31 03:49:24.856184415 +0000 UTC m=+157.543264854 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-s7psb" (UID: "04eac770-8ff7-453b-a1da-b028636b909c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 03:49:24 crc kubenswrapper[4827]: I0131 03:49:24.405489 4827 reconciler.go:161] "OperationExecutor.RegisterPlugin started" plugin={"SocketPath":"/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock","Timestamp":"2026-01-31T03:49:23.553917021Z","Handler":null,"Name":""} Jan 31 03:49:24 crc kubenswrapper[4827]: I0131 03:49:24.408509 4827 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: kubevirt.io.hostpath-provisioner endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock versions: 1.0.0 Jan 31 03:49:24 crc kubenswrapper[4827]: I0131 03:49:24.408565 4827 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: kubevirt.io.hostpath-provisioner at endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock Jan 31 03:49:24 crc kubenswrapper[4827]: I0131 03:49:24.457143 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 31 03:49:24 crc kubenswrapper[4827]: I0131 03:49:24.463530 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (OuterVolumeSpecName: "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: 
"8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8". PluginName "kubernetes.io/csi", VolumeGidValue "" Jan 31 03:49:24 crc kubenswrapper[4827]: I0131 03:49:24.558553 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-s7psb\" (UID: \"04eac770-8ff7-453b-a1da-b028636b909c\") " pod="openshift-image-registry/image-registry-697d97f7c8-s7psb" Jan 31 03:49:24 crc kubenswrapper[4827]: I0131 03:49:24.736233 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"b5b2583f-85ab-4e14-b053-3396a1c20c12","Type":"ContainerStarted","Data":"5069ee76ef553eca68d2f940d433671c03b13a55a1c2a4e7379d49bdcf5d4f3c"} Jan 31 03:49:24 crc kubenswrapper[4827]: I0131 03:49:24.738034 4827 generic.go:334] "Generic (PLEG): container finished" podID="273683f4-0b94-44d7-83a2-b540f4d5d81d" containerID="73970bda2501806f90281e64e2932d9cc9a3e1d5dd25e3e6318d80c8059be53d" exitCode=0 Jan 31 03:49:24 crc kubenswrapper[4827]: I0131 03:49:24.738203 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-wr7t4" event={"ID":"273683f4-0b94-44d7-83a2-b540f4d5d81d","Type":"ContainerDied","Data":"73970bda2501806f90281e64e2932d9cc9a3e1d5dd25e3e6318d80c8059be53d"} Jan 31 03:49:24 crc kubenswrapper[4827]: I0131 03:49:24.739747 4827 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 31 03:49:24 crc kubenswrapper[4827]: I0131 03:49:24.794649 4827 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Jan 31 03:49:24 crc kubenswrapper[4827]: I0131 03:49:24.794715 4827 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-s7psb\" (UID: \"04eac770-8ff7-453b-a1da-b028636b909c\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount\"" pod="openshift-image-registry/image-registry-697d97f7c8-s7psb" Jan 31 03:49:24 crc kubenswrapper[4827]: I0131 03:49:24.819189 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-s7psb\" (UID: \"04eac770-8ff7-453b-a1da-b028636b909c\") " pod="openshift-image-registry/image-registry-697d97f7c8-s7psb" Jan 31 03:49:24 crc kubenswrapper[4827]: I0131 03:49:24.949587 4827 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-s7psb" Jan 31 03:49:25 crc kubenswrapper[4827]: I0131 03:49:25.175919 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-s7psb"] Jan 31 03:49:25 crc kubenswrapper[4827]: W0131 03:49:25.217164 4827 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod04eac770_8ff7_453b_a1da_b028636b909c.slice/crio-bbc795f00e60e11f74cfe9af05a677d8fe31b784b77d37b8acdfcf976338658b WatchSource:0}: Error finding container bbc795f00e60e11f74cfe9af05a677d8fe31b784b77d37b8acdfcf976338658b: Status 404 returned error can't find the container with id bbc795f00e60e11f74cfe9af05a677d8fe31b784b77d37b8acdfcf976338658b Jan 31 03:49:25 crc kubenswrapper[4827]: I0131 03:49:25.713963 4827 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-dns/dns-default-zhrf7" Jan 31 03:49:25 crc kubenswrapper[4827]: I0131 03:49:25.750249 4827 generic.go:334] "Generic (PLEG): container finished" podID="b53b07cf-d0d5-4774-89fe-89765537cc9b" containerID="ee7bac41be598005dee07f1fd57a55f74ec8848805fe784d9c493917bc01c264" exitCode=0 Jan 31 03:49:25 crc kubenswrapper[4827]: I0131 03:49:25.750297 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-jpjgt" event={"ID":"b53b07cf-d0d5-4774-89fe-89765537cc9b","Type":"ContainerDied","Data":"ee7bac41be598005dee07f1fd57a55f74ec8848805fe784d9c493917bc01c264"} Jan 31 03:49:25 crc kubenswrapper[4827]: I0131 03:49:25.758488 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-s7psb" event={"ID":"04eac770-8ff7-453b-a1da-b028636b909c","Type":"ContainerStarted","Data":"bbc795f00e60e11f74cfe9af05a677d8fe31b784b77d37b8acdfcf976338658b"} Jan 31 03:49:26 crc kubenswrapper[4827]: I0131 03:49:26.132945 4827 kubelet_volumes.go:163] "Cleaned up 
orphaned pod volumes dir" podUID="8f668bae-612b-4b75-9490-919e737c6a3b" path="/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes" Jan 31 03:49:26 crc kubenswrapper[4827]: I0131 03:49:26.133831 4827 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 31 03:49:26 crc kubenswrapper[4827]: I0131 03:49:26.765801 4827 generic.go:334] "Generic (PLEG): container finished" podID="94e2d804-29e9-4233-adda-45072b493f0f" containerID="eef2916ca929e315120ff7eb110a19d7c9532941368919484ac1b546a4be9386" exitCode=0 Jan 31 03:49:26 crc kubenswrapper[4827]: I0131 03:49:26.765915 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-r4gn2" event={"ID":"94e2d804-29e9-4233-adda-45072b493f0f","Type":"ContainerDied","Data":"eef2916ca929e315120ff7eb110a19d7c9532941368919484ac1b546a4be9386"} Jan 31 03:49:26 crc kubenswrapper[4827]: I0131 03:49:26.769771 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-7h44q" event={"ID":"9cf25edb-a1c1-4077-97dc-8f87363a8d4e","Type":"ContainerStarted","Data":"4a602e63d67aa267876484dc6326e3040dafca73624c015cdd2b344053c12df9"} Jan 31 03:49:26 crc kubenswrapper[4827]: I0131 03:49:26.771387 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-s7psb" event={"ID":"04eac770-8ff7-453b-a1da-b028636b909c","Type":"ContainerStarted","Data":"9e72f39420d4238f46b07310e10fa60284a81c8bc89f943ac5a01b34249092c9"} Jan 31 03:49:26 crc kubenswrapper[4827]: I0131 03:49:26.772342 4827 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-image-registry/image-registry-697d97f7c8-s7psb" Jan 31 03:49:26 crc kubenswrapper[4827]: I0131 03:49:26.773645 4827 generic.go:334] "Generic (PLEG): container finished" podID="f3ede25d-6d79-44f7-a853-88b36723eb92" 
containerID="c25518e1489adb33ab90a0334fb1369506341518ad3a608423bf26201789e099" exitCode=0 Jan 31 03:49:26 crc kubenswrapper[4827]: I0131 03:49:26.773716 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-c2d85" event={"ID":"f3ede25d-6d79-44f7-a853-88b36723eb92","Type":"ContainerDied","Data":"c25518e1489adb33ab90a0334fb1369506341518ad3a608423bf26201789e099"} Jan 31 03:49:26 crc kubenswrapper[4827]: I0131 03:49:26.780793 4827 generic.go:334] "Generic (PLEG): container finished" podID="31cfbd2b-aaa8-4644-9cf5-9271189cf0bd" containerID="2dcb56db49fc726db0edeaeb3f5d213b33f8da8f0e8c367af31b4c4567431efc" exitCode=0 Jan 31 03:49:26 crc kubenswrapper[4827]: I0131 03:49:26.781021 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"31cfbd2b-aaa8-4644-9cf5-9271189cf0bd","Type":"ContainerDied","Data":"2dcb56db49fc726db0edeaeb3f5d213b33f8da8f0e8c367af31b4c4567431efc"} Jan 31 03:49:26 crc kubenswrapper[4827]: I0131 03:49:26.783300 4827 generic.go:334] "Generic (PLEG): container finished" podID="e357c738-a2f2-49a3-b122-5fe5ab45b919" containerID="fcd58d253e0b8db6249e4558617cad1b2d2180c78f508194665a8a8928e4eb88" exitCode=0 Jan 31 03:49:26 crc kubenswrapper[4827]: I0131 03:49:26.783367 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-8lngw" event={"ID":"e357c738-a2f2-49a3-b122-5fe5ab45b919","Type":"ContainerDied","Data":"fcd58d253e0b8db6249e4558617cad1b2d2180c78f508194665a8a8928e4eb88"} Jan 31 03:49:26 crc kubenswrapper[4827]: I0131 03:49:26.784623 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"b5b2583f-85ab-4e14-b053-3396a1c20c12","Type":"ContainerStarted","Data":"479a48823576995802a9faf4f49cb8a8bed79891fc4cb61ae667c2616160a486"} Jan 31 03:49:26 crc kubenswrapper[4827]: I0131 03:49:26.804827 4827 pod_startup_latency_tracker.go:104] 
"Observed pod startup duration" pod="openshift-image-registry/image-registry-697d97f7c8-s7psb" podStartSLOduration=137.804806558 podStartE2EDuration="2m17.804806558s" podCreationTimestamp="2026-01-31 03:47:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 03:49:26.803316287 +0000 UTC m=+159.490396766" watchObservedRunningTime="2026-01-31 03:49:26.804806558 +0000 UTC m=+159.491887007" Jan 31 03:49:27 crc kubenswrapper[4827]: I0131 03:49:27.722252 4827 patch_prober.go:28] interesting pod/downloads-7954f5f757-sgzkm container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.9:8080/\": dial tcp 10.217.0.9:8080: connect: connection refused" start-of-body= Jan 31 03:49:27 crc kubenswrapper[4827]: I0131 03:49:27.723192 4827 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-sgzkm" podUID="6e659a59-19ab-4c91-98ec-db3042ac1d4b" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.9:8080/\": dial tcp 10.217.0.9:8080: connect: connection refused" Jan 31 03:49:27 crc kubenswrapper[4827]: I0131 03:49:27.722697 4827 patch_prober.go:28] interesting pod/downloads-7954f5f757-sgzkm container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.9:8080/\": dial tcp 10.217.0.9:8080: connect: connection refused" start-of-body= Jan 31 03:49:27 crc kubenswrapper[4827]: I0131 03:49:27.723251 4827 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-sgzkm" podUID="6e659a59-19ab-4c91-98ec-db3042ac1d4b" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.9:8080/\": dial tcp 10.217.0.9:8080: connect: connection refused" Jan 31 03:49:27 crc kubenswrapper[4827]: I0131 03:49:27.793944 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="hostpath-provisioner/csi-hostpathplugin-7h44q" event={"ID":"9cf25edb-a1c1-4077-97dc-8f87363a8d4e","Type":"ContainerStarted","Data":"08a0c168746e9e315a843aefa5a31cbc8ee34818059e7d763f5a0e26419af8ae"} Jan 31 03:49:27 crc kubenswrapper[4827]: I0131 03:49:27.796282 4827 generic.go:334] "Generic (PLEG): container finished" podID="b5b2583f-85ab-4e14-b053-3396a1c20c12" containerID="479a48823576995802a9faf4f49cb8a8bed79891fc4cb61ae667c2616160a486" exitCode=0 Jan 31 03:49:27 crc kubenswrapper[4827]: I0131 03:49:27.798469 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"b5b2583f-85ab-4e14-b053-3396a1c20c12","Type":"ContainerDied","Data":"479a48823576995802a9faf4f49cb8a8bed79891fc4cb61ae667c2616160a486"} Jan 31 03:49:27 crc kubenswrapper[4827]: I0131 03:49:27.824825 4827 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/revision-pruner-8-crc" podStartSLOduration=5.82480179 podStartE2EDuration="5.82480179s" podCreationTimestamp="2026-01-31 03:49:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 03:49:26.931065945 +0000 UTC m=+159.618146414" watchObservedRunningTime="2026-01-31 03:49:27.82480179 +0000 UTC m=+160.511882249" Jan 31 03:49:27 crc kubenswrapper[4827]: I0131 03:49:27.836092 4827 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="hostpath-provisioner/csi-hostpathplugin-7h44q" podStartSLOduration=20.836068334 podStartE2EDuration="20.836068334s" podCreationTimestamp="2026-01-31 03:49:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 03:49:27.817047552 +0000 UTC m=+160.504128011" watchObservedRunningTime="2026-01-31 03:49:27.836068334 +0000 UTC m=+160.523148783" Jan 31 03:49:28 crc kubenswrapper[4827]: I0131 03:49:28.115947 4827 
util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 31 03:49:28 crc kubenswrapper[4827]: I0131 03:49:28.210355 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/31cfbd2b-aaa8-4644-9cf5-9271189cf0bd-kube-api-access\") pod \"31cfbd2b-aaa8-4644-9cf5-9271189cf0bd\" (UID: \"31cfbd2b-aaa8-4644-9cf5-9271189cf0bd\") " Jan 31 03:49:28 crc kubenswrapper[4827]: I0131 03:49:28.210493 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/31cfbd2b-aaa8-4644-9cf5-9271189cf0bd-kubelet-dir\") pod \"31cfbd2b-aaa8-4644-9cf5-9271189cf0bd\" (UID: \"31cfbd2b-aaa8-4644-9cf5-9271189cf0bd\") " Jan 31 03:49:28 crc kubenswrapper[4827]: I0131 03:49:28.210794 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/31cfbd2b-aaa8-4644-9cf5-9271189cf0bd-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "31cfbd2b-aaa8-4644-9cf5-9271189cf0bd" (UID: "31cfbd2b-aaa8-4644-9cf5-9271189cf0bd"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 31 03:49:28 crc kubenswrapper[4827]: I0131 03:49:28.222189 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/31cfbd2b-aaa8-4644-9cf5-9271189cf0bd-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "31cfbd2b-aaa8-4644-9cf5-9271189cf0bd" (UID: "31cfbd2b-aaa8-4644-9cf5-9271189cf0bd"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 03:49:28 crc kubenswrapper[4827]: I0131 03:49:28.312493 4827 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/31cfbd2b-aaa8-4644-9cf5-9271189cf0bd-kubelet-dir\") on node \"crc\" DevicePath \"\"" Jan 31 03:49:28 crc kubenswrapper[4827]: I0131 03:49:28.312526 4827 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/31cfbd2b-aaa8-4644-9cf5-9271189cf0bd-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 31 03:49:28 crc kubenswrapper[4827]: I0131 03:49:28.806567 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"31cfbd2b-aaa8-4644-9cf5-9271189cf0bd","Type":"ContainerDied","Data":"674c7f3eeae9efe37a1bb06f826e68cf4b881cddfb2cbc2e911d147ab956394c"} Jan 31 03:49:28 crc kubenswrapper[4827]: I0131 03:49:28.806625 4827 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="674c7f3eeae9efe37a1bb06f826e68cf4b881cddfb2cbc2e911d147ab956394c" Jan 31 03:49:28 crc kubenswrapper[4827]: I0131 03:49:28.806706 4827 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 31 03:49:29 crc kubenswrapper[4827]: I0131 03:49:29.166829 4827 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 31 03:49:29 crc kubenswrapper[4827]: I0131 03:49:29.224558 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/b5b2583f-85ab-4e14-b053-3396a1c20c12-kubelet-dir\") pod \"b5b2583f-85ab-4e14-b053-3396a1c20c12\" (UID: \"b5b2583f-85ab-4e14-b053-3396a1c20c12\") " Jan 31 03:49:29 crc kubenswrapper[4827]: I0131 03:49:29.224618 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/b5b2583f-85ab-4e14-b053-3396a1c20c12-kube-api-access\") pod \"b5b2583f-85ab-4e14-b053-3396a1c20c12\" (UID: \"b5b2583f-85ab-4e14-b053-3396a1c20c12\") " Jan 31 03:49:29 crc kubenswrapper[4827]: I0131 03:49:29.224967 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b5b2583f-85ab-4e14-b053-3396a1c20c12-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "b5b2583f-85ab-4e14-b053-3396a1c20c12" (UID: "b5b2583f-85ab-4e14-b053-3396a1c20c12"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 31 03:49:29 crc kubenswrapper[4827]: I0131 03:49:29.228426 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b5b2583f-85ab-4e14-b053-3396a1c20c12-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "b5b2583f-85ab-4e14-b053-3396a1c20c12" (UID: "b5b2583f-85ab-4e14-b053-3396a1c20c12"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 03:49:29 crc kubenswrapper[4827]: I0131 03:49:29.326826 4827 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/b5b2583f-85ab-4e14-b053-3396a1c20c12-kubelet-dir\") on node \"crc\" DevicePath \"\"" Jan 31 03:49:29 crc kubenswrapper[4827]: I0131 03:49:29.326867 4827 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/b5b2583f-85ab-4e14-b053-3396a1c20c12-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 31 03:49:29 crc kubenswrapper[4827]: I0131 03:49:29.596715 4827 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-f9d7485db-q4hqs" Jan 31 03:49:29 crc kubenswrapper[4827]: I0131 03:49:29.602297 4827 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-f9d7485db-q4hqs" Jan 31 03:49:29 crc kubenswrapper[4827]: I0131 03:49:29.813551 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"b5b2583f-85ab-4e14-b053-3396a1c20c12","Type":"ContainerDied","Data":"5069ee76ef553eca68d2f940d433671c03b13a55a1c2a4e7379d49bdcf5d4f3c"} Jan 31 03:49:29 crc kubenswrapper[4827]: I0131 03:49:29.813613 4827 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5069ee76ef553eca68d2f940d433671c03b13a55a1c2a4e7379d49bdcf5d4f3c" Jan 31 03:49:29 crc kubenswrapper[4827]: I0131 03:49:29.813583 4827 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 31 03:49:31 crc kubenswrapper[4827]: I0131 03:49:31.606544 4827 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-mkhcp"] Jan 31 03:49:31 crc kubenswrapper[4827]: I0131 03:49:31.608204 4827 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-879f6c89f-mkhcp" podUID="7bafc4cb-e5b7-4b39-9930-b885e403dfca" containerName="controller-manager" containerID="cri-o://4347637921ce370403b003b8740cb9173f2157c3db9d3514df8976509c34a741" gracePeriod=30 Jan 31 03:49:31 crc kubenswrapper[4827]: I0131 03:49:31.632895 4827 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-58p4m"] Jan 31 03:49:31 crc kubenswrapper[4827]: I0131 03:49:31.633125 4827 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-58p4m" podUID="893053b3-df21-4683-a51a-bf12b3bed27d" containerName="route-controller-manager" containerID="cri-o://d306765b467d8f21b503a07a80398acb56f84225397694be9390048043d6fca9" gracePeriod=30 Jan 31 03:49:31 crc kubenswrapper[4827]: I0131 03:49:31.826552 4827 generic.go:334] "Generic (PLEG): container finished" podID="7bafc4cb-e5b7-4b39-9930-b885e403dfca" containerID="4347637921ce370403b003b8740cb9173f2157c3db9d3514df8976509c34a741" exitCode=0 Jan 31 03:49:31 crc kubenswrapper[4827]: I0131 03:49:31.826904 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-mkhcp" event={"ID":"7bafc4cb-e5b7-4b39-9930-b885e403dfca","Type":"ContainerDied","Data":"4347637921ce370403b003b8740cb9173f2157c3db9d3514df8976509c34a741"} Jan 31 03:49:32 crc kubenswrapper[4827]: I0131 03:49:32.170919 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/cf80ec31-1f83-4ed6-84e3-055cf9c88bff-metrics-certs\") pod \"network-metrics-daemon-2shng\" (UID: \"cf80ec31-1f83-4ed6-84e3-055cf9c88bff\") " pod="openshift-multus/network-metrics-daemon-2shng" Jan 31 03:49:32 crc kubenswrapper[4827]: I0131 03:49:32.179583 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/cf80ec31-1f83-4ed6-84e3-055cf9c88bff-metrics-certs\") pod \"network-metrics-daemon-2shng\" (UID: \"cf80ec31-1f83-4ed6-84e3-055cf9c88bff\") " pod="openshift-multus/network-metrics-daemon-2shng" Jan 31 03:49:32 crc kubenswrapper[4827]: I0131 03:49:32.445649 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-2shng" Jan 31 03:49:32 crc kubenswrapper[4827]: I0131 03:49:32.640326 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-2shng"] Jan 31 03:49:32 crc kubenswrapper[4827]: W0131 03:49:32.649348 4827 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podcf80ec31_1f83_4ed6_84e3_055cf9c88bff.slice/crio-e0affbd85a390001cb0ab42212c9a07476ddbd0ec21e35d4388c00c1a5561821 WatchSource:0}: Error finding container e0affbd85a390001cb0ab42212c9a07476ddbd0ec21e35d4388c00c1a5561821: Status 404 returned error can't find the container with id e0affbd85a390001cb0ab42212c9a07476ddbd0ec21e35d4388c00c1a5561821 Jan 31 03:49:32 crc kubenswrapper[4827]: I0131 03:49:32.837744 4827 generic.go:334] "Generic (PLEG): container finished" podID="893053b3-df21-4683-a51a-bf12b3bed27d" containerID="d306765b467d8f21b503a07a80398acb56f84225397694be9390048043d6fca9" exitCode=0 Jan 31 03:49:32 crc kubenswrapper[4827]: I0131 03:49:32.837833 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-58p4m" 
event={"ID":"893053b3-df21-4683-a51a-bf12b3bed27d","Type":"ContainerDied","Data":"d306765b467d8f21b503a07a80398acb56f84225397694be9390048043d6fca9"} Jan 31 03:49:32 crc kubenswrapper[4827]: I0131 03:49:32.837901 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-58p4m" event={"ID":"893053b3-df21-4683-a51a-bf12b3bed27d","Type":"ContainerDied","Data":"d09b5b734c9b5f01cb62f338caa13724323ef31c7cfb1303acb26bda5f0afe2a"} Jan 31 03:49:32 crc kubenswrapper[4827]: I0131 03:49:32.837916 4827 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d09b5b734c9b5f01cb62f338caa13724323ef31c7cfb1303acb26bda5f0afe2a" Jan 31 03:49:32 crc kubenswrapper[4827]: I0131 03:49:32.839893 4827 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-58p4m" Jan 31 03:49:32 crc kubenswrapper[4827]: I0131 03:49:32.844117 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-2shng" event={"ID":"cf80ec31-1f83-4ed6-84e3-055cf9c88bff","Type":"ContainerStarted","Data":"e0affbd85a390001cb0ab42212c9a07476ddbd0ec21e35d4388c00c1a5561821"} Jan 31 03:49:32 crc kubenswrapper[4827]: I0131 03:49:32.879413 4827 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-c65bffc45-hqmdh"] Jan 31 03:49:32 crc kubenswrapper[4827]: E0131 03:49:32.879759 4827 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="893053b3-df21-4683-a51a-bf12b3bed27d" containerName="route-controller-manager" Jan 31 03:49:32 crc kubenswrapper[4827]: I0131 03:49:32.879776 4827 state_mem.go:107] "Deleted CPUSet assignment" podUID="893053b3-df21-4683-a51a-bf12b3bed27d" containerName="route-controller-manager" Jan 31 03:49:32 crc kubenswrapper[4827]: E0131 03:49:32.879789 4827 cpu_manager.go:410] "RemoveStaleState: removing 
container" podUID="31cfbd2b-aaa8-4644-9cf5-9271189cf0bd" containerName="pruner" Jan 31 03:49:32 crc kubenswrapper[4827]: I0131 03:49:32.879796 4827 state_mem.go:107] "Deleted CPUSet assignment" podUID="31cfbd2b-aaa8-4644-9cf5-9271189cf0bd" containerName="pruner" Jan 31 03:49:32 crc kubenswrapper[4827]: E0131 03:49:32.879818 4827 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b5b2583f-85ab-4e14-b053-3396a1c20c12" containerName="pruner" Jan 31 03:49:32 crc kubenswrapper[4827]: I0131 03:49:32.879826 4827 state_mem.go:107] "Deleted CPUSet assignment" podUID="b5b2583f-85ab-4e14-b053-3396a1c20c12" containerName="pruner" Jan 31 03:49:32 crc kubenswrapper[4827]: I0131 03:49:32.879938 4827 memory_manager.go:354] "RemoveStaleState removing state" podUID="893053b3-df21-4683-a51a-bf12b3bed27d" containerName="route-controller-manager" Jan 31 03:49:32 crc kubenswrapper[4827]: I0131 03:49:32.879950 4827 memory_manager.go:354] "RemoveStaleState removing state" podUID="31cfbd2b-aaa8-4644-9cf5-9271189cf0bd" containerName="pruner" Jan 31 03:49:32 crc kubenswrapper[4827]: I0131 03:49:32.879961 4827 memory_manager.go:354] "RemoveStaleState removing state" podUID="b5b2583f-85ab-4e14-b053-3396a1c20c12" containerName="pruner" Jan 31 03:49:32 crc kubenswrapper[4827]: I0131 03:49:32.880519 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/893053b3-df21-4683-a51a-bf12b3bed27d-client-ca\") pod \"893053b3-df21-4683-a51a-bf12b3bed27d\" (UID: \"893053b3-df21-4683-a51a-bf12b3bed27d\") " Jan 31 03:49:32 crc kubenswrapper[4827]: I0131 03:49:32.880590 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/893053b3-df21-4683-a51a-bf12b3bed27d-serving-cert\") pod \"893053b3-df21-4683-a51a-bf12b3bed27d\" (UID: \"893053b3-df21-4683-a51a-bf12b3bed27d\") " Jan 31 03:49:32 crc kubenswrapper[4827]: I0131 03:49:32.880722 
4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/893053b3-df21-4683-a51a-bf12b3bed27d-config\") pod \"893053b3-df21-4683-a51a-bf12b3bed27d\" (UID: \"893053b3-df21-4683-a51a-bf12b3bed27d\") " Jan 31 03:49:32 crc kubenswrapper[4827]: I0131 03:49:32.880755 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-z2gdp\" (UniqueName: \"kubernetes.io/projected/893053b3-df21-4683-a51a-bf12b3bed27d-kube-api-access-z2gdp\") pod \"893053b3-df21-4683-a51a-bf12b3bed27d\" (UID: \"893053b3-df21-4683-a51a-bf12b3bed27d\") " Jan 31 03:49:32 crc kubenswrapper[4827]: I0131 03:49:32.881638 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/893053b3-df21-4683-a51a-bf12b3bed27d-config" (OuterVolumeSpecName: "config") pod "893053b3-df21-4683-a51a-bf12b3bed27d" (UID: "893053b3-df21-4683-a51a-bf12b3bed27d"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 03:49:32 crc kubenswrapper[4827]: I0131 03:49:32.881685 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-c65bffc45-hqmdh" Jan 31 03:49:32 crc kubenswrapper[4827]: I0131 03:49:32.882132 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/893053b3-df21-4683-a51a-bf12b3bed27d-client-ca" (OuterVolumeSpecName: "client-ca") pod "893053b3-df21-4683-a51a-bf12b3bed27d" (UID: "893053b3-df21-4683-a51a-bf12b3bed27d"). InnerVolumeSpecName "client-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 03:49:32 crc kubenswrapper[4827]: I0131 03:49:32.885749 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-c65bffc45-hqmdh"] Jan 31 03:49:32 crc kubenswrapper[4827]: I0131 03:49:32.887089 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/893053b3-df21-4683-a51a-bf12b3bed27d-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "893053b3-df21-4683-a51a-bf12b3bed27d" (UID: "893053b3-df21-4683-a51a-bf12b3bed27d"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 03:49:32 crc kubenswrapper[4827]: I0131 03:49:32.896656 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/893053b3-df21-4683-a51a-bf12b3bed27d-kube-api-access-z2gdp" (OuterVolumeSpecName: "kube-api-access-z2gdp") pod "893053b3-df21-4683-a51a-bf12b3bed27d" (UID: "893053b3-df21-4683-a51a-bf12b3bed27d"). InnerVolumeSpecName "kube-api-access-z2gdp". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 03:49:32 crc kubenswrapper[4827]: I0131 03:49:32.954489 4827 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-mkhcp" Jan 31 03:49:33 crc kubenswrapper[4827]: I0131 03:49:32.982481 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wthzr\" (UniqueName: \"kubernetes.io/projected/90c59ae7-cadc-4d83-9a27-45619b5f3659-kube-api-access-wthzr\") pod \"route-controller-manager-c65bffc45-hqmdh\" (UID: \"90c59ae7-cadc-4d83-9a27-45619b5f3659\") " pod="openshift-route-controller-manager/route-controller-manager-c65bffc45-hqmdh" Jan 31 03:49:33 crc kubenswrapper[4827]: I0131 03:49:32.982531 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/90c59ae7-cadc-4d83-9a27-45619b5f3659-serving-cert\") pod \"route-controller-manager-c65bffc45-hqmdh\" (UID: \"90c59ae7-cadc-4d83-9a27-45619b5f3659\") " pod="openshift-route-controller-manager/route-controller-manager-c65bffc45-hqmdh" Jan 31 03:49:33 crc kubenswrapper[4827]: I0131 03:49:32.982570 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/90c59ae7-cadc-4d83-9a27-45619b5f3659-config\") pod \"route-controller-manager-c65bffc45-hqmdh\" (UID: \"90c59ae7-cadc-4d83-9a27-45619b5f3659\") " pod="openshift-route-controller-manager/route-controller-manager-c65bffc45-hqmdh" Jan 31 03:49:33 crc kubenswrapper[4827]: I0131 03:49:32.982594 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/90c59ae7-cadc-4d83-9a27-45619b5f3659-client-ca\") pod \"route-controller-manager-c65bffc45-hqmdh\" (UID: \"90c59ae7-cadc-4d83-9a27-45619b5f3659\") " pod="openshift-route-controller-manager/route-controller-manager-c65bffc45-hqmdh" Jan 31 03:49:33 crc kubenswrapper[4827]: I0131 03:49:32.982629 4827 reconciler_common.go:293] 
"Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/893053b3-df21-4683-a51a-bf12b3bed27d-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 31 03:49:33 crc kubenswrapper[4827]: I0131 03:49:32.982640 4827 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/893053b3-df21-4683-a51a-bf12b3bed27d-config\") on node \"crc\" DevicePath \"\"" Jan 31 03:49:33 crc kubenswrapper[4827]: I0131 03:49:32.982649 4827 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-z2gdp\" (UniqueName: \"kubernetes.io/projected/893053b3-df21-4683-a51a-bf12b3bed27d-kube-api-access-z2gdp\") on node \"crc\" DevicePath \"\"" Jan 31 03:49:33 crc kubenswrapper[4827]: I0131 03:49:32.982658 4827 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/893053b3-df21-4683-a51a-bf12b3bed27d-client-ca\") on node \"crc\" DevicePath \"\"" Jan 31 03:49:33 crc kubenswrapper[4827]: I0131 03:49:33.083236 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7bafc4cb-e5b7-4b39-9930-b885e403dfca-serving-cert\") pod \"7bafc4cb-e5b7-4b39-9930-b885e403dfca\" (UID: \"7bafc4cb-e5b7-4b39-9930-b885e403dfca\") " Jan 31 03:49:33 crc kubenswrapper[4827]: I0131 03:49:33.083314 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7bafc4cb-e5b7-4b39-9930-b885e403dfca-config\") pod \"7bafc4cb-e5b7-4b39-9930-b885e403dfca\" (UID: \"7bafc4cb-e5b7-4b39-9930-b885e403dfca\") " Jan 31 03:49:33 crc kubenswrapper[4827]: I0131 03:49:33.083386 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7bafc4cb-e5b7-4b39-9930-b885e403dfca-proxy-ca-bundles\") pod \"7bafc4cb-e5b7-4b39-9930-b885e403dfca\" (UID: \"7bafc4cb-e5b7-4b39-9930-b885e403dfca\") 
" Jan 31 03:49:33 crc kubenswrapper[4827]: I0131 03:49:33.083424 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x6x8k\" (UniqueName: \"kubernetes.io/projected/7bafc4cb-e5b7-4b39-9930-b885e403dfca-kube-api-access-x6x8k\") pod \"7bafc4cb-e5b7-4b39-9930-b885e403dfca\" (UID: \"7bafc4cb-e5b7-4b39-9930-b885e403dfca\") " Jan 31 03:49:33 crc kubenswrapper[4827]: I0131 03:49:33.083478 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7bafc4cb-e5b7-4b39-9930-b885e403dfca-client-ca\") pod \"7bafc4cb-e5b7-4b39-9930-b885e403dfca\" (UID: \"7bafc4cb-e5b7-4b39-9930-b885e403dfca\") " Jan 31 03:49:33 crc kubenswrapper[4827]: I0131 03:49:33.083701 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wthzr\" (UniqueName: \"kubernetes.io/projected/90c59ae7-cadc-4d83-9a27-45619b5f3659-kube-api-access-wthzr\") pod \"route-controller-manager-c65bffc45-hqmdh\" (UID: \"90c59ae7-cadc-4d83-9a27-45619b5f3659\") " pod="openshift-route-controller-manager/route-controller-manager-c65bffc45-hqmdh" Jan 31 03:49:33 crc kubenswrapper[4827]: I0131 03:49:33.083730 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/90c59ae7-cadc-4d83-9a27-45619b5f3659-serving-cert\") pod \"route-controller-manager-c65bffc45-hqmdh\" (UID: \"90c59ae7-cadc-4d83-9a27-45619b5f3659\") " pod="openshift-route-controller-manager/route-controller-manager-c65bffc45-hqmdh" Jan 31 03:49:33 crc kubenswrapper[4827]: I0131 03:49:33.083763 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/90c59ae7-cadc-4d83-9a27-45619b5f3659-config\") pod \"route-controller-manager-c65bffc45-hqmdh\" (UID: \"90c59ae7-cadc-4d83-9a27-45619b5f3659\") " 
pod="openshift-route-controller-manager/route-controller-manager-c65bffc45-hqmdh" Jan 31 03:49:33 crc kubenswrapper[4827]: I0131 03:49:33.083786 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/90c59ae7-cadc-4d83-9a27-45619b5f3659-client-ca\") pod \"route-controller-manager-c65bffc45-hqmdh\" (UID: \"90c59ae7-cadc-4d83-9a27-45619b5f3659\") " pod="openshift-route-controller-manager/route-controller-manager-c65bffc45-hqmdh" Jan 31 03:49:33 crc kubenswrapper[4827]: I0131 03:49:33.084749 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7bafc4cb-e5b7-4b39-9930-b885e403dfca-client-ca" (OuterVolumeSpecName: "client-ca") pod "7bafc4cb-e5b7-4b39-9930-b885e403dfca" (UID: "7bafc4cb-e5b7-4b39-9930-b885e403dfca"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 03:49:33 crc kubenswrapper[4827]: I0131 03:49:33.085279 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7bafc4cb-e5b7-4b39-9930-b885e403dfca-config" (OuterVolumeSpecName: "config") pod "7bafc4cb-e5b7-4b39-9930-b885e403dfca" (UID: "7bafc4cb-e5b7-4b39-9930-b885e403dfca"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 03:49:33 crc kubenswrapper[4827]: I0131 03:49:33.085510 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/90c59ae7-cadc-4d83-9a27-45619b5f3659-client-ca\") pod \"route-controller-manager-c65bffc45-hqmdh\" (UID: \"90c59ae7-cadc-4d83-9a27-45619b5f3659\") " pod="openshift-route-controller-manager/route-controller-manager-c65bffc45-hqmdh" Jan 31 03:49:33 crc kubenswrapper[4827]: I0131 03:49:33.085766 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7bafc4cb-e5b7-4b39-9930-b885e403dfca-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "7bafc4cb-e5b7-4b39-9930-b885e403dfca" (UID: "7bafc4cb-e5b7-4b39-9930-b885e403dfca"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 03:49:33 crc kubenswrapper[4827]: I0131 03:49:33.087135 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/90c59ae7-cadc-4d83-9a27-45619b5f3659-config\") pod \"route-controller-manager-c65bffc45-hqmdh\" (UID: \"90c59ae7-cadc-4d83-9a27-45619b5f3659\") " pod="openshift-route-controller-manager/route-controller-manager-c65bffc45-hqmdh" Jan 31 03:49:33 crc kubenswrapper[4827]: I0131 03:49:33.089865 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/90c59ae7-cadc-4d83-9a27-45619b5f3659-serving-cert\") pod \"route-controller-manager-c65bffc45-hqmdh\" (UID: \"90c59ae7-cadc-4d83-9a27-45619b5f3659\") " pod="openshift-route-controller-manager/route-controller-manager-c65bffc45-hqmdh" Jan 31 03:49:33 crc kubenswrapper[4827]: I0131 03:49:33.090377 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7bafc4cb-e5b7-4b39-9930-b885e403dfca-serving-cert" (OuterVolumeSpecName: 
"serving-cert") pod "7bafc4cb-e5b7-4b39-9930-b885e403dfca" (UID: "7bafc4cb-e5b7-4b39-9930-b885e403dfca"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 03:49:33 crc kubenswrapper[4827]: I0131 03:49:33.090764 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7bafc4cb-e5b7-4b39-9930-b885e403dfca-kube-api-access-x6x8k" (OuterVolumeSpecName: "kube-api-access-x6x8k") pod "7bafc4cb-e5b7-4b39-9930-b885e403dfca" (UID: "7bafc4cb-e5b7-4b39-9930-b885e403dfca"). InnerVolumeSpecName "kube-api-access-x6x8k". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 03:49:33 crc kubenswrapper[4827]: I0131 03:49:33.101343 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wthzr\" (UniqueName: \"kubernetes.io/projected/90c59ae7-cadc-4d83-9a27-45619b5f3659-kube-api-access-wthzr\") pod \"route-controller-manager-c65bffc45-hqmdh\" (UID: \"90c59ae7-cadc-4d83-9a27-45619b5f3659\") " pod="openshift-route-controller-manager/route-controller-manager-c65bffc45-hqmdh" Jan 31 03:49:33 crc kubenswrapper[4827]: I0131 03:49:33.184759 4827 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7bafc4cb-e5b7-4b39-9930-b885e403dfca-client-ca\") on node \"crc\" DevicePath \"\"" Jan 31 03:49:33 crc kubenswrapper[4827]: I0131 03:49:33.185226 4827 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7bafc4cb-e5b7-4b39-9930-b885e403dfca-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 31 03:49:33 crc kubenswrapper[4827]: I0131 03:49:33.185237 4827 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7bafc4cb-e5b7-4b39-9930-b885e403dfca-config\") on node \"crc\" DevicePath \"\"" Jan 31 03:49:33 crc kubenswrapper[4827]: I0131 03:49:33.185246 4827 reconciler_common.go:293] "Volume detached for volume 
\"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7bafc4cb-e5b7-4b39-9930-b885e403dfca-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 31 03:49:33 crc kubenswrapper[4827]: I0131 03:49:33.185257 4827 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x6x8k\" (UniqueName: \"kubernetes.io/projected/7bafc4cb-e5b7-4b39-9930-b885e403dfca-kube-api-access-x6x8k\") on node \"crc\" DevicePath \"\"" Jan 31 03:49:33 crc kubenswrapper[4827]: I0131 03:49:33.253751 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-c65bffc45-hqmdh" Jan 31 03:49:33 crc kubenswrapper[4827]: I0131 03:49:33.430253 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-c65bffc45-hqmdh"] Jan 31 03:49:33 crc kubenswrapper[4827]: W0131 03:49:33.437360 4827 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod90c59ae7_cadc_4d83_9a27_45619b5f3659.slice/crio-20a4e2573f66cd96b5d674dff22636353b1e75527301f8a80565cb4feb223c55 WatchSource:0}: Error finding container 20a4e2573f66cd96b5d674dff22636353b1e75527301f8a80565cb4feb223c55: Status 404 returned error can't find the container with id 20a4e2573f66cd96b5d674dff22636353b1e75527301f8a80565cb4feb223c55 Jan 31 03:49:33 crc kubenswrapper[4827]: I0131 03:49:33.855304 4827 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-mkhcp" Jan 31 03:49:33 crc kubenswrapper[4827]: I0131 03:49:33.855315 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-mkhcp" event={"ID":"7bafc4cb-e5b7-4b39-9930-b885e403dfca","Type":"ContainerDied","Data":"3fbdacc92bd0342b626ea0ee73d5c977e739940ba6f3932740b40d8f9574529c"} Jan 31 03:49:33 crc kubenswrapper[4827]: I0131 03:49:33.855945 4827 scope.go:117] "RemoveContainer" containerID="4347637921ce370403b003b8740cb9173f2157c3db9d3514df8976509c34a741" Jan 31 03:49:33 crc kubenswrapper[4827]: I0131 03:49:33.858238 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-c65bffc45-hqmdh" event={"ID":"90c59ae7-cadc-4d83-9a27-45619b5f3659","Type":"ContainerStarted","Data":"55d1d9671ad60fd255cc2e362ad73f621edde5985b4e5aab99e2ea97e12d5228"} Jan 31 03:49:33 crc kubenswrapper[4827]: I0131 03:49:33.858312 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-c65bffc45-hqmdh" event={"ID":"90c59ae7-cadc-4d83-9a27-45619b5f3659","Type":"ContainerStarted","Data":"20a4e2573f66cd96b5d674dff22636353b1e75527301f8a80565cb4feb223c55"} Jan 31 03:49:33 crc kubenswrapper[4827]: I0131 03:49:33.860359 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-2shng" event={"ID":"cf80ec31-1f83-4ed6-84e3-055cf9c88bff","Type":"ContainerStarted","Data":"880309de94233070904ebe1a811cf30f477563abf9f7696564d059f7c85a5d77"} Jan 31 03:49:33 crc kubenswrapper[4827]: I0131 03:49:33.860431 4827 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-58p4m" Jan 31 03:49:33 crc kubenswrapper[4827]: I0131 03:49:33.897806 4827 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-58p4m"] Jan 31 03:49:33 crc kubenswrapper[4827]: I0131 03:49:33.904987 4827 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-58p4m"] Jan 31 03:49:33 crc kubenswrapper[4827]: I0131 03:49:33.914378 4827 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-mkhcp"] Jan 31 03:49:33 crc kubenswrapper[4827]: I0131 03:49:33.917344 4827 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-mkhcp"] Jan 31 03:49:34 crc kubenswrapper[4827]: I0131 03:49:34.118071 4827 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7bafc4cb-e5b7-4b39-9930-b885e403dfca" path="/var/lib/kubelet/pods/7bafc4cb-e5b7-4b39-9930-b885e403dfca/volumes" Jan 31 03:49:34 crc kubenswrapper[4827]: I0131 03:49:34.118615 4827 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="893053b3-df21-4683-a51a-bf12b3bed27d" path="/var/lib/kubelet/pods/893053b3-df21-4683-a51a-bf12b3bed27d/volumes" Jan 31 03:49:34 crc kubenswrapper[4827]: I0131 03:49:34.869446 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-2shng" event={"ID":"cf80ec31-1f83-4ed6-84e3-055cf9c88bff","Type":"ContainerStarted","Data":"75fd8309a3656c7d48bf4485e2f1fb826e89492e1bf91600d42190d436553018"} Jan 31 03:49:34 crc kubenswrapper[4827]: I0131 03:49:34.873621 4827 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-c65bffc45-hqmdh" Jan 31 03:49:34 crc kubenswrapper[4827]: I0131 03:49:34.884703 4827 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/network-metrics-daemon-2shng" podStartSLOduration=145.884686398 podStartE2EDuration="2m25.884686398s" podCreationTimestamp="2026-01-31 03:47:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 03:49:34.882200055 +0000 UTC m=+167.569280504" watchObservedRunningTime="2026-01-31 03:49:34.884686398 +0000 UTC m=+167.571766847" Jan 31 03:49:34 crc kubenswrapper[4827]: I0131 03:49:34.885145 4827 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-c65bffc45-hqmdh" Jan 31 03:49:34 crc kubenswrapper[4827]: I0131 03:49:34.902613 4827 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-c65bffc45-hqmdh" podStartSLOduration=3.902597834 podStartE2EDuration="3.902597834s" podCreationTimestamp="2026-01-31 03:49:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 03:49:34.899715658 +0000 UTC m=+167.586796107" watchObservedRunningTime="2026-01-31 03:49:34.902597834 +0000 UTC m=+167.589678283" Jan 31 03:49:35 crc kubenswrapper[4827]: I0131 03:49:35.399843 4827 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-84cc4c45f9-x8kdh"] Jan 31 03:49:35 crc kubenswrapper[4827]: E0131 03:49:35.400067 4827 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7bafc4cb-e5b7-4b39-9930-b885e403dfca" containerName="controller-manager" Jan 31 03:49:35 crc kubenswrapper[4827]: I0131 03:49:35.400078 4827 state_mem.go:107] "Deleted CPUSet assignment" podUID="7bafc4cb-e5b7-4b39-9930-b885e403dfca" containerName="controller-manager" Jan 31 03:49:35 crc kubenswrapper[4827]: I0131 03:49:35.400187 4827 
memory_manager.go:354] "RemoveStaleState removing state" podUID="7bafc4cb-e5b7-4b39-9930-b885e403dfca" containerName="controller-manager" Jan 31 03:49:35 crc kubenswrapper[4827]: I0131 03:49:35.400628 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-84cc4c45f9-x8kdh" Jan 31 03:49:35 crc kubenswrapper[4827]: I0131 03:49:35.403610 4827 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Jan 31 03:49:35 crc kubenswrapper[4827]: I0131 03:49:35.403825 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Jan 31 03:49:35 crc kubenswrapper[4827]: I0131 03:49:35.403957 4827 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Jan 31 03:49:35 crc kubenswrapper[4827]: I0131 03:49:35.404059 4827 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Jan 31 03:49:35 crc kubenswrapper[4827]: I0131 03:49:35.404945 4827 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Jan 31 03:49:35 crc kubenswrapper[4827]: I0131 03:49:35.407037 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Jan 31 03:49:35 crc kubenswrapper[4827]: I0131 03:49:35.431859 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-84cc4c45f9-x8kdh"] Jan 31 03:49:35 crc kubenswrapper[4827]: I0131 03:49:35.434349 4827 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Jan 31 03:49:35 crc kubenswrapper[4827]: I0131 03:49:35.523898 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/115d98fe-a709-4ffa-9afd-47751519e331-proxy-ca-bundles\") pod \"controller-manager-84cc4c45f9-x8kdh\" (UID: \"115d98fe-a709-4ffa-9afd-47751519e331\") " pod="openshift-controller-manager/controller-manager-84cc4c45f9-x8kdh" Jan 31 03:49:35 crc kubenswrapper[4827]: I0131 03:49:35.524250 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/115d98fe-a709-4ffa-9afd-47751519e331-config\") pod \"controller-manager-84cc4c45f9-x8kdh\" (UID: \"115d98fe-a709-4ffa-9afd-47751519e331\") " pod="openshift-controller-manager/controller-manager-84cc4c45f9-x8kdh" Jan 31 03:49:35 crc kubenswrapper[4827]: I0131 03:49:35.524272 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/115d98fe-a709-4ffa-9afd-47751519e331-serving-cert\") pod \"controller-manager-84cc4c45f9-x8kdh\" (UID: \"115d98fe-a709-4ffa-9afd-47751519e331\") " pod="openshift-controller-manager/controller-manager-84cc4c45f9-x8kdh" Jan 31 03:49:35 crc kubenswrapper[4827]: I0131 03:49:35.524300 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/115d98fe-a709-4ffa-9afd-47751519e331-client-ca\") pod \"controller-manager-84cc4c45f9-x8kdh\" (UID: \"115d98fe-a709-4ffa-9afd-47751519e331\") " pod="openshift-controller-manager/controller-manager-84cc4c45f9-x8kdh" Jan 31 03:49:35 crc kubenswrapper[4827]: I0131 03:49:35.524330 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c6sch\" (UniqueName: \"kubernetes.io/projected/115d98fe-a709-4ffa-9afd-47751519e331-kube-api-access-c6sch\") pod \"controller-manager-84cc4c45f9-x8kdh\" (UID: \"115d98fe-a709-4ffa-9afd-47751519e331\") " 
pod="openshift-controller-manager/controller-manager-84cc4c45f9-x8kdh" Jan 31 03:49:35 crc kubenswrapper[4827]: I0131 03:49:35.625059 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/115d98fe-a709-4ffa-9afd-47751519e331-proxy-ca-bundles\") pod \"controller-manager-84cc4c45f9-x8kdh\" (UID: \"115d98fe-a709-4ffa-9afd-47751519e331\") " pod="openshift-controller-manager/controller-manager-84cc4c45f9-x8kdh" Jan 31 03:49:35 crc kubenswrapper[4827]: I0131 03:49:35.625121 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/115d98fe-a709-4ffa-9afd-47751519e331-config\") pod \"controller-manager-84cc4c45f9-x8kdh\" (UID: \"115d98fe-a709-4ffa-9afd-47751519e331\") " pod="openshift-controller-manager/controller-manager-84cc4c45f9-x8kdh" Jan 31 03:49:35 crc kubenswrapper[4827]: I0131 03:49:35.625141 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/115d98fe-a709-4ffa-9afd-47751519e331-serving-cert\") pod \"controller-manager-84cc4c45f9-x8kdh\" (UID: \"115d98fe-a709-4ffa-9afd-47751519e331\") " pod="openshift-controller-manager/controller-manager-84cc4c45f9-x8kdh" Jan 31 03:49:35 crc kubenswrapper[4827]: I0131 03:49:35.625169 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/115d98fe-a709-4ffa-9afd-47751519e331-client-ca\") pod \"controller-manager-84cc4c45f9-x8kdh\" (UID: \"115d98fe-a709-4ffa-9afd-47751519e331\") " pod="openshift-controller-manager/controller-manager-84cc4c45f9-x8kdh" Jan 31 03:49:35 crc kubenswrapper[4827]: I0131 03:49:35.625200 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c6sch\" (UniqueName: 
\"kubernetes.io/projected/115d98fe-a709-4ffa-9afd-47751519e331-kube-api-access-c6sch\") pod \"controller-manager-84cc4c45f9-x8kdh\" (UID: \"115d98fe-a709-4ffa-9afd-47751519e331\") " pod="openshift-controller-manager/controller-manager-84cc4c45f9-x8kdh" Jan 31 03:49:35 crc kubenswrapper[4827]: I0131 03:49:35.626451 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/115d98fe-a709-4ffa-9afd-47751519e331-client-ca\") pod \"controller-manager-84cc4c45f9-x8kdh\" (UID: \"115d98fe-a709-4ffa-9afd-47751519e331\") " pod="openshift-controller-manager/controller-manager-84cc4c45f9-x8kdh" Jan 31 03:49:35 crc kubenswrapper[4827]: I0131 03:49:35.644156 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/115d98fe-a709-4ffa-9afd-47751519e331-serving-cert\") pod \"controller-manager-84cc4c45f9-x8kdh\" (UID: \"115d98fe-a709-4ffa-9afd-47751519e331\") " pod="openshift-controller-manager/controller-manager-84cc4c45f9-x8kdh" Jan 31 03:49:35 crc kubenswrapper[4827]: I0131 03:49:35.644834 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/115d98fe-a709-4ffa-9afd-47751519e331-proxy-ca-bundles\") pod \"controller-manager-84cc4c45f9-x8kdh\" (UID: \"115d98fe-a709-4ffa-9afd-47751519e331\") " pod="openshift-controller-manager/controller-manager-84cc4c45f9-x8kdh" Jan 31 03:49:35 crc kubenswrapper[4827]: I0131 03:49:35.647538 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/115d98fe-a709-4ffa-9afd-47751519e331-config\") pod \"controller-manager-84cc4c45f9-x8kdh\" (UID: \"115d98fe-a709-4ffa-9afd-47751519e331\") " pod="openshift-controller-manager/controller-manager-84cc4c45f9-x8kdh" Jan 31 03:49:35 crc kubenswrapper[4827]: I0131 03:49:35.647690 4827 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-c6sch\" (UniqueName: \"kubernetes.io/projected/115d98fe-a709-4ffa-9afd-47751519e331-kube-api-access-c6sch\") pod \"controller-manager-84cc4c45f9-x8kdh\" (UID: \"115d98fe-a709-4ffa-9afd-47751519e331\") " pod="openshift-controller-manager/controller-manager-84cc4c45f9-x8kdh" Jan 31 03:49:35 crc kubenswrapper[4827]: I0131 03:49:35.724093 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-84cc4c45f9-x8kdh" Jan 31 03:49:37 crc kubenswrapper[4827]: I0131 03:49:37.722044 4827 patch_prober.go:28] interesting pod/downloads-7954f5f757-sgzkm container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.9:8080/\": dial tcp 10.217.0.9:8080: connect: connection refused" start-of-body= Jan 31 03:49:37 crc kubenswrapper[4827]: I0131 03:49:37.722082 4827 patch_prober.go:28] interesting pod/downloads-7954f5f757-sgzkm container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.9:8080/\": dial tcp 10.217.0.9:8080: connect: connection refused" start-of-body= Jan 31 03:49:37 crc kubenswrapper[4827]: I0131 03:49:37.722113 4827 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-sgzkm" podUID="6e659a59-19ab-4c91-98ec-db3042ac1d4b" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.9:8080/\": dial tcp 10.217.0.9:8080: connect: connection refused" Jan 31 03:49:37 crc kubenswrapper[4827]: I0131 03:49:37.722136 4827 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-sgzkm" podUID="6e659a59-19ab-4c91-98ec-db3042ac1d4b" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.9:8080/\": dial tcp 10.217.0.9:8080: connect: connection refused" Jan 31 03:49:37 crc kubenswrapper[4827]: I0131 03:49:37.722163 4827 kubelet.go:2542] 
"SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-console/downloads-7954f5f757-sgzkm" Jan 31 03:49:37 crc kubenswrapper[4827]: I0131 03:49:37.722836 4827 patch_prober.go:28] interesting pod/downloads-7954f5f757-sgzkm container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.9:8080/\": dial tcp 10.217.0.9:8080: connect: connection refused" start-of-body= Jan 31 03:49:37 crc kubenswrapper[4827]: I0131 03:49:37.723106 4827 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-sgzkm" podUID="6e659a59-19ab-4c91-98ec-db3042ac1d4b" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.9:8080/\": dial tcp 10.217.0.9:8080: connect: connection refused" Jan 31 03:49:37 crc kubenswrapper[4827]: I0131 03:49:37.723181 4827 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="download-server" containerStatusID={"Type":"cri-o","ID":"0ca47e3203274df4706f884ad4d32fda3bc312accf61c75580e5f769317077d6"} pod="openshift-console/downloads-7954f5f757-sgzkm" containerMessage="Container download-server failed liveness probe, will be restarted" Jan 31 03:49:37 crc kubenswrapper[4827]: I0131 03:49:37.723264 4827 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-console/downloads-7954f5f757-sgzkm" podUID="6e659a59-19ab-4c91-98ec-db3042ac1d4b" containerName="download-server" containerID="cri-o://0ca47e3203274df4706f884ad4d32fda3bc312accf61c75580e5f769317077d6" gracePeriod=2 Jan 31 03:49:38 crc kubenswrapper[4827]: I0131 03:49:38.900223 4827 generic.go:334] "Generic (PLEG): container finished" podID="6e659a59-19ab-4c91-98ec-db3042ac1d4b" containerID="0ca47e3203274df4706f884ad4d32fda3bc312accf61c75580e5f769317077d6" exitCode=0 Jan 31 03:49:38 crc kubenswrapper[4827]: I0131 03:49:38.900275 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-sgzkm" 
event={"ID":"6e659a59-19ab-4c91-98ec-db3042ac1d4b","Type":"ContainerDied","Data":"0ca47e3203274df4706f884ad4d32fda3bc312accf61c75580e5f769317077d6"} Jan 31 03:49:44 crc kubenswrapper[4827]: I0131 03:49:44.956088 4827 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-697d97f7c8-s7psb" Jan 31 03:49:47 crc kubenswrapper[4827]: I0131 03:49:47.372002 4827 patch_prober.go:28] interesting pod/machine-config-daemon-jxh94 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 31 03:49:47 crc kubenswrapper[4827]: I0131 03:49:47.372096 4827 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-jxh94" podUID="e63dbb73-e1a2-4796-83c5-2a88e55566b5" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 31 03:49:47 crc kubenswrapper[4827]: I0131 03:49:47.722581 4827 patch_prober.go:28] interesting pod/downloads-7954f5f757-sgzkm container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.9:8080/\": dial tcp 10.217.0.9:8080: connect: connection refused" start-of-body= Jan 31 03:49:47 crc kubenswrapper[4827]: I0131 03:49:47.722640 4827 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-sgzkm" podUID="6e659a59-19ab-4c91-98ec-db3042ac1d4b" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.9:8080/\": dial tcp 10.217.0.9:8080: connect: connection refused" Jan 31 03:49:50 crc kubenswrapper[4827]: I0131 03:49:50.237691 4827 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-tbsqf" Jan 31 03:49:51 crc kubenswrapper[4827]: I0131 03:49:51.554597 4827 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-84cc4c45f9-x8kdh"] Jan 31 03:49:51 crc kubenswrapper[4827]: I0131 03:49:51.637307 4827 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-c65bffc45-hqmdh"] Jan 31 03:49:51 crc kubenswrapper[4827]: I0131 03:49:51.637623 4827 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-c65bffc45-hqmdh" podUID="90c59ae7-cadc-4d83-9a27-45619b5f3659" containerName="route-controller-manager" containerID="cri-o://55d1d9671ad60fd255cc2e362ad73f621edde5985b4e5aab99e2ea97e12d5228" gracePeriod=30 Jan 31 03:49:52 crc kubenswrapper[4827]: I0131 03:49:52.010616 4827 generic.go:334] "Generic (PLEG): container finished" podID="90c59ae7-cadc-4d83-9a27-45619b5f3659" containerID="55d1d9671ad60fd255cc2e362ad73f621edde5985b4e5aab99e2ea97e12d5228" exitCode=0 Jan 31 03:49:52 crc kubenswrapper[4827]: I0131 03:49:52.010732 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-c65bffc45-hqmdh" event={"ID":"90c59ae7-cadc-4d83-9a27-45619b5f3659","Type":"ContainerDied","Data":"55d1d9671ad60fd255cc2e362ad73f621edde5985b4e5aab99e2ea97e12d5228"} Jan 31 03:49:53 crc kubenswrapper[4827]: I0131 03:49:53.255645 4827 patch_prober.go:28] interesting pod/route-controller-manager-c65bffc45-hqmdh container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.54:8443/healthz\": dial tcp 10.217.0.54:8443: connect: connection refused" start-of-body= Jan 31 03:49:53 crc kubenswrapper[4827]: I0131 03:49:53.255776 4827 prober.go:107] "Probe failed" probeType="Readiness" 
pod="openshift-route-controller-manager/route-controller-manager-c65bffc45-hqmdh" podUID="90c59ae7-cadc-4d83-9a27-45619b5f3659" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.54:8443/healthz\": dial tcp 10.217.0.54:8443: connect: connection refused" Jan 31 03:49:56 crc kubenswrapper[4827]: I0131 03:49:56.135600 4827 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 31 03:49:57 crc kubenswrapper[4827]: E0131 03:49:57.218017 4827 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: reading blob sha256:375463ce314e9870c2ef316f6ae8ec2bad821721d7dac5d2800db42bce264bea: Get \"https://registry.redhat.io/v2/redhat/redhat-operator-index/blobs/sha256:375463ce314e9870c2ef316f6ae8ec2bad821721d7dac5d2800db42bce264bea\": context canceled" image="registry.redhat.io/redhat/redhat-operator-index:v4.18" Jan 31 03:49:57 crc kubenswrapper[4827]: E0131 03:49:57.218441 4827 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-wfdxt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-operators-8lngw_openshift-marketplace(e357c738-a2f2-49a3-b122-5fe5ab45b919): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: reading blob sha256:375463ce314e9870c2ef316f6ae8ec2bad821721d7dac5d2800db42bce264bea: Get \"https://registry.redhat.io/v2/redhat/redhat-operator-index/blobs/sha256:375463ce314e9870c2ef316f6ae8ec2bad821721d7dac5d2800db42bce264bea\": context canceled" logger="UnhandledError" Jan 31 03:49:57 crc kubenswrapper[4827]: E0131 03:49:57.219801 4827 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = 
Canceled desc = copying system image from manifest list: reading blob sha256:375463ce314e9870c2ef316f6ae8ec2bad821721d7dac5d2800db42bce264bea: Get \\\"https://registry.redhat.io/v2/redhat/redhat-operator-index/blobs/sha256:375463ce314e9870c2ef316f6ae8ec2bad821721d7dac5d2800db42bce264bea\\\": context canceled\"" pod="openshift-marketplace/redhat-operators-8lngw" podUID="e357c738-a2f2-49a3-b122-5fe5ab45b919" Jan 31 03:49:57 crc kubenswrapper[4827]: I0131 03:49:57.228361 4827 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-c65bffc45-hqmdh" Jan 31 03:49:57 crc kubenswrapper[4827]: I0131 03:49:57.253288 4827 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7ccc7f67b8-65qfv"] Jan 31 03:49:57 crc kubenswrapper[4827]: E0131 03:49:57.253562 4827 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="90c59ae7-cadc-4d83-9a27-45619b5f3659" containerName="route-controller-manager" Jan 31 03:49:57 crc kubenswrapper[4827]: I0131 03:49:57.253578 4827 state_mem.go:107] "Deleted CPUSet assignment" podUID="90c59ae7-cadc-4d83-9a27-45619b5f3659" containerName="route-controller-manager" Jan 31 03:49:57 crc kubenswrapper[4827]: I0131 03:49:57.253696 4827 memory_manager.go:354] "RemoveStaleState removing state" podUID="90c59ae7-cadc-4d83-9a27-45619b5f3659" containerName="route-controller-manager" Jan 31 03:49:57 crc kubenswrapper[4827]: I0131 03:49:57.254110 4827 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-7ccc7f67b8-65qfv" Jan 31 03:49:57 crc kubenswrapper[4827]: I0131 03:49:57.292917 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7ccc7f67b8-65qfv"] Jan 31 03:49:57 crc kubenswrapper[4827]: I0131 03:49:57.315246 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/90c59ae7-cadc-4d83-9a27-45619b5f3659-serving-cert\") pod \"90c59ae7-cadc-4d83-9a27-45619b5f3659\" (UID: \"90c59ae7-cadc-4d83-9a27-45619b5f3659\") " Jan 31 03:49:57 crc kubenswrapper[4827]: I0131 03:49:57.315362 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/90c59ae7-cadc-4d83-9a27-45619b5f3659-client-ca\") pod \"90c59ae7-cadc-4d83-9a27-45619b5f3659\" (UID: \"90c59ae7-cadc-4d83-9a27-45619b5f3659\") " Jan 31 03:49:57 crc kubenswrapper[4827]: I0131 03:49:57.315391 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/90c59ae7-cadc-4d83-9a27-45619b5f3659-config\") pod \"90c59ae7-cadc-4d83-9a27-45619b5f3659\" (UID: \"90c59ae7-cadc-4d83-9a27-45619b5f3659\") " Jan 31 03:49:57 crc kubenswrapper[4827]: I0131 03:49:57.315448 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wthzr\" (UniqueName: \"kubernetes.io/projected/90c59ae7-cadc-4d83-9a27-45619b5f3659-kube-api-access-wthzr\") pod \"90c59ae7-cadc-4d83-9a27-45619b5f3659\" (UID: \"90c59ae7-cadc-4d83-9a27-45619b5f3659\") " Jan 31 03:49:57 crc kubenswrapper[4827]: I0131 03:49:57.315697 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/416bad8a-e265-443e-b753-5025262786fe-client-ca\") pod 
\"route-controller-manager-7ccc7f67b8-65qfv\" (UID: \"416bad8a-e265-443e-b753-5025262786fe\") " pod="openshift-route-controller-manager/route-controller-manager-7ccc7f67b8-65qfv" Jan 31 03:49:57 crc kubenswrapper[4827]: I0131 03:49:57.315793 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6stmz\" (UniqueName: \"kubernetes.io/projected/416bad8a-e265-443e-b753-5025262786fe-kube-api-access-6stmz\") pod \"route-controller-manager-7ccc7f67b8-65qfv\" (UID: \"416bad8a-e265-443e-b753-5025262786fe\") " pod="openshift-route-controller-manager/route-controller-manager-7ccc7f67b8-65qfv" Jan 31 03:49:57 crc kubenswrapper[4827]: I0131 03:49:57.316271 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/90c59ae7-cadc-4d83-9a27-45619b5f3659-client-ca" (OuterVolumeSpecName: "client-ca") pod "90c59ae7-cadc-4d83-9a27-45619b5f3659" (UID: "90c59ae7-cadc-4d83-9a27-45619b5f3659"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 03:49:57 crc kubenswrapper[4827]: I0131 03:49:57.316289 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/90c59ae7-cadc-4d83-9a27-45619b5f3659-config" (OuterVolumeSpecName: "config") pod "90c59ae7-cadc-4d83-9a27-45619b5f3659" (UID: "90c59ae7-cadc-4d83-9a27-45619b5f3659"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 03:49:57 crc kubenswrapper[4827]: I0131 03:49:57.316378 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/416bad8a-e265-443e-b753-5025262786fe-serving-cert\") pod \"route-controller-manager-7ccc7f67b8-65qfv\" (UID: \"416bad8a-e265-443e-b753-5025262786fe\") " pod="openshift-route-controller-manager/route-controller-manager-7ccc7f67b8-65qfv" Jan 31 03:49:57 crc kubenswrapper[4827]: I0131 03:49:57.316422 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/416bad8a-e265-443e-b753-5025262786fe-config\") pod \"route-controller-manager-7ccc7f67b8-65qfv\" (UID: \"416bad8a-e265-443e-b753-5025262786fe\") " pod="openshift-route-controller-manager/route-controller-manager-7ccc7f67b8-65qfv" Jan 31 03:49:57 crc kubenswrapper[4827]: I0131 03:49:57.316482 4827 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/90c59ae7-cadc-4d83-9a27-45619b5f3659-client-ca\") on node \"crc\" DevicePath \"\"" Jan 31 03:49:57 crc kubenswrapper[4827]: I0131 03:49:57.316497 4827 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/90c59ae7-cadc-4d83-9a27-45619b5f3659-config\") on node \"crc\" DevicePath \"\"" Jan 31 03:49:57 crc kubenswrapper[4827]: I0131 03:49:57.323462 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/90c59ae7-cadc-4d83-9a27-45619b5f3659-kube-api-access-wthzr" (OuterVolumeSpecName: "kube-api-access-wthzr") pod "90c59ae7-cadc-4d83-9a27-45619b5f3659" (UID: "90c59ae7-cadc-4d83-9a27-45619b5f3659"). InnerVolumeSpecName "kube-api-access-wthzr". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 03:49:57 crc kubenswrapper[4827]: I0131 03:49:57.323757 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/90c59ae7-cadc-4d83-9a27-45619b5f3659-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "90c59ae7-cadc-4d83-9a27-45619b5f3659" (UID: "90c59ae7-cadc-4d83-9a27-45619b5f3659"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 03:49:57 crc kubenswrapper[4827]: I0131 03:49:57.417499 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/416bad8a-e265-443e-b753-5025262786fe-client-ca\") pod \"route-controller-manager-7ccc7f67b8-65qfv\" (UID: \"416bad8a-e265-443e-b753-5025262786fe\") " pod="openshift-route-controller-manager/route-controller-manager-7ccc7f67b8-65qfv" Jan 31 03:49:57 crc kubenswrapper[4827]: I0131 03:49:57.417560 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6stmz\" (UniqueName: \"kubernetes.io/projected/416bad8a-e265-443e-b753-5025262786fe-kube-api-access-6stmz\") pod \"route-controller-manager-7ccc7f67b8-65qfv\" (UID: \"416bad8a-e265-443e-b753-5025262786fe\") " pod="openshift-route-controller-manager/route-controller-manager-7ccc7f67b8-65qfv" Jan 31 03:49:57 crc kubenswrapper[4827]: I0131 03:49:57.417659 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/416bad8a-e265-443e-b753-5025262786fe-serving-cert\") pod \"route-controller-manager-7ccc7f67b8-65qfv\" (UID: \"416bad8a-e265-443e-b753-5025262786fe\") " pod="openshift-route-controller-manager/route-controller-manager-7ccc7f67b8-65qfv" Jan 31 03:49:57 crc kubenswrapper[4827]: I0131 03:49:57.417686 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/416bad8a-e265-443e-b753-5025262786fe-config\") pod \"route-controller-manager-7ccc7f67b8-65qfv\" (UID: \"416bad8a-e265-443e-b753-5025262786fe\") " pod="openshift-route-controller-manager/route-controller-manager-7ccc7f67b8-65qfv" Jan 31 03:49:57 crc kubenswrapper[4827]: I0131 03:49:57.417744 4827 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wthzr\" (UniqueName: \"kubernetes.io/projected/90c59ae7-cadc-4d83-9a27-45619b5f3659-kube-api-access-wthzr\") on node \"crc\" DevicePath \"\"" Jan 31 03:49:57 crc kubenswrapper[4827]: I0131 03:49:57.417761 4827 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/90c59ae7-cadc-4d83-9a27-45619b5f3659-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 31 03:49:57 crc kubenswrapper[4827]: I0131 03:49:57.418521 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/416bad8a-e265-443e-b753-5025262786fe-client-ca\") pod \"route-controller-manager-7ccc7f67b8-65qfv\" (UID: \"416bad8a-e265-443e-b753-5025262786fe\") " pod="openshift-route-controller-manager/route-controller-manager-7ccc7f67b8-65qfv" Jan 31 03:49:57 crc kubenswrapper[4827]: I0131 03:49:57.418852 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/416bad8a-e265-443e-b753-5025262786fe-config\") pod \"route-controller-manager-7ccc7f67b8-65qfv\" (UID: \"416bad8a-e265-443e-b753-5025262786fe\") " pod="openshift-route-controller-manager/route-controller-manager-7ccc7f67b8-65qfv" Jan 31 03:49:57 crc kubenswrapper[4827]: I0131 03:49:57.420869 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/416bad8a-e265-443e-b753-5025262786fe-serving-cert\") pod \"route-controller-manager-7ccc7f67b8-65qfv\" (UID: \"416bad8a-e265-443e-b753-5025262786fe\") " 
pod="openshift-route-controller-manager/route-controller-manager-7ccc7f67b8-65qfv" Jan 31 03:49:57 crc kubenswrapper[4827]: I0131 03:49:57.437694 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6stmz\" (UniqueName: \"kubernetes.io/projected/416bad8a-e265-443e-b753-5025262786fe-kube-api-access-6stmz\") pod \"route-controller-manager-7ccc7f67b8-65qfv\" (UID: \"416bad8a-e265-443e-b753-5025262786fe\") " pod="openshift-route-controller-manager/route-controller-manager-7ccc7f67b8-65qfv" Jan 31 03:49:57 crc kubenswrapper[4827]: I0131 03:49:57.602577 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-7ccc7f67b8-65qfv" Jan 31 03:49:57 crc kubenswrapper[4827]: I0131 03:49:57.721792 4827 patch_prober.go:28] interesting pod/downloads-7954f5f757-sgzkm container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.9:8080/\": dial tcp 10.217.0.9:8080: connect: connection refused" start-of-body= Jan 31 03:49:57 crc kubenswrapper[4827]: I0131 03:49:57.721901 4827 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-sgzkm" podUID="6e659a59-19ab-4c91-98ec-db3042ac1d4b" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.9:8080/\": dial tcp 10.217.0.9:8080: connect: connection refused" Jan 31 03:49:58 crc kubenswrapper[4827]: I0131 03:49:58.049939 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-c65bffc45-hqmdh" event={"ID":"90c59ae7-cadc-4d83-9a27-45619b5f3659","Type":"ContainerDied","Data":"20a4e2573f66cd96b5d674dff22636353b1e75527301f8a80565cb4feb223c55"} Jan 31 03:49:58 crc kubenswrapper[4827]: I0131 03:49:58.049977 4827 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-c65bffc45-hqmdh" Jan 31 03:49:58 crc kubenswrapper[4827]: I0131 03:49:58.050023 4827 scope.go:117] "RemoveContainer" containerID="55d1d9671ad60fd255cc2e362ad73f621edde5985b4e5aab99e2ea97e12d5228" Jan 31 03:49:58 crc kubenswrapper[4827]: I0131 03:49:58.092821 4827 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-c65bffc45-hqmdh"] Jan 31 03:49:58 crc kubenswrapper[4827]: I0131 03:49:58.095756 4827 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-c65bffc45-hqmdh"] Jan 31 03:49:58 crc kubenswrapper[4827]: I0131 03:49:58.116085 4827 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="90c59ae7-cadc-4d83-9a27-45619b5f3659" path="/var/lib/kubelet/pods/90c59ae7-cadc-4d83-9a27-45619b5f3659/volumes" Jan 31 03:49:58 crc kubenswrapper[4827]: E0131 03:49:58.601203 4827 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-8lngw" podUID="e357c738-a2f2-49a3-b122-5fe5ab45b919" Jan 31 03:49:58 crc kubenswrapper[4827]: E0131 03:49:58.690377 4827 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/community-operator-index:v4.18" Jan 31 03:49:58 crc kubenswrapper[4827]: E0131 03:49:58.690542 4827 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/community-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-ssk2f,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod community-operators-wr7t4_openshift-marketplace(273683f4-0b94-44d7-83a2-b540f4d5d81d): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 31 03:49:58 crc kubenswrapper[4827]: E0131 03:49:58.691744 4827 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/community-operators-wr7t4" podUID="273683f4-0b94-44d7-83a2-b540f4d5d81d" Jan 31 03:49:58 crc 
kubenswrapper[4827]: I0131 03:49:58.876913 4827 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Jan 31 03:49:58 crc kubenswrapper[4827]: I0131 03:49:58.877564 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 31 03:49:58 crc kubenswrapper[4827]: I0131 03:49:58.879843 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n" Jan 31 03:49:58 crc kubenswrapper[4827]: I0131 03:49:58.880653 4827 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt" Jan 31 03:49:58 crc kubenswrapper[4827]: I0131 03:49:58.890370 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Jan 31 03:49:58 crc kubenswrapper[4827]: I0131 03:49:58.938059 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/216831b5-228a-4da4-b592-6901dd531298-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"216831b5-228a-4da4-b592-6901dd531298\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 31 03:49:58 crc kubenswrapper[4827]: I0131 03:49:58.938247 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/216831b5-228a-4da4-b592-6901dd531298-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"216831b5-228a-4da4-b592-6901dd531298\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 31 03:49:59 crc kubenswrapper[4827]: I0131 03:49:59.039687 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/216831b5-228a-4da4-b592-6901dd531298-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: 
\"216831b5-228a-4da4-b592-6901dd531298\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 31 03:49:59 crc kubenswrapper[4827]: I0131 03:49:59.039772 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/216831b5-228a-4da4-b592-6901dd531298-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"216831b5-228a-4da4-b592-6901dd531298\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 31 03:49:59 crc kubenswrapper[4827]: I0131 03:49:59.039893 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/216831b5-228a-4da4-b592-6901dd531298-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"216831b5-228a-4da4-b592-6901dd531298\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 31 03:49:59 crc kubenswrapper[4827]: I0131 03:49:59.073872 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/216831b5-228a-4da4-b592-6901dd531298-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"216831b5-228a-4da4-b592-6901dd531298\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 31 03:49:59 crc kubenswrapper[4827]: I0131 03:49:59.213121 4827 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 31 03:50:02 crc kubenswrapper[4827]: E0131 03:50:02.310314 4827 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-operator-index:v4.18" Jan 31 03:50:02 crc kubenswrapper[4827]: E0131 03:50:02.310976 4827 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-ltnf6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]Containe
rResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-operators-c2d85_openshift-marketplace(f3ede25d-6d79-44f7-a853-88b36723eb92): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 31 03:50:02 crc kubenswrapper[4827]: E0131 03:50:02.312150 4827 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-operators-c2d85" podUID="f3ede25d-6d79-44f7-a853-88b36723eb92" Jan 31 03:50:02 crc kubenswrapper[4827]: E0131 03:50:02.332603 4827 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/community-operator-index:v4.18" Jan 31 03:50:02 crc kubenswrapper[4827]: E0131 03:50:02.332813 4827 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/community-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-m7lw5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod community-operators-h54h5_openshift-marketplace(a4c93e4f-eac3-4794-a748-51adfd8b961c): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 31 03:50:02 crc kubenswrapper[4827]: E0131 03:50:02.333954 4827 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/community-operators-h54h5" podUID="a4c93e4f-eac3-4794-a748-51adfd8b961c" Jan 31 03:50:03 crc 
kubenswrapper[4827]: E0131 03:50:03.698253 4827 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-c2d85" podUID="f3ede25d-6d79-44f7-a853-88b36723eb92" Jan 31 03:50:03 crc kubenswrapper[4827]: E0131 03:50:03.807133 4827 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/certified-operator-index:v4.18" Jan 31 03:50:03 crc kubenswrapper[4827]: E0131 03:50:03.807599 4827 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/certified-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-vkwk4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod certified-operators-8rcw5_openshift-marketplace(062f8208-e13f-439f-bb1f-13b9c91c5ea3): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 31 03:50:03 crc kubenswrapper[4827]: E0131 03:50:03.808782 4827 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/certified-operators-8rcw5" podUID="062f8208-e13f-439f-bb1f-13b9c91c5ea3" Jan 31 03:50:04 crc 
kubenswrapper[4827]: I0131 03:50:04.275848 4827 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Jan 31 03:50:04 crc kubenswrapper[4827]: I0131 03:50:04.277090 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Jan 31 03:50:04 crc kubenswrapper[4827]: I0131 03:50:04.285784 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Jan 31 03:50:04 crc kubenswrapper[4827]: I0131 03:50:04.417452 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f1f631cd-2800-402c-9fe1-06af2bc620fd-var-lock\") pod \"installer-9-crc\" (UID: \"f1f631cd-2800-402c-9fe1-06af2bc620fd\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 31 03:50:04 crc kubenswrapper[4827]: I0131 03:50:04.417496 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/f1f631cd-2800-402c-9fe1-06af2bc620fd-kube-api-access\") pod \"installer-9-crc\" (UID: \"f1f631cd-2800-402c-9fe1-06af2bc620fd\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 31 03:50:04 crc kubenswrapper[4827]: I0131 03:50:04.417529 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/f1f631cd-2800-402c-9fe1-06af2bc620fd-kubelet-dir\") pod \"installer-9-crc\" (UID: \"f1f631cd-2800-402c-9fe1-06af2bc620fd\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 31 03:50:04 crc kubenswrapper[4827]: I0131 03:50:04.519171 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f1f631cd-2800-402c-9fe1-06af2bc620fd-var-lock\") pod \"installer-9-crc\" (UID: \"f1f631cd-2800-402c-9fe1-06af2bc620fd\") " 
pod="openshift-kube-apiserver/installer-9-crc" Jan 31 03:50:04 crc kubenswrapper[4827]: I0131 03:50:04.519221 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/f1f631cd-2800-402c-9fe1-06af2bc620fd-kube-api-access\") pod \"installer-9-crc\" (UID: \"f1f631cd-2800-402c-9fe1-06af2bc620fd\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 31 03:50:04 crc kubenswrapper[4827]: I0131 03:50:04.519252 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/f1f631cd-2800-402c-9fe1-06af2bc620fd-kubelet-dir\") pod \"installer-9-crc\" (UID: \"f1f631cd-2800-402c-9fe1-06af2bc620fd\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 31 03:50:04 crc kubenswrapper[4827]: I0131 03:50:04.519315 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/f1f631cd-2800-402c-9fe1-06af2bc620fd-kubelet-dir\") pod \"installer-9-crc\" (UID: \"f1f631cd-2800-402c-9fe1-06af2bc620fd\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 31 03:50:04 crc kubenswrapper[4827]: I0131 03:50:04.519351 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f1f631cd-2800-402c-9fe1-06af2bc620fd-var-lock\") pod \"installer-9-crc\" (UID: \"f1f631cd-2800-402c-9fe1-06af2bc620fd\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 31 03:50:04 crc kubenswrapper[4827]: I0131 03:50:04.539145 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/f1f631cd-2800-402c-9fe1-06af2bc620fd-kube-api-access\") pod \"installer-9-crc\" (UID: \"f1f631cd-2800-402c-9fe1-06af2bc620fd\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 31 03:50:04 crc kubenswrapper[4827]: I0131 03:50:04.605095 4827 util.go:30] "No sandbox for pod can be 
found. Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Jan 31 03:50:06 crc kubenswrapper[4827]: I0131 03:50:06.458090 4827 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-84cc4c45f9-x8kdh"] Jan 31 03:50:06 crc kubenswrapper[4827]: W0131 03:50:06.470819 4827 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod115d98fe_a709_4ffa_9afd_47751519e331.slice/crio-2e550f383ada424e91c7ecf4374757a498bc7b2e1d13fb857d6af361b8664b4d WatchSource:0}: Error finding container 2e550f383ada424e91c7ecf4374757a498bc7b2e1d13fb857d6af361b8664b4d: Status 404 returned error can't find the container with id 2e550f383ada424e91c7ecf4374757a498bc7b2e1d13fb857d6af361b8664b4d Jan 31 03:50:06 crc kubenswrapper[4827]: I0131 03:50:06.490109 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7ccc7f67b8-65qfv"] Jan 31 03:50:06 crc kubenswrapper[4827]: I0131 03:50:06.497725 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Jan 31 03:50:06 crc kubenswrapper[4827]: W0131 03:50:06.499435 4827 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-podf1f631cd_2800_402c_9fe1_06af2bc620fd.slice/crio-e30c3ba2b5d51ac18429c4be6045ca325177669f29a2ba17c2bfcf022b8549a7 WatchSource:0}: Error finding container e30c3ba2b5d51ac18429c4be6045ca325177669f29a2ba17c2bfcf022b8549a7: Status 404 returned error can't find the container with id e30c3ba2b5d51ac18429c4be6045ca325177669f29a2ba17c2bfcf022b8549a7 Jan 31 03:50:06 crc kubenswrapper[4827]: W0131 03:50:06.500557 4827 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod416bad8a_e265_443e_b753_5025262786fe.slice/crio-16fe6112620ca44dff286771460bce6cac5d85c450f1e154d79d190d1735e625 
WatchSource:0}: Error finding container 16fe6112620ca44dff286771460bce6cac5d85c450f1e154d79d190d1735e625: Status 404 returned error can't find the container with id 16fe6112620ca44dff286771460bce6cac5d85c450f1e154d79d190d1735e625 Jan 31 03:50:06 crc kubenswrapper[4827]: I0131 03:50:06.539847 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Jan 31 03:50:06 crc kubenswrapper[4827]: E0131 03:50:06.752480 4827 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-marketplace-index:v4.18" Jan 31 03:50:06 crc kubenswrapper[4827]: E0131 03:50:06.752926 4827 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-marketplace-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-xqt67,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-marketplace-jpjgt_openshift-marketplace(b53b07cf-d0d5-4774-89fe-89765537cc9b): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 31 03:50:06 crc kubenswrapper[4827]: E0131 03:50:06.754393 4827 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-marketplace-jpjgt" podUID="b53b07cf-d0d5-4774-89fe-89765537cc9b" Jan 31 03:50:06 crc 
kubenswrapper[4827]: E0131 03:50:06.818794 4827 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/certified-operator-index:v4.18" Jan 31 03:50:06 crc kubenswrapper[4827]: E0131 03:50:06.818979 4827 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/certified-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-995cj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 
certified-operators-g76wz_openshift-marketplace(36f2dbb1-6370-4a38-8702-edf89c8b4668): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 31 03:50:06 crc kubenswrapper[4827]: E0131 03:50:06.820138 4827 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/certified-operators-g76wz" podUID="36f2dbb1-6370-4a38-8702-edf89c8b4668" Jan 31 03:50:06 crc kubenswrapper[4827]: E0131 03:50:06.894970 4827 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-marketplace-index:v4.18" Jan 31 03:50:06 crc kubenswrapper[4827]: E0131 03:50:06.895127 4827 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-marketplace-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-87x6p,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-marketplace-r4gn2_openshift-marketplace(94e2d804-29e9-4233-adda-45072b493f0f): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 31 03:50:06 crc kubenswrapper[4827]: E0131 03:50:06.897059 4827 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-marketplace-r4gn2" podUID="94e2d804-29e9-4233-adda-45072b493f0f" Jan 31 03:50:07 crc 
kubenswrapper[4827]: I0131 03:50:07.119172 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"f1f631cd-2800-402c-9fe1-06af2bc620fd","Type":"ContainerStarted","Data":"375f29f668d0ce9011a44f9645a5a47c6754ececcca7b5e6f78e7d2eac5a1160"} Jan 31 03:50:07 crc kubenswrapper[4827]: I0131 03:50:07.119228 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"f1f631cd-2800-402c-9fe1-06af2bc620fd","Type":"ContainerStarted","Data":"e30c3ba2b5d51ac18429c4be6045ca325177669f29a2ba17c2bfcf022b8549a7"} Jan 31 03:50:07 crc kubenswrapper[4827]: I0131 03:50:07.120716 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"216831b5-228a-4da4-b592-6901dd531298","Type":"ContainerStarted","Data":"f4b30d7f67f2db76090a130ed3224ae2b939de7f0b82612c0dafb785003c66a5"} Jan 31 03:50:07 crc kubenswrapper[4827]: I0131 03:50:07.120765 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"216831b5-228a-4da4-b592-6901dd531298","Type":"ContainerStarted","Data":"3146abc2d19b30fe1f7c85564bf7e259454fe4fc340d7e54da50cc6084f97fd9"} Jan 31 03:50:07 crc kubenswrapper[4827]: I0131 03:50:07.123080 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-sgzkm" event={"ID":"6e659a59-19ab-4c91-98ec-db3042ac1d4b","Type":"ContainerStarted","Data":"f86560fea88f800a4b84a2131b67d62cc126619f2ac322678268a0c9dbe822ad"} Jan 31 03:50:07 crc kubenswrapper[4827]: I0131 03:50:07.123307 4827 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/downloads-7954f5f757-sgzkm" Jan 31 03:50:07 crc kubenswrapper[4827]: I0131 03:50:07.123559 4827 patch_prober.go:28] interesting pod/downloads-7954f5f757-sgzkm container/download-server namespace/openshift-console: Readiness probe status=failure output="Get 
\"http://10.217.0.9:8080/\": dial tcp 10.217.0.9:8080: connect: connection refused" start-of-body= Jan 31 03:50:07 crc kubenswrapper[4827]: I0131 03:50:07.123602 4827 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-sgzkm" podUID="6e659a59-19ab-4c91-98ec-db3042ac1d4b" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.9:8080/\": dial tcp 10.217.0.9:8080: connect: connection refused" Jan 31 03:50:07 crc kubenswrapper[4827]: I0131 03:50:07.124461 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-7ccc7f67b8-65qfv" event={"ID":"416bad8a-e265-443e-b753-5025262786fe","Type":"ContainerStarted","Data":"10002f346ee2755e57101a5dc0415d1b1bdb1960a334417a79d246658c1bc3ee"} Jan 31 03:50:07 crc kubenswrapper[4827]: I0131 03:50:07.124496 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-7ccc7f67b8-65qfv" event={"ID":"416bad8a-e265-443e-b753-5025262786fe","Type":"ContainerStarted","Data":"16fe6112620ca44dff286771460bce6cac5d85c450f1e154d79d190d1735e625"} Jan 31 03:50:07 crc kubenswrapper[4827]: I0131 03:50:07.124682 4827 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-7ccc7f67b8-65qfv" Jan 31 03:50:07 crc kubenswrapper[4827]: I0131 03:50:07.126089 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-84cc4c45f9-x8kdh" event={"ID":"115d98fe-a709-4ffa-9afd-47751519e331","Type":"ContainerStarted","Data":"a5652868228cadb7c565891c0fa5698172a718fb826f3ee136c163379d2e6fd9"} Jan 31 03:50:07 crc kubenswrapper[4827]: I0131 03:50:07.126133 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-84cc4c45f9-x8kdh" 
event={"ID":"115d98fe-a709-4ffa-9afd-47751519e331","Type":"ContainerStarted","Data":"2e550f383ada424e91c7ecf4374757a498bc7b2e1d13fb857d6af361b8664b4d"} Jan 31 03:50:07 crc kubenswrapper[4827]: I0131 03:50:07.126190 4827 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-84cc4c45f9-x8kdh" podUID="115d98fe-a709-4ffa-9afd-47751519e331" containerName="controller-manager" containerID="cri-o://a5652868228cadb7c565891c0fa5698172a718fb826f3ee136c163379d2e6fd9" gracePeriod=30 Jan 31 03:50:07 crc kubenswrapper[4827]: I0131 03:50:07.126355 4827 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-84cc4c45f9-x8kdh" Jan 31 03:50:07 crc kubenswrapper[4827]: E0131 03:50:07.127869 4827 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-r4gn2" podUID="94e2d804-29e9-4233-adda-45072b493f0f" Jan 31 03:50:07 crc kubenswrapper[4827]: I0131 03:50:07.135911 4827 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-84cc4c45f9-x8kdh" Jan 31 03:50:07 crc kubenswrapper[4827]: I0131 03:50:07.141210 4827 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/installer-9-crc" podStartSLOduration=3.141193273 podStartE2EDuration="3.141193273s" podCreationTimestamp="2026-01-31 03:50:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 03:50:07.139225147 +0000 UTC m=+199.826305646" watchObservedRunningTime="2026-01-31 03:50:07.141193273 +0000 UTC m=+199.828273722" Jan 31 03:50:07 crc kubenswrapper[4827]: I0131 03:50:07.166664 4827 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-84cc4c45f9-x8kdh" podStartSLOduration=36.166645568 podStartE2EDuration="36.166645568s" podCreationTimestamp="2026-01-31 03:49:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 03:50:07.1637345 +0000 UTC m=+199.850814949" watchObservedRunningTime="2026-01-31 03:50:07.166645568 +0000 UTC m=+199.853726017" Jan 31 03:50:07 crc kubenswrapper[4827]: I0131 03:50:07.239926 4827 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/revision-pruner-9-crc" podStartSLOduration=9.239907139 podStartE2EDuration="9.239907139s" podCreationTimestamp="2026-01-31 03:49:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 03:50:07.236192875 +0000 UTC m=+199.923273324" watchObservedRunningTime="2026-01-31 03:50:07.239907139 +0000 UTC m=+199.926987588" Jan 31 03:50:07 crc kubenswrapper[4827]: I0131 03:50:07.261976 4827 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-7ccc7f67b8-65qfv" podStartSLOduration=16.26195674 podStartE2EDuration="16.26195674s" podCreationTimestamp="2026-01-31 03:49:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 03:50:07.255839265 +0000 UTC m=+199.942919714" watchObservedRunningTime="2026-01-31 03:50:07.26195674 +0000 UTC m=+199.949037189" Jan 31 03:50:07 crc kubenswrapper[4827]: I0131 03:50:07.276678 4827 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-7ccc7f67b8-65qfv" Jan 31 03:50:07 crc kubenswrapper[4827]: I0131 03:50:07.498756 4827 
util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-84cc4c45f9-x8kdh" Jan 31 03:50:07 crc kubenswrapper[4827]: I0131 03:50:07.526847 4827 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-849ff9796-h2t6h"] Jan 31 03:50:07 crc kubenswrapper[4827]: E0131 03:50:07.527068 4827 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="115d98fe-a709-4ffa-9afd-47751519e331" containerName="controller-manager" Jan 31 03:50:07 crc kubenswrapper[4827]: I0131 03:50:07.527081 4827 state_mem.go:107] "Deleted CPUSet assignment" podUID="115d98fe-a709-4ffa-9afd-47751519e331" containerName="controller-manager" Jan 31 03:50:07 crc kubenswrapper[4827]: I0131 03:50:07.527174 4827 memory_manager.go:354] "RemoveStaleState removing state" podUID="115d98fe-a709-4ffa-9afd-47751519e331" containerName="controller-manager" Jan 31 03:50:07 crc kubenswrapper[4827]: I0131 03:50:07.527515 4827 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-849ff9796-h2t6h" Jan 31 03:50:07 crc kubenswrapper[4827]: I0131 03:50:07.534732 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-849ff9796-h2t6h"] Jan 31 03:50:07 crc kubenswrapper[4827]: I0131 03:50:07.559632 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-c6sch\" (UniqueName: \"kubernetes.io/projected/115d98fe-a709-4ffa-9afd-47751519e331-kube-api-access-c6sch\") pod \"115d98fe-a709-4ffa-9afd-47751519e331\" (UID: \"115d98fe-a709-4ffa-9afd-47751519e331\") " Jan 31 03:50:07 crc kubenswrapper[4827]: I0131 03:50:07.559669 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/115d98fe-a709-4ffa-9afd-47751519e331-client-ca\") pod \"115d98fe-a709-4ffa-9afd-47751519e331\" (UID: \"115d98fe-a709-4ffa-9afd-47751519e331\") " Jan 31 03:50:07 crc kubenswrapper[4827]: I0131 03:50:07.559715 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/115d98fe-a709-4ffa-9afd-47751519e331-proxy-ca-bundles\") pod \"115d98fe-a709-4ffa-9afd-47751519e331\" (UID: \"115d98fe-a709-4ffa-9afd-47751519e331\") " Jan 31 03:50:07 crc kubenswrapper[4827]: I0131 03:50:07.559732 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/115d98fe-a709-4ffa-9afd-47751519e331-serving-cert\") pod \"115d98fe-a709-4ffa-9afd-47751519e331\" (UID: \"115d98fe-a709-4ffa-9afd-47751519e331\") " Jan 31 03:50:07 crc kubenswrapper[4827]: I0131 03:50:07.559839 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/115d98fe-a709-4ffa-9afd-47751519e331-config\") pod \"115d98fe-a709-4ffa-9afd-47751519e331\" (UID: 
\"115d98fe-a709-4ffa-9afd-47751519e331\") " Jan 31 03:50:07 crc kubenswrapper[4827]: I0131 03:50:07.560536 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/115d98fe-a709-4ffa-9afd-47751519e331-client-ca" (OuterVolumeSpecName: "client-ca") pod "115d98fe-a709-4ffa-9afd-47751519e331" (UID: "115d98fe-a709-4ffa-9afd-47751519e331"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 03:50:07 crc kubenswrapper[4827]: I0131 03:50:07.560560 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/115d98fe-a709-4ffa-9afd-47751519e331-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "115d98fe-a709-4ffa-9afd-47751519e331" (UID: "115d98fe-a709-4ffa-9afd-47751519e331"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 03:50:07 crc kubenswrapper[4827]: I0131 03:50:07.560599 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/115d98fe-a709-4ffa-9afd-47751519e331-config" (OuterVolumeSpecName: "config") pod "115d98fe-a709-4ffa-9afd-47751519e331" (UID: "115d98fe-a709-4ffa-9afd-47751519e331"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 03:50:07 crc kubenswrapper[4827]: I0131 03:50:07.565398 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/115d98fe-a709-4ffa-9afd-47751519e331-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "115d98fe-a709-4ffa-9afd-47751519e331" (UID: "115d98fe-a709-4ffa-9afd-47751519e331"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 03:50:07 crc kubenswrapper[4827]: I0131 03:50:07.565545 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/115d98fe-a709-4ffa-9afd-47751519e331-kube-api-access-c6sch" (OuterVolumeSpecName: "kube-api-access-c6sch") pod "115d98fe-a709-4ffa-9afd-47751519e331" (UID: "115d98fe-a709-4ffa-9afd-47751519e331"). InnerVolumeSpecName "kube-api-access-c6sch". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 03:50:07 crc kubenswrapper[4827]: I0131 03:50:07.661519 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d42cab75-5850-473d-9b36-1c55858fe5ee-serving-cert\") pod \"controller-manager-849ff9796-h2t6h\" (UID: \"d42cab75-5850-473d-9b36-1c55858fe5ee\") " pod="openshift-controller-manager/controller-manager-849ff9796-h2t6h" Jan 31 03:50:07 crc kubenswrapper[4827]: I0131 03:50:07.661565 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/d42cab75-5850-473d-9b36-1c55858fe5ee-client-ca\") pod \"controller-manager-849ff9796-h2t6h\" (UID: \"d42cab75-5850-473d-9b36-1c55858fe5ee\") " pod="openshift-controller-manager/controller-manager-849ff9796-h2t6h" Jan 31 03:50:07 crc kubenswrapper[4827]: I0131 03:50:07.661602 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/d42cab75-5850-473d-9b36-1c55858fe5ee-proxy-ca-bundles\") pod \"controller-manager-849ff9796-h2t6h\" (UID: \"d42cab75-5850-473d-9b36-1c55858fe5ee\") " pod="openshift-controller-manager/controller-manager-849ff9796-h2t6h" Jan 31 03:50:07 crc kubenswrapper[4827]: I0131 03:50:07.661621 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" 
(UniqueName: \"kubernetes.io/configmap/d42cab75-5850-473d-9b36-1c55858fe5ee-config\") pod \"controller-manager-849ff9796-h2t6h\" (UID: \"d42cab75-5850-473d-9b36-1c55858fe5ee\") " pod="openshift-controller-manager/controller-manager-849ff9796-h2t6h" Jan 31 03:50:07 crc kubenswrapper[4827]: I0131 03:50:07.661771 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nxzpn\" (UniqueName: \"kubernetes.io/projected/d42cab75-5850-473d-9b36-1c55858fe5ee-kube-api-access-nxzpn\") pod \"controller-manager-849ff9796-h2t6h\" (UID: \"d42cab75-5850-473d-9b36-1c55858fe5ee\") " pod="openshift-controller-manager/controller-manager-849ff9796-h2t6h" Jan 31 03:50:07 crc kubenswrapper[4827]: I0131 03:50:07.661983 4827 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/115d98fe-a709-4ffa-9afd-47751519e331-config\") on node \"crc\" DevicePath \"\"" Jan 31 03:50:07 crc kubenswrapper[4827]: I0131 03:50:07.661995 4827 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-c6sch\" (UniqueName: \"kubernetes.io/projected/115d98fe-a709-4ffa-9afd-47751519e331-kube-api-access-c6sch\") on node \"crc\" DevicePath \"\"" Jan 31 03:50:07 crc kubenswrapper[4827]: I0131 03:50:07.662004 4827 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/115d98fe-a709-4ffa-9afd-47751519e331-client-ca\") on node \"crc\" DevicePath \"\"" Jan 31 03:50:07 crc kubenswrapper[4827]: I0131 03:50:07.662013 4827 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/115d98fe-a709-4ffa-9afd-47751519e331-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 31 03:50:07 crc kubenswrapper[4827]: I0131 03:50:07.662021 4827 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/115d98fe-a709-4ffa-9afd-47751519e331-serving-cert\") on 
node \"crc\" DevicePath \"\"" Jan 31 03:50:07 crc kubenswrapper[4827]: I0131 03:50:07.720623 4827 patch_prober.go:28] interesting pod/downloads-7954f5f757-sgzkm container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.9:8080/\": dial tcp 10.217.0.9:8080: connect: connection refused" start-of-body= Jan 31 03:50:07 crc kubenswrapper[4827]: I0131 03:50:07.720673 4827 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-sgzkm" podUID="6e659a59-19ab-4c91-98ec-db3042ac1d4b" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.9:8080/\": dial tcp 10.217.0.9:8080: connect: connection refused" Jan 31 03:50:07 crc kubenswrapper[4827]: I0131 03:50:07.721145 4827 patch_prober.go:28] interesting pod/downloads-7954f5f757-sgzkm container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.9:8080/\": dial tcp 10.217.0.9:8080: connect: connection refused" start-of-body= Jan 31 03:50:07 crc kubenswrapper[4827]: I0131 03:50:07.721202 4827 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-sgzkm" podUID="6e659a59-19ab-4c91-98ec-db3042ac1d4b" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.9:8080/\": dial tcp 10.217.0.9:8080: connect: connection refused" Jan 31 03:50:07 crc kubenswrapper[4827]: I0131 03:50:07.762995 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d42cab75-5850-473d-9b36-1c55858fe5ee-serving-cert\") pod \"controller-manager-849ff9796-h2t6h\" (UID: \"d42cab75-5850-473d-9b36-1c55858fe5ee\") " pod="openshift-controller-manager/controller-manager-849ff9796-h2t6h" Jan 31 03:50:07 crc kubenswrapper[4827]: I0131 03:50:07.763038 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" 
(UniqueName: \"kubernetes.io/configmap/d42cab75-5850-473d-9b36-1c55858fe5ee-client-ca\") pod \"controller-manager-849ff9796-h2t6h\" (UID: \"d42cab75-5850-473d-9b36-1c55858fe5ee\") " pod="openshift-controller-manager/controller-manager-849ff9796-h2t6h" Jan 31 03:50:07 crc kubenswrapper[4827]: I0131 03:50:07.763074 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/d42cab75-5850-473d-9b36-1c55858fe5ee-proxy-ca-bundles\") pod \"controller-manager-849ff9796-h2t6h\" (UID: \"d42cab75-5850-473d-9b36-1c55858fe5ee\") " pod="openshift-controller-manager/controller-manager-849ff9796-h2t6h" Jan 31 03:50:07 crc kubenswrapper[4827]: I0131 03:50:07.763094 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d42cab75-5850-473d-9b36-1c55858fe5ee-config\") pod \"controller-manager-849ff9796-h2t6h\" (UID: \"d42cab75-5850-473d-9b36-1c55858fe5ee\") " pod="openshift-controller-manager/controller-manager-849ff9796-h2t6h" Jan 31 03:50:07 crc kubenswrapper[4827]: I0131 03:50:07.763120 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nxzpn\" (UniqueName: \"kubernetes.io/projected/d42cab75-5850-473d-9b36-1c55858fe5ee-kube-api-access-nxzpn\") pod \"controller-manager-849ff9796-h2t6h\" (UID: \"d42cab75-5850-473d-9b36-1c55858fe5ee\") " pod="openshift-controller-manager/controller-manager-849ff9796-h2t6h" Jan 31 03:50:07 crc kubenswrapper[4827]: I0131 03:50:07.764281 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/d42cab75-5850-473d-9b36-1c55858fe5ee-client-ca\") pod \"controller-manager-849ff9796-h2t6h\" (UID: \"d42cab75-5850-473d-9b36-1c55858fe5ee\") " pod="openshift-controller-manager/controller-manager-849ff9796-h2t6h" Jan 31 03:50:07 crc kubenswrapper[4827]: I0131 03:50:07.764593 4827 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d42cab75-5850-473d-9b36-1c55858fe5ee-config\") pod \"controller-manager-849ff9796-h2t6h\" (UID: \"d42cab75-5850-473d-9b36-1c55858fe5ee\") " pod="openshift-controller-manager/controller-manager-849ff9796-h2t6h" Jan 31 03:50:07 crc kubenswrapper[4827]: I0131 03:50:07.765646 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/d42cab75-5850-473d-9b36-1c55858fe5ee-proxy-ca-bundles\") pod \"controller-manager-849ff9796-h2t6h\" (UID: \"d42cab75-5850-473d-9b36-1c55858fe5ee\") " pod="openshift-controller-manager/controller-manager-849ff9796-h2t6h" Jan 31 03:50:07 crc kubenswrapper[4827]: I0131 03:50:07.767514 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d42cab75-5850-473d-9b36-1c55858fe5ee-serving-cert\") pod \"controller-manager-849ff9796-h2t6h\" (UID: \"d42cab75-5850-473d-9b36-1c55858fe5ee\") " pod="openshift-controller-manager/controller-manager-849ff9796-h2t6h" Jan 31 03:50:07 crc kubenswrapper[4827]: I0131 03:50:07.780112 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nxzpn\" (UniqueName: \"kubernetes.io/projected/d42cab75-5850-473d-9b36-1c55858fe5ee-kube-api-access-nxzpn\") pod \"controller-manager-849ff9796-h2t6h\" (UID: \"d42cab75-5850-473d-9b36-1c55858fe5ee\") " pod="openshift-controller-manager/controller-manager-849ff9796-h2t6h" Jan 31 03:50:07 crc kubenswrapper[4827]: I0131 03:50:07.908764 4827 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-849ff9796-h2t6h" Jan 31 03:50:08 crc kubenswrapper[4827]: I0131 03:50:08.132536 4827 generic.go:334] "Generic (PLEG): container finished" podID="216831b5-228a-4da4-b592-6901dd531298" containerID="f4b30d7f67f2db76090a130ed3224ae2b939de7f0b82612c0dafb785003c66a5" exitCode=0 Jan 31 03:50:08 crc kubenswrapper[4827]: I0131 03:50:08.132640 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"216831b5-228a-4da4-b592-6901dd531298","Type":"ContainerDied","Data":"f4b30d7f67f2db76090a130ed3224ae2b939de7f0b82612c0dafb785003c66a5"} Jan 31 03:50:08 crc kubenswrapper[4827]: I0131 03:50:08.137521 4827 generic.go:334] "Generic (PLEG): container finished" podID="115d98fe-a709-4ffa-9afd-47751519e331" containerID="a5652868228cadb7c565891c0fa5698172a718fb826f3ee136c163379d2e6fd9" exitCode=0 Jan 31 03:50:08 crc kubenswrapper[4827]: I0131 03:50:08.137608 4827 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-84cc4c45f9-x8kdh" Jan 31 03:50:08 crc kubenswrapper[4827]: I0131 03:50:08.137741 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-84cc4c45f9-x8kdh" event={"ID":"115d98fe-a709-4ffa-9afd-47751519e331","Type":"ContainerDied","Data":"a5652868228cadb7c565891c0fa5698172a718fb826f3ee136c163379d2e6fd9"} Jan 31 03:50:08 crc kubenswrapper[4827]: I0131 03:50:08.137777 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-84cc4c45f9-x8kdh" event={"ID":"115d98fe-a709-4ffa-9afd-47751519e331","Type":"ContainerDied","Data":"2e550f383ada424e91c7ecf4374757a498bc7b2e1d13fb857d6af361b8664b4d"} Jan 31 03:50:08 crc kubenswrapper[4827]: I0131 03:50:08.137831 4827 scope.go:117] "RemoveContainer" containerID="a5652868228cadb7c565891c0fa5698172a718fb826f3ee136c163379d2e6fd9" Jan 31 03:50:08 crc kubenswrapper[4827]: I0131 03:50:08.138664 4827 patch_prober.go:28] interesting pod/downloads-7954f5f757-sgzkm container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.9:8080/\": dial tcp 10.217.0.9:8080: connect: connection refused" start-of-body= Jan 31 03:50:08 crc kubenswrapper[4827]: I0131 03:50:08.138711 4827 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-sgzkm" podUID="6e659a59-19ab-4c91-98ec-db3042ac1d4b" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.9:8080/\": dial tcp 10.217.0.9:8080: connect: connection refused" Jan 31 03:50:08 crc kubenswrapper[4827]: I0131 03:50:08.168624 4827 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-84cc4c45f9-x8kdh"] Jan 31 03:50:08 crc kubenswrapper[4827]: I0131 03:50:08.172155 4827 kubelet.go:2431] "SyncLoop REMOVE" source="api" 
pods=["openshift-controller-manager/controller-manager-84cc4c45f9-x8kdh"] Jan 31 03:50:08 crc kubenswrapper[4827]: I0131 03:50:08.190722 4827 scope.go:117] "RemoveContainer" containerID="a5652868228cadb7c565891c0fa5698172a718fb826f3ee136c163379d2e6fd9" Jan 31 03:50:08 crc kubenswrapper[4827]: E0131 03:50:08.191138 4827 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a5652868228cadb7c565891c0fa5698172a718fb826f3ee136c163379d2e6fd9\": container with ID starting with a5652868228cadb7c565891c0fa5698172a718fb826f3ee136c163379d2e6fd9 not found: ID does not exist" containerID="a5652868228cadb7c565891c0fa5698172a718fb826f3ee136c163379d2e6fd9" Jan 31 03:50:08 crc kubenswrapper[4827]: I0131 03:50:08.191177 4827 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a5652868228cadb7c565891c0fa5698172a718fb826f3ee136c163379d2e6fd9"} err="failed to get container status \"a5652868228cadb7c565891c0fa5698172a718fb826f3ee136c163379d2e6fd9\": rpc error: code = NotFound desc = could not find container \"a5652868228cadb7c565891c0fa5698172a718fb826f3ee136c163379d2e6fd9\": container with ID starting with a5652868228cadb7c565891c0fa5698172a718fb826f3ee136c163379d2e6fd9 not found: ID does not exist" Jan 31 03:50:08 crc kubenswrapper[4827]: I0131 03:50:08.325008 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-849ff9796-h2t6h"] Jan 31 03:50:08 crc kubenswrapper[4827]: W0131 03:50:08.339373 4827 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd42cab75_5850_473d_9b36_1c55858fe5ee.slice/crio-3ebdcf5bc70010eb03d9b09dac29d9478d4e5cb6249b53e80941dee3b112bc09 WatchSource:0}: Error finding container 3ebdcf5bc70010eb03d9b09dac29d9478d4e5cb6249b53e80941dee3b112bc09: Status 404 returned error can't find the container with id 
3ebdcf5bc70010eb03d9b09dac29d9478d4e5cb6249b53e80941dee3b112bc09 Jan 31 03:50:09 crc kubenswrapper[4827]: I0131 03:50:09.143480 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-849ff9796-h2t6h" event={"ID":"d42cab75-5850-473d-9b36-1c55858fe5ee","Type":"ContainerStarted","Data":"3ebdcf5bc70010eb03d9b09dac29d9478d4e5cb6249b53e80941dee3b112bc09"} Jan 31 03:50:09 crc kubenswrapper[4827]: I0131 03:50:09.370659 4827 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 31 03:50:09 crc kubenswrapper[4827]: I0131 03:50:09.484975 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/216831b5-228a-4da4-b592-6901dd531298-kube-api-access\") pod \"216831b5-228a-4da4-b592-6901dd531298\" (UID: \"216831b5-228a-4da4-b592-6901dd531298\") " Jan 31 03:50:09 crc kubenswrapper[4827]: I0131 03:50:09.485040 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/216831b5-228a-4da4-b592-6901dd531298-kubelet-dir\") pod \"216831b5-228a-4da4-b592-6901dd531298\" (UID: \"216831b5-228a-4da4-b592-6901dd531298\") " Jan 31 03:50:09 crc kubenswrapper[4827]: I0131 03:50:09.485417 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/216831b5-228a-4da4-b592-6901dd531298-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "216831b5-228a-4da4-b592-6901dd531298" (UID: "216831b5-228a-4da4-b592-6901dd531298"). InnerVolumeSpecName "kubelet-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 31 03:50:09 crc kubenswrapper[4827]: I0131 03:50:09.491225 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/216831b5-228a-4da4-b592-6901dd531298-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "216831b5-228a-4da4-b592-6901dd531298" (UID: "216831b5-228a-4da4-b592-6901dd531298"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 03:50:09 crc kubenswrapper[4827]: I0131 03:50:09.591516 4827 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/216831b5-228a-4da4-b592-6901dd531298-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 31 03:50:09 crc kubenswrapper[4827]: I0131 03:50:09.591554 4827 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/216831b5-228a-4da4-b592-6901dd531298-kubelet-dir\") on node \"crc\" DevicePath \"\"" Jan 31 03:50:10 crc kubenswrapper[4827]: I0131 03:50:10.115820 4827 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="115d98fe-a709-4ffa-9afd-47751519e331" path="/var/lib/kubelet/pods/115d98fe-a709-4ffa-9afd-47751519e331/volumes" Jan 31 03:50:10 crc kubenswrapper[4827]: I0131 03:50:10.149335 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-849ff9796-h2t6h" event={"ID":"d42cab75-5850-473d-9b36-1c55858fe5ee","Type":"ContainerStarted","Data":"c4c5ae3b34831b5776ce0a6a2da28118fe4674b99cf4978865d238028f3736ec"} Jan 31 03:50:10 crc kubenswrapper[4827]: I0131 03:50:10.149600 4827 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-849ff9796-h2t6h" Jan 31 03:50:10 crc kubenswrapper[4827]: I0131 03:50:10.152378 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" 
event={"ID":"216831b5-228a-4da4-b592-6901dd531298","Type":"ContainerDied","Data":"3146abc2d19b30fe1f7c85564bf7e259454fe4fc340d7e54da50cc6084f97fd9"} Jan 31 03:50:10 crc kubenswrapper[4827]: I0131 03:50:10.152436 4827 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3146abc2d19b30fe1f7c85564bf7e259454fe4fc340d7e54da50cc6084f97fd9" Jan 31 03:50:10 crc kubenswrapper[4827]: I0131 03:50:10.152449 4827 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 31 03:50:10 crc kubenswrapper[4827]: I0131 03:50:10.156663 4827 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-849ff9796-h2t6h" Jan 31 03:50:10 crc kubenswrapper[4827]: I0131 03:50:10.170761 4827 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-849ff9796-h2t6h" podStartSLOduration=19.170742408 podStartE2EDuration="19.170742408s" podCreationTimestamp="2026-01-31 03:49:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 03:50:10.169678793 +0000 UTC m=+202.856759262" watchObservedRunningTime="2026-01-31 03:50:10.170742408 +0000 UTC m=+202.857822857" Jan 31 03:50:11 crc kubenswrapper[4827]: I0131 03:50:11.495725 4827 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-849ff9796-h2t6h"] Jan 31 03:50:11 crc kubenswrapper[4827]: I0131 03:50:11.518555 4827 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7ccc7f67b8-65qfv"] Jan 31 03:50:11 crc kubenswrapper[4827]: I0131 03:50:11.518809 4827 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-7ccc7f67b8-65qfv" 
podUID="416bad8a-e265-443e-b753-5025262786fe" containerName="route-controller-manager" containerID="cri-o://10002f346ee2755e57101a5dc0415d1b1bdb1960a334417a79d246658c1bc3ee" gracePeriod=30 Jan 31 03:50:12 crc kubenswrapper[4827]: I0131 03:50:12.165278 4827 generic.go:334] "Generic (PLEG): container finished" podID="416bad8a-e265-443e-b753-5025262786fe" containerID="10002f346ee2755e57101a5dc0415d1b1bdb1960a334417a79d246658c1bc3ee" exitCode=0 Jan 31 03:50:12 crc kubenswrapper[4827]: I0131 03:50:12.165375 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-7ccc7f67b8-65qfv" event={"ID":"416bad8a-e265-443e-b753-5025262786fe","Type":"ContainerDied","Data":"10002f346ee2755e57101a5dc0415d1b1bdb1960a334417a79d246658c1bc3ee"} Jan 31 03:50:13 crc kubenswrapper[4827]: I0131 03:50:13.122905 4827 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-7ccc7f67b8-65qfv" Jan 31 03:50:13 crc kubenswrapper[4827]: I0131 03:50:13.152609 4827 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-754d5cf94c-jqrwd"] Jan 31 03:50:13 crc kubenswrapper[4827]: E0131 03:50:13.152979 4827 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="416bad8a-e265-443e-b753-5025262786fe" containerName="route-controller-manager" Jan 31 03:50:13 crc kubenswrapper[4827]: I0131 03:50:13.153001 4827 state_mem.go:107] "Deleted CPUSet assignment" podUID="416bad8a-e265-443e-b753-5025262786fe" containerName="route-controller-manager" Jan 31 03:50:13 crc kubenswrapper[4827]: E0131 03:50:13.153012 4827 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="216831b5-228a-4da4-b592-6901dd531298" containerName="pruner" Jan 31 03:50:13 crc kubenswrapper[4827]: I0131 03:50:13.153021 4827 state_mem.go:107] "Deleted CPUSet assignment" podUID="216831b5-228a-4da4-b592-6901dd531298" 
containerName="pruner" Jan 31 03:50:13 crc kubenswrapper[4827]: I0131 03:50:13.153137 4827 memory_manager.go:354] "RemoveStaleState removing state" podUID="416bad8a-e265-443e-b753-5025262786fe" containerName="route-controller-manager" Jan 31 03:50:13 crc kubenswrapper[4827]: I0131 03:50:13.153159 4827 memory_manager.go:354] "RemoveStaleState removing state" podUID="216831b5-228a-4da4-b592-6901dd531298" containerName="pruner" Jan 31 03:50:13 crc kubenswrapper[4827]: I0131 03:50:13.153776 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-754d5cf94c-jqrwd" Jan 31 03:50:13 crc kubenswrapper[4827]: I0131 03:50:13.166446 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-754d5cf94c-jqrwd"] Jan 31 03:50:13 crc kubenswrapper[4827]: I0131 03:50:13.177158 4827 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-849ff9796-h2t6h" podUID="d42cab75-5850-473d-9b36-1c55858fe5ee" containerName="controller-manager" containerID="cri-o://c4c5ae3b34831b5776ce0a6a2da28118fe4674b99cf4978865d238028f3736ec" gracePeriod=30 Jan 31 03:50:13 crc kubenswrapper[4827]: I0131 03:50:13.177336 4827 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-7ccc7f67b8-65qfv" Jan 31 03:50:13 crc kubenswrapper[4827]: I0131 03:50:13.177774 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-7ccc7f67b8-65qfv" event={"ID":"416bad8a-e265-443e-b753-5025262786fe","Type":"ContainerDied","Data":"16fe6112620ca44dff286771460bce6cac5d85c450f1e154d79d190d1735e625"} Jan 31 03:50:13 crc kubenswrapper[4827]: I0131 03:50:13.177819 4827 scope.go:117] "RemoveContainer" containerID="10002f346ee2755e57101a5dc0415d1b1bdb1960a334417a79d246658c1bc3ee" Jan 31 03:50:13 crc kubenswrapper[4827]: I0131 03:50:13.242475 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/416bad8a-e265-443e-b753-5025262786fe-client-ca\") pod \"416bad8a-e265-443e-b753-5025262786fe\" (UID: \"416bad8a-e265-443e-b753-5025262786fe\") " Jan 31 03:50:13 crc kubenswrapper[4827]: I0131 03:50:13.242614 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6stmz\" (UniqueName: \"kubernetes.io/projected/416bad8a-e265-443e-b753-5025262786fe-kube-api-access-6stmz\") pod \"416bad8a-e265-443e-b753-5025262786fe\" (UID: \"416bad8a-e265-443e-b753-5025262786fe\") " Jan 31 03:50:13 crc kubenswrapper[4827]: I0131 03:50:13.242657 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/416bad8a-e265-443e-b753-5025262786fe-config\") pod \"416bad8a-e265-443e-b753-5025262786fe\" (UID: \"416bad8a-e265-443e-b753-5025262786fe\") " Jan 31 03:50:13 crc kubenswrapper[4827]: I0131 03:50:13.242704 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/416bad8a-e265-443e-b753-5025262786fe-serving-cert\") pod \"416bad8a-e265-443e-b753-5025262786fe\" (UID: 
\"416bad8a-e265-443e-b753-5025262786fe\") " Jan 31 03:50:13 crc kubenswrapper[4827]: I0131 03:50:13.242978 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t4xx4\" (UniqueName: \"kubernetes.io/projected/17f6d164-a958-46a2-92a7-fc25ee889c44-kube-api-access-t4xx4\") pod \"route-controller-manager-754d5cf94c-jqrwd\" (UID: \"17f6d164-a958-46a2-92a7-fc25ee889c44\") " pod="openshift-route-controller-manager/route-controller-manager-754d5cf94c-jqrwd" Jan 31 03:50:13 crc kubenswrapper[4827]: I0131 03:50:13.243026 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/17f6d164-a958-46a2-92a7-fc25ee889c44-config\") pod \"route-controller-manager-754d5cf94c-jqrwd\" (UID: \"17f6d164-a958-46a2-92a7-fc25ee889c44\") " pod="openshift-route-controller-manager/route-controller-manager-754d5cf94c-jqrwd" Jan 31 03:50:13 crc kubenswrapper[4827]: I0131 03:50:13.243083 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/17f6d164-a958-46a2-92a7-fc25ee889c44-serving-cert\") pod \"route-controller-manager-754d5cf94c-jqrwd\" (UID: \"17f6d164-a958-46a2-92a7-fc25ee889c44\") " pod="openshift-route-controller-manager/route-controller-manager-754d5cf94c-jqrwd" Jan 31 03:50:13 crc kubenswrapper[4827]: I0131 03:50:13.243236 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/17f6d164-a958-46a2-92a7-fc25ee889c44-client-ca\") pod \"route-controller-manager-754d5cf94c-jqrwd\" (UID: \"17f6d164-a958-46a2-92a7-fc25ee889c44\") " pod="openshift-route-controller-manager/route-controller-manager-754d5cf94c-jqrwd" Jan 31 03:50:13 crc kubenswrapper[4827]: I0131 03:50:13.243433 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for 
volume "kubernetes.io/configmap/416bad8a-e265-443e-b753-5025262786fe-config" (OuterVolumeSpecName: "config") pod "416bad8a-e265-443e-b753-5025262786fe" (UID: "416bad8a-e265-443e-b753-5025262786fe"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 03:50:13 crc kubenswrapper[4827]: I0131 03:50:13.243791 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/416bad8a-e265-443e-b753-5025262786fe-client-ca" (OuterVolumeSpecName: "client-ca") pod "416bad8a-e265-443e-b753-5025262786fe" (UID: "416bad8a-e265-443e-b753-5025262786fe"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 03:50:13 crc kubenswrapper[4827]: I0131 03:50:13.253624 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/416bad8a-e265-443e-b753-5025262786fe-kube-api-access-6stmz" (OuterVolumeSpecName: "kube-api-access-6stmz") pod "416bad8a-e265-443e-b753-5025262786fe" (UID: "416bad8a-e265-443e-b753-5025262786fe"). InnerVolumeSpecName "kube-api-access-6stmz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 03:50:13 crc kubenswrapper[4827]: I0131 03:50:13.259488 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/416bad8a-e265-443e-b753-5025262786fe-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "416bad8a-e265-443e-b753-5025262786fe" (UID: "416bad8a-e265-443e-b753-5025262786fe"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 03:50:13 crc kubenswrapper[4827]: I0131 03:50:13.344797 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t4xx4\" (UniqueName: \"kubernetes.io/projected/17f6d164-a958-46a2-92a7-fc25ee889c44-kube-api-access-t4xx4\") pod \"route-controller-manager-754d5cf94c-jqrwd\" (UID: \"17f6d164-a958-46a2-92a7-fc25ee889c44\") " pod="openshift-route-controller-manager/route-controller-manager-754d5cf94c-jqrwd" Jan 31 03:50:13 crc kubenswrapper[4827]: I0131 03:50:13.344857 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/17f6d164-a958-46a2-92a7-fc25ee889c44-config\") pod \"route-controller-manager-754d5cf94c-jqrwd\" (UID: \"17f6d164-a958-46a2-92a7-fc25ee889c44\") " pod="openshift-route-controller-manager/route-controller-manager-754d5cf94c-jqrwd" Jan 31 03:50:13 crc kubenswrapper[4827]: I0131 03:50:13.344907 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/17f6d164-a958-46a2-92a7-fc25ee889c44-serving-cert\") pod \"route-controller-manager-754d5cf94c-jqrwd\" (UID: \"17f6d164-a958-46a2-92a7-fc25ee889c44\") " pod="openshift-route-controller-manager/route-controller-manager-754d5cf94c-jqrwd" Jan 31 03:50:13 crc kubenswrapper[4827]: I0131 03:50:13.344935 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/17f6d164-a958-46a2-92a7-fc25ee889c44-client-ca\") pod \"route-controller-manager-754d5cf94c-jqrwd\" (UID: \"17f6d164-a958-46a2-92a7-fc25ee889c44\") " pod="openshift-route-controller-manager/route-controller-manager-754d5cf94c-jqrwd" Jan 31 03:50:13 crc kubenswrapper[4827]: I0131 03:50:13.345009 4827 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: 
\"kubernetes.io/configmap/416bad8a-e265-443e-b753-5025262786fe-client-ca\") on node \"crc\" DevicePath \"\"" Jan 31 03:50:13 crc kubenswrapper[4827]: I0131 03:50:13.345023 4827 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6stmz\" (UniqueName: \"kubernetes.io/projected/416bad8a-e265-443e-b753-5025262786fe-kube-api-access-6stmz\") on node \"crc\" DevicePath \"\"" Jan 31 03:50:13 crc kubenswrapper[4827]: I0131 03:50:13.345034 4827 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/416bad8a-e265-443e-b753-5025262786fe-config\") on node \"crc\" DevicePath \"\"" Jan 31 03:50:13 crc kubenswrapper[4827]: I0131 03:50:13.345042 4827 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/416bad8a-e265-443e-b753-5025262786fe-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 31 03:50:13 crc kubenswrapper[4827]: I0131 03:50:13.346012 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/17f6d164-a958-46a2-92a7-fc25ee889c44-client-ca\") pod \"route-controller-manager-754d5cf94c-jqrwd\" (UID: \"17f6d164-a958-46a2-92a7-fc25ee889c44\") " pod="openshift-route-controller-manager/route-controller-manager-754d5cf94c-jqrwd" Jan 31 03:50:13 crc kubenswrapper[4827]: I0131 03:50:13.346976 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/17f6d164-a958-46a2-92a7-fc25ee889c44-config\") pod \"route-controller-manager-754d5cf94c-jqrwd\" (UID: \"17f6d164-a958-46a2-92a7-fc25ee889c44\") " pod="openshift-route-controller-manager/route-controller-manager-754d5cf94c-jqrwd" Jan 31 03:50:13 crc kubenswrapper[4827]: I0131 03:50:13.361545 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/17f6d164-a958-46a2-92a7-fc25ee889c44-serving-cert\") pod 
\"route-controller-manager-754d5cf94c-jqrwd\" (UID: \"17f6d164-a958-46a2-92a7-fc25ee889c44\") " pod="openshift-route-controller-manager/route-controller-manager-754d5cf94c-jqrwd" Jan 31 03:50:13 crc kubenswrapper[4827]: I0131 03:50:13.363246 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t4xx4\" (UniqueName: \"kubernetes.io/projected/17f6d164-a958-46a2-92a7-fc25ee889c44-kube-api-access-t4xx4\") pod \"route-controller-manager-754d5cf94c-jqrwd\" (UID: \"17f6d164-a958-46a2-92a7-fc25ee889c44\") " pod="openshift-route-controller-manager/route-controller-manager-754d5cf94c-jqrwd" Jan 31 03:50:13 crc kubenswrapper[4827]: I0131 03:50:13.474822 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-754d5cf94c-jqrwd" Jan 31 03:50:13 crc kubenswrapper[4827]: I0131 03:50:13.517717 4827 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7ccc7f67b8-65qfv"] Jan 31 03:50:13 crc kubenswrapper[4827]: I0131 03:50:13.521487 4827 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7ccc7f67b8-65qfv"] Jan 31 03:50:13 crc kubenswrapper[4827]: I0131 03:50:13.718292 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-754d5cf94c-jqrwd"] Jan 31 03:50:14 crc kubenswrapper[4827]: I0131 03:50:14.121433 4827 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="416bad8a-e265-443e-b753-5025262786fe" path="/var/lib/kubelet/pods/416bad8a-e265-443e-b753-5025262786fe/volumes" Jan 31 03:50:14 crc kubenswrapper[4827]: I0131 03:50:14.184972 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-754d5cf94c-jqrwd" 
event={"ID":"17f6d164-a958-46a2-92a7-fc25ee889c44","Type":"ContainerStarted","Data":"4ecf0b10b66bc2c4b99a10b22700637d7d0c71824e21ca42fd9cde2a95e22eab"} Jan 31 03:50:15 crc kubenswrapper[4827]: I0131 03:50:15.192405 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-754d5cf94c-jqrwd" event={"ID":"17f6d164-a958-46a2-92a7-fc25ee889c44","Type":"ContainerStarted","Data":"002cf65467832403943d71d780e39803fe521d3cadae98f75836dc9412848484"} Jan 31 03:50:15 crc kubenswrapper[4827]: I0131 03:50:15.193930 4827 generic.go:334] "Generic (PLEG): container finished" podID="d42cab75-5850-473d-9b36-1c55858fe5ee" containerID="c4c5ae3b34831b5776ce0a6a2da28118fe4674b99cf4978865d238028f3736ec" exitCode=0 Jan 31 03:50:15 crc kubenswrapper[4827]: I0131 03:50:15.193962 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-849ff9796-h2t6h" event={"ID":"d42cab75-5850-473d-9b36-1c55858fe5ee","Type":"ContainerDied","Data":"c4c5ae3b34831b5776ce0a6a2da28118fe4674b99cf4978865d238028f3736ec"} Jan 31 03:50:16 crc kubenswrapper[4827]: I0131 03:50:16.198834 4827 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-754d5cf94c-jqrwd" Jan 31 03:50:16 crc kubenswrapper[4827]: I0131 03:50:16.204980 4827 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-754d5cf94c-jqrwd" Jan 31 03:50:16 crc kubenswrapper[4827]: I0131 03:50:16.216817 4827 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-754d5cf94c-jqrwd" podStartSLOduration=5.216801722 podStartE2EDuration="5.216801722s" podCreationTimestamp="2026-01-31 03:50:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" 
observedRunningTime="2026-01-31 03:50:16.215719285 +0000 UTC m=+208.902799734" watchObservedRunningTime="2026-01-31 03:50:16.216801722 +0000 UTC m=+208.903882171" Jan 31 03:50:17 crc kubenswrapper[4827]: I0131 03:50:17.120109 4827 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-849ff9796-h2t6h" Jan 31 03:50:17 crc kubenswrapper[4827]: I0131 03:50:17.169271 4827 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-c58db8454-vvsfg"] Jan 31 03:50:17 crc kubenswrapper[4827]: E0131 03:50:17.169646 4827 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d42cab75-5850-473d-9b36-1c55858fe5ee" containerName="controller-manager" Jan 31 03:50:17 crc kubenswrapper[4827]: I0131 03:50:17.169677 4827 state_mem.go:107] "Deleted CPUSet assignment" podUID="d42cab75-5850-473d-9b36-1c55858fe5ee" containerName="controller-manager" Jan 31 03:50:17 crc kubenswrapper[4827]: I0131 03:50:17.169879 4827 memory_manager.go:354] "RemoveStaleState removing state" podUID="d42cab75-5850-473d-9b36-1c55858fe5ee" containerName="controller-manager" Jan 31 03:50:17 crc kubenswrapper[4827]: I0131 03:50:17.170468 4827 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-c58db8454-vvsfg" Jan 31 03:50:17 crc kubenswrapper[4827]: I0131 03:50:17.185084 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-c58db8454-vvsfg"] Jan 31 03:50:17 crc kubenswrapper[4827]: I0131 03:50:17.200312 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d42cab75-5850-473d-9b36-1c55858fe5ee-serving-cert\") pod \"d42cab75-5850-473d-9b36-1c55858fe5ee\" (UID: \"d42cab75-5850-473d-9b36-1c55858fe5ee\") " Jan 31 03:50:17 crc kubenswrapper[4827]: I0131 03:50:17.200405 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/d42cab75-5850-473d-9b36-1c55858fe5ee-client-ca\") pod \"d42cab75-5850-473d-9b36-1c55858fe5ee\" (UID: \"d42cab75-5850-473d-9b36-1c55858fe5ee\") " Jan 31 03:50:17 crc kubenswrapper[4827]: I0131 03:50:17.200506 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nxzpn\" (UniqueName: \"kubernetes.io/projected/d42cab75-5850-473d-9b36-1c55858fe5ee-kube-api-access-nxzpn\") pod \"d42cab75-5850-473d-9b36-1c55858fe5ee\" (UID: \"d42cab75-5850-473d-9b36-1c55858fe5ee\") " Jan 31 03:50:17 crc kubenswrapper[4827]: I0131 03:50:17.200543 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/d42cab75-5850-473d-9b36-1c55858fe5ee-proxy-ca-bundles\") pod \"d42cab75-5850-473d-9b36-1c55858fe5ee\" (UID: \"d42cab75-5850-473d-9b36-1c55858fe5ee\") " Jan 31 03:50:17 crc kubenswrapper[4827]: I0131 03:50:17.200579 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d42cab75-5850-473d-9b36-1c55858fe5ee-config\") pod \"d42cab75-5850-473d-9b36-1c55858fe5ee\" (UID: 
\"d42cab75-5850-473d-9b36-1c55858fe5ee\") " Jan 31 03:50:17 crc kubenswrapper[4827]: I0131 03:50:17.201456 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d42cab75-5850-473d-9b36-1c55858fe5ee-client-ca" (OuterVolumeSpecName: "client-ca") pod "d42cab75-5850-473d-9b36-1c55858fe5ee" (UID: "d42cab75-5850-473d-9b36-1c55858fe5ee"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 03:50:17 crc kubenswrapper[4827]: I0131 03:50:17.201973 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d42cab75-5850-473d-9b36-1c55858fe5ee-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "d42cab75-5850-473d-9b36-1c55858fe5ee" (UID: "d42cab75-5850-473d-9b36-1c55858fe5ee"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 03:50:17 crc kubenswrapper[4827]: I0131 03:50:17.202342 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d42cab75-5850-473d-9b36-1c55858fe5ee-config" (OuterVolumeSpecName: "config") pod "d42cab75-5850-473d-9b36-1c55858fe5ee" (UID: "d42cab75-5850-473d-9b36-1c55858fe5ee"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 03:50:17 crc kubenswrapper[4827]: I0131 03:50:17.207292 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d42cab75-5850-473d-9b36-1c55858fe5ee-kube-api-access-nxzpn" (OuterVolumeSpecName: "kube-api-access-nxzpn") pod "d42cab75-5850-473d-9b36-1c55858fe5ee" (UID: "d42cab75-5850-473d-9b36-1c55858fe5ee"). InnerVolumeSpecName "kube-api-access-nxzpn". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 03:50:17 crc kubenswrapper[4827]: I0131 03:50:17.210810 4827 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-849ff9796-h2t6h" Jan 31 03:50:17 crc kubenswrapper[4827]: I0131 03:50:17.211064 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-849ff9796-h2t6h" event={"ID":"d42cab75-5850-473d-9b36-1c55858fe5ee","Type":"ContainerDied","Data":"3ebdcf5bc70010eb03d9b09dac29d9478d4e5cb6249b53e80941dee3b112bc09"} Jan 31 03:50:17 crc kubenswrapper[4827]: I0131 03:50:17.211359 4827 scope.go:117] "RemoveContainer" containerID="c4c5ae3b34831b5776ce0a6a2da28118fe4674b99cf4978865d238028f3736ec" Jan 31 03:50:17 crc kubenswrapper[4827]: I0131 03:50:17.225965 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d42cab75-5850-473d-9b36-1c55858fe5ee-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "d42cab75-5850-473d-9b36-1c55858fe5ee" (UID: "d42cab75-5850-473d-9b36-1c55858fe5ee"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 03:50:17 crc kubenswrapper[4827]: I0131 03:50:17.302338 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/e6e9514e-e904-498c-be6e-2a6347ac313e-proxy-ca-bundles\") pod \"controller-manager-c58db8454-vvsfg\" (UID: \"e6e9514e-e904-498c-be6e-2a6347ac313e\") " pod="openshift-controller-manager/controller-manager-c58db8454-vvsfg" Jan 31 03:50:17 crc kubenswrapper[4827]: I0131 03:50:17.302835 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zl7k2\" (UniqueName: \"kubernetes.io/projected/e6e9514e-e904-498c-be6e-2a6347ac313e-kube-api-access-zl7k2\") pod \"controller-manager-c58db8454-vvsfg\" (UID: \"e6e9514e-e904-498c-be6e-2a6347ac313e\") " pod="openshift-controller-manager/controller-manager-c58db8454-vvsfg" Jan 31 03:50:17 crc kubenswrapper[4827]: I0131 03:50:17.302867 
4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e6e9514e-e904-498c-be6e-2a6347ac313e-serving-cert\") pod \"controller-manager-c58db8454-vvsfg\" (UID: \"e6e9514e-e904-498c-be6e-2a6347ac313e\") " pod="openshift-controller-manager/controller-manager-c58db8454-vvsfg" Jan 31 03:50:17 crc kubenswrapper[4827]: I0131 03:50:17.302984 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/e6e9514e-e904-498c-be6e-2a6347ac313e-client-ca\") pod \"controller-manager-c58db8454-vvsfg\" (UID: \"e6e9514e-e904-498c-be6e-2a6347ac313e\") " pod="openshift-controller-manager/controller-manager-c58db8454-vvsfg" Jan 31 03:50:17 crc kubenswrapper[4827]: I0131 03:50:17.303029 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e6e9514e-e904-498c-be6e-2a6347ac313e-config\") pod \"controller-manager-c58db8454-vvsfg\" (UID: \"e6e9514e-e904-498c-be6e-2a6347ac313e\") " pod="openshift-controller-manager/controller-manager-c58db8454-vvsfg" Jan 31 03:50:17 crc kubenswrapper[4827]: I0131 03:50:17.303076 4827 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d42cab75-5850-473d-9b36-1c55858fe5ee-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 31 03:50:17 crc kubenswrapper[4827]: I0131 03:50:17.303091 4827 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/d42cab75-5850-473d-9b36-1c55858fe5ee-client-ca\") on node \"crc\" DevicePath \"\"" Jan 31 03:50:17 crc kubenswrapper[4827]: I0131 03:50:17.303102 4827 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nxzpn\" (UniqueName: \"kubernetes.io/projected/d42cab75-5850-473d-9b36-1c55858fe5ee-kube-api-access-nxzpn\") on node 
\"crc\" DevicePath \"\"" Jan 31 03:50:17 crc kubenswrapper[4827]: I0131 03:50:17.303112 4827 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/d42cab75-5850-473d-9b36-1c55858fe5ee-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 31 03:50:17 crc kubenswrapper[4827]: I0131 03:50:17.303125 4827 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d42cab75-5850-473d-9b36-1c55858fe5ee-config\") on node \"crc\" DevicePath \"\"" Jan 31 03:50:17 crc kubenswrapper[4827]: I0131 03:50:17.371628 4827 patch_prober.go:28] interesting pod/machine-config-daemon-jxh94 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 31 03:50:17 crc kubenswrapper[4827]: I0131 03:50:17.371688 4827 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-jxh94" podUID="e63dbb73-e1a2-4796-83c5-2a88e55566b5" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 31 03:50:17 crc kubenswrapper[4827]: I0131 03:50:17.371733 4827 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-jxh94" Jan 31 03:50:17 crc kubenswrapper[4827]: I0131 03:50:17.372249 4827 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"00b8856a5712a7470f8bc58fd07644c58113fde5a273faa89586f36381942c50"} pod="openshift-machine-config-operator/machine-config-daemon-jxh94" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 31 03:50:17 crc kubenswrapper[4827]: I0131 03:50:17.372315 
4827 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-jxh94" podUID="e63dbb73-e1a2-4796-83c5-2a88e55566b5" containerName="machine-config-daemon" containerID="cri-o://00b8856a5712a7470f8bc58fd07644c58113fde5a273faa89586f36381942c50" gracePeriod=600 Jan 31 03:50:17 crc kubenswrapper[4827]: I0131 03:50:17.404387 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/e6e9514e-e904-498c-be6e-2a6347ac313e-proxy-ca-bundles\") pod \"controller-manager-c58db8454-vvsfg\" (UID: \"e6e9514e-e904-498c-be6e-2a6347ac313e\") " pod="openshift-controller-manager/controller-manager-c58db8454-vvsfg" Jan 31 03:50:17 crc kubenswrapper[4827]: I0131 03:50:17.404471 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zl7k2\" (UniqueName: \"kubernetes.io/projected/e6e9514e-e904-498c-be6e-2a6347ac313e-kube-api-access-zl7k2\") pod \"controller-manager-c58db8454-vvsfg\" (UID: \"e6e9514e-e904-498c-be6e-2a6347ac313e\") " pod="openshift-controller-manager/controller-manager-c58db8454-vvsfg" Jan 31 03:50:17 crc kubenswrapper[4827]: I0131 03:50:17.404501 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e6e9514e-e904-498c-be6e-2a6347ac313e-serving-cert\") pod \"controller-manager-c58db8454-vvsfg\" (UID: \"e6e9514e-e904-498c-be6e-2a6347ac313e\") " pod="openshift-controller-manager/controller-manager-c58db8454-vvsfg" Jan 31 03:50:17 crc kubenswrapper[4827]: I0131 03:50:17.404571 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/e6e9514e-e904-498c-be6e-2a6347ac313e-client-ca\") pod \"controller-manager-c58db8454-vvsfg\" (UID: \"e6e9514e-e904-498c-be6e-2a6347ac313e\") " 
pod="openshift-controller-manager/controller-manager-c58db8454-vvsfg" Jan 31 03:50:17 crc kubenswrapper[4827]: I0131 03:50:17.404626 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e6e9514e-e904-498c-be6e-2a6347ac313e-config\") pod \"controller-manager-c58db8454-vvsfg\" (UID: \"e6e9514e-e904-498c-be6e-2a6347ac313e\") " pod="openshift-controller-manager/controller-manager-c58db8454-vvsfg" Jan 31 03:50:17 crc kubenswrapper[4827]: I0131 03:50:17.405660 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/e6e9514e-e904-498c-be6e-2a6347ac313e-proxy-ca-bundles\") pod \"controller-manager-c58db8454-vvsfg\" (UID: \"e6e9514e-e904-498c-be6e-2a6347ac313e\") " pod="openshift-controller-manager/controller-manager-c58db8454-vvsfg" Jan 31 03:50:17 crc kubenswrapper[4827]: I0131 03:50:17.406432 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e6e9514e-e904-498c-be6e-2a6347ac313e-config\") pod \"controller-manager-c58db8454-vvsfg\" (UID: \"e6e9514e-e904-498c-be6e-2a6347ac313e\") " pod="openshift-controller-manager/controller-manager-c58db8454-vvsfg" Jan 31 03:50:17 crc kubenswrapper[4827]: I0131 03:50:17.407719 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/e6e9514e-e904-498c-be6e-2a6347ac313e-client-ca\") pod \"controller-manager-c58db8454-vvsfg\" (UID: \"e6e9514e-e904-498c-be6e-2a6347ac313e\") " pod="openshift-controller-manager/controller-manager-c58db8454-vvsfg" Jan 31 03:50:17 crc kubenswrapper[4827]: I0131 03:50:17.410696 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e6e9514e-e904-498c-be6e-2a6347ac313e-serving-cert\") pod \"controller-manager-c58db8454-vvsfg\" (UID: 
\"e6e9514e-e904-498c-be6e-2a6347ac313e\") " pod="openshift-controller-manager/controller-manager-c58db8454-vvsfg" Jan 31 03:50:17 crc kubenswrapper[4827]: I0131 03:50:17.426109 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zl7k2\" (UniqueName: \"kubernetes.io/projected/e6e9514e-e904-498c-be6e-2a6347ac313e-kube-api-access-zl7k2\") pod \"controller-manager-c58db8454-vvsfg\" (UID: \"e6e9514e-e904-498c-be6e-2a6347ac313e\") " pod="openshift-controller-manager/controller-manager-c58db8454-vvsfg" Jan 31 03:50:17 crc kubenswrapper[4827]: I0131 03:50:17.490150 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-c58db8454-vvsfg" Jan 31 03:50:17 crc kubenswrapper[4827]: I0131 03:50:17.542244 4827 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-849ff9796-h2t6h"] Jan 31 03:50:17 crc kubenswrapper[4827]: I0131 03:50:17.545617 4827 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-849ff9796-h2t6h"] Jan 31 03:50:17 crc kubenswrapper[4827]: I0131 03:50:17.741997 4827 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/downloads-7954f5f757-sgzkm" Jan 31 03:50:17 crc kubenswrapper[4827]: I0131 03:50:17.745817 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-c58db8454-vvsfg"] Jan 31 03:50:18 crc kubenswrapper[4827]: I0131 03:50:18.116458 4827 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d42cab75-5850-473d-9b36-1c55858fe5ee" path="/var/lib/kubelet/pods/d42cab75-5850-473d-9b36-1c55858fe5ee/volumes" Jan 31 03:50:18 crc kubenswrapper[4827]: I0131 03:50:18.245546 4827 generic.go:334] "Generic (PLEG): container finished" podID="e63dbb73-e1a2-4796-83c5-2a88e55566b5" containerID="00b8856a5712a7470f8bc58fd07644c58113fde5a273faa89586f36381942c50" 
exitCode=0 Jan 31 03:50:18 crc kubenswrapper[4827]: I0131 03:50:18.245991 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-jxh94" event={"ID":"e63dbb73-e1a2-4796-83c5-2a88e55566b5","Type":"ContainerDied","Data":"00b8856a5712a7470f8bc58fd07644c58113fde5a273faa89586f36381942c50"} Jan 31 03:50:18 crc kubenswrapper[4827]: I0131 03:50:18.252107 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-8lngw" event={"ID":"e357c738-a2f2-49a3-b122-5fe5ab45b919","Type":"ContainerStarted","Data":"1fb3b10337cb42607cc42f2506f9009dcf266e7d569d52e8986c9b2093dfd355"} Jan 31 03:50:18 crc kubenswrapper[4827]: I0131 03:50:18.258317 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-c58db8454-vvsfg" event={"ID":"e6e9514e-e904-498c-be6e-2a6347ac313e","Type":"ContainerStarted","Data":"73b8462e65bdc2e4a86ae45e4b34f397316c2398c3f3a8af7dd442f5cc218a08"} Jan 31 03:50:18 crc kubenswrapper[4827]: I0131 03:50:18.258364 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-c58db8454-vvsfg" event={"ID":"e6e9514e-e904-498c-be6e-2a6347ac313e","Type":"ContainerStarted","Data":"756e68afa33ce9b2299b461ec53f2de7e5381b84f210d8dd151e737558ea5ecf"} Jan 31 03:50:19 crc kubenswrapper[4827]: I0131 03:50:19.264377 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-jxh94" event={"ID":"e63dbb73-e1a2-4796-83c5-2a88e55566b5","Type":"ContainerStarted","Data":"7e027f81b11efb72a8912d312a7f93e437919ca268f3acdeab9e714aa0b8ebaf"} Jan 31 03:50:19 crc kubenswrapper[4827]: I0131 03:50:19.266336 4827 generic.go:334] "Generic (PLEG): container finished" podID="e357c738-a2f2-49a3-b122-5fe5ab45b919" containerID="1fb3b10337cb42607cc42f2506f9009dcf266e7d569d52e8986c9b2093dfd355" exitCode=0 Jan 31 03:50:19 crc kubenswrapper[4827]: I0131 
03:50:19.266376 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-8lngw" event={"ID":"e357c738-a2f2-49a3-b122-5fe5ab45b919","Type":"ContainerDied","Data":"1fb3b10337cb42607cc42f2506f9009dcf266e7d569d52e8986c9b2093dfd355"} Jan 31 03:50:19 crc kubenswrapper[4827]: I0131 03:50:19.266694 4827 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-c58db8454-vvsfg" Jan 31 03:50:19 crc kubenswrapper[4827]: I0131 03:50:19.272143 4827 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-c58db8454-vvsfg" Jan 31 03:50:19 crc kubenswrapper[4827]: I0131 03:50:19.312304 4827 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-c58db8454-vvsfg" podStartSLOduration=8.312286492 podStartE2EDuration="8.312286492s" podCreationTimestamp="2026-01-31 03:50:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 03:50:19.309153517 +0000 UTC m=+211.996233966" watchObservedRunningTime="2026-01-31 03:50:19.312286492 +0000 UTC m=+211.999366961" Jan 31 03:50:24 crc kubenswrapper[4827]: I0131 03:50:24.296303 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-8lngw" event={"ID":"e357c738-a2f2-49a3-b122-5fe5ab45b919","Type":"ContainerStarted","Data":"c55390b54817fdcfb6ec196e28943d5d6cde97684ff2a4b0670d517e2419076e"} Jan 31 03:50:24 crc kubenswrapper[4827]: I0131 03:50:24.297782 4827 generic.go:334] "Generic (PLEG): container finished" podID="36f2dbb1-6370-4a38-8702-edf89c8b4668" containerID="0639eb7d7ec757bcd7bc78db440ef5affeb5d605f50bceb630b406f331b5cf35" exitCode=0 Jan 31 03:50:24 crc kubenswrapper[4827]: I0131 03:50:24.297841 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/certified-operators-g76wz" event={"ID":"36f2dbb1-6370-4a38-8702-edf89c8b4668","Type":"ContainerDied","Data":"0639eb7d7ec757bcd7bc78db440ef5affeb5d605f50bceb630b406f331b5cf35"} Jan 31 03:50:24 crc kubenswrapper[4827]: I0131 03:50:24.299705 4827 generic.go:334] "Generic (PLEG): container finished" podID="273683f4-0b94-44d7-83a2-b540f4d5d81d" containerID="bc00a4886d045871ef850cf86c95e287979073446bbf3fc804499798c48a09dd" exitCode=0 Jan 31 03:50:24 crc kubenswrapper[4827]: I0131 03:50:24.299760 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-wr7t4" event={"ID":"273683f4-0b94-44d7-83a2-b540f4d5d81d","Type":"ContainerDied","Data":"bc00a4886d045871ef850cf86c95e287979073446bbf3fc804499798c48a09dd"} Jan 31 03:50:24 crc kubenswrapper[4827]: I0131 03:50:24.302265 4827 generic.go:334] "Generic (PLEG): container finished" podID="b53b07cf-d0d5-4774-89fe-89765537cc9b" containerID="a3b165593eca2c0b28e0f53f58266f9b90901ef98588e9b57aff0baddb04a7c5" exitCode=0 Jan 31 03:50:24 crc kubenswrapper[4827]: I0131 03:50:24.302319 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-jpjgt" event={"ID":"b53b07cf-d0d5-4774-89fe-89765537cc9b","Type":"ContainerDied","Data":"a3b165593eca2c0b28e0f53f58266f9b90901ef98588e9b57aff0baddb04a7c5"} Jan 31 03:50:24 crc kubenswrapper[4827]: I0131 03:50:24.304008 4827 generic.go:334] "Generic (PLEG): container finished" podID="94e2d804-29e9-4233-adda-45072b493f0f" containerID="2b5fdd748f53650685095620fb5c0df3695163fd333e3b17d94b67fc65c34f03" exitCode=0 Jan 31 03:50:24 crc kubenswrapper[4827]: I0131 03:50:24.304055 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-r4gn2" event={"ID":"94e2d804-29e9-4233-adda-45072b493f0f","Type":"ContainerDied","Data":"2b5fdd748f53650685095620fb5c0df3695163fd333e3b17d94b67fc65c34f03"} Jan 31 03:50:24 crc kubenswrapper[4827]: I0131 03:50:24.307660 4827 
generic.go:334] "Generic (PLEG): container finished" podID="a4c93e4f-eac3-4794-a748-51adfd8b961c" containerID="cefb1f406a66db5d53501aee9ef5748c2f42b32ec3b198243876a8c5f0042e6c" exitCode=0 Jan 31 03:50:24 crc kubenswrapper[4827]: I0131 03:50:24.307717 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-h54h5" event={"ID":"a4c93e4f-eac3-4794-a748-51adfd8b961c","Type":"ContainerDied","Data":"cefb1f406a66db5d53501aee9ef5748c2f42b32ec3b198243876a8c5f0042e6c"} Jan 31 03:50:24 crc kubenswrapper[4827]: I0131 03:50:24.309697 4827 generic.go:334] "Generic (PLEG): container finished" podID="062f8208-e13f-439f-bb1f-13b9c91c5ea3" containerID="b9b891256c62e0025fbadd07438a88b7bc18f534e96ab3062dc77a87e28f0c6c" exitCode=0 Jan 31 03:50:24 crc kubenswrapper[4827]: I0131 03:50:24.309753 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8rcw5" event={"ID":"062f8208-e13f-439f-bb1f-13b9c91c5ea3","Type":"ContainerDied","Data":"b9b891256c62e0025fbadd07438a88b7bc18f534e96ab3062dc77a87e28f0c6c"} Jan 31 03:50:24 crc kubenswrapper[4827]: I0131 03:50:24.311827 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-c2d85" event={"ID":"f3ede25d-6d79-44f7-a853-88b36723eb92","Type":"ContainerStarted","Data":"fddc59cd6ede0c737d469f0c68ce2f50ac6fdeafa31937a002dad751ea2796ff"} Jan 31 03:50:24 crc kubenswrapper[4827]: I0131 03:50:24.325110 4827 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-8lngw" podStartSLOduration=6.088760958 podStartE2EDuration="1m2.325088603s" podCreationTimestamp="2026-01-31 03:49:22 +0000 UTC" firstStartedPulling="2026-01-31 03:49:26.785136924 +0000 UTC m=+159.472217383" lastFinishedPulling="2026-01-31 03:50:23.021464579 +0000 UTC m=+215.708545028" observedRunningTime="2026-01-31 03:50:24.319817476 +0000 UTC m=+217.006897925" watchObservedRunningTime="2026-01-31 
03:50:24.325088603 +0000 UTC m=+217.012169052" Jan 31 03:50:25 crc kubenswrapper[4827]: I0131 03:50:25.320656 4827 generic.go:334] "Generic (PLEG): container finished" podID="f3ede25d-6d79-44f7-a853-88b36723eb92" containerID="fddc59cd6ede0c737d469f0c68ce2f50ac6fdeafa31937a002dad751ea2796ff" exitCode=0 Jan 31 03:50:25 crc kubenswrapper[4827]: I0131 03:50:25.320933 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-c2d85" event={"ID":"f3ede25d-6d79-44f7-a853-88b36723eb92","Type":"ContainerDied","Data":"fddc59cd6ede0c737d469f0c68ce2f50ac6fdeafa31937a002dad751ea2796ff"} Jan 31 03:50:31 crc kubenswrapper[4827]: I0131 03:50:31.381570 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-wr7t4" event={"ID":"273683f4-0b94-44d7-83a2-b540f4d5d81d","Type":"ContainerStarted","Data":"350e02648a79b461b11514c7f81aa2e8357a835f80ace2b965557e1bd998e97a"} Jan 31 03:50:31 crc kubenswrapper[4827]: I0131 03:50:31.415091 4827 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-wr7t4" podStartSLOduration=9.645571343 podStartE2EDuration="1m12.415005974s" podCreationTimestamp="2026-01-31 03:49:19 +0000 UTC" firstStartedPulling="2026-01-31 03:49:26.786981425 +0000 UTC m=+159.474061874" lastFinishedPulling="2026-01-31 03:50:29.556416016 +0000 UTC m=+222.243496505" observedRunningTime="2026-01-31 03:50:31.412138698 +0000 UTC m=+224.099219187" watchObservedRunningTime="2026-01-31 03:50:31.415005974 +0000 UTC m=+224.102086433" Jan 31 03:50:31 crc kubenswrapper[4827]: I0131 03:50:31.556346 4827 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-c58db8454-vvsfg"] Jan 31 03:50:31 crc kubenswrapper[4827]: I0131 03:50:31.557358 4827 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-c58db8454-vvsfg" 
podUID="e6e9514e-e904-498c-be6e-2a6347ac313e" containerName="controller-manager" containerID="cri-o://73b8462e65bdc2e4a86ae45e4b34f397316c2398c3f3a8af7dd442f5cc218a08" gracePeriod=30 Jan 31 03:50:31 crc kubenswrapper[4827]: I0131 03:50:31.625828 4827 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-754d5cf94c-jqrwd"] Jan 31 03:50:31 crc kubenswrapper[4827]: I0131 03:50:31.626047 4827 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-754d5cf94c-jqrwd" podUID="17f6d164-a958-46a2-92a7-fc25ee889c44" containerName="route-controller-manager" containerID="cri-o://002cf65467832403943d71d780e39803fe521d3cadae98f75836dc9412848484" gracePeriod=30 Jan 31 03:50:32 crc kubenswrapper[4827]: I0131 03:50:32.831162 4827 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-8lngw" Jan 31 03:50:32 crc kubenswrapper[4827]: I0131 03:50:32.831288 4827 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-8lngw" Jan 31 03:50:33 crc kubenswrapper[4827]: I0131 03:50:33.399373 4827 generic.go:334] "Generic (PLEG): container finished" podID="17f6d164-a958-46a2-92a7-fc25ee889c44" containerID="002cf65467832403943d71d780e39803fe521d3cadae98f75836dc9412848484" exitCode=0 Jan 31 03:50:33 crc kubenswrapper[4827]: I0131 03:50:33.399480 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-754d5cf94c-jqrwd" event={"ID":"17f6d164-a958-46a2-92a7-fc25ee889c44","Type":"ContainerDied","Data":"002cf65467832403943d71d780e39803fe521d3cadae98f75836dc9412848484"} Jan 31 03:50:33 crc kubenswrapper[4827]: I0131 03:50:33.401348 4827 generic.go:334] "Generic (PLEG): container finished" podID="e6e9514e-e904-498c-be6e-2a6347ac313e" 
containerID="73b8462e65bdc2e4a86ae45e4b34f397316c2398c3f3a8af7dd442f5cc218a08" exitCode=0 Jan 31 03:50:33 crc kubenswrapper[4827]: I0131 03:50:33.401382 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-c58db8454-vvsfg" event={"ID":"e6e9514e-e904-498c-be6e-2a6347ac313e","Type":"ContainerDied","Data":"73b8462e65bdc2e4a86ae45e4b34f397316c2398c3f3a8af7dd442f5cc218a08"} Jan 31 03:50:33 crc kubenswrapper[4827]: I0131 03:50:33.475671 4827 patch_prober.go:28] interesting pod/route-controller-manager-754d5cf94c-jqrwd container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.61:8443/healthz\": dial tcp 10.217.0.61:8443: connect: connection refused" start-of-body= Jan 31 03:50:33 crc kubenswrapper[4827]: I0131 03:50:33.475754 4827 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-754d5cf94c-jqrwd" podUID="17f6d164-a958-46a2-92a7-fc25ee889c44" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.61:8443/healthz\": dial tcp 10.217.0.61:8443: connect: connection refused" Jan 31 03:50:33 crc kubenswrapper[4827]: I0131 03:50:33.899209 4827 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-8lngw" Jan 31 03:50:33 crc kubenswrapper[4827]: I0131 03:50:33.977969 4827 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-8lngw" Jan 31 03:50:35 crc kubenswrapper[4827]: I0131 03:50:35.518383 4827 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-754d5cf94c-jqrwd" Jan 31 03:50:35 crc kubenswrapper[4827]: I0131 03:50:35.567761 4827 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5c6b7dffdb-r2j45"] Jan 31 03:50:35 crc kubenswrapper[4827]: E0131 03:50:35.567999 4827 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="17f6d164-a958-46a2-92a7-fc25ee889c44" containerName="route-controller-manager" Jan 31 03:50:35 crc kubenswrapper[4827]: I0131 03:50:35.568011 4827 state_mem.go:107] "Deleted CPUSet assignment" podUID="17f6d164-a958-46a2-92a7-fc25ee889c44" containerName="route-controller-manager" Jan 31 03:50:35 crc kubenswrapper[4827]: I0131 03:50:35.568159 4827 memory_manager.go:354] "RemoveStaleState removing state" podUID="17f6d164-a958-46a2-92a7-fc25ee889c44" containerName="route-controller-manager" Jan 31 03:50:35 crc kubenswrapper[4827]: I0131 03:50:35.568559 4827 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5c6b7dffdb-r2j45" Jan 31 03:50:35 crc kubenswrapper[4827]: I0131 03:50:35.590479 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5c6b7dffdb-r2j45"] Jan 31 03:50:35 crc kubenswrapper[4827]: I0131 03:50:35.620686 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/17f6d164-a958-46a2-92a7-fc25ee889c44-serving-cert\") pod \"17f6d164-a958-46a2-92a7-fc25ee889c44\" (UID: \"17f6d164-a958-46a2-92a7-fc25ee889c44\") " Jan 31 03:50:35 crc kubenswrapper[4827]: I0131 03:50:35.620866 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/17f6d164-a958-46a2-92a7-fc25ee889c44-client-ca\") pod \"17f6d164-a958-46a2-92a7-fc25ee889c44\" (UID: \"17f6d164-a958-46a2-92a7-fc25ee889c44\") " Jan 31 03:50:35 crc kubenswrapper[4827]: I0131 03:50:35.620961 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/17f6d164-a958-46a2-92a7-fc25ee889c44-config\") pod \"17f6d164-a958-46a2-92a7-fc25ee889c44\" (UID: \"17f6d164-a958-46a2-92a7-fc25ee889c44\") " Jan 31 03:50:35 crc kubenswrapper[4827]: I0131 03:50:35.621032 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-t4xx4\" (UniqueName: \"kubernetes.io/projected/17f6d164-a958-46a2-92a7-fc25ee889c44-kube-api-access-t4xx4\") pod \"17f6d164-a958-46a2-92a7-fc25ee889c44\" (UID: \"17f6d164-a958-46a2-92a7-fc25ee889c44\") " Jan 31 03:50:35 crc kubenswrapper[4827]: I0131 03:50:35.622109 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/17f6d164-a958-46a2-92a7-fc25ee889c44-config" (OuterVolumeSpecName: "config") pod "17f6d164-a958-46a2-92a7-fc25ee889c44" (UID: 
"17f6d164-a958-46a2-92a7-fc25ee889c44"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 03:50:35 crc kubenswrapper[4827]: I0131 03:50:35.622181 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/17f6d164-a958-46a2-92a7-fc25ee889c44-client-ca" (OuterVolumeSpecName: "client-ca") pod "17f6d164-a958-46a2-92a7-fc25ee889c44" (UID: "17f6d164-a958-46a2-92a7-fc25ee889c44"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 03:50:35 crc kubenswrapper[4827]: I0131 03:50:35.627222 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/17f6d164-a958-46a2-92a7-fc25ee889c44-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "17f6d164-a958-46a2-92a7-fc25ee889c44" (UID: "17f6d164-a958-46a2-92a7-fc25ee889c44"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 03:50:35 crc kubenswrapper[4827]: I0131 03:50:35.627313 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/17f6d164-a958-46a2-92a7-fc25ee889c44-kube-api-access-t4xx4" (OuterVolumeSpecName: "kube-api-access-t4xx4") pod "17f6d164-a958-46a2-92a7-fc25ee889c44" (UID: "17f6d164-a958-46a2-92a7-fc25ee889c44"). InnerVolumeSpecName "kube-api-access-t4xx4". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 03:50:35 crc kubenswrapper[4827]: I0131 03:50:35.668538 4827 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-c58db8454-vvsfg" Jan 31 03:50:35 crc kubenswrapper[4827]: I0131 03:50:35.724169 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ffaaaeae-f432-49fd-8592-48b61fa39af6-config\") pod \"route-controller-manager-5c6b7dffdb-r2j45\" (UID: \"ffaaaeae-f432-49fd-8592-48b61fa39af6\") " pod="openshift-route-controller-manager/route-controller-manager-5c6b7dffdb-r2j45" Jan 31 03:50:35 crc kubenswrapper[4827]: I0131 03:50:35.724476 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ffaaaeae-f432-49fd-8592-48b61fa39af6-serving-cert\") pod \"route-controller-manager-5c6b7dffdb-r2j45\" (UID: \"ffaaaeae-f432-49fd-8592-48b61fa39af6\") " pod="openshift-route-controller-manager/route-controller-manager-5c6b7dffdb-r2j45" Jan 31 03:50:35 crc kubenswrapper[4827]: I0131 03:50:35.724608 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/ffaaaeae-f432-49fd-8592-48b61fa39af6-client-ca\") pod \"route-controller-manager-5c6b7dffdb-r2j45\" (UID: \"ffaaaeae-f432-49fd-8592-48b61fa39af6\") " pod="openshift-route-controller-manager/route-controller-manager-5c6b7dffdb-r2j45" Jan 31 03:50:35 crc kubenswrapper[4827]: I0131 03:50:35.724738 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9h5hf\" (UniqueName: \"kubernetes.io/projected/ffaaaeae-f432-49fd-8592-48b61fa39af6-kube-api-access-9h5hf\") pod \"route-controller-manager-5c6b7dffdb-r2j45\" (UID: \"ffaaaeae-f432-49fd-8592-48b61fa39af6\") " pod="openshift-route-controller-manager/route-controller-manager-5c6b7dffdb-r2j45" Jan 31 03:50:35 crc kubenswrapper[4827]: I0131 03:50:35.724935 4827 
reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/17f6d164-a958-46a2-92a7-fc25ee889c44-client-ca\") on node \"crc\" DevicePath \"\"" Jan 31 03:50:35 crc kubenswrapper[4827]: I0131 03:50:35.725028 4827 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/17f6d164-a958-46a2-92a7-fc25ee889c44-config\") on node \"crc\" DevicePath \"\"" Jan 31 03:50:35 crc kubenswrapper[4827]: I0131 03:50:35.725112 4827 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-t4xx4\" (UniqueName: \"kubernetes.io/projected/17f6d164-a958-46a2-92a7-fc25ee889c44-kube-api-access-t4xx4\") on node \"crc\" DevicePath \"\"" Jan 31 03:50:35 crc kubenswrapper[4827]: I0131 03:50:35.725190 4827 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/17f6d164-a958-46a2-92a7-fc25ee889c44-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 31 03:50:35 crc kubenswrapper[4827]: I0131 03:50:35.825774 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/e6e9514e-e904-498c-be6e-2a6347ac313e-proxy-ca-bundles\") pod \"e6e9514e-e904-498c-be6e-2a6347ac313e\" (UID: \"e6e9514e-e904-498c-be6e-2a6347ac313e\") " Jan 31 03:50:35 crc kubenswrapper[4827]: I0131 03:50:35.826151 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e6e9514e-e904-498c-be6e-2a6347ac313e-serving-cert\") pod \"e6e9514e-e904-498c-be6e-2a6347ac313e\" (UID: \"e6e9514e-e904-498c-be6e-2a6347ac313e\") " Jan 31 03:50:35 crc kubenswrapper[4827]: I0131 03:50:35.826418 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e6e9514e-e904-498c-be6e-2a6347ac313e-config\") pod \"e6e9514e-e904-498c-be6e-2a6347ac313e\" (UID: 
\"e6e9514e-e904-498c-be6e-2a6347ac313e\") " Jan 31 03:50:35 crc kubenswrapper[4827]: I0131 03:50:35.826562 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zl7k2\" (UniqueName: \"kubernetes.io/projected/e6e9514e-e904-498c-be6e-2a6347ac313e-kube-api-access-zl7k2\") pod \"e6e9514e-e904-498c-be6e-2a6347ac313e\" (UID: \"e6e9514e-e904-498c-be6e-2a6347ac313e\") " Jan 31 03:50:35 crc kubenswrapper[4827]: I0131 03:50:35.826683 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/e6e9514e-e904-498c-be6e-2a6347ac313e-client-ca\") pod \"e6e9514e-e904-498c-be6e-2a6347ac313e\" (UID: \"e6e9514e-e904-498c-be6e-2a6347ac313e\") " Jan 31 03:50:35 crc kubenswrapper[4827]: I0131 03:50:35.826963 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e6e9514e-e904-498c-be6e-2a6347ac313e-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "e6e9514e-e904-498c-be6e-2a6347ac313e" (UID: "e6e9514e-e904-498c-be6e-2a6347ac313e"). InnerVolumeSpecName "proxy-ca-bundles". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 03:50:35 crc kubenswrapper[4827]: I0131 03:50:35.826987 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ffaaaeae-f432-49fd-8592-48b61fa39af6-config\") pod \"route-controller-manager-5c6b7dffdb-r2j45\" (UID: \"ffaaaeae-f432-49fd-8592-48b61fa39af6\") " pod="openshift-route-controller-manager/route-controller-manager-5c6b7dffdb-r2j45" Jan 31 03:50:35 crc kubenswrapper[4827]: I0131 03:50:35.827115 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ffaaaeae-f432-49fd-8592-48b61fa39af6-serving-cert\") pod \"route-controller-manager-5c6b7dffdb-r2j45\" (UID: \"ffaaaeae-f432-49fd-8592-48b61fa39af6\") " pod="openshift-route-controller-manager/route-controller-manager-5c6b7dffdb-r2j45" Jan 31 03:50:35 crc kubenswrapper[4827]: I0131 03:50:35.827198 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/ffaaaeae-f432-49fd-8592-48b61fa39af6-client-ca\") pod \"route-controller-manager-5c6b7dffdb-r2j45\" (UID: \"ffaaaeae-f432-49fd-8592-48b61fa39af6\") " pod="openshift-route-controller-manager/route-controller-manager-5c6b7dffdb-r2j45" Jan 31 03:50:35 crc kubenswrapper[4827]: I0131 03:50:35.827296 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9h5hf\" (UniqueName: \"kubernetes.io/projected/ffaaaeae-f432-49fd-8592-48b61fa39af6-kube-api-access-9h5hf\") pod \"route-controller-manager-5c6b7dffdb-r2j45\" (UID: \"ffaaaeae-f432-49fd-8592-48b61fa39af6\") " pod="openshift-route-controller-manager/route-controller-manager-5c6b7dffdb-r2j45" Jan 31 03:50:35 crc kubenswrapper[4827]: I0131 03:50:35.827423 4827 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: 
\"kubernetes.io/configmap/e6e9514e-e904-498c-be6e-2a6347ac313e-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 31 03:50:35 crc kubenswrapper[4827]: I0131 03:50:35.828450 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/ffaaaeae-f432-49fd-8592-48b61fa39af6-client-ca\") pod \"route-controller-manager-5c6b7dffdb-r2j45\" (UID: \"ffaaaeae-f432-49fd-8592-48b61fa39af6\") " pod="openshift-route-controller-manager/route-controller-manager-5c6b7dffdb-r2j45" Jan 31 03:50:35 crc kubenswrapper[4827]: I0131 03:50:35.828754 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ffaaaeae-f432-49fd-8592-48b61fa39af6-config\") pod \"route-controller-manager-5c6b7dffdb-r2j45\" (UID: \"ffaaaeae-f432-49fd-8592-48b61fa39af6\") " pod="openshift-route-controller-manager/route-controller-manager-5c6b7dffdb-r2j45" Jan 31 03:50:35 crc kubenswrapper[4827]: I0131 03:50:35.829318 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e6e9514e-e904-498c-be6e-2a6347ac313e-client-ca" (OuterVolumeSpecName: "client-ca") pod "e6e9514e-e904-498c-be6e-2a6347ac313e" (UID: "e6e9514e-e904-498c-be6e-2a6347ac313e"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 03:50:35 crc kubenswrapper[4827]: I0131 03:50:35.829987 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e6e9514e-e904-498c-be6e-2a6347ac313e-config" (OuterVolumeSpecName: "config") pod "e6e9514e-e904-498c-be6e-2a6347ac313e" (UID: "e6e9514e-e904-498c-be6e-2a6347ac313e"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 03:50:35 crc kubenswrapper[4827]: I0131 03:50:35.831441 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e6e9514e-e904-498c-be6e-2a6347ac313e-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "e6e9514e-e904-498c-be6e-2a6347ac313e" (UID: "e6e9514e-e904-498c-be6e-2a6347ac313e"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 03:50:35 crc kubenswrapper[4827]: I0131 03:50:35.831699 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e6e9514e-e904-498c-be6e-2a6347ac313e-kube-api-access-zl7k2" (OuterVolumeSpecName: "kube-api-access-zl7k2") pod "e6e9514e-e904-498c-be6e-2a6347ac313e" (UID: "e6e9514e-e904-498c-be6e-2a6347ac313e"). InnerVolumeSpecName "kube-api-access-zl7k2". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 03:50:35 crc kubenswrapper[4827]: I0131 03:50:35.845608 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ffaaaeae-f432-49fd-8592-48b61fa39af6-serving-cert\") pod \"route-controller-manager-5c6b7dffdb-r2j45\" (UID: \"ffaaaeae-f432-49fd-8592-48b61fa39af6\") " pod="openshift-route-controller-manager/route-controller-manager-5c6b7dffdb-r2j45" Jan 31 03:50:35 crc kubenswrapper[4827]: I0131 03:50:35.850685 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9h5hf\" (UniqueName: \"kubernetes.io/projected/ffaaaeae-f432-49fd-8592-48b61fa39af6-kube-api-access-9h5hf\") pod \"route-controller-manager-5c6b7dffdb-r2j45\" (UID: \"ffaaaeae-f432-49fd-8592-48b61fa39af6\") " pod="openshift-route-controller-manager/route-controller-manager-5c6b7dffdb-r2j45" Jan 31 03:50:35 crc kubenswrapper[4827]: I0131 03:50:35.890916 4827 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5c6b7dffdb-r2j45" Jan 31 03:50:35 crc kubenswrapper[4827]: I0131 03:50:35.934923 4827 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e6e9514e-e904-498c-be6e-2a6347ac313e-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 31 03:50:35 crc kubenswrapper[4827]: I0131 03:50:35.934989 4827 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e6e9514e-e904-498c-be6e-2a6347ac313e-config\") on node \"crc\" DevicePath \"\"" Jan 31 03:50:35 crc kubenswrapper[4827]: I0131 03:50:35.935009 4827 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zl7k2\" (UniqueName: \"kubernetes.io/projected/e6e9514e-e904-498c-be6e-2a6347ac313e-kube-api-access-zl7k2\") on node \"crc\" DevicePath \"\"" Jan 31 03:50:35 crc kubenswrapper[4827]: I0131 03:50:35.935023 4827 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/e6e9514e-e904-498c-be6e-2a6347ac313e-client-ca\") on node \"crc\" DevicePath \"\"" Jan 31 03:50:36 crc kubenswrapper[4827]: I0131 03:50:36.169455 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5c6b7dffdb-r2j45"] Jan 31 03:50:36 crc kubenswrapper[4827]: W0131 03:50:36.179951 4827 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podffaaaeae_f432_49fd_8592_48b61fa39af6.slice/crio-5514d6d5421d5d9042863722a1a11ab9f0fe683fed161221919a5163ff2c44a1 WatchSource:0}: Error finding container 5514d6d5421d5d9042863722a1a11ab9f0fe683fed161221919a5163ff2c44a1: Status 404 returned error can't find the container with id 5514d6d5421d5d9042863722a1a11ab9f0fe683fed161221919a5163ff2c44a1 Jan 31 03:50:36 crc kubenswrapper[4827]: I0131 03:50:36.430299 4827 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-754d5cf94c-jqrwd" event={"ID":"17f6d164-a958-46a2-92a7-fc25ee889c44","Type":"ContainerDied","Data":"4ecf0b10b66bc2c4b99a10b22700637d7d0c71824e21ca42fd9cde2a95e22eab"} Jan 31 03:50:36 crc kubenswrapper[4827]: I0131 03:50:36.430711 4827 scope.go:117] "RemoveContainer" containerID="002cf65467832403943d71d780e39803fe521d3cadae98f75836dc9412848484" Jan 31 03:50:36 crc kubenswrapper[4827]: I0131 03:50:36.430322 4827 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-754d5cf94c-jqrwd" Jan 31 03:50:36 crc kubenswrapper[4827]: I0131 03:50:36.433811 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8rcw5" event={"ID":"062f8208-e13f-439f-bb1f-13b9c91c5ea3","Type":"ContainerStarted","Data":"bd82eeec93b9ab3369af6b946f8f1e4d2c95c66a429efd50cc66eac7ffbb41b0"} Jan 31 03:50:36 crc kubenswrapper[4827]: I0131 03:50:36.437493 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-5c6b7dffdb-r2j45" event={"ID":"ffaaaeae-f432-49fd-8592-48b61fa39af6","Type":"ContainerStarted","Data":"8da64993d894b42f84c4529aa5d125d6db7afaf1d5ab31805cb361bee195220d"} Jan 31 03:50:36 crc kubenswrapper[4827]: I0131 03:50:36.437534 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-5c6b7dffdb-r2j45" event={"ID":"ffaaaeae-f432-49fd-8592-48b61fa39af6","Type":"ContainerStarted","Data":"5514d6d5421d5d9042863722a1a11ab9f0fe683fed161221919a5163ff2c44a1"} Jan 31 03:50:36 crc kubenswrapper[4827]: I0131 03:50:36.439979 4827 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-5c6b7dffdb-r2j45" Jan 31 03:50:36 crc kubenswrapper[4827]: I0131 03:50:36.447737 4827 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-g76wz" event={"ID":"36f2dbb1-6370-4a38-8702-edf89c8b4668","Type":"ContainerStarted","Data":"f18aea0bc5fef6afca2778b69d4c12ac7450de98d4c815c8ebee7c4750caeae8"} Jan 31 03:50:36 crc kubenswrapper[4827]: I0131 03:50:36.454646 4827 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-c58db8454-vvsfg" Jan 31 03:50:36 crc kubenswrapper[4827]: I0131 03:50:36.454984 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-c58db8454-vvsfg" event={"ID":"e6e9514e-e904-498c-be6e-2a6347ac313e","Type":"ContainerDied","Data":"756e68afa33ce9b2299b461ec53f2de7e5381b84f210d8dd151e737558ea5ecf"} Jan 31 03:50:36 crc kubenswrapper[4827]: I0131 03:50:36.455075 4827 scope.go:117] "RemoveContainer" containerID="73b8462e65bdc2e4a86ae45e4b34f397316c2398c3f3a8af7dd442f5cc218a08" Jan 31 03:50:36 crc kubenswrapper[4827]: I0131 03:50:36.466096 4827 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-8rcw5" podStartSLOduration=6.9225295110000005 podStartE2EDuration="1m17.466067701s" podCreationTimestamp="2026-01-31 03:49:19 +0000 UTC" firstStartedPulling="2026-01-31 03:49:24.74164598 +0000 UTC m=+157.428726469" lastFinishedPulling="2026-01-31 03:50:35.28518417 +0000 UTC m=+227.972264659" observedRunningTime="2026-01-31 03:50:36.462970897 +0000 UTC m=+229.150051356" watchObservedRunningTime="2026-01-31 03:50:36.466067701 +0000 UTC m=+229.153148150" Jan 31 03:50:36 crc kubenswrapper[4827]: I0131 03:50:36.466258 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-jpjgt" event={"ID":"b53b07cf-d0d5-4774-89fe-89765537cc9b","Type":"ContainerStarted","Data":"4b002fe259d37c8f59d3b86dfee22328d2570f86e9d6b05b608ba4da5601526a"} Jan 31 03:50:36 crc kubenswrapper[4827]: I0131 03:50:36.471170 4827 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-h54h5" event={"ID":"a4c93e4f-eac3-4794-a748-51adfd8b961c","Type":"ContainerStarted","Data":"e8a7f5ce15813c0954ca71b668e53033e6c67721cea930e16b24f7ea94c5d62e"} Jan 31 03:50:36 crc kubenswrapper[4827]: I0131 03:50:36.479255 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-c2d85" event={"ID":"f3ede25d-6d79-44f7-a853-88b36723eb92","Type":"ContainerStarted","Data":"882f692fcc4ed250c2f46e0c56ee5f94d7d10ee2e2eeed469ca7d76f287de9a9"} Jan 31 03:50:36 crc kubenswrapper[4827]: I0131 03:50:36.482097 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-r4gn2" event={"ID":"94e2d804-29e9-4233-adda-45072b493f0f","Type":"ContainerStarted","Data":"0afc7bfb70922bc887e5ff833e67551cdd2857a274039c09cc505c582132cb2e"} Jan 31 03:50:36 crc kubenswrapper[4827]: I0131 03:50:36.488811 4827 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-5c6b7dffdb-r2j45" podStartSLOduration=5.488774834 podStartE2EDuration="5.488774834s" podCreationTimestamp="2026-01-31 03:50:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 03:50:36.483504667 +0000 UTC m=+229.170585116" watchObservedRunningTime="2026-01-31 03:50:36.488774834 +0000 UTC m=+229.175855283" Jan 31 03:50:36 crc kubenswrapper[4827]: I0131 03:50:36.523982 4827 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-g76wz" podStartSLOduration=6.978624195 podStartE2EDuration="1m17.523958515s" podCreationTimestamp="2026-01-31 03:49:19 +0000 UTC" firstStartedPulling="2026-01-31 03:49:24.739772348 +0000 UTC m=+157.426852837" lastFinishedPulling="2026-01-31 03:50:35.285106698 +0000 UTC m=+227.972187157" 
observedRunningTime="2026-01-31 03:50:36.505177145 +0000 UTC m=+229.192257604" watchObservedRunningTime="2026-01-31 03:50:36.523958515 +0000 UTC m=+229.211038964" Jan 31 03:50:36 crc kubenswrapper[4827]: I0131 03:50:36.525635 4827 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-754d5cf94c-jqrwd"] Jan 31 03:50:36 crc kubenswrapper[4827]: I0131 03:50:36.528303 4827 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-754d5cf94c-jqrwd"] Jan 31 03:50:36 crc kubenswrapper[4827]: I0131 03:50:36.536225 4827 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-c58db8454-vvsfg"] Jan 31 03:50:36 crc kubenswrapper[4827]: I0131 03:50:36.539713 4827 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-c58db8454-vvsfg"] Jan 31 03:50:36 crc kubenswrapper[4827]: I0131 03:50:36.557820 4827 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-h54h5" podStartSLOduration=7.066483298 podStartE2EDuration="1m17.557791342s" podCreationTimestamp="2026-01-31 03:49:19 +0000 UTC" firstStartedPulling="2026-01-31 03:49:24.739384145 +0000 UTC m=+157.426464604" lastFinishedPulling="2026-01-31 03:50:35.230692179 +0000 UTC m=+227.917772648" observedRunningTime="2026-01-31 03:50:36.553519248 +0000 UTC m=+229.240599707" watchObservedRunningTime="2026-01-31 03:50:36.557791342 +0000 UTC m=+229.244871791" Jan 31 03:50:36 crc kubenswrapper[4827]: I0131 03:50:36.586682 4827 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-c2d85" podStartSLOduration=6.104665315 podStartE2EDuration="1m14.586658342s" podCreationTimestamp="2026-01-31 03:49:22 +0000 UTC" firstStartedPulling="2026-01-31 03:49:26.775043987 +0000 UTC m=+159.462124436" lastFinishedPulling="2026-01-31 
03:50:35.257037014 +0000 UTC m=+227.944117463" observedRunningTime="2026-01-31 03:50:36.582611396 +0000 UTC m=+229.269691845" watchObservedRunningTime="2026-01-31 03:50:36.586658342 +0000 UTC m=+229.273738791" Jan 31 03:50:36 crc kubenswrapper[4827]: I0131 03:50:36.605188 4827 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-jpjgt" podStartSLOduration=7.094856062 podStartE2EDuration="1m15.605157443s" podCreationTimestamp="2026-01-31 03:49:21 +0000 UTC" firstStartedPulling="2026-01-31 03:49:26.786361644 +0000 UTC m=+159.473442113" lastFinishedPulling="2026-01-31 03:50:35.296663025 +0000 UTC m=+227.983743494" observedRunningTime="2026-01-31 03:50:36.601848032 +0000 UTC m=+229.288928481" watchObservedRunningTime="2026-01-31 03:50:36.605157443 +0000 UTC m=+229.292237892" Jan 31 03:50:36 crc kubenswrapper[4827]: I0131 03:50:36.623077 4827 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-r4gn2" podStartSLOduration=7.131436167 podStartE2EDuration="1m15.623056565s" podCreationTimestamp="2026-01-31 03:49:21 +0000 UTC" firstStartedPulling="2026-01-31 03:49:26.768541951 +0000 UTC m=+159.455622420" lastFinishedPulling="2026-01-31 03:50:35.260162349 +0000 UTC m=+227.947242818" observedRunningTime="2026-01-31 03:50:36.620626123 +0000 UTC m=+229.307706592" watchObservedRunningTime="2026-01-31 03:50:36.623056565 +0000 UTC m=+229.310137014" Jan 31 03:50:36 crc kubenswrapper[4827]: I0131 03:50:36.624271 4827 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-5c6b7dffdb-r2j45" Jan 31 03:50:38 crc kubenswrapper[4827]: I0131 03:50:38.122183 4827 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="17f6d164-a958-46a2-92a7-fc25ee889c44" path="/var/lib/kubelet/pods/17f6d164-a958-46a2-92a7-fc25ee889c44/volumes" Jan 31 03:50:38 crc kubenswrapper[4827]: I0131 
03:50:38.123822 4827 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e6e9514e-e904-498c-be6e-2a6347ac313e" path="/var/lib/kubelet/pods/e6e9514e-e904-498c-be6e-2a6347ac313e/volumes" Jan 31 03:50:38 crc kubenswrapper[4827]: I0131 03:50:38.448363 4827 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-5777dfbbbf-c8jcm"] Jan 31 03:50:38 crc kubenswrapper[4827]: E0131 03:50:38.448633 4827 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e6e9514e-e904-498c-be6e-2a6347ac313e" containerName="controller-manager" Jan 31 03:50:38 crc kubenswrapper[4827]: I0131 03:50:38.448652 4827 state_mem.go:107] "Deleted CPUSet assignment" podUID="e6e9514e-e904-498c-be6e-2a6347ac313e" containerName="controller-manager" Jan 31 03:50:38 crc kubenswrapper[4827]: I0131 03:50:38.448791 4827 memory_manager.go:354] "RemoveStaleState removing state" podUID="e6e9514e-e904-498c-be6e-2a6347ac313e" containerName="controller-manager" Jan 31 03:50:38 crc kubenswrapper[4827]: I0131 03:50:38.449333 4827 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-5777dfbbbf-c8jcm" Jan 31 03:50:38 crc kubenswrapper[4827]: I0131 03:50:38.454240 4827 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Jan 31 03:50:38 crc kubenswrapper[4827]: I0131 03:50:38.454234 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Jan 31 03:50:38 crc kubenswrapper[4827]: I0131 03:50:38.454363 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Jan 31 03:50:38 crc kubenswrapper[4827]: I0131 03:50:38.454451 4827 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Jan 31 03:50:38 crc kubenswrapper[4827]: I0131 03:50:38.454543 4827 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Jan 31 03:50:38 crc kubenswrapper[4827]: I0131 03:50:38.455099 4827 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Jan 31 03:50:38 crc kubenswrapper[4827]: I0131 03:50:38.470610 4827 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Jan 31 03:50:38 crc kubenswrapper[4827]: I0131 03:50:38.472998 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-5777dfbbbf-c8jcm"] Jan 31 03:50:38 crc kubenswrapper[4827]: I0131 03:50:38.572781 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gnc6f\" (UniqueName: \"kubernetes.io/projected/1aaa69e0-a867-4c15-b8dd-536240afb78f-kube-api-access-gnc6f\") pod \"controller-manager-5777dfbbbf-c8jcm\" (UID: \"1aaa69e0-a867-4c15-b8dd-536240afb78f\") " 
pod="openshift-controller-manager/controller-manager-5777dfbbbf-c8jcm" Jan 31 03:50:38 crc kubenswrapper[4827]: I0131 03:50:38.572838 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1aaa69e0-a867-4c15-b8dd-536240afb78f-config\") pod \"controller-manager-5777dfbbbf-c8jcm\" (UID: \"1aaa69e0-a867-4c15-b8dd-536240afb78f\") " pod="openshift-controller-manager/controller-manager-5777dfbbbf-c8jcm" Jan 31 03:50:38 crc kubenswrapper[4827]: I0131 03:50:38.572871 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/1aaa69e0-a867-4c15-b8dd-536240afb78f-client-ca\") pod \"controller-manager-5777dfbbbf-c8jcm\" (UID: \"1aaa69e0-a867-4c15-b8dd-536240afb78f\") " pod="openshift-controller-manager/controller-manager-5777dfbbbf-c8jcm" Jan 31 03:50:38 crc kubenswrapper[4827]: I0131 03:50:38.572915 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/1aaa69e0-a867-4c15-b8dd-536240afb78f-proxy-ca-bundles\") pod \"controller-manager-5777dfbbbf-c8jcm\" (UID: \"1aaa69e0-a867-4c15-b8dd-536240afb78f\") " pod="openshift-controller-manager/controller-manager-5777dfbbbf-c8jcm" Jan 31 03:50:38 crc kubenswrapper[4827]: I0131 03:50:38.572946 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1aaa69e0-a867-4c15-b8dd-536240afb78f-serving-cert\") pod \"controller-manager-5777dfbbbf-c8jcm\" (UID: \"1aaa69e0-a867-4c15-b8dd-536240afb78f\") " pod="openshift-controller-manager/controller-manager-5777dfbbbf-c8jcm" Jan 31 03:50:38 crc kubenswrapper[4827]: I0131 03:50:38.674313 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gnc6f\" (UniqueName: 
\"kubernetes.io/projected/1aaa69e0-a867-4c15-b8dd-536240afb78f-kube-api-access-gnc6f\") pod \"controller-manager-5777dfbbbf-c8jcm\" (UID: \"1aaa69e0-a867-4c15-b8dd-536240afb78f\") " pod="openshift-controller-manager/controller-manager-5777dfbbbf-c8jcm" Jan 31 03:50:38 crc kubenswrapper[4827]: I0131 03:50:38.674429 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1aaa69e0-a867-4c15-b8dd-536240afb78f-config\") pod \"controller-manager-5777dfbbbf-c8jcm\" (UID: \"1aaa69e0-a867-4c15-b8dd-536240afb78f\") " pod="openshift-controller-manager/controller-manager-5777dfbbbf-c8jcm" Jan 31 03:50:38 crc kubenswrapper[4827]: I0131 03:50:38.674498 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/1aaa69e0-a867-4c15-b8dd-536240afb78f-client-ca\") pod \"controller-manager-5777dfbbbf-c8jcm\" (UID: \"1aaa69e0-a867-4c15-b8dd-536240afb78f\") " pod="openshift-controller-manager/controller-manager-5777dfbbbf-c8jcm" Jan 31 03:50:38 crc kubenswrapper[4827]: I0131 03:50:38.674564 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/1aaa69e0-a867-4c15-b8dd-536240afb78f-proxy-ca-bundles\") pod \"controller-manager-5777dfbbbf-c8jcm\" (UID: \"1aaa69e0-a867-4c15-b8dd-536240afb78f\") " pod="openshift-controller-manager/controller-manager-5777dfbbbf-c8jcm" Jan 31 03:50:38 crc kubenswrapper[4827]: I0131 03:50:38.674651 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1aaa69e0-a867-4c15-b8dd-536240afb78f-serving-cert\") pod \"controller-manager-5777dfbbbf-c8jcm\" (UID: \"1aaa69e0-a867-4c15-b8dd-536240afb78f\") " pod="openshift-controller-manager/controller-manager-5777dfbbbf-c8jcm" Jan 31 03:50:38 crc kubenswrapper[4827]: I0131 03:50:38.676609 4827 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/1aaa69e0-a867-4c15-b8dd-536240afb78f-client-ca\") pod \"controller-manager-5777dfbbbf-c8jcm\" (UID: \"1aaa69e0-a867-4c15-b8dd-536240afb78f\") " pod="openshift-controller-manager/controller-manager-5777dfbbbf-c8jcm" Jan 31 03:50:38 crc kubenswrapper[4827]: I0131 03:50:38.677539 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1aaa69e0-a867-4c15-b8dd-536240afb78f-config\") pod \"controller-manager-5777dfbbbf-c8jcm\" (UID: \"1aaa69e0-a867-4c15-b8dd-536240afb78f\") " pod="openshift-controller-manager/controller-manager-5777dfbbbf-c8jcm" Jan 31 03:50:38 crc kubenswrapper[4827]: I0131 03:50:38.678432 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/1aaa69e0-a867-4c15-b8dd-536240afb78f-proxy-ca-bundles\") pod \"controller-manager-5777dfbbbf-c8jcm\" (UID: \"1aaa69e0-a867-4c15-b8dd-536240afb78f\") " pod="openshift-controller-manager/controller-manager-5777dfbbbf-c8jcm" Jan 31 03:50:38 crc kubenswrapper[4827]: I0131 03:50:38.693236 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1aaa69e0-a867-4c15-b8dd-536240afb78f-serving-cert\") pod \"controller-manager-5777dfbbbf-c8jcm\" (UID: \"1aaa69e0-a867-4c15-b8dd-536240afb78f\") " pod="openshift-controller-manager/controller-manager-5777dfbbbf-c8jcm" Jan 31 03:50:38 crc kubenswrapper[4827]: I0131 03:50:38.722015 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gnc6f\" (UniqueName: \"kubernetes.io/projected/1aaa69e0-a867-4c15-b8dd-536240afb78f-kube-api-access-gnc6f\") pod \"controller-manager-5777dfbbbf-c8jcm\" (UID: \"1aaa69e0-a867-4c15-b8dd-536240afb78f\") " pod="openshift-controller-manager/controller-manager-5777dfbbbf-c8jcm" Jan 31 
03:50:38 crc kubenswrapper[4827]: I0131 03:50:38.765212 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-5777dfbbbf-c8jcm" Jan 31 03:50:38 crc kubenswrapper[4827]: I0131 03:50:38.808487 4827 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-qtqj4"] Jan 31 03:50:39 crc kubenswrapper[4827]: I0131 03:50:39.315681 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-5777dfbbbf-c8jcm"] Jan 31 03:50:39 crc kubenswrapper[4827]: W0131 03:50:39.324374 4827 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1aaa69e0_a867_4c15_b8dd_536240afb78f.slice/crio-dbf31c3adeb7260c02631147cda6d5cb9d1e6dd7d6635510c14246b33b0a0ac8 WatchSource:0}: Error finding container dbf31c3adeb7260c02631147cda6d5cb9d1e6dd7d6635510c14246b33b0a0ac8: Status 404 returned error can't find the container with id dbf31c3adeb7260c02631147cda6d5cb9d1e6dd7d6635510c14246b33b0a0ac8 Jan 31 03:50:39 crc kubenswrapper[4827]: I0131 03:50:39.437092 4827 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-g76wz" Jan 31 03:50:39 crc kubenswrapper[4827]: I0131 03:50:39.438041 4827 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-g76wz" Jan 31 03:50:39 crc kubenswrapper[4827]: I0131 03:50:39.508943 4827 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-g76wz" Jan 31 03:50:39 crc kubenswrapper[4827]: I0131 03:50:39.509250 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-5777dfbbbf-c8jcm" 
event={"ID":"1aaa69e0-a867-4c15-b8dd-536240afb78f","Type":"ContainerStarted","Data":"dbf31c3adeb7260c02631147cda6d5cb9d1e6dd7d6635510c14246b33b0a0ac8"} Jan 31 03:50:39 crc kubenswrapper[4827]: I0131 03:50:39.895515 4827 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-8rcw5" Jan 31 03:50:39 crc kubenswrapper[4827]: I0131 03:50:39.895610 4827 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-8rcw5" Jan 31 03:50:39 crc kubenswrapper[4827]: I0131 03:50:39.959107 4827 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-8rcw5" Jan 31 03:50:40 crc kubenswrapper[4827]: I0131 03:50:40.099256 4827 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-h54h5" Jan 31 03:50:40 crc kubenswrapper[4827]: I0131 03:50:40.099519 4827 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-h54h5" Jan 31 03:50:40 crc kubenswrapper[4827]: I0131 03:50:40.169268 4827 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-h54h5" Jan 31 03:50:40 crc kubenswrapper[4827]: I0131 03:50:40.291266 4827 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-wr7t4" Jan 31 03:50:40 crc kubenswrapper[4827]: I0131 03:50:40.291729 4827 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-wr7t4" Jan 31 03:50:40 crc kubenswrapper[4827]: I0131 03:50:40.344780 4827 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-wr7t4" Jan 31 03:50:40 crc kubenswrapper[4827]: I0131 03:50:40.516950 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-controller-manager/controller-manager-5777dfbbbf-c8jcm" event={"ID":"1aaa69e0-a867-4c15-b8dd-536240afb78f","Type":"ContainerStarted","Data":"44729d5d5fdd883d7741ae5e495a1732aef5bba1da75a6366fd76e507f46ecf6"} Jan 31 03:50:40 crc kubenswrapper[4827]: I0131 03:50:40.555682 4827 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-h54h5" Jan 31 03:50:40 crc kubenswrapper[4827]: I0131 03:50:40.562531 4827 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-g76wz" Jan 31 03:50:40 crc kubenswrapper[4827]: I0131 03:50:40.567024 4827 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-wr7t4" Jan 31 03:50:40 crc kubenswrapper[4827]: I0131 03:50:40.568176 4827 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-8rcw5" Jan 31 03:50:41 crc kubenswrapper[4827]: I0131 03:50:41.522848 4827 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-5777dfbbbf-c8jcm" Jan 31 03:50:41 crc kubenswrapper[4827]: I0131 03:50:41.528730 4827 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-5777dfbbbf-c8jcm" Jan 31 03:50:41 crc kubenswrapper[4827]: I0131 03:50:41.563285 4827 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-5777dfbbbf-c8jcm" podStartSLOduration=10.563267097 podStartE2EDuration="10.563267097s" podCreationTimestamp="2026-01-31 03:50:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 03:50:41.543377549 +0000 UTC m=+234.230457998" watchObservedRunningTime="2026-01-31 03:50:41.563267097 +0000 UTC m=+234.250347546" Jan 
31 03:50:41 crc kubenswrapper[4827]: I0131 03:50:41.864969 4827 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-jpjgt" Jan 31 03:50:41 crc kubenswrapper[4827]: I0131 03:50:41.865108 4827 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-jpjgt" Jan 31 03:50:41 crc kubenswrapper[4827]: I0131 03:50:41.911124 4827 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-jpjgt" Jan 31 03:50:42 crc kubenswrapper[4827]: I0131 03:50:42.215754 4827 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-r4gn2" Jan 31 03:50:42 crc kubenswrapper[4827]: I0131 03:50:42.215826 4827 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-r4gn2" Jan 31 03:50:42 crc kubenswrapper[4827]: I0131 03:50:42.269136 4827 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-r4gn2" Jan 31 03:50:42 crc kubenswrapper[4827]: I0131 03:50:42.582033 4827 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-r4gn2" Jan 31 03:50:42 crc kubenswrapper[4827]: I0131 03:50:42.588484 4827 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-jpjgt" Jan 31 03:50:42 crc kubenswrapper[4827]: I0131 03:50:42.684546 4827 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-8rcw5"] Jan 31 03:50:42 crc kubenswrapper[4827]: I0131 03:50:42.684926 4827 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-8rcw5" podUID="062f8208-e13f-439f-bb1f-13b9c91c5ea3" containerName="registry-server" 
containerID="cri-o://bd82eeec93b9ab3369af6b946f8f1e4d2c95c66a429efd50cc66eac7ffbb41b0" gracePeriod=2 Jan 31 03:50:42 crc kubenswrapper[4827]: I0131 03:50:42.880693 4827 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-wr7t4"] Jan 31 03:50:42 crc kubenswrapper[4827]: I0131 03:50:42.881345 4827 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-wr7t4" podUID="273683f4-0b94-44d7-83a2-b540f4d5d81d" containerName="registry-server" containerID="cri-o://350e02648a79b461b11514c7f81aa2e8357a835f80ace2b965557e1bd998e97a" gracePeriod=2 Jan 31 03:50:43 crc kubenswrapper[4827]: I0131 03:50:43.248145 4827 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-c2d85" Jan 31 03:50:43 crc kubenswrapper[4827]: I0131 03:50:43.248220 4827 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-c2d85" Jan 31 03:50:43 crc kubenswrapper[4827]: I0131 03:50:43.304311 4827 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-c2d85" Jan 31 03:50:43 crc kubenswrapper[4827]: I0131 03:50:43.535331 4827 generic.go:334] "Generic (PLEG): container finished" podID="273683f4-0b94-44d7-83a2-b540f4d5d81d" containerID="350e02648a79b461b11514c7f81aa2e8357a835f80ace2b965557e1bd998e97a" exitCode=0 Jan 31 03:50:43 crc kubenswrapper[4827]: I0131 03:50:43.535457 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-wr7t4" event={"ID":"273683f4-0b94-44d7-83a2-b540f4d5d81d","Type":"ContainerDied","Data":"350e02648a79b461b11514c7f81aa2e8357a835f80ace2b965557e1bd998e97a"} Jan 31 03:50:43 crc kubenswrapper[4827]: I0131 03:50:43.537767 4827 generic.go:334] "Generic (PLEG): container finished" podID="062f8208-e13f-439f-bb1f-13b9c91c5ea3" 
containerID="bd82eeec93b9ab3369af6b946f8f1e4d2c95c66a429efd50cc66eac7ffbb41b0" exitCode=0 Jan 31 03:50:43 crc kubenswrapper[4827]: I0131 03:50:43.537926 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8rcw5" event={"ID":"062f8208-e13f-439f-bb1f-13b9c91c5ea3","Type":"ContainerDied","Data":"bd82eeec93b9ab3369af6b946f8f1e4d2c95c66a429efd50cc66eac7ffbb41b0"} Jan 31 03:50:43 crc kubenswrapper[4827]: I0131 03:50:43.585571 4827 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-c2d85" Jan 31 03:50:43 crc kubenswrapper[4827]: I0131 03:50:43.963871 4827 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-8rcw5" Jan 31 03:50:43 crc kubenswrapper[4827]: I0131 03:50:43.998746 4827 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-wr7t4" Jan 31 03:50:44 crc kubenswrapper[4827]: I0131 03:50:44.049920 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/273683f4-0b94-44d7-83a2-b540f4d5d81d-utilities\") pod \"273683f4-0b94-44d7-83a2-b540f4d5d81d\" (UID: \"273683f4-0b94-44d7-83a2-b540f4d5d81d\") " Jan 31 03:50:44 crc kubenswrapper[4827]: I0131 03:50:44.050260 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vkwk4\" (UniqueName: \"kubernetes.io/projected/062f8208-e13f-439f-bb1f-13b9c91c5ea3-kube-api-access-vkwk4\") pod \"062f8208-e13f-439f-bb1f-13b9c91c5ea3\" (UID: \"062f8208-e13f-439f-bb1f-13b9c91c5ea3\") " Jan 31 03:50:44 crc kubenswrapper[4827]: I0131 03:50:44.050465 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ssk2f\" (UniqueName: \"kubernetes.io/projected/273683f4-0b94-44d7-83a2-b540f4d5d81d-kube-api-access-ssk2f\") pod 
\"273683f4-0b94-44d7-83a2-b540f4d5d81d\" (UID: \"273683f4-0b94-44d7-83a2-b540f4d5d81d\") " Jan 31 03:50:44 crc kubenswrapper[4827]: I0131 03:50:44.051187 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/062f8208-e13f-439f-bb1f-13b9c91c5ea3-utilities\") pod \"062f8208-e13f-439f-bb1f-13b9c91c5ea3\" (UID: \"062f8208-e13f-439f-bb1f-13b9c91c5ea3\") " Jan 31 03:50:44 crc kubenswrapper[4827]: I0131 03:50:44.051325 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/273683f4-0b94-44d7-83a2-b540f4d5d81d-catalog-content\") pod \"273683f4-0b94-44d7-83a2-b540f4d5d81d\" (UID: \"273683f4-0b94-44d7-83a2-b540f4d5d81d\") " Jan 31 03:50:44 crc kubenswrapper[4827]: I0131 03:50:44.051459 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/062f8208-e13f-439f-bb1f-13b9c91c5ea3-catalog-content\") pod \"062f8208-e13f-439f-bb1f-13b9c91c5ea3\" (UID: \"062f8208-e13f-439f-bb1f-13b9c91c5ea3\") " Jan 31 03:50:44 crc kubenswrapper[4827]: I0131 03:50:44.050980 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/273683f4-0b94-44d7-83a2-b540f4d5d81d-utilities" (OuterVolumeSpecName: "utilities") pod "273683f4-0b94-44d7-83a2-b540f4d5d81d" (UID: "273683f4-0b94-44d7-83a2-b540f4d5d81d"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 03:50:44 crc kubenswrapper[4827]: I0131 03:50:44.052089 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/062f8208-e13f-439f-bb1f-13b9c91c5ea3-utilities" (OuterVolumeSpecName: "utilities") pod "062f8208-e13f-439f-bb1f-13b9c91c5ea3" (UID: "062f8208-e13f-439f-bb1f-13b9c91c5ea3"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 03:50:44 crc kubenswrapper[4827]: I0131 03:50:44.056567 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/273683f4-0b94-44d7-83a2-b540f4d5d81d-kube-api-access-ssk2f" (OuterVolumeSpecName: "kube-api-access-ssk2f") pod "273683f4-0b94-44d7-83a2-b540f4d5d81d" (UID: "273683f4-0b94-44d7-83a2-b540f4d5d81d"). InnerVolumeSpecName "kube-api-access-ssk2f". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 03:50:44 crc kubenswrapper[4827]: I0131 03:50:44.058031 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/062f8208-e13f-439f-bb1f-13b9c91c5ea3-kube-api-access-vkwk4" (OuterVolumeSpecName: "kube-api-access-vkwk4") pod "062f8208-e13f-439f-bb1f-13b9c91c5ea3" (UID: "062f8208-e13f-439f-bb1f-13b9c91c5ea3"). InnerVolumeSpecName "kube-api-access-vkwk4". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 03:50:44 crc kubenswrapper[4827]: I0131 03:50:44.117512 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/062f8208-e13f-439f-bb1f-13b9c91c5ea3-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "062f8208-e13f-439f-bb1f-13b9c91c5ea3" (UID: "062f8208-e13f-439f-bb1f-13b9c91c5ea3"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 03:50:44 crc kubenswrapper[4827]: I0131 03:50:44.117723 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/273683f4-0b94-44d7-83a2-b540f4d5d81d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "273683f4-0b94-44d7-83a2-b540f4d5d81d" (UID: "273683f4-0b94-44d7-83a2-b540f4d5d81d"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 03:50:44 crc kubenswrapper[4827]: I0131 03:50:44.152601 4827 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ssk2f\" (UniqueName: \"kubernetes.io/projected/273683f4-0b94-44d7-83a2-b540f4d5d81d-kube-api-access-ssk2f\") on node \"crc\" DevicePath \"\"" Jan 31 03:50:44 crc kubenswrapper[4827]: I0131 03:50:44.152624 4827 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/062f8208-e13f-439f-bb1f-13b9c91c5ea3-utilities\") on node \"crc\" DevicePath \"\"" Jan 31 03:50:44 crc kubenswrapper[4827]: I0131 03:50:44.152633 4827 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/273683f4-0b94-44d7-83a2-b540f4d5d81d-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 31 03:50:44 crc kubenswrapper[4827]: I0131 03:50:44.152641 4827 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/062f8208-e13f-439f-bb1f-13b9c91c5ea3-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 31 03:50:44 crc kubenswrapper[4827]: I0131 03:50:44.152649 4827 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/273683f4-0b94-44d7-83a2-b540f4d5d81d-utilities\") on node \"crc\" DevicePath \"\"" Jan 31 03:50:44 crc kubenswrapper[4827]: I0131 03:50:44.152657 4827 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vkwk4\" (UniqueName: \"kubernetes.io/projected/062f8208-e13f-439f-bb1f-13b9c91c5ea3-kube-api-access-vkwk4\") on node \"crc\" DevicePath \"\"" Jan 31 03:50:44 crc kubenswrapper[4827]: I0131 03:50:44.397697 4827 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Jan 31 03:50:44 crc kubenswrapper[4827]: E0131 03:50:44.398279 4827 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="273683f4-0b94-44d7-83a2-b540f4d5d81d" containerName="registry-server" Jan 31 03:50:44 crc kubenswrapper[4827]: I0131 03:50:44.398396 4827 state_mem.go:107] "Deleted CPUSet assignment" podUID="273683f4-0b94-44d7-83a2-b540f4d5d81d" containerName="registry-server" Jan 31 03:50:44 crc kubenswrapper[4827]: E0131 03:50:44.398498 4827 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="062f8208-e13f-439f-bb1f-13b9c91c5ea3" containerName="registry-server" Jan 31 03:50:44 crc kubenswrapper[4827]: I0131 03:50:44.398577 4827 state_mem.go:107] "Deleted CPUSet assignment" podUID="062f8208-e13f-439f-bb1f-13b9c91c5ea3" containerName="registry-server" Jan 31 03:50:44 crc kubenswrapper[4827]: E0131 03:50:44.398665 4827 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="062f8208-e13f-439f-bb1f-13b9c91c5ea3" containerName="extract-utilities" Jan 31 03:50:44 crc kubenswrapper[4827]: I0131 03:50:44.398790 4827 state_mem.go:107] "Deleted CPUSet assignment" podUID="062f8208-e13f-439f-bb1f-13b9c91c5ea3" containerName="extract-utilities" Jan 31 03:50:44 crc kubenswrapper[4827]: E0131 03:50:44.398896 4827 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="062f8208-e13f-439f-bb1f-13b9c91c5ea3" containerName="extract-content" Jan 31 03:50:44 crc kubenswrapper[4827]: I0131 03:50:44.399083 4827 state_mem.go:107] "Deleted CPUSet assignment" podUID="062f8208-e13f-439f-bb1f-13b9c91c5ea3" containerName="extract-content" Jan 31 03:50:44 crc kubenswrapper[4827]: E0131 03:50:44.399177 4827 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="273683f4-0b94-44d7-83a2-b540f4d5d81d" containerName="extract-utilities" Jan 31 03:50:44 crc kubenswrapper[4827]: I0131 03:50:44.399256 4827 state_mem.go:107] "Deleted CPUSet assignment" podUID="273683f4-0b94-44d7-83a2-b540f4d5d81d" containerName="extract-utilities" Jan 31 03:50:44 crc kubenswrapper[4827]: E0131 03:50:44.399338 4827 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="273683f4-0b94-44d7-83a2-b540f4d5d81d" containerName="extract-content" Jan 31 03:50:44 crc kubenswrapper[4827]: I0131 03:50:44.399474 4827 state_mem.go:107] "Deleted CPUSet assignment" podUID="273683f4-0b94-44d7-83a2-b540f4d5d81d" containerName="extract-content" Jan 31 03:50:44 crc kubenswrapper[4827]: I0131 03:50:44.399672 4827 memory_manager.go:354] "RemoveStaleState removing state" podUID="062f8208-e13f-439f-bb1f-13b9c91c5ea3" containerName="registry-server" Jan 31 03:50:44 crc kubenswrapper[4827]: I0131 03:50:44.399773 4827 memory_manager.go:354] "RemoveStaleState removing state" podUID="273683f4-0b94-44d7-83a2-b540f4d5d81d" containerName="registry-server" Jan 31 03:50:44 crc kubenswrapper[4827]: I0131 03:50:44.400237 4827 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Jan 31 03:50:44 crc kubenswrapper[4827]: I0131 03:50:44.400440 4827 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Jan 31 03:50:44 crc kubenswrapper[4827]: I0131 03:50:44.400372 4827 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 31 03:50:44 crc kubenswrapper[4827]: I0131 03:50:44.400670 4827 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" containerID="cri-o://4146d39a1a819389ddaee9e11df66783f6b1b98ee5fb2a244d8ba9ff2ecc88f9" gracePeriod=15 Jan 31 03:50:44 crc kubenswrapper[4827]: I0131 03:50:44.400695 4827 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" containerID="cri-o://6b86ba80bd25344b29df1636aa3d8cc4cecb0e5f9c0005b4896d3ee86cd80626" gracePeriod=15 Jan 31 03:50:44 crc kubenswrapper[4827]: I0131 03:50:44.400708 4827 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" containerID="cri-o://8ae08b52cf4a0a6cc4589bbf0f0fbc8527ebac3455b90185928f6fa0b6c52874" gracePeriod=15 Jan 31 03:50:44 crc kubenswrapper[4827]: E0131 03:50:44.401295 4827 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="setup" Jan 31 03:50:44 crc kubenswrapper[4827]: I0131 03:50:44.401325 4827 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="setup" Jan 31 03:50:44 crc kubenswrapper[4827]: E0131 03:50:44.401345 4827 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 31 03:50:44 crc kubenswrapper[4827]: I0131 03:50:44.401354 4827 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" 
containerName="kube-apiserver-check-endpoints" Jan 31 03:50:44 crc kubenswrapper[4827]: E0131 03:50:44.401366 4827 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Jan 31 03:50:44 crc kubenswrapper[4827]: I0131 03:50:44.401375 4827 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Jan 31 03:50:44 crc kubenswrapper[4827]: E0131 03:50:44.401387 4827 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Jan 31 03:50:44 crc kubenswrapper[4827]: I0131 03:50:44.401394 4827 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Jan 31 03:50:44 crc kubenswrapper[4827]: E0131 03:50:44.401405 4827 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Jan 31 03:50:44 crc kubenswrapper[4827]: I0131 03:50:44.401413 4827 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Jan 31 03:50:44 crc kubenswrapper[4827]: I0131 03:50:44.400760 4827 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" containerID="cri-o://125b31c980495c37837eedbbbd38b795fd6e568a429da5c2914799306e7b3a46" gracePeriod=15 Jan 31 03:50:44 crc kubenswrapper[4827]: E0131 03:50:44.401426 4827 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 31 03:50:44 crc kubenswrapper[4827]: I0131 03:50:44.401519 4827 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 31 03:50:44 crc kubenswrapper[4827]: E0131 03:50:44.401546 4827 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Jan 31 03:50:44 crc kubenswrapper[4827]: I0131 03:50:44.401554 4827 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Jan 31 03:50:44 crc kubenswrapper[4827]: I0131 03:50:44.400751 4827 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" containerID="cri-o://e3b2757893ba910c36404f09734ea47eee3ac5a8cfcc4c06ee4ed2ada48937b1" gracePeriod=15 Jan 31 03:50:44 crc kubenswrapper[4827]: I0131 03:50:44.401731 4827 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 31 03:50:44 crc kubenswrapper[4827]: I0131 03:50:44.401743 4827 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Jan 31 03:50:44 crc kubenswrapper[4827]: I0131 03:50:44.401751 4827 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 31 03:50:44 crc kubenswrapper[4827]: I0131 03:50:44.401761 4827 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Jan 31 03:50:44 crc kubenswrapper[4827]: I0131 03:50:44.401770 4827 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Jan 31 03:50:44 crc kubenswrapper[4827]: 
I0131 03:50:44.401779 4827 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Jan 31 03:50:44 crc kubenswrapper[4827]: I0131 03:50:44.412332 4827 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="f4b27818a5e8e43d0dc095d08835c792" podUID="71bb4a3aecc4ba5b26c4b7318770ce13" Jan 31 03:50:44 crc kubenswrapper[4827]: I0131 03:50:44.451731 4827 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Jan 31 03:50:44 crc kubenswrapper[4827]: I0131 03:50:44.457797 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 31 03:50:44 crc kubenswrapper[4827]: I0131 03:50:44.457870 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 31 03:50:44 crc kubenswrapper[4827]: I0131 03:50:44.457922 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 31 03:50:44 crc kubenswrapper[4827]: I0131 03:50:44.457953 4827 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 31 03:50:44 crc kubenswrapper[4827]: I0131 03:50:44.457995 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 31 03:50:44 crc kubenswrapper[4827]: I0131 03:50:44.458018 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 31 03:50:44 crc kubenswrapper[4827]: I0131 03:50:44.458042 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 31 03:50:44 crc kubenswrapper[4827]: I0131 03:50:44.458202 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 31 03:50:44 crc kubenswrapper[4827]: I0131 
03:50:44.548205 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8rcw5" event={"ID":"062f8208-e13f-439f-bb1f-13b9c91c5ea3","Type":"ContainerDied","Data":"e147c7c68c6fdeb1216d39a488d737fdebc6bb3e183b28c77ba1e3969dc2b118"} Jan 31 03:50:44 crc kubenswrapper[4827]: I0131 03:50:44.548254 4827 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-8rcw5" Jan 31 03:50:44 crc kubenswrapper[4827]: I0131 03:50:44.548744 4827 scope.go:117] "RemoveContainer" containerID="bd82eeec93b9ab3369af6b946f8f1e4d2c95c66a429efd50cc66eac7ffbb41b0" Jan 31 03:50:44 crc kubenswrapper[4827]: I0131 03:50:44.549858 4827 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 31 03:50:44 crc kubenswrapper[4827]: I0131 03:50:44.550472 4827 status_manager.go:851] "Failed to get status for pod" podUID="062f8208-e13f-439f-bb1f-13b9c91c5ea3" pod="openshift-marketplace/certified-operators-8rcw5" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-8rcw5\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 31 03:50:44 crc kubenswrapper[4827]: I0131 03:50:44.552184 4827 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Jan 31 03:50:44 crc kubenswrapper[4827]: I0131 03:50:44.553817 4827 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Jan 31 03:50:44 crc kubenswrapper[4827]: I0131 
03:50:44.554535 4827 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="8ae08b52cf4a0a6cc4589bbf0f0fbc8527ebac3455b90185928f6fa0b6c52874" exitCode=2 Jan 31 03:50:44 crc kubenswrapper[4827]: I0131 03:50:44.558013 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-wr7t4" event={"ID":"273683f4-0b94-44d7-83a2-b540f4d5d81d","Type":"ContainerDied","Data":"781407d424e12fe73595ad6f760eed4f0ca282f96b8a0ff6c8ff99a0816ae35c"} Jan 31 03:50:44 crc kubenswrapper[4827]: I0131 03:50:44.558106 4827 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-wr7t4" Jan 31 03:50:44 crc kubenswrapper[4827]: I0131 03:50:44.558960 4827 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 31 03:50:44 crc kubenswrapper[4827]: I0131 03:50:44.559354 4827 status_manager.go:851] "Failed to get status for pod" podUID="062f8208-e13f-439f-bb1f-13b9c91c5ea3" pod="openshift-marketplace/certified-operators-8rcw5" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-8rcw5\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 31 03:50:44 crc kubenswrapper[4827]: I0131 03:50:44.559577 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 31 03:50:44 crc kubenswrapper[4827]: I0131 03:50:44.559660 4827 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 31 03:50:44 crc kubenswrapper[4827]: I0131 03:50:44.559668 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 31 03:50:44 crc kubenswrapper[4827]: I0131 03:50:44.559741 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 31 03:50:44 crc kubenswrapper[4827]: I0131 03:50:44.559781 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 31 03:50:44 crc kubenswrapper[4827]: I0131 03:50:44.559783 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 31 03:50:44 crc kubenswrapper[4827]: I0131 03:50:44.559739 4827 status_manager.go:851] "Failed to get status for pod" 
podUID="273683f4-0b94-44d7-83a2-b540f4d5d81d" pod="openshift-marketplace/community-operators-wr7t4" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-wr7t4\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 31 03:50:44 crc kubenswrapper[4827]: I0131 03:50:44.559812 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 31 03:50:44 crc kubenswrapper[4827]: I0131 03:50:44.559896 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 31 03:50:44 crc kubenswrapper[4827]: I0131 03:50:44.559876 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 31 03:50:44 crc kubenswrapper[4827]: I0131 03:50:44.559933 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 31 03:50:44 crc kubenswrapper[4827]: I0131 03:50:44.559916 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: 
\"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 31 03:50:44 crc kubenswrapper[4827]: I0131 03:50:44.559980 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 31 03:50:44 crc kubenswrapper[4827]: I0131 03:50:44.560007 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 31 03:50:44 crc kubenswrapper[4827]: I0131 03:50:44.560046 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 31 03:50:44 crc kubenswrapper[4827]: I0131 03:50:44.560083 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 31 03:50:44 crc kubenswrapper[4827]: I0131 03:50:44.560159 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") 
pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 31 03:50:44 crc kubenswrapper[4827]: I0131 03:50:44.569066 4827 status_manager.go:851] "Failed to get status for pod" podUID="273683f4-0b94-44d7-83a2-b540f4d5d81d" pod="openshift-marketplace/community-operators-wr7t4" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-wr7t4\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 31 03:50:44 crc kubenswrapper[4827]: I0131 03:50:44.570416 4827 status_manager.go:851] "Failed to get status for pod" podUID="062f8208-e13f-439f-bb1f-13b9c91c5ea3" pod="openshift-marketplace/certified-operators-8rcw5" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-8rcw5\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 31 03:50:44 crc kubenswrapper[4827]: I0131 03:50:44.571188 4827 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 31 03:50:44 crc kubenswrapper[4827]: I0131 03:50:44.571796 4827 status_manager.go:851] "Failed to get status for pod" podUID="062f8208-e13f-439f-bb1f-13b9c91c5ea3" pod="openshift-marketplace/certified-operators-8rcw5" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-8rcw5\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 31 03:50:44 crc kubenswrapper[4827]: I0131 03:50:44.572152 4827 status_manager.go:851] "Failed to get status for pod" podUID="273683f4-0b94-44d7-83a2-b540f4d5d81d" pod="openshift-marketplace/community-operators-wr7t4" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-wr7t4\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 31 03:50:44 crc kubenswrapper[4827]: I0131 03:50:44.572525 4827 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 31 03:50:44 crc kubenswrapper[4827]: I0131 03:50:44.578346 4827 scope.go:117] "RemoveContainer" containerID="b9b891256c62e0025fbadd07438a88b7bc18f534e96ab3062dc77a87e28f0c6c" Jan 31 03:50:44 crc kubenswrapper[4827]: I0131 03:50:44.610840 4827 scope.go:117] "RemoveContainer" containerID="4846de233bd7ed84754fd62623000baf8673408027d4e1acbd2cfd0821561fcf" Jan 31 03:50:44 crc kubenswrapper[4827]: I0131 03:50:44.636496 4827 scope.go:117] "RemoveContainer" containerID="350e02648a79b461b11514c7f81aa2e8357a835f80ace2b965557e1bd998e97a" Jan 31 03:50:44 crc kubenswrapper[4827]: I0131 03:50:44.656595 4827 scope.go:117] "RemoveContainer" containerID="bc00a4886d045871ef850cf86c95e287979073446bbf3fc804499798c48a09dd" Jan 31 03:50:44 crc kubenswrapper[4827]: I0131 03:50:44.678164 4827 scope.go:117] "RemoveContainer" containerID="73970bda2501806f90281e64e2932d9cc9a3e1d5dd25e3e6318d80c8059be53d" Jan 31 03:50:44 crc kubenswrapper[4827]: I0131 03:50:44.748600 4827 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 31 03:50:44 crc kubenswrapper[4827]: W0131 03:50:44.771831 4827 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf85e55b1a89d02b0cb034b1ea31ed45a.slice/crio-644bf5ebf7d6ec44196a4afaac86b3bd5fbd4909f5a3c3bde7fb660fce5f9e1a WatchSource:0}: Error finding container 644bf5ebf7d6ec44196a4afaac86b3bd5fbd4909f5a3c3bde7fb660fce5f9e1a: Status 404 returned error can't find the container with id 644bf5ebf7d6ec44196a4afaac86b3bd5fbd4909f5a3c3bde7fb660fce5f9e1a Jan 31 03:50:44 crc kubenswrapper[4827]: E0131 03:50:44.775161 4827 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/events\": dial tcp 38.102.83.80:6443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-startup-monitor-crc.188fb45272180fd2 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-startup-monitor-crc,UID:f85e55b1a89d02b0cb034b1ea31ed45a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{startup-monitor},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-31 03:50:44.774391762 +0000 UTC m=+237.461472221,LastTimestamp:2026-01-31 03:50:44.774391762 +0000 UTC m=+237.461472221,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 31 03:50:45 crc kubenswrapper[4827]: I0131 03:50:45.567833 4827 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Jan 31 03:50:45 crc kubenswrapper[4827]: I0131 03:50:45.568993 4827 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Jan 31 03:50:45 crc kubenswrapper[4827]: I0131 03:50:45.570107 4827 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="125b31c980495c37837eedbbbd38b795fd6e568a429da5c2914799306e7b3a46" exitCode=0 Jan 31 03:50:45 crc kubenswrapper[4827]: I0131 03:50:45.570142 4827 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="e3b2757893ba910c36404f09734ea47eee3ac5a8cfcc4c06ee4ed2ada48937b1" exitCode=0 Jan 31 03:50:45 crc kubenswrapper[4827]: I0131 03:50:45.570154 4827 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="6b86ba80bd25344b29df1636aa3d8cc4cecb0e5f9c0005b4896d3ee86cd80626" exitCode=0 Jan 31 03:50:45 crc kubenswrapper[4827]: I0131 03:50:45.570275 4827 scope.go:117] "RemoveContainer" containerID="cfc1fe8805b7b20bc4e718ed74d4c2f265e69ed3fdc328c0f1da81450bb78bde" Jan 31 03:50:45 crc kubenswrapper[4827]: I0131 03:50:45.581479 4827 generic.go:334] "Generic (PLEG): container finished" podID="f1f631cd-2800-402c-9fe1-06af2bc620fd" containerID="375f29f668d0ce9011a44f9645a5a47c6754ececcca7b5e6f78e7d2eac5a1160" exitCode=0 Jan 31 03:50:45 crc kubenswrapper[4827]: I0131 03:50:45.581692 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"f1f631cd-2800-402c-9fe1-06af2bc620fd","Type":"ContainerDied","Data":"375f29f668d0ce9011a44f9645a5a47c6754ececcca7b5e6f78e7d2eac5a1160"} Jan 31 03:50:45 crc kubenswrapper[4827]: I0131 03:50:45.583006 4827 status_manager.go:851] "Failed to get status for pod" 
podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 31 03:50:45 crc kubenswrapper[4827]: I0131 03:50:45.583300 4827 status_manager.go:851] "Failed to get status for pod" podUID="f1f631cd-2800-402c-9fe1-06af2bc620fd" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 31 03:50:45 crc kubenswrapper[4827]: I0131 03:50:45.583552 4827 status_manager.go:851] "Failed to get status for pod" podUID="062f8208-e13f-439f-bb1f-13b9c91c5ea3" pod="openshift-marketplace/certified-operators-8rcw5" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-8rcw5\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 31 03:50:45 crc kubenswrapper[4827]: I0131 03:50:45.583904 4827 status_manager.go:851] "Failed to get status for pod" podUID="273683f4-0b94-44d7-83a2-b540f4d5d81d" pod="openshift-marketplace/community-operators-wr7t4" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-wr7t4\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 31 03:50:45 crc kubenswrapper[4827]: I0131 03:50:45.585719 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" event={"ID":"f85e55b1a89d02b0cb034b1ea31ed45a","Type":"ContainerStarted","Data":"644bf5ebf7d6ec44196a4afaac86b3bd5fbd4909f5a3c3bde7fb660fce5f9e1a"} Jan 31 03:50:46 crc kubenswrapper[4827]: I0131 03:50:46.597197 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" 
event={"ID":"f85e55b1a89d02b0cb034b1ea31ed45a","Type":"ContainerStarted","Data":"88640011fd65502204683d77938b8930922541ccaacc8cbd7bc3e72ebcc58ee0"} Jan 31 03:50:46 crc kubenswrapper[4827]: I0131 03:50:46.598182 4827 status_manager.go:851] "Failed to get status for pod" podUID="062f8208-e13f-439f-bb1f-13b9c91c5ea3" pod="openshift-marketplace/certified-operators-8rcw5" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-8rcw5\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 31 03:50:46 crc kubenswrapper[4827]: I0131 03:50:46.598530 4827 status_manager.go:851] "Failed to get status for pod" podUID="273683f4-0b94-44d7-83a2-b540f4d5d81d" pod="openshift-marketplace/community-operators-wr7t4" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-wr7t4\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 31 03:50:46 crc kubenswrapper[4827]: I0131 03:50:46.598993 4827 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 31 03:50:46 crc kubenswrapper[4827]: I0131 03:50:46.599457 4827 status_manager.go:851] "Failed to get status for pod" podUID="f1f631cd-2800-402c-9fe1-06af2bc620fd" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 31 03:50:46 crc kubenswrapper[4827]: I0131 03:50:46.601629 4827 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Jan 31 03:50:46 crc 
kubenswrapper[4827]: I0131 03:50:46.864793 4827 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Jan 31 03:50:46 crc kubenswrapper[4827]: I0131 03:50:46.866184 4827 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 31 03:50:46 crc kubenswrapper[4827]: I0131 03:50:46.866833 4827 status_manager.go:851] "Failed to get status for pod" podUID="f1f631cd-2800-402c-9fe1-06af2bc620fd" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 31 03:50:46 crc kubenswrapper[4827]: I0131 03:50:46.867341 4827 status_manager.go:851] "Failed to get status for pod" podUID="062f8208-e13f-439f-bb1f-13b9c91c5ea3" pod="openshift-marketplace/certified-operators-8rcw5" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-8rcw5\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 31 03:50:46 crc kubenswrapper[4827]: I0131 03:50:46.867752 4827 status_manager.go:851] "Failed to get status for pod" podUID="273683f4-0b94-44d7-83a2-b540f4d5d81d" pod="openshift-marketplace/community-operators-wr7t4" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-wr7t4\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 31 03:50:46 crc kubenswrapper[4827]: I0131 03:50:46.868076 4827 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.80:6443: connect: connection 
refused" Jan 31 03:50:46 crc kubenswrapper[4827]: I0131 03:50:46.868380 4827 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 31 03:50:46 crc kubenswrapper[4827]: I0131 03:50:46.959841 4827 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Jan 31 03:50:46 crc kubenswrapper[4827]: I0131 03:50:46.960488 4827 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 31 03:50:46 crc kubenswrapper[4827]: I0131 03:50:46.961377 4827 status_manager.go:851] "Failed to get status for pod" podUID="f1f631cd-2800-402c-9fe1-06af2bc620fd" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 31 03:50:46 crc kubenswrapper[4827]: I0131 03:50:46.962278 4827 status_manager.go:851] "Failed to get status for pod" podUID="062f8208-e13f-439f-bb1f-13b9c91c5ea3" pod="openshift-marketplace/certified-operators-8rcw5" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-8rcw5\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 31 03:50:46 crc kubenswrapper[4827]: I0131 03:50:46.962532 4827 status_manager.go:851] "Failed to get status for pod" podUID="273683f4-0b94-44d7-83a2-b540f4d5d81d" pod="openshift-marketplace/community-operators-wr7t4" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-wr7t4\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 31 03:50:46 crc kubenswrapper[4827]: I0131 03:50:46.962832 4827 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 31 03:50:47 crc kubenswrapper[4827]: I0131 03:50:47.004931 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Jan 31 03:50:47 crc kubenswrapper[4827]: I0131 03:50:47.005071 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "resource-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 31 03:50:47 crc kubenswrapper[4827]: I0131 03:50:47.005090 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Jan 31 03:50:47 crc kubenswrapper[4827]: I0131 03:50:47.005158 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "audit-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 31 03:50:47 crc kubenswrapper[4827]: I0131 03:50:47.005194 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Jan 31 03:50:47 crc kubenswrapper[4827]: I0131 03:50:47.005220 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir" (OuterVolumeSpecName: "cert-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "cert-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 31 03:50:47 crc kubenswrapper[4827]: I0131 03:50:47.006084 4827 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") on node \"crc\" DevicePath \"\"" Jan 31 03:50:47 crc kubenswrapper[4827]: I0131 03:50:47.006106 4827 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") on node \"crc\" DevicePath \"\"" Jan 31 03:50:47 crc kubenswrapper[4827]: I0131 03:50:47.006118 4827 reconciler_common.go:293] "Volume detached for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") on node \"crc\" DevicePath \"\"" Jan 31 03:50:47 crc kubenswrapper[4827]: I0131 03:50:47.107469 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f1f631cd-2800-402c-9fe1-06af2bc620fd-var-lock\") pod \"f1f631cd-2800-402c-9fe1-06af2bc620fd\" (UID: \"f1f631cd-2800-402c-9fe1-06af2bc620fd\") " Jan 31 03:50:47 crc kubenswrapper[4827]: I0131 03:50:47.107662 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f1f631cd-2800-402c-9fe1-06af2bc620fd-var-lock" (OuterVolumeSpecName: "var-lock") pod "f1f631cd-2800-402c-9fe1-06af2bc620fd" (UID: "f1f631cd-2800-402c-9fe1-06af2bc620fd"). InnerVolumeSpecName "var-lock". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 31 03:50:47 crc kubenswrapper[4827]: I0131 03:50:47.107734 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/f1f631cd-2800-402c-9fe1-06af2bc620fd-kube-api-access\") pod \"f1f631cd-2800-402c-9fe1-06af2bc620fd\" (UID: \"f1f631cd-2800-402c-9fe1-06af2bc620fd\") " Jan 31 03:50:47 crc kubenswrapper[4827]: I0131 03:50:47.107753 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/f1f631cd-2800-402c-9fe1-06af2bc620fd-kubelet-dir\") pod \"f1f631cd-2800-402c-9fe1-06af2bc620fd\" (UID: \"f1f631cd-2800-402c-9fe1-06af2bc620fd\") " Jan 31 03:50:47 crc kubenswrapper[4827]: I0131 03:50:47.107930 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f1f631cd-2800-402c-9fe1-06af2bc620fd-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "f1f631cd-2800-402c-9fe1-06af2bc620fd" (UID: "f1f631cd-2800-402c-9fe1-06af2bc620fd"). InnerVolumeSpecName "kubelet-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 31 03:50:47 crc kubenswrapper[4827]: I0131 03:50:47.108170 4827 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/f1f631cd-2800-402c-9fe1-06af2bc620fd-kubelet-dir\") on node \"crc\" DevicePath \"\"" Jan 31 03:50:47 crc kubenswrapper[4827]: I0131 03:50:47.108207 4827 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f1f631cd-2800-402c-9fe1-06af2bc620fd-var-lock\") on node \"crc\" DevicePath \"\"" Jan 31 03:50:47 crc kubenswrapper[4827]: I0131 03:50:47.119261 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f1f631cd-2800-402c-9fe1-06af2bc620fd-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "f1f631cd-2800-402c-9fe1-06af2bc620fd" (UID: "f1f631cd-2800-402c-9fe1-06af2bc620fd"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 03:50:47 crc kubenswrapper[4827]: E0131 03:50:47.151093 4827 desired_state_of_world_populator.go:312] "Error processing volume" err="error processing PVC openshift-image-registry/crc-image-registry-storage: failed to fetch PVC from API server: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-image-registry/persistentvolumeclaims/crc-image-registry-storage\": dial tcp 38.102.83.80:6443: connect: connection refused" pod="openshift-image-registry/image-registry-697d97f7c8-s7psb" volumeName="registry-storage" Jan 31 03:50:47 crc kubenswrapper[4827]: I0131 03:50:47.210217 4827 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/f1f631cd-2800-402c-9fe1-06af2bc620fd-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 31 03:50:47 crc kubenswrapper[4827]: I0131 03:50:47.618417 4827 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Jan 31 03:50:47 crc kubenswrapper[4827]: I0131 03:50:47.619701 4827 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="4146d39a1a819389ddaee9e11df66783f6b1b98ee5fb2a244d8ba9ff2ecc88f9" exitCode=0 Jan 31 03:50:47 crc kubenswrapper[4827]: I0131 03:50:47.619814 4827 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 31 03:50:47 crc kubenswrapper[4827]: I0131 03:50:47.619819 4827 scope.go:117] "RemoveContainer" containerID="125b31c980495c37837eedbbbd38b795fd6e568a429da5c2914799306e7b3a46" Jan 31 03:50:47 crc kubenswrapper[4827]: I0131 03:50:47.623098 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"f1f631cd-2800-402c-9fe1-06af2bc620fd","Type":"ContainerDied","Data":"e30c3ba2b5d51ac18429c4be6045ca325177669f29a2ba17c2bfcf022b8549a7"} Jan 31 03:50:47 crc kubenswrapper[4827]: I0131 03:50:47.623177 4827 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e30c3ba2b5d51ac18429c4be6045ca325177669f29a2ba17c2bfcf022b8549a7" Jan 31 03:50:47 crc kubenswrapper[4827]: I0131 03:50:47.623397 4827 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Jan 31 03:50:47 crc kubenswrapper[4827]: I0131 03:50:47.651553 4827 scope.go:117] "RemoveContainer" containerID="e3b2757893ba910c36404f09734ea47eee3ac5a8cfcc4c06ee4ed2ada48937b1" Jan 31 03:50:47 crc kubenswrapper[4827]: I0131 03:50:47.654372 4827 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 31 03:50:47 crc kubenswrapper[4827]: I0131 03:50:47.654777 4827 status_manager.go:851] "Failed to get status for pod" podUID="f1f631cd-2800-402c-9fe1-06af2bc620fd" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 31 03:50:47 crc kubenswrapper[4827]: I0131 03:50:47.655214 4827 status_manager.go:851] "Failed to get status for pod" podUID="062f8208-e13f-439f-bb1f-13b9c91c5ea3" pod="openshift-marketplace/certified-operators-8rcw5" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-8rcw5\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 31 03:50:47 crc kubenswrapper[4827]: I0131 03:50:47.655633 4827 status_manager.go:851] "Failed to get status for pod" podUID="273683f4-0b94-44d7-83a2-b540f4d5d81d" pod="openshift-marketplace/community-operators-wr7t4" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-wr7t4\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 31 03:50:47 crc kubenswrapper[4827]: I0131 03:50:47.656023 4827 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" 
pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 31 03:50:47 crc kubenswrapper[4827]: I0131 03:50:47.656597 4827 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 31 03:50:47 crc kubenswrapper[4827]: I0131 03:50:47.656867 4827 status_manager.go:851] "Failed to get status for pod" podUID="f1f631cd-2800-402c-9fe1-06af2bc620fd" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 31 03:50:47 crc kubenswrapper[4827]: I0131 03:50:47.657239 4827 status_manager.go:851] "Failed to get status for pod" podUID="273683f4-0b94-44d7-83a2-b540f4d5d81d" pod="openshift-marketplace/community-operators-wr7t4" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-wr7t4\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 31 03:50:47 crc kubenswrapper[4827]: I0131 03:50:47.657655 4827 status_manager.go:851] "Failed to get status for pod" podUID="062f8208-e13f-439f-bb1f-13b9c91c5ea3" pod="openshift-marketplace/certified-operators-8rcw5" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-8rcw5\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 31 03:50:47 crc kubenswrapper[4827]: I0131 03:50:47.658000 4827 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" 
pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 31 03:50:47 crc kubenswrapper[4827]: I0131 03:50:47.678640 4827 scope.go:117] "RemoveContainer" containerID="6b86ba80bd25344b29df1636aa3d8cc4cecb0e5f9c0005b4896d3ee86cd80626" Jan 31 03:50:47 crc kubenswrapper[4827]: I0131 03:50:47.703127 4827 scope.go:117] "RemoveContainer" containerID="8ae08b52cf4a0a6cc4589bbf0f0fbc8527ebac3455b90185928f6fa0b6c52874" Jan 31 03:50:47 crc kubenswrapper[4827]: I0131 03:50:47.730615 4827 scope.go:117] "RemoveContainer" containerID="4146d39a1a819389ddaee9e11df66783f6b1b98ee5fb2a244d8ba9ff2ecc88f9" Jan 31 03:50:47 crc kubenswrapper[4827]: I0131 03:50:47.769213 4827 scope.go:117] "RemoveContainer" containerID="f40eed75b4eac164f95a4e1a719fe3e1da60abf85d533da313ceeb6f4fb07efd" Jan 31 03:50:47 crc kubenswrapper[4827]: I0131 03:50:47.803830 4827 scope.go:117] "RemoveContainer" containerID="125b31c980495c37837eedbbbd38b795fd6e568a429da5c2914799306e7b3a46" Jan 31 03:50:47 crc kubenswrapper[4827]: E0131 03:50:47.804578 4827 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"125b31c980495c37837eedbbbd38b795fd6e568a429da5c2914799306e7b3a46\": container with ID starting with 125b31c980495c37837eedbbbd38b795fd6e568a429da5c2914799306e7b3a46 not found: ID does not exist" containerID="125b31c980495c37837eedbbbd38b795fd6e568a429da5c2914799306e7b3a46" Jan 31 03:50:47 crc kubenswrapper[4827]: I0131 03:50:47.804628 4827 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"125b31c980495c37837eedbbbd38b795fd6e568a429da5c2914799306e7b3a46"} err="failed to get container status \"125b31c980495c37837eedbbbd38b795fd6e568a429da5c2914799306e7b3a46\": rpc error: code = NotFound desc = could not find 
container \"125b31c980495c37837eedbbbd38b795fd6e568a429da5c2914799306e7b3a46\": container with ID starting with 125b31c980495c37837eedbbbd38b795fd6e568a429da5c2914799306e7b3a46 not found: ID does not exist" Jan 31 03:50:47 crc kubenswrapper[4827]: I0131 03:50:47.804659 4827 scope.go:117] "RemoveContainer" containerID="e3b2757893ba910c36404f09734ea47eee3ac5a8cfcc4c06ee4ed2ada48937b1" Jan 31 03:50:47 crc kubenswrapper[4827]: E0131 03:50:47.805056 4827 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e3b2757893ba910c36404f09734ea47eee3ac5a8cfcc4c06ee4ed2ada48937b1\": container with ID starting with e3b2757893ba910c36404f09734ea47eee3ac5a8cfcc4c06ee4ed2ada48937b1 not found: ID does not exist" containerID="e3b2757893ba910c36404f09734ea47eee3ac5a8cfcc4c06ee4ed2ada48937b1" Jan 31 03:50:47 crc kubenswrapper[4827]: I0131 03:50:47.805085 4827 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e3b2757893ba910c36404f09734ea47eee3ac5a8cfcc4c06ee4ed2ada48937b1"} err="failed to get container status \"e3b2757893ba910c36404f09734ea47eee3ac5a8cfcc4c06ee4ed2ada48937b1\": rpc error: code = NotFound desc = could not find container \"e3b2757893ba910c36404f09734ea47eee3ac5a8cfcc4c06ee4ed2ada48937b1\": container with ID starting with e3b2757893ba910c36404f09734ea47eee3ac5a8cfcc4c06ee4ed2ada48937b1 not found: ID does not exist" Jan 31 03:50:47 crc kubenswrapper[4827]: I0131 03:50:47.805105 4827 scope.go:117] "RemoveContainer" containerID="6b86ba80bd25344b29df1636aa3d8cc4cecb0e5f9c0005b4896d3ee86cd80626" Jan 31 03:50:47 crc kubenswrapper[4827]: E0131 03:50:47.805403 4827 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6b86ba80bd25344b29df1636aa3d8cc4cecb0e5f9c0005b4896d3ee86cd80626\": container with ID starting with 6b86ba80bd25344b29df1636aa3d8cc4cecb0e5f9c0005b4896d3ee86cd80626 not found: ID does 
not exist" containerID="6b86ba80bd25344b29df1636aa3d8cc4cecb0e5f9c0005b4896d3ee86cd80626" Jan 31 03:50:47 crc kubenswrapper[4827]: I0131 03:50:47.805439 4827 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6b86ba80bd25344b29df1636aa3d8cc4cecb0e5f9c0005b4896d3ee86cd80626"} err="failed to get container status \"6b86ba80bd25344b29df1636aa3d8cc4cecb0e5f9c0005b4896d3ee86cd80626\": rpc error: code = NotFound desc = could not find container \"6b86ba80bd25344b29df1636aa3d8cc4cecb0e5f9c0005b4896d3ee86cd80626\": container with ID starting with 6b86ba80bd25344b29df1636aa3d8cc4cecb0e5f9c0005b4896d3ee86cd80626 not found: ID does not exist" Jan 31 03:50:47 crc kubenswrapper[4827]: I0131 03:50:47.805463 4827 scope.go:117] "RemoveContainer" containerID="8ae08b52cf4a0a6cc4589bbf0f0fbc8527ebac3455b90185928f6fa0b6c52874" Jan 31 03:50:47 crc kubenswrapper[4827]: E0131 03:50:47.805953 4827 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8ae08b52cf4a0a6cc4589bbf0f0fbc8527ebac3455b90185928f6fa0b6c52874\": container with ID starting with 8ae08b52cf4a0a6cc4589bbf0f0fbc8527ebac3455b90185928f6fa0b6c52874 not found: ID does not exist" containerID="8ae08b52cf4a0a6cc4589bbf0f0fbc8527ebac3455b90185928f6fa0b6c52874" Jan 31 03:50:47 crc kubenswrapper[4827]: I0131 03:50:47.805981 4827 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8ae08b52cf4a0a6cc4589bbf0f0fbc8527ebac3455b90185928f6fa0b6c52874"} err="failed to get container status \"8ae08b52cf4a0a6cc4589bbf0f0fbc8527ebac3455b90185928f6fa0b6c52874\": rpc error: code = NotFound desc = could not find container \"8ae08b52cf4a0a6cc4589bbf0f0fbc8527ebac3455b90185928f6fa0b6c52874\": container with ID starting with 8ae08b52cf4a0a6cc4589bbf0f0fbc8527ebac3455b90185928f6fa0b6c52874 not found: ID does not exist" Jan 31 03:50:47 crc kubenswrapper[4827]: I0131 03:50:47.805998 4827 
scope.go:117] "RemoveContainer" containerID="4146d39a1a819389ddaee9e11df66783f6b1b98ee5fb2a244d8ba9ff2ecc88f9" Jan 31 03:50:47 crc kubenswrapper[4827]: E0131 03:50:47.806500 4827 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4146d39a1a819389ddaee9e11df66783f6b1b98ee5fb2a244d8ba9ff2ecc88f9\": container with ID starting with 4146d39a1a819389ddaee9e11df66783f6b1b98ee5fb2a244d8ba9ff2ecc88f9 not found: ID does not exist" containerID="4146d39a1a819389ddaee9e11df66783f6b1b98ee5fb2a244d8ba9ff2ecc88f9" Jan 31 03:50:47 crc kubenswrapper[4827]: I0131 03:50:47.806526 4827 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4146d39a1a819389ddaee9e11df66783f6b1b98ee5fb2a244d8ba9ff2ecc88f9"} err="failed to get container status \"4146d39a1a819389ddaee9e11df66783f6b1b98ee5fb2a244d8ba9ff2ecc88f9\": rpc error: code = NotFound desc = could not find container \"4146d39a1a819389ddaee9e11df66783f6b1b98ee5fb2a244d8ba9ff2ecc88f9\": container with ID starting with 4146d39a1a819389ddaee9e11df66783f6b1b98ee5fb2a244d8ba9ff2ecc88f9 not found: ID does not exist" Jan 31 03:50:47 crc kubenswrapper[4827]: I0131 03:50:47.806545 4827 scope.go:117] "RemoveContainer" containerID="f40eed75b4eac164f95a4e1a719fe3e1da60abf85d533da313ceeb6f4fb07efd" Jan 31 03:50:47 crc kubenswrapper[4827]: E0131 03:50:47.807065 4827 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f40eed75b4eac164f95a4e1a719fe3e1da60abf85d533da313ceeb6f4fb07efd\": container with ID starting with f40eed75b4eac164f95a4e1a719fe3e1da60abf85d533da313ceeb6f4fb07efd not found: ID does not exist" containerID="f40eed75b4eac164f95a4e1a719fe3e1da60abf85d533da313ceeb6f4fb07efd" Jan 31 03:50:47 crc kubenswrapper[4827]: I0131 03:50:47.807097 4827 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"f40eed75b4eac164f95a4e1a719fe3e1da60abf85d533da313ceeb6f4fb07efd"} err="failed to get container status \"f40eed75b4eac164f95a4e1a719fe3e1da60abf85d533da313ceeb6f4fb07efd\": rpc error: code = NotFound desc = could not find container \"f40eed75b4eac164f95a4e1a719fe3e1da60abf85d533da313ceeb6f4fb07efd\": container with ID starting with f40eed75b4eac164f95a4e1a719fe3e1da60abf85d533da313ceeb6f4fb07efd not found: ID does not exist" Jan 31 03:50:48 crc kubenswrapper[4827]: I0131 03:50:48.117129 4827 status_manager.go:851] "Failed to get status for pod" podUID="f1f631cd-2800-402c-9fe1-06af2bc620fd" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 31 03:50:48 crc kubenswrapper[4827]: I0131 03:50:48.117425 4827 status_manager.go:851] "Failed to get status for pod" podUID="062f8208-e13f-439f-bb1f-13b9c91c5ea3" pod="openshift-marketplace/certified-operators-8rcw5" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-8rcw5\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 31 03:50:48 crc kubenswrapper[4827]: I0131 03:50:48.117681 4827 status_manager.go:851] "Failed to get status for pod" podUID="273683f4-0b94-44d7-83a2-b540f4d5d81d" pod="openshift-marketplace/community-operators-wr7t4" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-wr7t4\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 31 03:50:48 crc kubenswrapper[4827]: I0131 03:50:48.117972 4827 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": 
dial tcp 38.102.83.80:6443: connect: connection refused" Jan 31 03:50:48 crc kubenswrapper[4827]: I0131 03:50:48.118275 4827 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 31 03:50:48 crc kubenswrapper[4827]: I0131 03:50:48.125692 4827 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f4b27818a5e8e43d0dc095d08835c792" path="/var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/volumes" Jan 31 03:50:48 crc kubenswrapper[4827]: E0131 03:50:48.985356 4827 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 31 03:50:48 crc kubenswrapper[4827]: E0131 03:50:48.986135 4827 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 31 03:50:48 crc kubenswrapper[4827]: E0131 03:50:48.986800 4827 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 31 03:50:48 crc kubenswrapper[4827]: E0131 03:50:48.987358 4827 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 31 03:50:48 crc kubenswrapper[4827]: E0131 03:50:48.988009 4827 controller.go:195] "Failed to update lease" err="Put 
\"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 31 03:50:48 crc kubenswrapper[4827]: I0131 03:50:48.988096 4827 controller.go:115] "failed to update lease using latest lease, fallback to ensure lease" err="failed 5 attempts to update lease" Jan 31 03:50:48 crc kubenswrapper[4827]: E0131 03:50:48.988749 4827 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.80:6443: connect: connection refused" interval="200ms" Jan 31 03:50:49 crc kubenswrapper[4827]: E0131 03:50:49.190097 4827 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.80:6443: connect: connection refused" interval="400ms" Jan 31 03:50:49 crc kubenswrapper[4827]: E0131 03:50:49.504311 4827 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/events\": dial tcp 38.102.83.80:6443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-startup-monitor-crc.188fb45272180fd2 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-startup-monitor-crc,UID:f85e55b1a89d02b0cb034b1ea31ed45a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{startup-monitor},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-31 03:50:44.774391762 
+0000 UTC m=+237.461472221,LastTimestamp:2026-01-31 03:50:44.774391762 +0000 UTC m=+237.461472221,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 31 03:50:49 crc kubenswrapper[4827]: E0131 03:50:49.591287 4827 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.80:6443: connect: connection refused" interval="800ms" Jan 31 03:50:50 crc kubenswrapper[4827]: E0131 03:50:50.392736 4827 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.80:6443: connect: connection refused" interval="1.6s" Jan 31 03:50:51 crc kubenswrapper[4827]: E0131 03:50:51.995241 4827 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.80:6443: connect: connection refused" interval="3.2s" Jan 31 03:50:55 crc kubenswrapper[4827]: E0131 03:50:55.196673 4827 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.80:6443: connect: connection refused" interval="6.4s" Jan 31 03:50:57 crc kubenswrapper[4827]: I0131 03:50:57.693735 4827 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/0.log" Jan 31 03:50:57 crc kubenswrapper[4827]: I0131 03:50:57.694589 4827 generic.go:334] "Generic (PLEG): container finished" podID="f614b9022728cf315e60c057852e563e" 
containerID="fb9a8fcde75a532b1906ceea4704ddfecbffeb17456377272e5792f200ee67d5" exitCode=1 Jan 31 03:50:57 crc kubenswrapper[4827]: I0131 03:50:57.694671 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerDied","Data":"fb9a8fcde75a532b1906ceea4704ddfecbffeb17456377272e5792f200ee67d5"} Jan 31 03:50:57 crc kubenswrapper[4827]: I0131 03:50:57.695612 4827 scope.go:117] "RemoveContainer" containerID="fb9a8fcde75a532b1906ceea4704ddfecbffeb17456377272e5792f200ee67d5" Jan 31 03:50:57 crc kubenswrapper[4827]: I0131 03:50:57.695725 4827 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 31 03:50:57 crc kubenswrapper[4827]: I0131 03:50:57.696468 4827 status_manager.go:851] "Failed to get status for pod" podUID="f1f631cd-2800-402c-9fe1-06af2bc620fd" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 31 03:50:57 crc kubenswrapper[4827]: I0131 03:50:57.697077 4827 status_manager.go:851] "Failed to get status for pod" podUID="273683f4-0b94-44d7-83a2-b540f4d5d81d" pod="openshift-marketplace/community-operators-wr7t4" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-wr7t4\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 31 03:50:57 crc kubenswrapper[4827]: I0131 03:50:57.697732 4827 status_manager.go:851] "Failed to get status for pod" podUID="062f8208-e13f-439f-bb1f-13b9c91c5ea3" 
pod="openshift-marketplace/certified-operators-8rcw5" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-8rcw5\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 31 03:50:57 crc kubenswrapper[4827]: I0131 03:50:57.698274 4827 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 31 03:50:58 crc kubenswrapper[4827]: I0131 03:50:58.114912 4827 status_manager.go:851] "Failed to get status for pod" podUID="f1f631cd-2800-402c-9fe1-06af2bc620fd" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 31 03:50:58 crc kubenswrapper[4827]: I0131 03:50:58.115809 4827 status_manager.go:851] "Failed to get status for pod" podUID="062f8208-e13f-439f-bb1f-13b9c91c5ea3" pod="openshift-marketplace/certified-operators-8rcw5" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-8rcw5\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 31 03:50:58 crc kubenswrapper[4827]: I0131 03:50:58.116741 4827 status_manager.go:851] "Failed to get status for pod" podUID="273683f4-0b94-44d7-83a2-b540f4d5d81d" pod="openshift-marketplace/community-operators-wr7t4" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-wr7t4\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 31 03:50:58 crc kubenswrapper[4827]: I0131 03:50:58.117395 4827 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" 
pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 31 03:50:58 crc kubenswrapper[4827]: I0131 03:50:58.117961 4827 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 31 03:50:58 crc kubenswrapper[4827]: I0131 03:50:58.704533 4827 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/0.log" Jan 31 03:50:58 crc kubenswrapper[4827]: I0131 03:50:58.705095 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"fd0541e3de129cebb4588618db0c3dc300767d3052044a6df57ebb62a6be6ca4"} Jan 31 03:50:58 crc kubenswrapper[4827]: I0131 03:50:58.706034 4827 status_manager.go:851] "Failed to get status for pod" podUID="f1f631cd-2800-402c-9fe1-06af2bc620fd" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 31 03:50:58 crc kubenswrapper[4827]: I0131 03:50:58.706806 4827 status_manager.go:851] "Failed to get status for pod" podUID="062f8208-e13f-439f-bb1f-13b9c91c5ea3" pod="openshift-marketplace/certified-operators-8rcw5" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-8rcw5\": dial tcp 
38.102.83.80:6443: connect: connection refused" Jan 31 03:50:58 crc kubenswrapper[4827]: I0131 03:50:58.707357 4827 status_manager.go:851] "Failed to get status for pod" podUID="273683f4-0b94-44d7-83a2-b540f4d5d81d" pod="openshift-marketplace/community-operators-wr7t4" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-wr7t4\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 31 03:50:58 crc kubenswrapper[4827]: I0131 03:50:58.708132 4827 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 31 03:50:58 crc kubenswrapper[4827]: I0131 03:50:58.708634 4827 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 31 03:50:58 crc kubenswrapper[4827]: I0131 03:50:58.822432 4827 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 31 03:50:58 crc kubenswrapper[4827]: I0131 03:50:58.822998 4827 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/kube-controller-manager namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" start-of-body= Jan 31 03:50:58 crc kubenswrapper[4827]: I0131 03:50:58.823075 4827 prober.go:107] "Probe failed" probeType="Startup" 
pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="kube-controller-manager" probeResult="failure" output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" Jan 31 03:50:59 crc kubenswrapper[4827]: I0131 03:50:59.108973 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 31 03:50:59 crc kubenswrapper[4827]: I0131 03:50:59.110260 4827 status_manager.go:851] "Failed to get status for pod" podUID="f1f631cd-2800-402c-9fe1-06af2bc620fd" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 31 03:50:59 crc kubenswrapper[4827]: I0131 03:50:59.111054 4827 status_manager.go:851] "Failed to get status for pod" podUID="062f8208-e13f-439f-bb1f-13b9c91c5ea3" pod="openshift-marketplace/certified-operators-8rcw5" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-8rcw5\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 31 03:50:59 crc kubenswrapper[4827]: I0131 03:50:59.111496 4827 status_manager.go:851] "Failed to get status for pod" podUID="273683f4-0b94-44d7-83a2-b540f4d5d81d" pod="openshift-marketplace/community-operators-wr7t4" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-wr7t4\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 31 03:50:59 crc kubenswrapper[4827]: I0131 03:50:59.112042 4827 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 31 03:50:59 crc kubenswrapper[4827]: I0131 03:50:59.112690 4827 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 31 03:50:59 crc kubenswrapper[4827]: I0131 03:50:59.123598 4827 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="4273172d-ec24-4540-85cb-efc58aed3421" Jan 31 03:50:59 crc kubenswrapper[4827]: I0131 03:50:59.123802 4827 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="4273172d-ec24-4540-85cb-efc58aed3421" Jan 31 03:50:59 crc kubenswrapper[4827]: E0131 03:50:59.124716 4827 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.80:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 31 03:50:59 crc kubenswrapper[4827]: I0131 03:50:59.125599 4827 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 31 03:50:59 crc kubenswrapper[4827]: E0131 03:50:59.505542 4827 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/events\": dial tcp 38.102.83.80:6443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-startup-monitor-crc.188fb45272180fd2 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-startup-monitor-crc,UID:f85e55b1a89d02b0cb034b1ea31ed45a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{startup-monitor},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-31 03:50:44.774391762 +0000 UTC m=+237.461472221,LastTimestamp:2026-01-31 03:50:44.774391762 +0000 UTC m=+237.461472221,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 31 03:50:59 crc kubenswrapper[4827]: I0131 03:50:59.711801 4827 generic.go:334] "Generic (PLEG): container finished" podID="71bb4a3aecc4ba5b26c4b7318770ce13" containerID="3161dfd9b84b53a35b80943ebf5fdf4a826d7554c41d5fa0fa010e26c2c7ba41" exitCode=0 Jan 31 03:50:59 crc kubenswrapper[4827]: I0131 03:50:59.711907 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerDied","Data":"3161dfd9b84b53a35b80943ebf5fdf4a826d7554c41d5fa0fa010e26c2c7ba41"} Jan 31 03:50:59 crc kubenswrapper[4827]: I0131 03:50:59.711948 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"e886200deb5910bf251c31f02b2ff9927d84ada222143dc59ab406dae33b4bf1"} Jan 31 03:50:59 crc kubenswrapper[4827]: I0131 03:50:59.712212 4827 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="4273172d-ec24-4540-85cb-efc58aed3421" Jan 31 03:50:59 crc kubenswrapper[4827]: I0131 03:50:59.712224 4827 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="4273172d-ec24-4540-85cb-efc58aed3421" Jan 31 03:50:59 crc kubenswrapper[4827]: E0131 03:50:59.712641 4827 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.80:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 31 03:50:59 crc kubenswrapper[4827]: I0131 03:50:59.712643 4827 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 31 03:50:59 crc kubenswrapper[4827]: I0131 03:50:59.712999 4827 status_manager.go:851] "Failed to get status for pod" podUID="f1f631cd-2800-402c-9fe1-06af2bc620fd" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 31 03:50:59 crc kubenswrapper[4827]: I0131 03:50:59.713501 4827 status_manager.go:851] "Failed to get status for pod" podUID="062f8208-e13f-439f-bb1f-13b9c91c5ea3" pod="openshift-marketplace/certified-operators-8rcw5" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-8rcw5\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 31 03:50:59 crc kubenswrapper[4827]: I0131 03:50:59.713940 4827 status_manager.go:851] "Failed to get status for pod" podUID="273683f4-0b94-44d7-83a2-b540f4d5d81d" pod="openshift-marketplace/community-operators-wr7t4" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-wr7t4\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 31 03:50:59 crc kubenswrapper[4827]: I0131 03:50:59.714236 4827 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 31 03:51:00 crc kubenswrapper[4827]: I0131 03:51:00.728342 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"17c4cbe98136ae78b0a0afb33896d80b3fa3eb5695fc9cd3e92e83d4735663a8"} Jan 31 03:51:00 crc kubenswrapper[4827]: I0131 03:51:00.729056 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"557cc4df68f61bb9d54fb1c388ed3f5e73dc9c9e4ccc6f312313f7c98da3a6e1"} Jan 31 03:51:00 crc kubenswrapper[4827]: I0131 03:51:00.729071 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"5891cd8556d0812231d8d8951022e8edb3075613c8121071c48d249df2dd9ce7"} Jan 31 03:51:00 crc kubenswrapper[4827]: I0131 03:51:00.729081 4827 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"4b7198ebc5294dc604ed6352b822ffb4d40e9593429b0598e4b14c600aa3b590"} Jan 31 03:51:01 crc kubenswrapper[4827]: I0131 03:51:01.736929 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"784613a1ad6a3ca57e2954c1083cd8d43fb304af7b8ddabdbd8e69bfa67ac690"} Jan 31 03:51:01 crc kubenswrapper[4827]: I0131 03:51:01.737318 4827 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 31 03:51:01 crc kubenswrapper[4827]: I0131 03:51:01.737634 4827 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="4273172d-ec24-4540-85cb-efc58aed3421" Jan 31 03:51:01 crc kubenswrapper[4827]: I0131 03:51:01.737687 4827 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="4273172d-ec24-4540-85cb-efc58aed3421" Jan 31 03:51:03 crc kubenswrapper[4827]: I0131 03:51:03.760470 4827 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 31 03:51:03 crc kubenswrapper[4827]: I0131 03:51:03.875177 4827 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-authentication/oauth-openshift-558db77b4-qtqj4" podUID="c10be0b3-7f40-4f17-8206-ab6257d4b23b" containerName="oauth-openshift" containerID="cri-o://7e7aaf694058c3d4697231686fea58a88f974be85b42c9d4c4d92c21a7da3f79" gracePeriod=15 Jan 31 03:51:04 crc kubenswrapper[4827]: I0131 03:51:04.125695 4827 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 31 03:51:04 crc kubenswrapper[4827]: I0131 03:51:04.125739 4827 
kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 31 03:51:04 crc kubenswrapper[4827]: I0131 03:51:04.131872 4827 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 31 03:51:04 crc kubenswrapper[4827]: I0131 03:51:04.416079 4827 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-qtqj4" Jan 31 03:51:04 crc kubenswrapper[4827]: I0131 03:51:04.571210 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-b9wqf\" (UniqueName: \"kubernetes.io/projected/c10be0b3-7f40-4f17-8206-ab6257d4b23b-kube-api-access-b9wqf\") pod \"c10be0b3-7f40-4f17-8206-ab6257d4b23b\" (UID: \"c10be0b3-7f40-4f17-8206-ab6257d4b23b\") " Jan 31 03:51:04 crc kubenswrapper[4827]: I0131 03:51:04.571305 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/c10be0b3-7f40-4f17-8206-ab6257d4b23b-v4-0-config-user-idp-0-file-data\") pod \"c10be0b3-7f40-4f17-8206-ab6257d4b23b\" (UID: \"c10be0b3-7f40-4f17-8206-ab6257d4b23b\") " Jan 31 03:51:04 crc kubenswrapper[4827]: I0131 03:51:04.571338 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/c10be0b3-7f40-4f17-8206-ab6257d4b23b-v4-0-config-user-template-error\") pod \"c10be0b3-7f40-4f17-8206-ab6257d4b23b\" (UID: \"c10be0b3-7f40-4f17-8206-ab6257d4b23b\") " Jan 31 03:51:04 crc kubenswrapper[4827]: I0131 03:51:04.571371 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/c10be0b3-7f40-4f17-8206-ab6257d4b23b-v4-0-config-system-cliconfig\") pod \"c10be0b3-7f40-4f17-8206-ab6257d4b23b\" (UID: 
\"c10be0b3-7f40-4f17-8206-ab6257d4b23b\") " Jan 31 03:51:04 crc kubenswrapper[4827]: I0131 03:51:04.571398 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/c10be0b3-7f40-4f17-8206-ab6257d4b23b-v4-0-config-user-template-login\") pod \"c10be0b3-7f40-4f17-8206-ab6257d4b23b\" (UID: \"c10be0b3-7f40-4f17-8206-ab6257d4b23b\") " Jan 31 03:51:04 crc kubenswrapper[4827]: I0131 03:51:04.571439 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/c10be0b3-7f40-4f17-8206-ab6257d4b23b-v4-0-config-system-router-certs\") pod \"c10be0b3-7f40-4f17-8206-ab6257d4b23b\" (UID: \"c10be0b3-7f40-4f17-8206-ab6257d4b23b\") " Jan 31 03:51:04 crc kubenswrapper[4827]: I0131 03:51:04.571473 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/c10be0b3-7f40-4f17-8206-ab6257d4b23b-v4-0-config-system-ocp-branding-template\") pod \"c10be0b3-7f40-4f17-8206-ab6257d4b23b\" (UID: \"c10be0b3-7f40-4f17-8206-ab6257d4b23b\") " Jan 31 03:51:04 crc kubenswrapper[4827]: I0131 03:51:04.571499 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/c10be0b3-7f40-4f17-8206-ab6257d4b23b-v4-0-config-system-serving-cert\") pod \"c10be0b3-7f40-4f17-8206-ab6257d4b23b\" (UID: \"c10be0b3-7f40-4f17-8206-ab6257d4b23b\") " Jan 31 03:51:04 crc kubenswrapper[4827]: I0131 03:51:04.571529 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/c10be0b3-7f40-4f17-8206-ab6257d4b23b-v4-0-config-system-service-ca\") pod \"c10be0b3-7f40-4f17-8206-ab6257d4b23b\" (UID: \"c10be0b3-7f40-4f17-8206-ab6257d4b23b\") " Jan 31 
03:51:04 crc kubenswrapper[4827]: I0131 03:51:04.572385 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c10be0b3-7f40-4f17-8206-ab6257d4b23b-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "c10be0b3-7f40-4f17-8206-ab6257d4b23b" (UID: "c10be0b3-7f40-4f17-8206-ab6257d4b23b"). InnerVolumeSpecName "v4-0-config-system-cliconfig". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 03:51:04 crc kubenswrapper[4827]: I0131 03:51:04.572410 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c10be0b3-7f40-4f17-8206-ab6257d4b23b-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "c10be0b3-7f40-4f17-8206-ab6257d4b23b" (UID: "c10be0b3-7f40-4f17-8206-ab6257d4b23b"). InnerVolumeSpecName "v4-0-config-system-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 03:51:04 crc kubenswrapper[4827]: I0131 03:51:04.572435 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c10be0b3-7f40-4f17-8206-ab6257d4b23b-v4-0-config-system-trusted-ca-bundle\") pod \"c10be0b3-7f40-4f17-8206-ab6257d4b23b\" (UID: \"c10be0b3-7f40-4f17-8206-ab6257d4b23b\") " Jan 31 03:51:04 crc kubenswrapper[4827]: I0131 03:51:04.572480 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/c10be0b3-7f40-4f17-8206-ab6257d4b23b-v4-0-config-user-template-provider-selection\") pod \"c10be0b3-7f40-4f17-8206-ab6257d4b23b\" (UID: \"c10be0b3-7f40-4f17-8206-ab6257d4b23b\") " Jan 31 03:51:04 crc kubenswrapper[4827]: I0131 03:51:04.572516 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: 
\"kubernetes.io/host-path/c10be0b3-7f40-4f17-8206-ab6257d4b23b-audit-dir\") pod \"c10be0b3-7f40-4f17-8206-ab6257d4b23b\" (UID: \"c10be0b3-7f40-4f17-8206-ab6257d4b23b\") " Jan 31 03:51:04 crc kubenswrapper[4827]: I0131 03:51:04.572599 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/c10be0b3-7f40-4f17-8206-ab6257d4b23b-audit-policies\") pod \"c10be0b3-7f40-4f17-8206-ab6257d4b23b\" (UID: \"c10be0b3-7f40-4f17-8206-ab6257d4b23b\") " Jan 31 03:51:04 crc kubenswrapper[4827]: I0131 03:51:04.572633 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/c10be0b3-7f40-4f17-8206-ab6257d4b23b-v4-0-config-system-session\") pod \"c10be0b3-7f40-4f17-8206-ab6257d4b23b\" (UID: \"c10be0b3-7f40-4f17-8206-ab6257d4b23b\") " Jan 31 03:51:04 crc kubenswrapper[4827]: I0131 03:51:04.572800 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c10be0b3-7f40-4f17-8206-ab6257d4b23b-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "c10be0b3-7f40-4f17-8206-ab6257d4b23b" (UID: "c10be0b3-7f40-4f17-8206-ab6257d4b23b"). InnerVolumeSpecName "audit-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 31 03:51:04 crc kubenswrapper[4827]: I0131 03:51:04.573092 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c10be0b3-7f40-4f17-8206-ab6257d4b23b-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "c10be0b3-7f40-4f17-8206-ab6257d4b23b" (UID: "c10be0b3-7f40-4f17-8206-ab6257d4b23b"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 03:51:04 crc kubenswrapper[4827]: I0131 03:51:04.573520 4827 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/c10be0b3-7f40-4f17-8206-ab6257d4b23b-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\"" Jan 31 03:51:04 crc kubenswrapper[4827]: I0131 03:51:04.573565 4827 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c10be0b3-7f40-4f17-8206-ab6257d4b23b-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 31 03:51:04 crc kubenswrapper[4827]: I0131 03:51:04.573589 4827 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/c10be0b3-7f40-4f17-8206-ab6257d4b23b-audit-dir\") on node \"crc\" DevicePath \"\"" Jan 31 03:51:04 crc kubenswrapper[4827]: I0131 03:51:04.573612 4827 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/c10be0b3-7f40-4f17-8206-ab6257d4b23b-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\"" Jan 31 03:51:04 crc kubenswrapper[4827]: I0131 03:51:04.573624 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c10be0b3-7f40-4f17-8206-ab6257d4b23b-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "c10be0b3-7f40-4f17-8206-ab6257d4b23b" (UID: "c10be0b3-7f40-4f17-8206-ab6257d4b23b"). InnerVolumeSpecName "audit-policies". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 03:51:04 crc kubenswrapper[4827]: I0131 03:51:04.578311 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c10be0b3-7f40-4f17-8206-ab6257d4b23b-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "c10be0b3-7f40-4f17-8206-ab6257d4b23b" (UID: "c10be0b3-7f40-4f17-8206-ab6257d4b23b"). InnerVolumeSpecName "v4-0-config-system-router-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 03:51:04 crc kubenswrapper[4827]: I0131 03:51:04.578785 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c10be0b3-7f40-4f17-8206-ab6257d4b23b-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "c10be0b3-7f40-4f17-8206-ab6257d4b23b" (UID: "c10be0b3-7f40-4f17-8206-ab6257d4b23b"). InnerVolumeSpecName "v4-0-config-system-session". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 03:51:04 crc kubenswrapper[4827]: I0131 03:51:04.579731 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c10be0b3-7f40-4f17-8206-ab6257d4b23b-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "c10be0b3-7f40-4f17-8206-ab6257d4b23b" (UID: "c10be0b3-7f40-4f17-8206-ab6257d4b23b"). InnerVolumeSpecName "v4-0-config-user-template-login". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 03:51:04 crc kubenswrapper[4827]: I0131 03:51:04.580768 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c10be0b3-7f40-4f17-8206-ab6257d4b23b-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "c10be0b3-7f40-4f17-8206-ab6257d4b23b" (UID: "c10be0b3-7f40-4f17-8206-ab6257d4b23b"). InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 03:51:04 crc kubenswrapper[4827]: I0131 03:51:04.581642 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c10be0b3-7f40-4f17-8206-ab6257d4b23b-kube-api-access-b9wqf" (OuterVolumeSpecName: "kube-api-access-b9wqf") pod "c10be0b3-7f40-4f17-8206-ab6257d4b23b" (UID: "c10be0b3-7f40-4f17-8206-ab6257d4b23b"). InnerVolumeSpecName "kube-api-access-b9wqf". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 03:51:04 crc kubenswrapper[4827]: I0131 03:51:04.581765 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c10be0b3-7f40-4f17-8206-ab6257d4b23b-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "c10be0b3-7f40-4f17-8206-ab6257d4b23b" (UID: "c10be0b3-7f40-4f17-8206-ab6257d4b23b"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 03:51:04 crc kubenswrapper[4827]: I0131 03:51:04.582251 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c10be0b3-7f40-4f17-8206-ab6257d4b23b-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "c10be0b3-7f40-4f17-8206-ab6257d4b23b" (UID: "c10be0b3-7f40-4f17-8206-ab6257d4b23b"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 03:51:04 crc kubenswrapper[4827]: I0131 03:51:04.582424 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c10be0b3-7f40-4f17-8206-ab6257d4b23b-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "c10be0b3-7f40-4f17-8206-ab6257d4b23b" (UID: "c10be0b3-7f40-4f17-8206-ab6257d4b23b"). InnerVolumeSpecName "v4-0-config-system-serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 03:51:04 crc kubenswrapper[4827]: I0131 03:51:04.583679 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c10be0b3-7f40-4f17-8206-ab6257d4b23b-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "c10be0b3-7f40-4f17-8206-ab6257d4b23b" (UID: "c10be0b3-7f40-4f17-8206-ab6257d4b23b"). InnerVolumeSpecName "v4-0-config-user-template-error". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 03:51:04 crc kubenswrapper[4827]: I0131 03:51:04.674813 4827 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/c10be0b3-7f40-4f17-8206-ab6257d4b23b-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\"" Jan 31 03:51:04 crc kubenswrapper[4827]: I0131 03:51:04.674919 4827 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/c10be0b3-7f40-4f17-8206-ab6257d4b23b-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\"" Jan 31 03:51:04 crc kubenswrapper[4827]: I0131 03:51:04.674952 4827 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/c10be0b3-7f40-4f17-8206-ab6257d4b23b-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\"" Jan 31 03:51:04 crc kubenswrapper[4827]: I0131 03:51:04.674972 4827 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/c10be0b3-7f40-4f17-8206-ab6257d4b23b-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\"" Jan 31 03:51:04 crc kubenswrapper[4827]: I0131 03:51:04.674991 4827 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: 
\"kubernetes.io/secret/c10be0b3-7f40-4f17-8206-ab6257d4b23b-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\"" Jan 31 03:51:04 crc kubenswrapper[4827]: I0131 03:51:04.675010 4827 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/c10be0b3-7f40-4f17-8206-ab6257d4b23b-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 31 03:51:04 crc kubenswrapper[4827]: I0131 03:51:04.675027 4827 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/c10be0b3-7f40-4f17-8206-ab6257d4b23b-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\"" Jan 31 03:51:04 crc kubenswrapper[4827]: I0131 03:51:04.675046 4827 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/c10be0b3-7f40-4f17-8206-ab6257d4b23b-audit-policies\") on node \"crc\" DevicePath \"\"" Jan 31 03:51:04 crc kubenswrapper[4827]: I0131 03:51:04.675062 4827 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/c10be0b3-7f40-4f17-8206-ab6257d4b23b-v4-0-config-system-session\") on node \"crc\" DevicePath \"\"" Jan 31 03:51:04 crc kubenswrapper[4827]: I0131 03:51:04.675078 4827 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-b9wqf\" (UniqueName: \"kubernetes.io/projected/c10be0b3-7f40-4f17-8206-ab6257d4b23b-kube-api-access-b9wqf\") on node \"crc\" DevicePath \"\"" Jan 31 03:51:04 crc kubenswrapper[4827]: I0131 03:51:04.757470 4827 generic.go:334] "Generic (PLEG): container finished" podID="c10be0b3-7f40-4f17-8206-ab6257d4b23b" containerID="7e7aaf694058c3d4697231686fea58a88f974be85b42c9d4c4d92c21a7da3f79" exitCode=0 Jan 31 03:51:04 crc kubenswrapper[4827]: I0131 03:51:04.757553 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-authentication/oauth-openshift-558db77b4-qtqj4" event={"ID":"c10be0b3-7f40-4f17-8206-ab6257d4b23b","Type":"ContainerDied","Data":"7e7aaf694058c3d4697231686fea58a88f974be85b42c9d4c4d92c21a7da3f79"} Jan 31 03:51:04 crc kubenswrapper[4827]: I0131 03:51:04.757655 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-qtqj4" event={"ID":"c10be0b3-7f40-4f17-8206-ab6257d4b23b","Type":"ContainerDied","Data":"3376d9d9bcff952f3cad22fc14f566be4dd5ab2fdb4ea1c6c02f72a3294321f1"} Jan 31 03:51:04 crc kubenswrapper[4827]: I0131 03:51:04.757686 4827 scope.go:117] "RemoveContainer" containerID="7e7aaf694058c3d4697231686fea58a88f974be85b42c9d4c4d92c21a7da3f79" Jan 31 03:51:04 crc kubenswrapper[4827]: I0131 03:51:04.757596 4827 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-qtqj4" Jan 31 03:51:04 crc kubenswrapper[4827]: I0131 03:51:04.778708 4827 scope.go:117] "RemoveContainer" containerID="7e7aaf694058c3d4697231686fea58a88f974be85b42c9d4c4d92c21a7da3f79" Jan 31 03:51:04 crc kubenswrapper[4827]: E0131 03:51:04.779206 4827 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7e7aaf694058c3d4697231686fea58a88f974be85b42c9d4c4d92c21a7da3f79\": container with ID starting with 7e7aaf694058c3d4697231686fea58a88f974be85b42c9d4c4d92c21a7da3f79 not found: ID does not exist" containerID="7e7aaf694058c3d4697231686fea58a88f974be85b42c9d4c4d92c21a7da3f79" Jan 31 03:51:04 crc kubenswrapper[4827]: I0131 03:51:04.779247 4827 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7e7aaf694058c3d4697231686fea58a88f974be85b42c9d4c4d92c21a7da3f79"} err="failed to get container status \"7e7aaf694058c3d4697231686fea58a88f974be85b42c9d4c4d92c21a7da3f79\": rpc error: code = NotFound desc = could not find container 
\"7e7aaf694058c3d4697231686fea58a88f974be85b42c9d4c4d92c21a7da3f79\": container with ID starting with 7e7aaf694058c3d4697231686fea58a88f974be85b42c9d4c4d92c21a7da3f79 not found: ID does not exist" Jan 31 03:51:06 crc kubenswrapper[4827]: I0131 03:51:06.748082 4827 kubelet.go:1914] "Deleted mirror pod because it is outdated" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 31 03:51:06 crc kubenswrapper[4827]: I0131 03:51:06.768934 4827 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="4273172d-ec24-4540-85cb-efc58aed3421" Jan 31 03:51:06 crc kubenswrapper[4827]: I0131 03:51:06.768966 4827 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="4273172d-ec24-4540-85cb-efc58aed3421" Jan 31 03:51:06 crc kubenswrapper[4827]: I0131 03:51:06.773736 4827 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 31 03:51:06 crc kubenswrapper[4827]: E0131 03:51:06.958642 4827 reflector.go:158] "Unhandled Error" err="object-\"openshift-authentication\"/\"v4-0-config-user-idp-0-file-data\": Failed to watch *v1.Secret: unknown (get secrets)" logger="UnhandledError" Jan 31 03:51:06 crc kubenswrapper[4827]: E0131 03:51:06.989817 4827 reflector.go:158] "Unhandled Error" err="object-\"openshift-authentication\"/\"oauth-openshift-dockercfg-znhcc\": Failed to watch *v1.Secret: unknown (get secrets)" logger="UnhandledError" Jan 31 03:51:07 crc kubenswrapper[4827]: I0131 03:51:07.773320 4827 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="4273172d-ec24-4540-85cb-efc58aed3421" Jan 31 03:51:07 crc kubenswrapper[4827]: I0131 03:51:07.773352 4827 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="4273172d-ec24-4540-85cb-efc58aed3421" Jan 31 03:51:08 crc kubenswrapper[4827]: I0131 03:51:08.131997 4827 status_manager.go:861] 
"Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="71bb4a3aecc4ba5b26c4b7318770ce13" podUID="a153342e-baaa-4f79-98d8-9ad55ea6ef08" Jan 31 03:51:08 crc kubenswrapper[4827]: I0131 03:51:08.823870 4827 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/kube-controller-manager namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" start-of-body= Jan 31 03:51:08 crc kubenswrapper[4827]: I0131 03:51:08.824028 4827 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="kube-controller-manager" probeResult="failure" output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" Jan 31 03:51:17 crc kubenswrapper[4827]: I0131 03:51:17.078867 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mco-proxy-tls" Jan 31 03:51:17 crc kubenswrapper[4827]: I0131 03:51:17.242078 4827 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt" Jan 31 03:51:17 crc kubenswrapper[4827]: I0131 03:51:17.414282 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-certs-default" Jan 31 03:51:17 crc kubenswrapper[4827]: I0131 03:51:17.561544 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"encryption-config-1" Jan 31 03:51:17 crc kubenswrapper[4827]: I0131 03:51:17.588610 4827 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160 Jan 31 03:51:17 crc kubenswrapper[4827]: I0131 03:51:17.862358 4827 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-image-registry"/"node-ca-dockercfg-4777p" Jan 31 03:51:17 crc kubenswrapper[4827]: I0131 03:51:17.890126 4827 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt" Jan 31 03:51:17 crc kubenswrapper[4827]: I0131 03:51:17.956934 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"kube-storage-version-migrator-operator-dockercfg-2bh8d" Jan 31 03:51:18 crc kubenswrapper[4827]: I0131 03:51:18.009398 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"cluster-samples-operator-dockercfg-xpp9w" Jan 31 03:51:18 crc kubenswrapper[4827]: I0131 03:51:18.117518 4827 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"console-config" Jan 31 03:51:18 crc kubenswrapper[4827]: I0131 03:51:18.168088 4827 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config" Jan 31 03:51:18 crc kubenswrapper[4827]: I0131 03:51:18.264295 4827 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" Jan 31 03:51:18 crc kubenswrapper[4827]: I0131 03:51:18.307773 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-sa-dockercfg-d427c" Jan 31 03:51:18 crc kubenswrapper[4827]: I0131 03:51:18.464377 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-default-metrics-tls" Jan 31 03:51:18 crc kubenswrapper[4827]: I0131 03:51:18.609688 4827 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"kube-root-ca.crt" Jan 31 03:51:18 crc kubenswrapper[4827]: I0131 03:51:18.827042 4827 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" 
pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 31 03:51:18 crc kubenswrapper[4827]: I0131 03:51:18.832977 4827 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 31 03:51:18 crc kubenswrapper[4827]: I0131 03:51:18.871044 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-dockercfg-k9rxt" Jan 31 03:51:18 crc kubenswrapper[4827]: I0131 03:51:18.920400 4827 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config" Jan 31 03:51:18 crc kubenswrapper[4827]: I0131 03:51:18.933969 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-dockercfg-mfbb7" Jan 31 03:51:19 crc kubenswrapper[4827]: I0131 03:51:19.032949 4827 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"kube-root-ca.crt" Jan 31 03:51:19 crc kubenswrapper[4827]: I0131 03:51:19.114862 4827 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Jan 31 03:51:19 crc kubenswrapper[4827]: I0131 03:51:19.155802 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"openshift-config-operator-dockercfg-7pc5z" Jan 31 03:51:19 crc kubenswrapper[4827]: I0131 03:51:19.251227 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"registry-dockercfg-kzzsd" Jan 31 03:51:19 crc kubenswrapper[4827]: I0131 03:51:19.321333 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-dockercfg-gkqpw" Jan 31 03:51:19 crc kubenswrapper[4827]: I0131 03:51:19.397188 4827 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"oauth-serving-cert" 
Jan 31 03:51:19 crc kubenswrapper[4827]: I0131 03:51:19.637762 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-oauth-config" Jan 31 03:51:19 crc kubenswrapper[4827]: I0131 03:51:19.652972 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert" Jan 31 03:51:19 crc kubenswrapper[4827]: I0131 03:51:19.668319 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"samples-operator-tls" Jan 31 03:51:19 crc kubenswrapper[4827]: I0131 03:51:19.680109 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-dockercfg-f62pw" Jan 31 03:51:19 crc kubenswrapper[4827]: I0131 03:51:19.765766 4827 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"openshift-service-ca.crt" Jan 31 03:51:19 crc kubenswrapper[4827]: I0131 03:51:19.874180 4827 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Jan 31 03:51:20 crc kubenswrapper[4827]: I0131 03:51:20.019785 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"service-ca-dockercfg-pn86c" Jan 31 03:51:20 crc kubenswrapper[4827]: I0131 03:51:20.102809 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"metrics-tls" Jan 31 03:51:20 crc kubenswrapper[4827]: I0131 03:51:20.147528 4827 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt" Jan 31 03:51:20 crc kubenswrapper[4827]: I0131 03:51:20.158648 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-dockercfg-xtcjv" Jan 31 03:51:20 crc kubenswrapper[4827]: I0131 03:51:20.162283 4827 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-console-operator"/"console-operator-dockercfg-4xjcr" Jan 31 03:51:20 crc kubenswrapper[4827]: I0131 03:51:20.180693 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-control-plane-dockercfg-gs7dd" Jan 31 03:51:20 crc kubenswrapper[4827]: I0131 03:51:20.266729 4827 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt" Jan 31 03:51:20 crc kubenswrapper[4827]: I0131 03:51:20.314410 4827 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt" Jan 31 03:51:20 crc kubenswrapper[4827]: I0131 03:51:20.335977 4827 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"service-ca-bundle" Jan 31 03:51:20 crc kubenswrapper[4827]: I0131 03:51:20.379316 4827 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca" Jan 31 03:51:20 crc kubenswrapper[4827]: I0131 03:51:20.402365 4827 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides" Jan 31 03:51:20 crc kubenswrapper[4827]: I0131 03:51:20.420500 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert" Jan 31 03:51:20 crc kubenswrapper[4827]: I0131 03:51:20.534351 4827 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"openshift-service-ca.crt" Jan 31 03:51:20 crc kubenswrapper[4827]: I0131 03:51:20.596083 4827 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt" Jan 31 03:51:20 crc kubenswrapper[4827]: I0131 03:51:20.614038 4827 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt" Jan 31 03:51:20 crc kubenswrapper[4827]: I0131 03:51:20.619242 4827 reflector.go:368] Caches populated for 
*v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt" Jan 31 03:51:20 crc kubenswrapper[4827]: I0131 03:51:20.619281 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert" Jan 31 03:51:20 crc kubenswrapper[4827]: I0131 03:51:20.646546 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"serving-cert" Jan 31 03:51:20 crc kubenswrapper[4827]: I0131 03:51:20.648460 4827 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"kube-root-ca.crt" Jan 31 03:51:20 crc kubenswrapper[4827]: I0131 03:51:20.722582 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client" Jan 31 03:51:20 crc kubenswrapper[4827]: I0131 03:51:20.777802 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client" Jan 31 03:51:20 crc kubenswrapper[4827]: I0131 03:51:20.839428 4827 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt" Jan 31 03:51:20 crc kubenswrapper[4827]: I0131 03:51:20.897702 4827 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt" Jan 31 03:51:20 crc kubenswrapper[4827]: I0131 03:51:20.993420 4827 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config" Jan 31 03:51:21 crc kubenswrapper[4827]: I0131 03:51:21.023429 4827 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"openshift-service-ca.crt" Jan 31 03:51:21 crc kubenswrapper[4827]: I0131 03:51:21.138692 4827 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"authentication-operator-config" Jan 31 03:51:21 crc kubenswrapper[4827]: I0131 03:51:21.143537 4827 
reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Jan 31 03:51:21 crc kubenswrapper[4827]: I0131 03:51:21.251970 4827 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"openshift-service-ca.crt" Jan 31 03:51:21 crc kubenswrapper[4827]: I0131 03:51:21.262665 4827 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt" Jan 31 03:51:21 crc kubenswrapper[4827]: I0131 03:51:21.309653 4827 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt" Jan 31 03:51:21 crc kubenswrapper[4827]: I0131 03:51:21.333050 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert" Jan 31 03:51:21 crc kubenswrapper[4827]: I0131 03:51:21.428724 4827 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt" Jan 31 03:51:21 crc kubenswrapper[4827]: I0131 03:51:21.437037 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-dockercfg-r9srn" Jan 31 03:51:21 crc kubenswrapper[4827]: I0131 03:51:21.444766 4827 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist" Jan 31 03:51:21 crc kubenswrapper[4827]: I0131 03:51:21.555804 4827 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt" Jan 31 03:51:21 crc kubenswrapper[4827]: I0131 03:51:21.561383 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"openshift-apiserver-sa-dockercfg-djjff" Jan 31 03:51:21 crc kubenswrapper[4827]: I0131 03:51:21.710166 4827 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt" Jan 31 
03:51:21 crc kubenswrapper[4827]: I0131 03:51:21.753769 4827 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"openshift-service-ca.crt" Jan 31 03:51:21 crc kubenswrapper[4827]: I0131 03:51:21.768056 4827 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160 Jan 31 03:51:21 crc kubenswrapper[4827]: I0131 03:51:21.775334 4827 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"openshift-service-ca.crt" Jan 31 03:51:21 crc kubenswrapper[4827]: I0131 03:51:21.785532 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-tls" Jan 31 03:51:21 crc kubenswrapper[4827]: I0131 03:51:21.862114 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-metrics-certs-default" Jan 31 03:51:21 crc kubenswrapper[4827]: I0131 03:51:21.893934 4827 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt" Jan 31 03:51:21 crc kubenswrapper[4827]: I0131 03:51:21.946726 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert" Jan 31 03:51:21 crc kubenswrapper[4827]: I0131 03:51:21.955720 4827 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Jan 31 03:51:22 crc kubenswrapper[4827]: I0131 03:51:22.091080 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"serving-cert" Jan 31 03:51:22 crc kubenswrapper[4827]: I0131 03:51:22.184114 4827 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"openshift-service-ca.crt" Jan 31 03:51:22 crc kubenswrapper[4827]: I0131 03:51:22.256498 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert" Jan 31 03:51:22 crc 
kubenswrapper[4827]: I0131 03:51:22.443263 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-dockercfg-qt55r" Jan 31 03:51:22 crc kubenswrapper[4827]: I0131 03:51:22.461444 4827 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"kube-root-ca.crt" Jan 31 03:51:22 crc kubenswrapper[4827]: I0131 03:51:22.570509 4827 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Jan 31 03:51:22 crc kubenswrapper[4827]: I0131 03:51:22.575071 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client" Jan 31 03:51:22 crc kubenswrapper[4827]: I0131 03:51:22.613655 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"installation-pull-secrets" Jan 31 03:51:22 crc kubenswrapper[4827]: I0131 03:51:22.756294 4827 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt" Jan 31 03:51:22 crc kubenswrapper[4827]: I0131 03:51:22.811629 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert" Jan 31 03:51:22 crc kubenswrapper[4827]: I0131 03:51:22.843267 4827 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt" Jan 31 03:51:22 crc kubenswrapper[4827]: I0131 03:51:22.959294 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls" Jan 31 03:51:22 crc kubenswrapper[4827]: I0131 03:51:22.975556 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert" Jan 31 03:51:22 crc kubenswrapper[4827]: I0131 03:51:22.977020 4827 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-ingress"/"router-dockercfg-zdk86" Jan 31 03:51:23 crc kubenswrapper[4827]: I0131 03:51:23.007046 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"oauth-apiserver-sa-dockercfg-6r2bq" Jan 31 03:51:23 crc kubenswrapper[4827]: I0131 03:51:23.120375 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Jan 31 03:51:23 crc kubenswrapper[4827]: I0131 03:51:23.120397 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Jan 31 03:51:23 crc kubenswrapper[4827]: I0131 03:51:23.146676 4827 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt" Jan 31 03:51:23 crc kubenswrapper[4827]: I0131 03:51:23.156817 4827 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt" Jan 31 03:51:23 crc kubenswrapper[4827]: I0131 03:51:23.174568 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"cluster-image-registry-operator-dockercfg-m4qtx" Jan 31 03:51:23 crc kubenswrapper[4827]: I0131 03:51:23.216370 4827 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"machine-config-operator-images" Jan 31 03:51:23 crc kubenswrapper[4827]: I0131 03:51:23.221069 4827 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"image-registry-certificates" Jan 31 03:51:23 crc kubenswrapper[4827]: I0131 03:51:23.366106 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"default-dockercfg-chnjx" Jan 31 03:51:23 crc kubenswrapper[4827]: I0131 03:51:23.405275 4827 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config" Jan 31 03:51:23 crc 
kubenswrapper[4827]: I0131 03:51:23.479410 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret" Jan 31 03:51:23 crc kubenswrapper[4827]: I0131 03:51:23.640446 4827 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt" Jan 31 03:51:23 crc kubenswrapper[4827]: I0131 03:51:23.701821 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-dockercfg-qx5rd" Jan 31 03:51:23 crc kubenswrapper[4827]: I0131 03:51:23.743440 4827 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle" Jan 31 03:51:23 crc kubenswrapper[4827]: I0131 03:51:23.801534 4827 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"machine-api-operator-images" Jan 31 03:51:23 crc kubenswrapper[4827]: I0131 03:51:23.838731 4827 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt" Jan 31 03:51:24 crc kubenswrapper[4827]: I0131 03:51:24.062942 4827 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides" Jan 31 03:51:24 crc kubenswrapper[4827]: I0131 03:51:24.147695 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert" Jan 31 03:51:24 crc kubenswrapper[4827]: I0131 03:51:24.215373 4827 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Jan 31 03:51:24 crc kubenswrapper[4827]: I0131 03:51:24.224721 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret" Jan 31 03:51:24 crc kubenswrapper[4827]: I0131 03:51:24.336354 4827 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle" Jan 31 
03:51:24 crc kubenswrapper[4827]: I0131 03:51:24.338726 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert" Jan 31 03:51:24 crc kubenswrapper[4827]: I0131 03:51:24.372983 4827 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"dns-default" Jan 31 03:51:24 crc kubenswrapper[4827]: I0131 03:51:24.508514 4827 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Jan 31 03:51:24 crc kubenswrapper[4827]: I0131 03:51:24.602847 4827 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"openshift-service-ca.crt" Jan 31 03:51:24 crc kubenswrapper[4827]: I0131 03:51:24.605082 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"canary-serving-cert" Jan 31 03:51:24 crc kubenswrapper[4827]: I0131 03:51:24.605946 4827 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt" Jan 31 03:51:24 crc kubenswrapper[4827]: I0131 03:51:24.666728 4827 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"kube-root-ca.crt" Jan 31 03:51:24 crc kubenswrapper[4827]: I0131 03:51:24.727397 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-nl2j4" Jan 31 03:51:24 crc kubenswrapper[4827]: I0131 03:51:24.736220 4827 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"console-operator-config" Jan 31 03:51:24 crc kubenswrapper[4827]: I0131 03:51:24.754275 4827 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt" Jan 31 03:51:24 crc kubenswrapper[4827]: I0131 03:51:24.764800 4827 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config" Jan 31 03:51:24 crc kubenswrapper[4827]: I0131 03:51:24.792119 4827 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config" Jan 31 03:51:24 crc kubenswrapper[4827]: I0131 03:51:24.996376 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-operator-dockercfg-98p87" Jan 31 03:51:25 crc kubenswrapper[4827]: I0131 03:51:25.088667 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"node-resolver-dockercfg-kz9s7" Jan 31 03:51:25 crc kubenswrapper[4827]: I0131 03:51:25.164323 4827 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca" Jan 31 03:51:25 crc kubenswrapper[4827]: I0131 03:51:25.234102 4827 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" Jan 31 03:51:25 crc kubenswrapper[4827]: I0131 03:51:25.241233 4827 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Jan 31 03:51:25 crc kubenswrapper[4827]: I0131 03:51:25.242590 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Jan 31 03:51:25 crc kubenswrapper[4827]: I0131 03:51:25.258340 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert" Jan 31 03:51:25 crc kubenswrapper[4827]: I0131 03:51:25.464815 4827 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca" Jan 31 03:51:25 crc kubenswrapper[4827]: I0131 03:51:25.468525 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"default-dockercfg-2llfx" Jan 31 03:51:25 
crc kubenswrapper[4827]: I0131 03:51:25.640896 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mcc-proxy-tls" Jan 31 03:51:25 crc kubenswrapper[4827]: I0131 03:51:25.819730 4827 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"kube-root-ca.crt" Jan 31 03:51:25 crc kubenswrapper[4827]: I0131 03:51:25.830714 4827 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"trusted-ca-bundle" Jan 31 03:51:25 crc kubenswrapper[4827]: I0131 03:51:25.850893 4827 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config" Jan 31 03:51:25 crc kubenswrapper[4827]: I0131 03:51:25.896053 4827 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1" Jan 31 03:51:25 crc kubenswrapper[4827]: I0131 03:51:25.932993 4827 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Jan 31 03:51:25 crc kubenswrapper[4827]: I0131 03:51:25.946639 4827 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt" Jan 31 03:51:26 crc kubenswrapper[4827]: I0131 03:51:26.068431 4827 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" Jan 31 03:51:26 crc kubenswrapper[4827]: I0131 03:51:26.113556 4827 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-root-ca.crt" Jan 31 03:51:26 crc kubenswrapper[4827]: I0131 03:51:26.113643 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Jan 31 03:51:26 crc kubenswrapper[4827]: I0131 03:51:26.117752 4827 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-config-operator"/"kube-root-ca.crt" Jan 31 03:51:26 crc kubenswrapper[4827]: I0131 03:51:26.220412 4827 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt" Jan 31 03:51:26 crc kubenswrapper[4827]: I0131 03:51:26.223187 4827 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca" Jan 31 03:51:26 crc kubenswrapper[4827]: I0131 03:51:26.300940 4827 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt" Jan 31 03:51:26 crc kubenswrapper[4827]: I0131 03:51:26.459203 4827 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"kube-root-ca.crt" Jan 31 03:51:26 crc kubenswrapper[4827]: I0131 03:51:26.476035 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-dockercfg-x57mr" Jan 31 03:51:26 crc kubenswrapper[4827]: I0131 03:51:26.495404 4827 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-1" Jan 31 03:51:26 crc kubenswrapper[4827]: I0131 03:51:26.529710 4827 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-console"/"networking-console-plugin" Jan 31 03:51:26 crc kubenswrapper[4827]: I0131 03:51:26.544494 4827 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"trusted-ca" Jan 31 03:51:26 crc kubenswrapper[4827]: I0131 03:51:26.564636 4827 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config" Jan 31 03:51:26 crc kubenswrapper[4827]: I0131 03:51:26.633549 4827 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt" Jan 31 03:51:26 crc kubenswrapper[4827]: I0131 03:51:26.933602 4827 
reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt" Jan 31 03:51:26 crc kubenswrapper[4827]: I0131 03:51:26.935655 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"serving-cert" Jan 31 03:51:26 crc kubenswrapper[4827]: I0131 03:51:26.997170 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert" Jan 31 03:51:27 crc kubenswrapper[4827]: I0131 03:51:27.151179 4827 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt" Jan 31 03:51:27 crc kubenswrapper[4827]: I0131 03:51:27.314730 4827 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt" Jan 31 03:51:27 crc kubenswrapper[4827]: I0131 03:51:27.409335 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert" Jan 31 03:51:27 crc kubenswrapper[4827]: I0131 03:51:27.446383 4827 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle" Jan 31 03:51:27 crc kubenswrapper[4827]: I0131 03:51:27.624388 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ancillary-tools-dockercfg-vnmsz" Jan 31 03:51:27 crc kubenswrapper[4827]: I0131 03:51:27.632316 4827 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config" Jan 31 03:51:27 crc kubenswrapper[4827]: I0131 03:51:27.648008 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-tls" Jan 31 03:51:27 crc kubenswrapper[4827]: I0131 03:51:27.652923 4827 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-network-node-identity"/"kube-root-ca.crt" Jan 31 03:51:27 crc kubenswrapper[4827]: I0131 03:51:27.704056 4827 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config" Jan 31 03:51:27 crc kubenswrapper[4827]: I0131 03:51:27.718025 4827 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm" Jan 31 03:51:27 crc kubenswrapper[4827]: I0131 03:51:27.725656 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics" Jan 31 03:51:27 crc kubenswrapper[4827]: I0131 03:51:27.782760 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert" Jan 31 03:51:27 crc kubenswrapper[4827]: I0131 03:51:27.914587 4827 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160 Jan 31 03:51:28 crc kubenswrapper[4827]: I0131 03:51:28.068073 4827 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Jan 31 03:51:28 crc kubenswrapper[4827]: I0131 03:51:28.090361 4827 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"service-ca" Jan 31 03:51:28 crc kubenswrapper[4827]: I0131 03:51:28.107636 4827 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"kube-root-ca.crt" Jan 31 03:51:28 crc kubenswrapper[4827]: I0131 03:51:28.157917 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-tls" Jan 31 03:51:28 crc kubenswrapper[4827]: I0131 03:51:28.176175 4827 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"signing-cabundle" Jan 31 03:51:28 crc kubenswrapper[4827]: I0131 03:51:28.271381 4827 reflector.go:368] Caches populated for 
*v1.ConfigMap from object-"openshift-console-operator"/"openshift-service-ca.crt" Jan 31 03:51:28 crc kubenswrapper[4827]: I0131 03:51:28.371892 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls" Jan 31 03:51:28 crc kubenswrapper[4827]: I0131 03:51:28.375505 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls" Jan 31 03:51:28 crc kubenswrapper[4827]: I0131 03:51:28.380096 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ac-dockercfg-9lkdf" Jan 31 03:51:28 crc kubenswrapper[4827]: I0131 03:51:28.432867 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Jan 31 03:51:28 crc kubenswrapper[4827]: I0131 03:51:28.489153 4827 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config" Jan 31 03:51:28 crc kubenswrapper[4827]: I0131 03:51:28.516267 4827 reflector.go:368] Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66 Jan 31 03:51:28 crc kubenswrapper[4827]: I0131 03:51:28.528379 4827 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"trusted-ca-bundle" Jan 31 03:51:28 crc kubenswrapper[4827]: I0131 03:51:28.530373 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"authentication-operator-dockercfg-mz9bj" Jan 31 03:51:28 crc kubenswrapper[4827]: I0131 03:51:28.531752 4827 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" podStartSLOduration=44.531722284 podStartE2EDuration="44.531722284s" podCreationTimestamp="2026-01-31 03:50:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" 
observedRunningTime="2026-01-31 03:51:06.828739222 +0000 UTC m=+259.515819661" watchObservedRunningTime="2026-01-31 03:51:28.531722284 +0000 UTC m=+281.218802733" Jan 31 03:51:28 crc kubenswrapper[4827]: I0131 03:51:28.533721 4827 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc","openshift-marketplace/certified-operators-8rcw5","openshift-authentication/oauth-openshift-558db77b4-qtqj4","openshift-marketplace/community-operators-wr7t4"] Jan 31 03:51:28 crc kubenswrapper[4827]: I0131 03:51:28.533805 4827 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-65c4659bd4-hbqmb","openshift-kube-apiserver/kube-apiserver-crc"] Jan 31 03:51:28 crc kubenswrapper[4827]: E0131 03:51:28.534191 4827 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c10be0b3-7f40-4f17-8206-ab6257d4b23b" containerName="oauth-openshift" Jan 31 03:51:28 crc kubenswrapper[4827]: I0131 03:51:28.534212 4827 state_mem.go:107] "Deleted CPUSet assignment" podUID="c10be0b3-7f40-4f17-8206-ab6257d4b23b" containerName="oauth-openshift" Jan 31 03:51:28 crc kubenswrapper[4827]: E0131 03:51:28.534244 4827 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f1f631cd-2800-402c-9fe1-06af2bc620fd" containerName="installer" Jan 31 03:51:28 crc kubenswrapper[4827]: I0131 03:51:28.534253 4827 state_mem.go:107] "Deleted CPUSet assignment" podUID="f1f631cd-2800-402c-9fe1-06af2bc620fd" containerName="installer" Jan 31 03:51:28 crc kubenswrapper[4827]: I0131 03:51:28.534420 4827 memory_manager.go:354] "RemoveStaleState removing state" podUID="f1f631cd-2800-402c-9fe1-06af2bc620fd" containerName="installer" Jan 31 03:51:28 crc kubenswrapper[4827]: I0131 03:51:28.534446 4827 memory_manager.go:354] "RemoveStaleState removing state" podUID="c10be0b3-7f40-4f17-8206-ab6257d4b23b" containerName="oauth-openshift" Jan 31 03:51:28 crc kubenswrapper[4827]: I0131 03:51:28.534808 4827 kubelet.go:1909] "Trying to 
delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="4273172d-ec24-4540-85cb-efc58aed3421" Jan 31 03:51:28 crc kubenswrapper[4827]: I0131 03:51:28.534857 4827 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="4273172d-ec24-4540-85cb-efc58aed3421" Jan 31 03:51:28 crc kubenswrapper[4827]: I0131 03:51:28.535460 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-65c4659bd4-hbqmb" Jan 31 03:51:28 crc kubenswrapper[4827]: I0131 03:51:28.539524 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data" Jan 31 03:51:28 crc kubenswrapper[4827]: I0131 03:51:28.539667 4827 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 31 03:51:28 crc kubenswrapper[4827]: I0131 03:51:28.541399 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-tls" Jan 31 03:51:28 crc kubenswrapper[4827]: I0131 03:51:28.542137 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session" Jan 31 03:51:28 crc kubenswrapper[4827]: I0131 03:51:28.542326 4827 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig" Jan 31 03:51:28 crc kubenswrapper[4827]: I0131 03:51:28.542937 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error" Jan 31 03:51:28 crc kubenswrapper[4827]: I0131 03:51:28.542975 4827 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt" Jan 31 03:51:28 crc kubenswrapper[4827]: I0131 03:51:28.543298 4827 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-authentication"/"audit" Jan 31 03:51:28 crc kubenswrapper[4827]: I0131 03:51:28.543513 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc" Jan 31 03:51:28 crc kubenswrapper[4827]: I0131 03:51:28.545286 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection" Jan 31 03:51:28 crc kubenswrapper[4827]: I0131 03:51:28.545327 4827 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca" Jan 31 03:51:28 crc kubenswrapper[4827]: I0131 03:51:28.545531 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs" Jan 31 03:51:28 crc kubenswrapper[4827]: I0131 03:51:28.545565 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert" Jan 31 03:51:28 crc kubenswrapper[4827]: I0131 03:51:28.546080 4827 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt" Jan 31 03:51:28 crc kubenswrapper[4827]: I0131 03:51:28.562831 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login" Jan 31 03:51:28 crc kubenswrapper[4827]: I0131 03:51:28.568793 4827 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" Jan 31 03:51:28 crc kubenswrapper[4827]: I0131 03:51:28.571914 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template" Jan 31 03:51:28 crc kubenswrapper[4827]: I0131 03:51:28.572937 4827 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-crc" podStartSLOduration=22.572915593 
podStartE2EDuration="22.572915593s" podCreationTimestamp="2026-01-31 03:51:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 03:51:28.570710651 +0000 UTC m=+281.257791150" watchObservedRunningTime="2026-01-31 03:51:28.572915593 +0000 UTC m=+281.259996042" Jan 31 03:51:28 crc kubenswrapper[4827]: I0131 03:51:28.587520 4827 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt" Jan 31 03:51:28 crc kubenswrapper[4827]: I0131 03:51:28.599094 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls" Jan 31 03:51:28 crc kubenswrapper[4827]: I0131 03:51:28.682352 4827 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"etcd-serving-ca" Jan 31 03:51:28 crc kubenswrapper[4827]: I0131 03:51:28.712718 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/43f71945-8b9c-4ecf-928b-82790e1a920a-v4-0-config-user-template-error\") pod \"oauth-openshift-65c4659bd4-hbqmb\" (UID: \"43f71945-8b9c-4ecf-928b-82790e1a920a\") " pod="openshift-authentication/oauth-openshift-65c4659bd4-hbqmb" Jan 31 03:51:28 crc kubenswrapper[4827]: I0131 03:51:28.712786 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/43f71945-8b9c-4ecf-928b-82790e1a920a-v4-0-config-system-session\") pod \"oauth-openshift-65c4659bd4-hbqmb\" (UID: \"43f71945-8b9c-4ecf-928b-82790e1a920a\") " pod="openshift-authentication/oauth-openshift-65c4659bd4-hbqmb" Jan 31 03:51:28 crc kubenswrapper[4827]: I0131 03:51:28.712830 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: 
\"kubernetes.io/host-path/43f71945-8b9c-4ecf-928b-82790e1a920a-audit-dir\") pod \"oauth-openshift-65c4659bd4-hbqmb\" (UID: \"43f71945-8b9c-4ecf-928b-82790e1a920a\") " pod="openshift-authentication/oauth-openshift-65c4659bd4-hbqmb" Jan 31 03:51:28 crc kubenswrapper[4827]: I0131 03:51:28.712867 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/43f71945-8b9c-4ecf-928b-82790e1a920a-v4-0-config-system-service-ca\") pod \"oauth-openshift-65c4659bd4-hbqmb\" (UID: \"43f71945-8b9c-4ecf-928b-82790e1a920a\") " pod="openshift-authentication/oauth-openshift-65c4659bd4-hbqmb" Jan 31 03:51:28 crc kubenswrapper[4827]: I0131 03:51:28.713133 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/43f71945-8b9c-4ecf-928b-82790e1a920a-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-65c4659bd4-hbqmb\" (UID: \"43f71945-8b9c-4ecf-928b-82790e1a920a\") " pod="openshift-authentication/oauth-openshift-65c4659bd4-hbqmb" Jan 31 03:51:28 crc kubenswrapper[4827]: I0131 03:51:28.713309 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/43f71945-8b9c-4ecf-928b-82790e1a920a-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-65c4659bd4-hbqmb\" (UID: \"43f71945-8b9c-4ecf-928b-82790e1a920a\") " pod="openshift-authentication/oauth-openshift-65c4659bd4-hbqmb" Jan 31 03:51:28 crc kubenswrapper[4827]: I0131 03:51:28.713345 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/43f71945-8b9c-4ecf-928b-82790e1a920a-v4-0-config-system-cliconfig\") pod \"oauth-openshift-65c4659bd4-hbqmb\" 
(UID: \"43f71945-8b9c-4ecf-928b-82790e1a920a\") " pod="openshift-authentication/oauth-openshift-65c4659bd4-hbqmb" Jan 31 03:51:28 crc kubenswrapper[4827]: I0131 03:51:28.713364 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/43f71945-8b9c-4ecf-928b-82790e1a920a-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-65c4659bd4-hbqmb\" (UID: \"43f71945-8b9c-4ecf-928b-82790e1a920a\") " pod="openshift-authentication/oauth-openshift-65c4659bd4-hbqmb" Jan 31 03:51:28 crc kubenswrapper[4827]: I0131 03:51:28.713465 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/43f71945-8b9c-4ecf-928b-82790e1a920a-v4-0-config-system-serving-cert\") pod \"oauth-openshift-65c4659bd4-hbqmb\" (UID: \"43f71945-8b9c-4ecf-928b-82790e1a920a\") " pod="openshift-authentication/oauth-openshift-65c4659bd4-hbqmb" Jan 31 03:51:28 crc kubenswrapper[4827]: I0131 03:51:28.713549 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/43f71945-8b9c-4ecf-928b-82790e1a920a-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-65c4659bd4-hbqmb\" (UID: \"43f71945-8b9c-4ecf-928b-82790e1a920a\") " pod="openshift-authentication/oauth-openshift-65c4659bd4-hbqmb" Jan 31 03:51:28 crc kubenswrapper[4827]: I0131 03:51:28.713601 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/43f71945-8b9c-4ecf-928b-82790e1a920a-v4-0-config-user-template-login\") pod \"oauth-openshift-65c4659bd4-hbqmb\" (UID: \"43f71945-8b9c-4ecf-928b-82790e1a920a\") " pod="openshift-authentication/oauth-openshift-65c4659bd4-hbqmb" Jan 31 03:51:28 
crc kubenswrapper[4827]: I0131 03:51:28.713645 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/43f71945-8b9c-4ecf-928b-82790e1a920a-audit-policies\") pod \"oauth-openshift-65c4659bd4-hbqmb\" (UID: \"43f71945-8b9c-4ecf-928b-82790e1a920a\") " pod="openshift-authentication/oauth-openshift-65c4659bd4-hbqmb" Jan 31 03:51:28 crc kubenswrapper[4827]: I0131 03:51:28.713682 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/43f71945-8b9c-4ecf-928b-82790e1a920a-v4-0-config-system-router-certs\") pod \"oauth-openshift-65c4659bd4-hbqmb\" (UID: \"43f71945-8b9c-4ecf-928b-82790e1a920a\") " pod="openshift-authentication/oauth-openshift-65c4659bd4-hbqmb" Jan 31 03:51:28 crc kubenswrapper[4827]: I0131 03:51:28.713710 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qlc42\" (UniqueName: \"kubernetes.io/projected/43f71945-8b9c-4ecf-928b-82790e1a920a-kube-api-access-qlc42\") pod \"oauth-openshift-65c4659bd4-hbqmb\" (UID: \"43f71945-8b9c-4ecf-928b-82790e1a920a\") " pod="openshift-authentication/oauth-openshift-65c4659bd4-hbqmb" Jan 31 03:51:28 crc kubenswrapper[4827]: I0131 03:51:28.778461 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"service-ca-operator-dockercfg-rg9jl" Jan 31 03:51:28 crc kubenswrapper[4827]: I0131 03:51:28.814957 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/43f71945-8b9c-4ecf-928b-82790e1a920a-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-65c4659bd4-hbqmb\" (UID: \"43f71945-8b9c-4ecf-928b-82790e1a920a\") " pod="openshift-authentication/oauth-openshift-65c4659bd4-hbqmb" Jan 31 03:51:28 
crc kubenswrapper[4827]: I0131 03:51:28.815055 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/43f71945-8b9c-4ecf-928b-82790e1a920a-v4-0-config-user-template-login\") pod \"oauth-openshift-65c4659bd4-hbqmb\" (UID: \"43f71945-8b9c-4ecf-928b-82790e1a920a\") " pod="openshift-authentication/oauth-openshift-65c4659bd4-hbqmb" Jan 31 03:51:28 crc kubenswrapper[4827]: I0131 03:51:28.815113 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/43f71945-8b9c-4ecf-928b-82790e1a920a-audit-policies\") pod \"oauth-openshift-65c4659bd4-hbqmb\" (UID: \"43f71945-8b9c-4ecf-928b-82790e1a920a\") " pod="openshift-authentication/oauth-openshift-65c4659bd4-hbqmb" Jan 31 03:51:28 crc kubenswrapper[4827]: I0131 03:51:28.815152 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/43f71945-8b9c-4ecf-928b-82790e1a920a-v4-0-config-system-router-certs\") pod \"oauth-openshift-65c4659bd4-hbqmb\" (UID: \"43f71945-8b9c-4ecf-928b-82790e1a920a\") " pod="openshift-authentication/oauth-openshift-65c4659bd4-hbqmb" Jan 31 03:51:28 crc kubenswrapper[4827]: I0131 03:51:28.815189 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qlc42\" (UniqueName: \"kubernetes.io/projected/43f71945-8b9c-4ecf-928b-82790e1a920a-kube-api-access-qlc42\") pod \"oauth-openshift-65c4659bd4-hbqmb\" (UID: \"43f71945-8b9c-4ecf-928b-82790e1a920a\") " pod="openshift-authentication/oauth-openshift-65c4659bd4-hbqmb" Jan 31 03:51:28 crc kubenswrapper[4827]: I0131 03:51:28.815239 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/43f71945-8b9c-4ecf-928b-82790e1a920a-v4-0-config-user-template-error\") pod 
\"oauth-openshift-65c4659bd4-hbqmb\" (UID: \"43f71945-8b9c-4ecf-928b-82790e1a920a\") " pod="openshift-authentication/oauth-openshift-65c4659bd4-hbqmb" Jan 31 03:51:28 crc kubenswrapper[4827]: I0131 03:51:28.815278 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/43f71945-8b9c-4ecf-928b-82790e1a920a-v4-0-config-system-session\") pod \"oauth-openshift-65c4659bd4-hbqmb\" (UID: \"43f71945-8b9c-4ecf-928b-82790e1a920a\") " pod="openshift-authentication/oauth-openshift-65c4659bd4-hbqmb" Jan 31 03:51:28 crc kubenswrapper[4827]: I0131 03:51:28.815319 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/43f71945-8b9c-4ecf-928b-82790e1a920a-audit-dir\") pod \"oauth-openshift-65c4659bd4-hbqmb\" (UID: \"43f71945-8b9c-4ecf-928b-82790e1a920a\") " pod="openshift-authentication/oauth-openshift-65c4659bd4-hbqmb" Jan 31 03:51:28 crc kubenswrapper[4827]: I0131 03:51:28.815350 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/43f71945-8b9c-4ecf-928b-82790e1a920a-v4-0-config-system-service-ca\") pod \"oauth-openshift-65c4659bd4-hbqmb\" (UID: \"43f71945-8b9c-4ecf-928b-82790e1a920a\") " pod="openshift-authentication/oauth-openshift-65c4659bd4-hbqmb" Jan 31 03:51:28 crc kubenswrapper[4827]: I0131 03:51:28.815404 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/43f71945-8b9c-4ecf-928b-82790e1a920a-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-65c4659bd4-hbqmb\" (UID: \"43f71945-8b9c-4ecf-928b-82790e1a920a\") " pod="openshift-authentication/oauth-openshift-65c4659bd4-hbqmb" Jan 31 03:51:28 crc kubenswrapper[4827]: I0131 03:51:28.815451 4827 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/43f71945-8b9c-4ecf-928b-82790e1a920a-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-65c4659bd4-hbqmb\" (UID: \"43f71945-8b9c-4ecf-928b-82790e1a920a\") " pod="openshift-authentication/oauth-openshift-65c4659bd4-hbqmb" Jan 31 03:51:28 crc kubenswrapper[4827]: I0131 03:51:28.815485 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/43f71945-8b9c-4ecf-928b-82790e1a920a-v4-0-config-system-cliconfig\") pod \"oauth-openshift-65c4659bd4-hbqmb\" (UID: \"43f71945-8b9c-4ecf-928b-82790e1a920a\") " pod="openshift-authentication/oauth-openshift-65c4659bd4-hbqmb" Jan 31 03:51:28 crc kubenswrapper[4827]: I0131 03:51:28.815546 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/43f71945-8b9c-4ecf-928b-82790e1a920a-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-65c4659bd4-hbqmb\" (UID: \"43f71945-8b9c-4ecf-928b-82790e1a920a\") " pod="openshift-authentication/oauth-openshift-65c4659bd4-hbqmb" Jan 31 03:51:28 crc kubenswrapper[4827]: I0131 03:51:28.815621 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/43f71945-8b9c-4ecf-928b-82790e1a920a-v4-0-config-system-serving-cert\") pod \"oauth-openshift-65c4659bd4-hbqmb\" (UID: \"43f71945-8b9c-4ecf-928b-82790e1a920a\") " pod="openshift-authentication/oauth-openshift-65c4659bd4-hbqmb" Jan 31 03:51:28 crc kubenswrapper[4827]: I0131 03:51:28.815770 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/43f71945-8b9c-4ecf-928b-82790e1a920a-audit-dir\") pod \"oauth-openshift-65c4659bd4-hbqmb\" (UID: 
\"43f71945-8b9c-4ecf-928b-82790e1a920a\") " pod="openshift-authentication/oauth-openshift-65c4659bd4-hbqmb" Jan 31 03:51:28 crc kubenswrapper[4827]: I0131 03:51:28.817490 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/43f71945-8b9c-4ecf-928b-82790e1a920a-v4-0-config-system-cliconfig\") pod \"oauth-openshift-65c4659bd4-hbqmb\" (UID: \"43f71945-8b9c-4ecf-928b-82790e1a920a\") " pod="openshift-authentication/oauth-openshift-65c4659bd4-hbqmb" Jan 31 03:51:28 crc kubenswrapper[4827]: I0131 03:51:28.818095 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/43f71945-8b9c-4ecf-928b-82790e1a920a-audit-policies\") pod \"oauth-openshift-65c4659bd4-hbqmb\" (UID: \"43f71945-8b9c-4ecf-928b-82790e1a920a\") " pod="openshift-authentication/oauth-openshift-65c4659bd4-hbqmb" Jan 31 03:51:28 crc kubenswrapper[4827]: I0131 03:51:28.818419 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/43f71945-8b9c-4ecf-928b-82790e1a920a-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-65c4659bd4-hbqmb\" (UID: \"43f71945-8b9c-4ecf-928b-82790e1a920a\") " pod="openshift-authentication/oauth-openshift-65c4659bd4-hbqmb" Jan 31 03:51:28 crc kubenswrapper[4827]: I0131 03:51:28.824481 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/43f71945-8b9c-4ecf-928b-82790e1a920a-v4-0-config-system-service-ca\") pod \"oauth-openshift-65c4659bd4-hbqmb\" (UID: \"43f71945-8b9c-4ecf-928b-82790e1a920a\") " pod="openshift-authentication/oauth-openshift-65c4659bd4-hbqmb" Jan 31 03:51:28 crc kubenswrapper[4827]: I0131 03:51:28.827575 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/43f71945-8b9c-4ecf-928b-82790e1a920a-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-65c4659bd4-hbqmb\" (UID: \"43f71945-8b9c-4ecf-928b-82790e1a920a\") " pod="openshift-authentication/oauth-openshift-65c4659bd4-hbqmb" Jan 31 03:51:28 crc kubenswrapper[4827]: I0131 03:51:28.832211 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/43f71945-8b9c-4ecf-928b-82790e1a920a-v4-0-config-system-router-certs\") pod \"oauth-openshift-65c4659bd4-hbqmb\" (UID: \"43f71945-8b9c-4ecf-928b-82790e1a920a\") " pod="openshift-authentication/oauth-openshift-65c4659bd4-hbqmb" Jan 31 03:51:28 crc kubenswrapper[4827]: I0131 03:51:28.832860 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/43f71945-8b9c-4ecf-928b-82790e1a920a-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-65c4659bd4-hbqmb\" (UID: \"43f71945-8b9c-4ecf-928b-82790e1a920a\") " pod="openshift-authentication/oauth-openshift-65c4659bd4-hbqmb" Jan 31 03:51:28 crc kubenswrapper[4827]: I0131 03:51:28.833140 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/43f71945-8b9c-4ecf-928b-82790e1a920a-v4-0-config-system-serving-cert\") pod \"oauth-openshift-65c4659bd4-hbqmb\" (UID: \"43f71945-8b9c-4ecf-928b-82790e1a920a\") " pod="openshift-authentication/oauth-openshift-65c4659bd4-hbqmb" Jan 31 03:51:28 crc kubenswrapper[4827]: I0131 03:51:28.833450 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/43f71945-8b9c-4ecf-928b-82790e1a920a-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-65c4659bd4-hbqmb\" (UID: 
\"43f71945-8b9c-4ecf-928b-82790e1a920a\") " pod="openshift-authentication/oauth-openshift-65c4659bd4-hbqmb" Jan 31 03:51:28 crc kubenswrapper[4827]: I0131 03:51:28.834600 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/43f71945-8b9c-4ecf-928b-82790e1a920a-v4-0-config-user-template-login\") pod \"oauth-openshift-65c4659bd4-hbqmb\" (UID: \"43f71945-8b9c-4ecf-928b-82790e1a920a\") " pod="openshift-authentication/oauth-openshift-65c4659bd4-hbqmb" Jan 31 03:51:28 crc kubenswrapper[4827]: I0131 03:51:28.838163 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/43f71945-8b9c-4ecf-928b-82790e1a920a-v4-0-config-system-session\") pod \"oauth-openshift-65c4659bd4-hbqmb\" (UID: \"43f71945-8b9c-4ecf-928b-82790e1a920a\") " pod="openshift-authentication/oauth-openshift-65c4659bd4-hbqmb" Jan 31 03:51:28 crc kubenswrapper[4827]: I0131 03:51:28.841384 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/43f71945-8b9c-4ecf-928b-82790e1a920a-v4-0-config-user-template-error\") pod \"oauth-openshift-65c4659bd4-hbqmb\" (UID: \"43f71945-8b9c-4ecf-928b-82790e1a920a\") " pod="openshift-authentication/oauth-openshift-65c4659bd4-hbqmb" Jan 31 03:51:28 crc kubenswrapper[4827]: I0131 03:51:28.842070 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-dockercfg-5nsgg" Jan 31 03:51:28 crc kubenswrapper[4827]: I0131 03:51:28.843294 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qlc42\" (UniqueName: \"kubernetes.io/projected/43f71945-8b9c-4ecf-928b-82790e1a920a-kube-api-access-qlc42\") pod \"oauth-openshift-65c4659bd4-hbqmb\" (UID: \"43f71945-8b9c-4ecf-928b-82790e1a920a\") " 
pod="openshift-authentication/oauth-openshift-65c4659bd4-hbqmb" Jan 31 03:51:28 crc kubenswrapper[4827]: I0131 03:51:28.867727 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-65c4659bd4-hbqmb" Jan 31 03:51:29 crc kubenswrapper[4827]: I0131 03:51:29.072429 4827 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt" Jan 31 03:51:29 crc kubenswrapper[4827]: I0131 03:51:29.085049 4827 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Jan 31 03:51:29 crc kubenswrapper[4827]: I0131 03:51:29.085416 4827 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" containerID="cri-o://88640011fd65502204683d77938b8930922541ccaacc8cbd7bc3e72ebcc58ee0" gracePeriod=5 Jan 31 03:51:29 crc kubenswrapper[4827]: I0131 03:51:29.108913 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-controller-dockercfg-c2lfx" Jan 31 03:51:29 crc kubenswrapper[4827]: I0131 03:51:29.178682 4827 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt" Jan 31 03:51:29 crc kubenswrapper[4827]: I0131 03:51:29.186866 4827 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy" Jan 31 03:51:29 crc kubenswrapper[4827]: I0131 03:51:29.201152 4827 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"trusted-ca-bundle" Jan 31 03:51:29 crc kubenswrapper[4827]: I0131 03:51:29.252250 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert" Jan 31 03:51:29 
crc kubenswrapper[4827]: I0131 03:51:29.253926 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-dockercfg-vw8fw" Jan 31 03:51:29 crc kubenswrapper[4827]: I0131 03:51:29.321387 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serviceaccount-dockercfg-rq7zk" Jan 31 03:51:29 crc kubenswrapper[4827]: I0131 03:51:29.367135 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-65c4659bd4-hbqmb"] Jan 31 03:51:29 crc kubenswrapper[4827]: I0131 03:51:29.406282 4827 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt" Jan 31 03:51:29 crc kubenswrapper[4827]: I0131 03:51:29.410679 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert" Jan 31 03:51:29 crc kubenswrapper[4827]: I0131 03:51:29.434556 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator"/"kube-storage-version-migrator-sa-dockercfg-5xfcg" Jan 31 03:51:29 crc kubenswrapper[4827]: I0131 03:51:29.503744 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-serving-cert" Jan 31 03:51:29 crc kubenswrapper[4827]: I0131 03:51:29.504358 4827 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt" Jan 31 03:51:29 crc kubenswrapper[4827]: I0131 03:51:29.599253 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"default-dockercfg-gxtc4" Jan 31 03:51:29 crc kubenswrapper[4827]: I0131 03:51:29.787326 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-stats-default" Jan 31 03:51:29 crc kubenswrapper[4827]: I0131 03:51:29.804588 
4827 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources" Jan 31 03:51:29 crc kubenswrapper[4827]: I0131 03:51:29.821034 4827 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt" Jan 31 03:51:29 crc kubenswrapper[4827]: I0131 03:51:29.876000 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls" Jan 31 03:51:29 crc kubenswrapper[4827]: I0131 03:51:29.913715 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-65c4659bd4-hbqmb" event={"ID":"43f71945-8b9c-4ecf-928b-82790e1a920a","Type":"ContainerStarted","Data":"c656f9f33b74ab3b2d59325445ea5597a8f21d435744af09cada04dcfe5ce934"} Jan 31 03:51:29 crc kubenswrapper[4827]: I0131 03:51:29.913762 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-65c4659bd4-hbqmb" event={"ID":"43f71945-8b9c-4ecf-928b-82790e1a920a","Type":"ContainerStarted","Data":"994012adcfed522173643db723acd3104a1e7d6bb6230b1ee8223de786890b64"} Jan 31 03:51:29 crc kubenswrapper[4827]: I0131 03:51:29.914784 4827 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-65c4659bd4-hbqmb" Jan 31 03:51:29 crc kubenswrapper[4827]: I0131 03:51:29.937229 4827 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-65c4659bd4-hbqmb" podStartSLOduration=51.937207953 podStartE2EDuration="51.937207953s" podCreationTimestamp="2026-01-31 03:50:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 03:51:29.934215655 +0000 UTC m=+282.621296114" watchObservedRunningTime="2026-01-31 03:51:29.937207953 +0000 UTC m=+282.624288402" Jan 31 03:51:29 crc kubenswrapper[4827]: I0131 03:51:29.998057 4827 
reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert" Jan 31 03:51:30 crc kubenswrapper[4827]: I0131 03:51:30.076832 4827 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca" Jan 31 03:51:30 crc kubenswrapper[4827]: I0131 03:51:30.143665 4827 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="062f8208-e13f-439f-bb1f-13b9c91c5ea3" path="/var/lib/kubelet/pods/062f8208-e13f-439f-bb1f-13b9c91c5ea3/volumes" Jan 31 03:51:30 crc kubenswrapper[4827]: I0131 03:51:30.144501 4827 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="273683f4-0b94-44d7-83a2-b540f4d5d81d" path="/var/lib/kubelet/pods/273683f4-0b94-44d7-83a2-b540f4d5d81d/volumes" Jan 31 03:51:30 crc kubenswrapper[4827]: I0131 03:51:30.145148 4827 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c10be0b3-7f40-4f17-8206-ab6257d4b23b" path="/var/lib/kubelet/pods/c10be0b3-7f40-4f17-8206-ab6257d4b23b/volumes" Jan 31 03:51:30 crc kubenswrapper[4827]: I0131 03:51:30.186634 4827 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config" Jan 31 03:51:30 crc kubenswrapper[4827]: I0131 03:51:30.221349 4827 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt" Jan 31 03:51:30 crc kubenswrapper[4827]: I0131 03:51:30.236606 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Jan 31 03:51:30 crc kubenswrapper[4827]: I0131 03:51:30.256559 4827 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-65c4659bd4-hbqmb" Jan 31 03:51:30 crc kubenswrapper[4827]: I0131 03:51:30.285281 4827 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt" 
Jan 31 03:51:30 crc kubenswrapper[4827]: I0131 03:51:30.295250 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"pprof-cert" Jan 31 03:51:30 crc kubenswrapper[4827]: I0131 03:51:30.380875 4827 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt" Jan 31 03:51:30 crc kubenswrapper[4827]: I0131 03:51:30.460079 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Jan 31 03:51:30 crc kubenswrapper[4827]: I0131 03:51:30.590151 4827 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" Jan 31 03:51:30 crc kubenswrapper[4827]: I0131 03:51:30.717947 4827 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt" Jan 31 03:51:30 crc kubenswrapper[4827]: I0131 03:51:30.840679 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-node-dockercfg-pwtwl" Jan 31 03:51:30 crc kubenswrapper[4827]: I0131 03:51:30.892790 4827 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160 Jan 31 03:51:31 crc kubenswrapper[4827]: I0131 03:51:31.004089 4827 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy" Jan 31 03:51:31 crc kubenswrapper[4827]: I0131 03:51:31.026401 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"dns-operator-dockercfg-9mqw5" Jan 31 03:51:31 crc kubenswrapper[4827]: I0131 03:51:31.071249 4827 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle" Jan 31 03:51:31 crc kubenswrapper[4827]: I0131 03:51:31.083574 4827 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-authentication-operator"/"openshift-service-ca.crt" Jan 31 03:51:31 crc kubenswrapper[4827]: I0131 03:51:31.096126 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-console"/"networking-console-plugin-cert" Jan 31 03:51:31 crc kubenswrapper[4827]: I0131 03:51:31.134596 4827 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy" Jan 31 03:51:31 crc kubenswrapper[4827]: I0131 03:51:31.219384 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-dockercfg-jwfmh" Jan 31 03:51:31 crc kubenswrapper[4827]: I0131 03:51:31.416161 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"signing-key" Jan 31 03:51:31 crc kubenswrapper[4827]: I0131 03:51:31.525443 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"default-dockercfg-2q5b6" Jan 31 03:51:31 crc kubenswrapper[4827]: I0131 03:51:31.620919 4827 reflector.go:368] Caches populated for *v1.Secret from object-"hostpath-provisioner"/"csi-hostpath-provisioner-sa-dockercfg-qd74k" Jan 31 03:51:31 crc kubenswrapper[4827]: I0131 03:51:31.624630 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert" Jan 31 03:51:31 crc kubenswrapper[4827]: I0131 03:51:31.693352 4827 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt" Jan 31 03:51:32 crc kubenswrapper[4827]: I0131 03:51:32.038327 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"ingress-operator-dockercfg-7lnqk" Jan 31 03:51:32 crc kubenswrapper[4827]: I0131 03:51:32.159990 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-1" Jan 31 03:51:32 crc kubenswrapper[4827]: I0131 03:51:32.161642 4827 
reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script" Jan 31 03:51:32 crc kubenswrapper[4827]: I0131 03:51:32.304435 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-r5tcq" Jan 31 03:51:32 crc kubenswrapper[4827]: I0131 03:51:32.768388 4827 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib" Jan 31 03:51:33 crc kubenswrapper[4827]: I0131 03:51:33.733255 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"node-bootstrapper-token" Jan 31 03:51:34 crc kubenswrapper[4827]: I0131 03:51:34.699072 4827 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f85e55b1a89d02b0cb034b1ea31ed45a/startup-monitor/0.log" Jan 31 03:51:34 crc kubenswrapper[4827]: I0131 03:51:34.699924 4827 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 31 03:51:34 crc kubenswrapper[4827]: I0131 03:51:34.823198 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Jan 31 03:51:34 crc kubenswrapper[4827]: I0131 03:51:34.823352 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log" (OuterVolumeSpecName: "var-log") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "var-log". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 31 03:51:34 crc kubenswrapper[4827]: I0131 03:51:34.823382 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Jan 31 03:51:34 crc kubenswrapper[4827]: I0131 03:51:34.823468 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock" (OuterVolumeSpecName: "var-lock") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 31 03:51:34 crc kubenswrapper[4827]: I0131 03:51:34.823505 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Jan 31 03:51:34 crc kubenswrapper[4827]: I0131 03:51:34.823545 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests" (OuterVolumeSpecName: "manifests") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "manifests". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 31 03:51:34 crc kubenswrapper[4827]: I0131 03:51:34.823660 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Jan 31 03:51:34 crc kubenswrapper[4827]: I0131 03:51:34.823706 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Jan 31 03:51:34 crc kubenswrapper[4827]: I0131 03:51:34.823775 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "resource-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 31 03:51:34 crc kubenswrapper[4827]: I0131 03:51:34.824472 4827 reconciler_common.go:293] "Volume detached for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") on node \"crc\" DevicePath \"\"" Jan 31 03:51:34 crc kubenswrapper[4827]: I0131 03:51:34.824497 4827 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") on node \"crc\" DevicePath \"\"" Jan 31 03:51:34 crc kubenswrapper[4827]: I0131 03:51:34.824515 4827 reconciler_common.go:293] "Volume detached for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") on node \"crc\" DevicePath \"\"" Jan 31 03:51:34 crc kubenswrapper[4827]: I0131 03:51:34.824533 4827 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") on node \"crc\" DevicePath \"\"" Jan 31 03:51:34 crc kubenswrapper[4827]: I0131 03:51:34.837809 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir" (OuterVolumeSpecName: "pod-resource-dir") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "pod-resource-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 31 03:51:34 crc kubenswrapper[4827]: I0131 03:51:34.926212 4827 reconciler_common.go:293] "Volume detached for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") on node \"crc\" DevicePath \"\"" Jan 31 03:51:34 crc kubenswrapper[4827]: I0131 03:51:34.951343 4827 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f85e55b1a89d02b0cb034b1ea31ed45a/startup-monitor/0.log" Jan 31 03:51:34 crc kubenswrapper[4827]: I0131 03:51:34.951405 4827 generic.go:334] "Generic (PLEG): container finished" podID="f85e55b1a89d02b0cb034b1ea31ed45a" containerID="88640011fd65502204683d77938b8930922541ccaacc8cbd7bc3e72ebcc58ee0" exitCode=137 Jan 31 03:51:34 crc kubenswrapper[4827]: I0131 03:51:34.951455 4827 scope.go:117] "RemoveContainer" containerID="88640011fd65502204683d77938b8930922541ccaacc8cbd7bc3e72ebcc58ee0" Jan 31 03:51:34 crc kubenswrapper[4827]: I0131 03:51:34.951587 4827 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 31 03:51:34 crc kubenswrapper[4827]: I0131 03:51:34.992373 4827 scope.go:117] "RemoveContainer" containerID="88640011fd65502204683d77938b8930922541ccaacc8cbd7bc3e72ebcc58ee0" Jan 31 03:51:34 crc kubenswrapper[4827]: E0131 03:51:34.993251 4827 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"88640011fd65502204683d77938b8930922541ccaacc8cbd7bc3e72ebcc58ee0\": container with ID starting with 88640011fd65502204683d77938b8930922541ccaacc8cbd7bc3e72ebcc58ee0 not found: ID does not exist" containerID="88640011fd65502204683d77938b8930922541ccaacc8cbd7bc3e72ebcc58ee0" Jan 31 03:51:34 crc kubenswrapper[4827]: I0131 03:51:34.993349 4827 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"88640011fd65502204683d77938b8930922541ccaacc8cbd7bc3e72ebcc58ee0"} err="failed to get container status \"88640011fd65502204683d77938b8930922541ccaacc8cbd7bc3e72ebcc58ee0\": rpc error: code = NotFound desc = could not find container \"88640011fd65502204683d77938b8930922541ccaacc8cbd7bc3e72ebcc58ee0\": container with ID starting with 88640011fd65502204683d77938b8930922541ccaacc8cbd7bc3e72ebcc58ee0 not found: ID does not exist" Jan 31 03:51:36 crc kubenswrapper[4827]: I0131 03:51:36.120151 4827 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" path="/var/lib/kubelet/pods/f85e55b1a89d02b0cb034b1ea31ed45a/volumes" Jan 31 03:51:36 crc kubenswrapper[4827]: I0131 03:51:36.120551 4827 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" podUID="" Jan 31 03:51:36 crc kubenswrapper[4827]: I0131 03:51:36.132790 4827 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Jan 31 03:51:36 crc kubenswrapper[4827]: I0131 
03:51:36.132820 4827 kubelet.go:2649] "Unable to find pod for mirror pod, skipping" mirrorPod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" mirrorPodUID="79d0d9ab-18c8-43b0-93f4-356acadfdef5" Jan 31 03:51:36 crc kubenswrapper[4827]: I0131 03:51:36.137330 4827 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Jan 31 03:51:36 crc kubenswrapper[4827]: I0131 03:51:36.137377 4827 kubelet.go:2673] "Unable to find pod for mirror pod, skipping" mirrorPod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" mirrorPodUID="79d0d9ab-18c8-43b0-93f4-356acadfdef5" Jan 31 03:51:47 crc kubenswrapper[4827]: I0131 03:51:47.874018 4827 cert_rotation.go:91] certificate rotation detected, shutting down client connections to start using new credentials Jan 31 03:51:50 crc kubenswrapper[4827]: I0131 03:51:50.059522 4827 generic.go:334] "Generic (PLEG): container finished" podID="cc0facf8-c192-4df4-bb9b-68f123fd7b21" containerID="e293ec72b63f1393cfb46546ee0e94f82827ea624e218e5e1cdf8e73c91516d7" exitCode=0 Jan 31 03:51:50 crc kubenswrapper[4827]: I0131 03:51:50.059612 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-lrw5m" event={"ID":"cc0facf8-c192-4df4-bb9b-68f123fd7b21","Type":"ContainerDied","Data":"e293ec72b63f1393cfb46546ee0e94f82827ea624e218e5e1cdf8e73c91516d7"} Jan 31 03:51:50 crc kubenswrapper[4827]: I0131 03:51:50.060801 4827 scope.go:117] "RemoveContainer" containerID="e293ec72b63f1393cfb46546ee0e94f82827ea624e218e5e1cdf8e73c91516d7" Jan 31 03:51:50 crc kubenswrapper[4827]: I0131 03:51:50.571248 4827 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-marketplace/marketplace-operator-79b997595-lrw5m" Jan 31 03:51:50 crc kubenswrapper[4827]: I0131 03:51:50.571547 4827 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openshift-marketplace/marketplace-operator-79b997595-lrw5m" Jan 31 03:51:51 crc kubenswrapper[4827]: I0131 03:51:51.068960 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-lrw5m" event={"ID":"cc0facf8-c192-4df4-bb9b-68f123fd7b21","Type":"ContainerStarted","Data":"d740f1a78f7ff99bed83300155a58db8babf7a532fed3fc5bd7a9d2d7a117f0b"} Jan 31 03:51:51 crc kubenswrapper[4827]: I0131 03:51:51.069424 4827 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-lrw5m" Jan 31 03:51:51 crc kubenswrapper[4827]: I0131 03:51:51.073254 4827 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-lrw5m" Jan 31 03:52:35 crc kubenswrapper[4827]: I0131 03:52:35.166378 4827 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-r4gn2"] Jan 31 03:52:35 crc kubenswrapper[4827]: I0131 03:52:35.167451 4827 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-r4gn2" podUID="94e2d804-29e9-4233-adda-45072b493f0f" containerName="registry-server" containerID="cri-o://0afc7bfb70922bc887e5ff833e67551cdd2857a274039c09cc505c582132cb2e" gracePeriod=2 Jan 31 03:52:35 crc kubenswrapper[4827]: I0131 03:52:35.394737 4827 generic.go:334] "Generic (PLEG): container finished" podID="94e2d804-29e9-4233-adda-45072b493f0f" containerID="0afc7bfb70922bc887e5ff833e67551cdd2857a274039c09cc505c582132cb2e" exitCode=0 Jan 31 03:52:35 crc kubenswrapper[4827]: I0131 03:52:35.395011 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-r4gn2" event={"ID":"94e2d804-29e9-4233-adda-45072b493f0f","Type":"ContainerDied","Data":"0afc7bfb70922bc887e5ff833e67551cdd2857a274039c09cc505c582132cb2e"} Jan 31 03:52:35 crc kubenswrapper[4827]: I0131 03:52:35.593092 4827 util.go:48] 
"No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-r4gn2" Jan 31 03:52:35 crc kubenswrapper[4827]: I0131 03:52:35.668426 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/94e2d804-29e9-4233-adda-45072b493f0f-utilities\") pod \"94e2d804-29e9-4233-adda-45072b493f0f\" (UID: \"94e2d804-29e9-4233-adda-45072b493f0f\") " Jan 31 03:52:35 crc kubenswrapper[4827]: I0131 03:52:35.668523 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-87x6p\" (UniqueName: \"kubernetes.io/projected/94e2d804-29e9-4233-adda-45072b493f0f-kube-api-access-87x6p\") pod \"94e2d804-29e9-4233-adda-45072b493f0f\" (UID: \"94e2d804-29e9-4233-adda-45072b493f0f\") " Jan 31 03:52:35 crc kubenswrapper[4827]: I0131 03:52:35.668760 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/94e2d804-29e9-4233-adda-45072b493f0f-catalog-content\") pod \"94e2d804-29e9-4233-adda-45072b493f0f\" (UID: \"94e2d804-29e9-4233-adda-45072b493f0f\") " Jan 31 03:52:35 crc kubenswrapper[4827]: I0131 03:52:35.669631 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/94e2d804-29e9-4233-adda-45072b493f0f-utilities" (OuterVolumeSpecName: "utilities") pod "94e2d804-29e9-4233-adda-45072b493f0f" (UID: "94e2d804-29e9-4233-adda-45072b493f0f"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 03:52:35 crc kubenswrapper[4827]: I0131 03:52:35.680967 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/94e2d804-29e9-4233-adda-45072b493f0f-kube-api-access-87x6p" (OuterVolumeSpecName: "kube-api-access-87x6p") pod "94e2d804-29e9-4233-adda-45072b493f0f" (UID: "94e2d804-29e9-4233-adda-45072b493f0f"). 
InnerVolumeSpecName "kube-api-access-87x6p". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 03:52:35 crc kubenswrapper[4827]: I0131 03:52:35.690123 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/94e2d804-29e9-4233-adda-45072b493f0f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "94e2d804-29e9-4233-adda-45072b493f0f" (UID: "94e2d804-29e9-4233-adda-45072b493f0f"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 03:52:35 crc kubenswrapper[4827]: I0131 03:52:35.770411 4827 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/94e2d804-29e9-4233-adda-45072b493f0f-utilities\") on node \"crc\" DevicePath \"\"" Jan 31 03:52:35 crc kubenswrapper[4827]: I0131 03:52:35.770451 4827 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-87x6p\" (UniqueName: \"kubernetes.io/projected/94e2d804-29e9-4233-adda-45072b493f0f-kube-api-access-87x6p\") on node \"crc\" DevicePath \"\"" Jan 31 03:52:35 crc kubenswrapper[4827]: I0131 03:52:35.770466 4827 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/94e2d804-29e9-4233-adda-45072b493f0f-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 31 03:52:36 crc kubenswrapper[4827]: I0131 03:52:36.402110 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-r4gn2" event={"ID":"94e2d804-29e9-4233-adda-45072b493f0f","Type":"ContainerDied","Data":"b01839ac4012411ed14660660a512b735d03152b7aa276dafcdcb036a45cc2f4"} Jan 31 03:52:36 crc kubenswrapper[4827]: I0131 03:52:36.402579 4827 scope.go:117] "RemoveContainer" containerID="0afc7bfb70922bc887e5ff833e67551cdd2857a274039c09cc505c582132cb2e" Jan 31 03:52:36 crc kubenswrapper[4827]: I0131 03:52:36.402132 4827 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-r4gn2" Jan 31 03:52:36 crc kubenswrapper[4827]: I0131 03:52:36.429228 4827 scope.go:117] "RemoveContainer" containerID="2b5fdd748f53650685095620fb5c0df3695163fd333e3b17d94b67fc65c34f03" Jan 31 03:52:36 crc kubenswrapper[4827]: I0131 03:52:36.432542 4827 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-r4gn2"] Jan 31 03:52:36 crc kubenswrapper[4827]: I0131 03:52:36.438362 4827 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-r4gn2"] Jan 31 03:52:36 crc kubenswrapper[4827]: I0131 03:52:36.445243 4827 scope.go:117] "RemoveContainer" containerID="eef2916ca929e315120ff7eb110a19d7c9532941368919484ac1b546a4be9386" Jan 31 03:52:37 crc kubenswrapper[4827]: I0131 03:52:37.366792 4827 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-c2d85"] Jan 31 03:52:37 crc kubenswrapper[4827]: I0131 03:52:37.367121 4827 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-c2d85" podUID="f3ede25d-6d79-44f7-a853-88b36723eb92" containerName="registry-server" containerID="cri-o://882f692fcc4ed250c2f46e0c56ee5f94d7d10ee2e2eeed469ca7d76f287de9a9" gracePeriod=2 Jan 31 03:52:37 crc kubenswrapper[4827]: I0131 03:52:37.708693 4827 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-c2d85" Jan 31 03:52:37 crc kubenswrapper[4827]: I0131 03:52:37.808155 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f3ede25d-6d79-44f7-a853-88b36723eb92-catalog-content\") pod \"f3ede25d-6d79-44f7-a853-88b36723eb92\" (UID: \"f3ede25d-6d79-44f7-a853-88b36723eb92\") " Jan 31 03:52:37 crc kubenswrapper[4827]: I0131 03:52:37.808244 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f3ede25d-6d79-44f7-a853-88b36723eb92-utilities\") pod \"f3ede25d-6d79-44f7-a853-88b36723eb92\" (UID: \"f3ede25d-6d79-44f7-a853-88b36723eb92\") " Jan 31 03:52:37 crc kubenswrapper[4827]: I0131 03:52:37.808336 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ltnf6\" (UniqueName: \"kubernetes.io/projected/f3ede25d-6d79-44f7-a853-88b36723eb92-kube-api-access-ltnf6\") pod \"f3ede25d-6d79-44f7-a853-88b36723eb92\" (UID: \"f3ede25d-6d79-44f7-a853-88b36723eb92\") " Jan 31 03:52:37 crc kubenswrapper[4827]: I0131 03:52:37.809456 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f3ede25d-6d79-44f7-a853-88b36723eb92-utilities" (OuterVolumeSpecName: "utilities") pod "f3ede25d-6d79-44f7-a853-88b36723eb92" (UID: "f3ede25d-6d79-44f7-a853-88b36723eb92"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 03:52:37 crc kubenswrapper[4827]: I0131 03:52:37.816228 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f3ede25d-6d79-44f7-a853-88b36723eb92-kube-api-access-ltnf6" (OuterVolumeSpecName: "kube-api-access-ltnf6") pod "f3ede25d-6d79-44f7-a853-88b36723eb92" (UID: "f3ede25d-6d79-44f7-a853-88b36723eb92"). InnerVolumeSpecName "kube-api-access-ltnf6". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 03:52:37 crc kubenswrapper[4827]: I0131 03:52:37.910047 4827 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f3ede25d-6d79-44f7-a853-88b36723eb92-utilities\") on node \"crc\" DevicePath \"\"" Jan 31 03:52:37 crc kubenswrapper[4827]: I0131 03:52:37.910086 4827 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ltnf6\" (UniqueName: \"kubernetes.io/projected/f3ede25d-6d79-44f7-a853-88b36723eb92-kube-api-access-ltnf6\") on node \"crc\" DevicePath \"\"" Jan 31 03:52:37 crc kubenswrapper[4827]: I0131 03:52:37.943737 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f3ede25d-6d79-44f7-a853-88b36723eb92-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "f3ede25d-6d79-44f7-a853-88b36723eb92" (UID: "f3ede25d-6d79-44f7-a853-88b36723eb92"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 03:52:38 crc kubenswrapper[4827]: I0131 03:52:38.012397 4827 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f3ede25d-6d79-44f7-a853-88b36723eb92-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 31 03:52:38 crc kubenswrapper[4827]: I0131 03:52:38.119157 4827 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="94e2d804-29e9-4233-adda-45072b493f0f" path="/var/lib/kubelet/pods/94e2d804-29e9-4233-adda-45072b493f0f/volumes" Jan 31 03:52:38 crc kubenswrapper[4827]: I0131 03:52:38.421515 4827 generic.go:334] "Generic (PLEG): container finished" podID="f3ede25d-6d79-44f7-a853-88b36723eb92" containerID="882f692fcc4ed250c2f46e0c56ee5f94d7d10ee2e2eeed469ca7d76f287de9a9" exitCode=0 Jan 31 03:52:38 crc kubenswrapper[4827]: I0131 03:52:38.421590 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-c2d85" 
event={"ID":"f3ede25d-6d79-44f7-a853-88b36723eb92","Type":"ContainerDied","Data":"882f692fcc4ed250c2f46e0c56ee5f94d7d10ee2e2eeed469ca7d76f287de9a9"} Jan 31 03:52:38 crc kubenswrapper[4827]: I0131 03:52:38.421622 4827 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-c2d85" Jan 31 03:52:38 crc kubenswrapper[4827]: I0131 03:52:38.421653 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-c2d85" event={"ID":"f3ede25d-6d79-44f7-a853-88b36723eb92","Type":"ContainerDied","Data":"4401e7f4ca659e7c4112aabef42d73d4cb523979d1a46f20c0a8dcb04098ab63"} Jan 31 03:52:38 crc kubenswrapper[4827]: I0131 03:52:38.421693 4827 scope.go:117] "RemoveContainer" containerID="882f692fcc4ed250c2f46e0c56ee5f94d7d10ee2e2eeed469ca7d76f287de9a9" Jan 31 03:52:38 crc kubenswrapper[4827]: I0131 03:52:38.444625 4827 scope.go:117] "RemoveContainer" containerID="fddc59cd6ede0c737d469f0c68ce2f50ac6fdeafa31937a002dad751ea2796ff" Jan 31 03:52:38 crc kubenswrapper[4827]: I0131 03:52:38.447674 4827 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-c2d85"] Jan 31 03:52:38 crc kubenswrapper[4827]: I0131 03:52:38.449985 4827 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-c2d85"] Jan 31 03:52:38 crc kubenswrapper[4827]: I0131 03:52:38.461359 4827 scope.go:117] "RemoveContainer" containerID="c25518e1489adb33ab90a0334fb1369506341518ad3a608423bf26201789e099" Jan 31 03:52:38 crc kubenswrapper[4827]: I0131 03:52:38.481961 4827 scope.go:117] "RemoveContainer" containerID="882f692fcc4ed250c2f46e0c56ee5f94d7d10ee2e2eeed469ca7d76f287de9a9" Jan 31 03:52:38 crc kubenswrapper[4827]: E0131 03:52:38.482574 4827 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"882f692fcc4ed250c2f46e0c56ee5f94d7d10ee2e2eeed469ca7d76f287de9a9\": container with ID 
starting with 882f692fcc4ed250c2f46e0c56ee5f94d7d10ee2e2eeed469ca7d76f287de9a9 not found: ID does not exist" containerID="882f692fcc4ed250c2f46e0c56ee5f94d7d10ee2e2eeed469ca7d76f287de9a9" Jan 31 03:52:38 crc kubenswrapper[4827]: I0131 03:52:38.482632 4827 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"882f692fcc4ed250c2f46e0c56ee5f94d7d10ee2e2eeed469ca7d76f287de9a9"} err="failed to get container status \"882f692fcc4ed250c2f46e0c56ee5f94d7d10ee2e2eeed469ca7d76f287de9a9\": rpc error: code = NotFound desc = could not find container \"882f692fcc4ed250c2f46e0c56ee5f94d7d10ee2e2eeed469ca7d76f287de9a9\": container with ID starting with 882f692fcc4ed250c2f46e0c56ee5f94d7d10ee2e2eeed469ca7d76f287de9a9 not found: ID does not exist" Jan 31 03:52:38 crc kubenswrapper[4827]: I0131 03:52:38.482660 4827 scope.go:117] "RemoveContainer" containerID="fddc59cd6ede0c737d469f0c68ce2f50ac6fdeafa31937a002dad751ea2796ff" Jan 31 03:52:38 crc kubenswrapper[4827]: E0131 03:52:38.483018 4827 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fddc59cd6ede0c737d469f0c68ce2f50ac6fdeafa31937a002dad751ea2796ff\": container with ID starting with fddc59cd6ede0c737d469f0c68ce2f50ac6fdeafa31937a002dad751ea2796ff not found: ID does not exist" containerID="fddc59cd6ede0c737d469f0c68ce2f50ac6fdeafa31937a002dad751ea2796ff" Jan 31 03:52:38 crc kubenswrapper[4827]: I0131 03:52:38.483087 4827 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fddc59cd6ede0c737d469f0c68ce2f50ac6fdeafa31937a002dad751ea2796ff"} err="failed to get container status \"fddc59cd6ede0c737d469f0c68ce2f50ac6fdeafa31937a002dad751ea2796ff\": rpc error: code = NotFound desc = could not find container \"fddc59cd6ede0c737d469f0c68ce2f50ac6fdeafa31937a002dad751ea2796ff\": container with ID starting with fddc59cd6ede0c737d469f0c68ce2f50ac6fdeafa31937a002dad751ea2796ff not found: 
ID does not exist" Jan 31 03:52:38 crc kubenswrapper[4827]: I0131 03:52:38.483102 4827 scope.go:117] "RemoveContainer" containerID="c25518e1489adb33ab90a0334fb1369506341518ad3a608423bf26201789e099" Jan 31 03:52:38 crc kubenswrapper[4827]: E0131 03:52:38.483357 4827 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c25518e1489adb33ab90a0334fb1369506341518ad3a608423bf26201789e099\": container with ID starting with c25518e1489adb33ab90a0334fb1369506341518ad3a608423bf26201789e099 not found: ID does not exist" containerID="c25518e1489adb33ab90a0334fb1369506341518ad3a608423bf26201789e099" Jan 31 03:52:38 crc kubenswrapper[4827]: I0131 03:52:38.483379 4827 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c25518e1489adb33ab90a0334fb1369506341518ad3a608423bf26201789e099"} err="failed to get container status \"c25518e1489adb33ab90a0334fb1369506341518ad3a608423bf26201789e099\": rpc error: code = NotFound desc = could not find container \"c25518e1489adb33ab90a0334fb1369506341518ad3a608423bf26201789e099\": container with ID starting with c25518e1489adb33ab90a0334fb1369506341518ad3a608423bf26201789e099 not found: ID does not exist" Jan 31 03:52:40 crc kubenswrapper[4827]: I0131 03:52:40.119650 4827 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f3ede25d-6d79-44f7-a853-88b36723eb92" path="/var/lib/kubelet/pods/f3ede25d-6d79-44f7-a853-88b36723eb92/volumes" Jan 31 03:52:47 crc kubenswrapper[4827]: I0131 03:52:47.372038 4827 patch_prober.go:28] interesting pod/machine-config-daemon-jxh94 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 31 03:52:47 crc kubenswrapper[4827]: I0131 03:52:47.372559 4827 prober.go:107] "Probe failed" probeType="Liveness" 
pod="openshift-machine-config-operator/machine-config-daemon-jxh94" podUID="e63dbb73-e1a2-4796-83c5-2a88e55566b5" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 31 03:52:54 crc kubenswrapper[4827]: I0131 03:52:54.649356 4827 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-vx7v2"] Jan 31 03:52:54 crc kubenswrapper[4827]: E0131 03:52:54.651680 4827 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" Jan 31 03:52:54 crc kubenswrapper[4827]: I0131 03:52:54.651844 4827 state_mem.go:107] "Deleted CPUSet assignment" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" Jan 31 03:52:54 crc kubenswrapper[4827]: E0131 03:52:54.651986 4827 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="94e2d804-29e9-4233-adda-45072b493f0f" containerName="registry-server" Jan 31 03:52:54 crc kubenswrapper[4827]: I0131 03:52:54.652088 4827 state_mem.go:107] "Deleted CPUSet assignment" podUID="94e2d804-29e9-4233-adda-45072b493f0f" containerName="registry-server" Jan 31 03:52:54 crc kubenswrapper[4827]: E0131 03:52:54.652189 4827 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="94e2d804-29e9-4233-adda-45072b493f0f" containerName="extract-content" Jan 31 03:52:54 crc kubenswrapper[4827]: I0131 03:52:54.652285 4827 state_mem.go:107] "Deleted CPUSet assignment" podUID="94e2d804-29e9-4233-adda-45072b493f0f" containerName="extract-content" Jan 31 03:52:54 crc kubenswrapper[4827]: E0131 03:52:54.652379 4827 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f3ede25d-6d79-44f7-a853-88b36723eb92" containerName="registry-server" Jan 31 03:52:54 crc kubenswrapper[4827]: I0131 03:52:54.652458 4827 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="f3ede25d-6d79-44f7-a853-88b36723eb92" containerName="registry-server" Jan 31 03:52:54 crc kubenswrapper[4827]: E0131 03:52:54.652552 4827 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f3ede25d-6d79-44f7-a853-88b36723eb92" containerName="extract-utilities" Jan 31 03:52:54 crc kubenswrapper[4827]: I0131 03:52:54.652651 4827 state_mem.go:107] "Deleted CPUSet assignment" podUID="f3ede25d-6d79-44f7-a853-88b36723eb92" containerName="extract-utilities" Jan 31 03:52:54 crc kubenswrapper[4827]: E0131 03:52:54.652768 4827 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f3ede25d-6d79-44f7-a853-88b36723eb92" containerName="extract-content" Jan 31 03:52:54 crc kubenswrapper[4827]: I0131 03:52:54.652861 4827 state_mem.go:107] "Deleted CPUSet assignment" podUID="f3ede25d-6d79-44f7-a853-88b36723eb92" containerName="extract-content" Jan 31 03:52:54 crc kubenswrapper[4827]: E0131 03:52:54.652972 4827 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="94e2d804-29e9-4233-adda-45072b493f0f" containerName="extract-utilities" Jan 31 03:52:54 crc kubenswrapper[4827]: I0131 03:52:54.653057 4827 state_mem.go:107] "Deleted CPUSet assignment" podUID="94e2d804-29e9-4233-adda-45072b493f0f" containerName="extract-utilities" Jan 31 03:52:54 crc kubenswrapper[4827]: I0131 03:52:54.653295 4827 memory_manager.go:354] "RemoveStaleState removing state" podUID="f3ede25d-6d79-44f7-a853-88b36723eb92" containerName="registry-server" Jan 31 03:52:54 crc kubenswrapper[4827]: I0131 03:52:54.653425 4827 memory_manager.go:354] "RemoveStaleState removing state" podUID="94e2d804-29e9-4233-adda-45072b493f0f" containerName="registry-server" Jan 31 03:52:54 crc kubenswrapper[4827]: I0131 03:52:54.653553 4827 memory_manager.go:354] "RemoveStaleState removing state" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" Jan 31 03:52:54 crc kubenswrapper[4827]: I0131 03:52:54.654285 4827 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-66df7c8f76-vx7v2" Jan 31 03:52:54 crc kubenswrapper[4827]: I0131 03:52:54.676774 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-vx7v2"] Jan 31 03:52:54 crc kubenswrapper[4827]: I0131 03:52:54.769197 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/e77e7774-e380-48b8-a78d-dc8d7e48b9e6-registry-certificates\") pod \"image-registry-66df7c8f76-vx7v2\" (UID: \"e77e7774-e380-48b8-a78d-dc8d7e48b9e6\") " pod="openshift-image-registry/image-registry-66df7c8f76-vx7v2" Jan 31 03:52:54 crc kubenswrapper[4827]: I0131 03:52:54.769270 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-66df7c8f76-vx7v2\" (UID: \"e77e7774-e380-48b8-a78d-dc8d7e48b9e6\") " pod="openshift-image-registry/image-registry-66df7c8f76-vx7v2" Jan 31 03:52:54 crc kubenswrapper[4827]: I0131 03:52:54.769309 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/e77e7774-e380-48b8-a78d-dc8d7e48b9e6-registry-tls\") pod \"image-registry-66df7c8f76-vx7v2\" (UID: \"e77e7774-e380-48b8-a78d-dc8d7e48b9e6\") " pod="openshift-image-registry/image-registry-66df7c8f76-vx7v2" Jan 31 03:52:54 crc kubenswrapper[4827]: I0131 03:52:54.769438 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/e77e7774-e380-48b8-a78d-dc8d7e48b9e6-installation-pull-secrets\") pod \"image-registry-66df7c8f76-vx7v2\" (UID: \"e77e7774-e380-48b8-a78d-dc8d7e48b9e6\") " 
pod="openshift-image-registry/image-registry-66df7c8f76-vx7v2" Jan 31 03:52:54 crc kubenswrapper[4827]: I0131 03:52:54.769613 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/e77e7774-e380-48b8-a78d-dc8d7e48b9e6-trusted-ca\") pod \"image-registry-66df7c8f76-vx7v2\" (UID: \"e77e7774-e380-48b8-a78d-dc8d7e48b9e6\") " pod="openshift-image-registry/image-registry-66df7c8f76-vx7v2" Jan 31 03:52:54 crc kubenswrapper[4827]: I0131 03:52:54.769688 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/e77e7774-e380-48b8-a78d-dc8d7e48b9e6-ca-trust-extracted\") pod \"image-registry-66df7c8f76-vx7v2\" (UID: \"e77e7774-e380-48b8-a78d-dc8d7e48b9e6\") " pod="openshift-image-registry/image-registry-66df7c8f76-vx7v2" Jan 31 03:52:54 crc kubenswrapper[4827]: I0131 03:52:54.769757 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-blpf2\" (UniqueName: \"kubernetes.io/projected/e77e7774-e380-48b8-a78d-dc8d7e48b9e6-kube-api-access-blpf2\") pod \"image-registry-66df7c8f76-vx7v2\" (UID: \"e77e7774-e380-48b8-a78d-dc8d7e48b9e6\") " pod="openshift-image-registry/image-registry-66df7c8f76-vx7v2" Jan 31 03:52:54 crc kubenswrapper[4827]: I0131 03:52:54.769818 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/e77e7774-e380-48b8-a78d-dc8d7e48b9e6-bound-sa-token\") pod \"image-registry-66df7c8f76-vx7v2\" (UID: \"e77e7774-e380-48b8-a78d-dc8d7e48b9e6\") " pod="openshift-image-registry/image-registry-66df7c8f76-vx7v2" Jan 31 03:52:54 crc kubenswrapper[4827]: I0131 03:52:54.793792 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-66df7c8f76-vx7v2\" (UID: \"e77e7774-e380-48b8-a78d-dc8d7e48b9e6\") " pod="openshift-image-registry/image-registry-66df7c8f76-vx7v2" Jan 31 03:52:54 crc kubenswrapper[4827]: I0131 03:52:54.882543 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/e77e7774-e380-48b8-a78d-dc8d7e48b9e6-registry-certificates\") pod \"image-registry-66df7c8f76-vx7v2\" (UID: \"e77e7774-e380-48b8-a78d-dc8d7e48b9e6\") " pod="openshift-image-registry/image-registry-66df7c8f76-vx7v2" Jan 31 03:52:54 crc kubenswrapper[4827]: I0131 03:52:54.883025 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/e77e7774-e380-48b8-a78d-dc8d7e48b9e6-registry-tls\") pod \"image-registry-66df7c8f76-vx7v2\" (UID: \"e77e7774-e380-48b8-a78d-dc8d7e48b9e6\") " pod="openshift-image-registry/image-registry-66df7c8f76-vx7v2" Jan 31 03:52:54 crc kubenswrapper[4827]: I0131 03:52:54.883066 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/e77e7774-e380-48b8-a78d-dc8d7e48b9e6-installation-pull-secrets\") pod \"image-registry-66df7c8f76-vx7v2\" (UID: \"e77e7774-e380-48b8-a78d-dc8d7e48b9e6\") " pod="openshift-image-registry/image-registry-66df7c8f76-vx7v2" Jan 31 03:52:54 crc kubenswrapper[4827]: I0131 03:52:54.883148 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/e77e7774-e380-48b8-a78d-dc8d7e48b9e6-trusted-ca\") pod \"image-registry-66df7c8f76-vx7v2\" (UID: \"e77e7774-e380-48b8-a78d-dc8d7e48b9e6\") " pod="openshift-image-registry/image-registry-66df7c8f76-vx7v2" Jan 31 03:52:54 crc kubenswrapper[4827]: I0131 03:52:54.883194 4827 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/e77e7774-e380-48b8-a78d-dc8d7e48b9e6-ca-trust-extracted\") pod \"image-registry-66df7c8f76-vx7v2\" (UID: \"e77e7774-e380-48b8-a78d-dc8d7e48b9e6\") " pod="openshift-image-registry/image-registry-66df7c8f76-vx7v2" Jan 31 03:52:54 crc kubenswrapper[4827]: I0131 03:52:54.883241 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-blpf2\" (UniqueName: \"kubernetes.io/projected/e77e7774-e380-48b8-a78d-dc8d7e48b9e6-kube-api-access-blpf2\") pod \"image-registry-66df7c8f76-vx7v2\" (UID: \"e77e7774-e380-48b8-a78d-dc8d7e48b9e6\") " pod="openshift-image-registry/image-registry-66df7c8f76-vx7v2" Jan 31 03:52:54 crc kubenswrapper[4827]: I0131 03:52:54.883288 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/e77e7774-e380-48b8-a78d-dc8d7e48b9e6-bound-sa-token\") pod \"image-registry-66df7c8f76-vx7v2\" (UID: \"e77e7774-e380-48b8-a78d-dc8d7e48b9e6\") " pod="openshift-image-registry/image-registry-66df7c8f76-vx7v2" Jan 31 03:52:54 crc kubenswrapper[4827]: I0131 03:52:54.884463 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/e77e7774-e380-48b8-a78d-dc8d7e48b9e6-registry-certificates\") pod \"image-registry-66df7c8f76-vx7v2\" (UID: \"e77e7774-e380-48b8-a78d-dc8d7e48b9e6\") " pod="openshift-image-registry/image-registry-66df7c8f76-vx7v2" Jan 31 03:52:54 crc kubenswrapper[4827]: I0131 03:52:54.885417 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/e77e7774-e380-48b8-a78d-dc8d7e48b9e6-ca-trust-extracted\") pod \"image-registry-66df7c8f76-vx7v2\" (UID: \"e77e7774-e380-48b8-a78d-dc8d7e48b9e6\") " 
pod="openshift-image-registry/image-registry-66df7c8f76-vx7v2" Jan 31 03:52:54 crc kubenswrapper[4827]: I0131 03:52:54.890959 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/e77e7774-e380-48b8-a78d-dc8d7e48b9e6-trusted-ca\") pod \"image-registry-66df7c8f76-vx7v2\" (UID: \"e77e7774-e380-48b8-a78d-dc8d7e48b9e6\") " pod="openshift-image-registry/image-registry-66df7c8f76-vx7v2" Jan 31 03:52:54 crc kubenswrapper[4827]: I0131 03:52:54.905036 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/e77e7774-e380-48b8-a78d-dc8d7e48b9e6-installation-pull-secrets\") pod \"image-registry-66df7c8f76-vx7v2\" (UID: \"e77e7774-e380-48b8-a78d-dc8d7e48b9e6\") " pod="openshift-image-registry/image-registry-66df7c8f76-vx7v2" Jan 31 03:52:54 crc kubenswrapper[4827]: I0131 03:52:54.906870 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/e77e7774-e380-48b8-a78d-dc8d7e48b9e6-registry-tls\") pod \"image-registry-66df7c8f76-vx7v2\" (UID: \"e77e7774-e380-48b8-a78d-dc8d7e48b9e6\") " pod="openshift-image-registry/image-registry-66df7c8f76-vx7v2" Jan 31 03:52:54 crc kubenswrapper[4827]: I0131 03:52:54.917054 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/e77e7774-e380-48b8-a78d-dc8d7e48b9e6-bound-sa-token\") pod \"image-registry-66df7c8f76-vx7v2\" (UID: \"e77e7774-e380-48b8-a78d-dc8d7e48b9e6\") " pod="openshift-image-registry/image-registry-66df7c8f76-vx7v2" Jan 31 03:52:54 crc kubenswrapper[4827]: I0131 03:52:54.918526 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-blpf2\" (UniqueName: \"kubernetes.io/projected/e77e7774-e380-48b8-a78d-dc8d7e48b9e6-kube-api-access-blpf2\") pod \"image-registry-66df7c8f76-vx7v2\" (UID: 
\"e77e7774-e380-48b8-a78d-dc8d7e48b9e6\") " pod="openshift-image-registry/image-registry-66df7c8f76-vx7v2" Jan 31 03:52:54 crc kubenswrapper[4827]: I0131 03:52:54.974813 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-66df7c8f76-vx7v2" Jan 31 03:52:55 crc kubenswrapper[4827]: I0131 03:52:55.178153 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-vx7v2"] Jan 31 03:52:55 crc kubenswrapper[4827]: I0131 03:52:55.560954 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66df7c8f76-vx7v2" event={"ID":"e77e7774-e380-48b8-a78d-dc8d7e48b9e6","Type":"ContainerStarted","Data":"0f550bb13dd168d9470d241e70cc7568afe83f136d49ef5c2f79a21cfa27753c"} Jan 31 03:52:55 crc kubenswrapper[4827]: I0131 03:52:55.560997 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66df7c8f76-vx7v2" event={"ID":"e77e7774-e380-48b8-a78d-dc8d7e48b9e6","Type":"ContainerStarted","Data":"fbf4a1d0d9a04a738dd63568e3b36e3fbe87a4e137ce16dedeccb4f45b0701bf"} Jan 31 03:52:55 crc kubenswrapper[4827]: I0131 03:52:55.561098 4827 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-image-registry/image-registry-66df7c8f76-vx7v2" Jan 31 03:52:55 crc kubenswrapper[4827]: I0131 03:52:55.584985 4827 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-registry-66df7c8f76-vx7v2" podStartSLOduration=1.584960239 podStartE2EDuration="1.584960239s" podCreationTimestamp="2026-01-31 03:52:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 03:52:55.58436206 +0000 UTC m=+368.271442559" watchObservedRunningTime="2026-01-31 03:52:55.584960239 +0000 UTC m=+368.272040718" Jan 31 03:53:13 crc kubenswrapper[4827]: I0131 03:53:13.960040 
4827 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-g76wz"] Jan 31 03:53:13 crc kubenswrapper[4827]: I0131 03:53:13.961686 4827 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-g76wz" podUID="36f2dbb1-6370-4a38-8702-edf89c8b4668" containerName="registry-server" containerID="cri-o://f18aea0bc5fef6afca2778b69d4c12ac7450de98d4c815c8ebee7c4750caeae8" gracePeriod=30 Jan 31 03:53:13 crc kubenswrapper[4827]: I0131 03:53:13.979872 4827 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-h54h5"] Jan 31 03:53:13 crc kubenswrapper[4827]: I0131 03:53:13.980251 4827 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-h54h5" podUID="a4c93e4f-eac3-4794-a748-51adfd8b961c" containerName="registry-server" containerID="cri-o://e8a7f5ce15813c0954ca71b668e53033e6c67721cea930e16b24f7ea94c5d62e" gracePeriod=30 Jan 31 03:53:13 crc kubenswrapper[4827]: I0131 03:53:13.989844 4827 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-lrw5m"] Jan 31 03:53:13 crc kubenswrapper[4827]: I0131 03:53:13.990307 4827 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/marketplace-operator-79b997595-lrw5m" podUID="cc0facf8-c192-4df4-bb9b-68f123fd7b21" containerName="marketplace-operator" containerID="cri-o://d740f1a78f7ff99bed83300155a58db8babf7a532fed3fc5bd7a9d2d7a117f0b" gracePeriod=30 Jan 31 03:53:14 crc kubenswrapper[4827]: I0131 03:53:14.003811 4827 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-jpjgt"] Jan 31 03:53:14 crc kubenswrapper[4827]: I0131 03:53:14.004108 4827 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-jpjgt" 
podUID="b53b07cf-d0d5-4774-89fe-89765537cc9b" containerName="registry-server" containerID="cri-o://4b002fe259d37c8f59d3b86dfee22328d2570f86e9d6b05b608ba4da5601526a" gracePeriod=30 Jan 31 03:53:14 crc kubenswrapper[4827]: I0131 03:53:14.020210 4827 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-8lngw"] Jan 31 03:53:14 crc kubenswrapper[4827]: I0131 03:53:14.020534 4827 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-8lngw" podUID="e357c738-a2f2-49a3-b122-5fe5ab45b919" containerName="registry-server" containerID="cri-o://c55390b54817fdcfb6ec196e28943d5d6cde97684ff2a4b0670d517e2419076e" gracePeriod=30 Jan 31 03:53:14 crc kubenswrapper[4827]: I0131 03:53:14.029466 4827 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-qg47f"] Jan 31 03:53:14 crc kubenswrapper[4827]: I0131 03:53:14.030386 4827 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-qg47f" Jan 31 03:53:14 crc kubenswrapper[4827]: I0131 03:53:14.047495 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-qg47f"] Jan 31 03:53:14 crc kubenswrapper[4827]: I0131 03:53:14.212969 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/14a103c0-b784-4634-9d0e-07cccc0795ef-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-qg47f\" (UID: \"14a103c0-b784-4634-9d0e-07cccc0795ef\") " pod="openshift-marketplace/marketplace-operator-79b997595-qg47f" Jan 31 03:53:14 crc kubenswrapper[4827]: I0131 03:53:14.213633 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nfkfv\" (UniqueName: \"kubernetes.io/projected/14a103c0-b784-4634-9d0e-07cccc0795ef-kube-api-access-nfkfv\") pod \"marketplace-operator-79b997595-qg47f\" (UID: \"14a103c0-b784-4634-9d0e-07cccc0795ef\") " pod="openshift-marketplace/marketplace-operator-79b997595-qg47f" Jan 31 03:53:14 crc kubenswrapper[4827]: I0131 03:53:14.213689 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/14a103c0-b784-4634-9d0e-07cccc0795ef-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-qg47f\" (UID: \"14a103c0-b784-4634-9d0e-07cccc0795ef\") " pod="openshift-marketplace/marketplace-operator-79b997595-qg47f" Jan 31 03:53:14 crc kubenswrapper[4827]: I0131 03:53:14.317427 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nfkfv\" (UniqueName: \"kubernetes.io/projected/14a103c0-b784-4634-9d0e-07cccc0795ef-kube-api-access-nfkfv\") pod \"marketplace-operator-79b997595-qg47f\" (UID: 
\"14a103c0-b784-4634-9d0e-07cccc0795ef\") " pod="openshift-marketplace/marketplace-operator-79b997595-qg47f" Jan 31 03:53:14 crc kubenswrapper[4827]: I0131 03:53:14.317487 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/14a103c0-b784-4634-9d0e-07cccc0795ef-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-qg47f\" (UID: \"14a103c0-b784-4634-9d0e-07cccc0795ef\") " pod="openshift-marketplace/marketplace-operator-79b997595-qg47f" Jan 31 03:53:14 crc kubenswrapper[4827]: I0131 03:53:14.317509 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/14a103c0-b784-4634-9d0e-07cccc0795ef-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-qg47f\" (UID: \"14a103c0-b784-4634-9d0e-07cccc0795ef\") " pod="openshift-marketplace/marketplace-operator-79b997595-qg47f" Jan 31 03:53:14 crc kubenswrapper[4827]: I0131 03:53:14.319024 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/14a103c0-b784-4634-9d0e-07cccc0795ef-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-qg47f\" (UID: \"14a103c0-b784-4634-9d0e-07cccc0795ef\") " pod="openshift-marketplace/marketplace-operator-79b997595-qg47f" Jan 31 03:53:14 crc kubenswrapper[4827]: I0131 03:53:14.327606 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/14a103c0-b784-4634-9d0e-07cccc0795ef-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-qg47f\" (UID: \"14a103c0-b784-4634-9d0e-07cccc0795ef\") " pod="openshift-marketplace/marketplace-operator-79b997595-qg47f" Jan 31 03:53:14 crc kubenswrapper[4827]: I0131 03:53:14.345688 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-nfkfv\" (UniqueName: \"kubernetes.io/projected/14a103c0-b784-4634-9d0e-07cccc0795ef-kube-api-access-nfkfv\") pod \"marketplace-operator-79b997595-qg47f\" (UID: \"14a103c0-b784-4634-9d0e-07cccc0795ef\") " pod="openshift-marketplace/marketplace-operator-79b997595-qg47f" Jan 31 03:53:14 crc kubenswrapper[4827]: I0131 03:53:14.416010 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-qg47f" Jan 31 03:53:14 crc kubenswrapper[4827]: I0131 03:53:14.419567 4827 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-g76wz" Jan 31 03:53:14 crc kubenswrapper[4827]: I0131 03:53:14.424724 4827 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-h54h5" Jan 31 03:53:14 crc kubenswrapper[4827]: I0131 03:53:14.460804 4827 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-lrw5m" Jan 31 03:53:14 crc kubenswrapper[4827]: I0131 03:53:14.461847 4827 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-8lngw" Jan 31 03:53:14 crc kubenswrapper[4827]: I0131 03:53:14.462403 4827 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-jpjgt" Jan 31 03:53:14 crc kubenswrapper[4827]: I0131 03:53:14.519998 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-995cj\" (UniqueName: \"kubernetes.io/projected/36f2dbb1-6370-4a38-8702-edf89c8b4668-kube-api-access-995cj\") pod \"36f2dbb1-6370-4a38-8702-edf89c8b4668\" (UID: \"36f2dbb1-6370-4a38-8702-edf89c8b4668\") " Jan 31 03:53:14 crc kubenswrapper[4827]: I0131 03:53:14.520042 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a4c93e4f-eac3-4794-a748-51adfd8b961c-utilities\") pod \"a4c93e4f-eac3-4794-a748-51adfd8b961c\" (UID: \"a4c93e4f-eac3-4794-a748-51adfd8b961c\") " Jan 31 03:53:14 crc kubenswrapper[4827]: I0131 03:53:14.520105 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/36f2dbb1-6370-4a38-8702-edf89c8b4668-catalog-content\") pod \"36f2dbb1-6370-4a38-8702-edf89c8b4668\" (UID: \"36f2dbb1-6370-4a38-8702-edf89c8b4668\") " Jan 31 03:53:14 crc kubenswrapper[4827]: I0131 03:53:14.520130 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-m7lw5\" (UniqueName: \"kubernetes.io/projected/a4c93e4f-eac3-4794-a748-51adfd8b961c-kube-api-access-m7lw5\") pod \"a4c93e4f-eac3-4794-a748-51adfd8b961c\" (UID: \"a4c93e4f-eac3-4794-a748-51adfd8b961c\") " Jan 31 03:53:14 crc kubenswrapper[4827]: I0131 03:53:14.520193 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/36f2dbb1-6370-4a38-8702-edf89c8b4668-utilities\") pod \"36f2dbb1-6370-4a38-8702-edf89c8b4668\" (UID: \"36f2dbb1-6370-4a38-8702-edf89c8b4668\") " Jan 31 03:53:14 crc kubenswrapper[4827]: I0131 03:53:14.520213 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for 
volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a4c93e4f-eac3-4794-a748-51adfd8b961c-catalog-content\") pod \"a4c93e4f-eac3-4794-a748-51adfd8b961c\" (UID: \"a4c93e4f-eac3-4794-a748-51adfd8b961c\") " Jan 31 03:53:14 crc kubenswrapper[4827]: I0131 03:53:14.523478 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a4c93e4f-eac3-4794-a748-51adfd8b961c-utilities" (OuterVolumeSpecName: "utilities") pod "a4c93e4f-eac3-4794-a748-51adfd8b961c" (UID: "a4c93e4f-eac3-4794-a748-51adfd8b961c"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 03:53:14 crc kubenswrapper[4827]: I0131 03:53:14.524848 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/36f2dbb1-6370-4a38-8702-edf89c8b4668-utilities" (OuterVolumeSpecName: "utilities") pod "36f2dbb1-6370-4a38-8702-edf89c8b4668" (UID: "36f2dbb1-6370-4a38-8702-edf89c8b4668"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 03:53:14 crc kubenswrapper[4827]: I0131 03:53:14.529064 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a4c93e4f-eac3-4794-a748-51adfd8b961c-kube-api-access-m7lw5" (OuterVolumeSpecName: "kube-api-access-m7lw5") pod "a4c93e4f-eac3-4794-a748-51adfd8b961c" (UID: "a4c93e4f-eac3-4794-a748-51adfd8b961c"). InnerVolumeSpecName "kube-api-access-m7lw5". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 03:53:14 crc kubenswrapper[4827]: I0131 03:53:14.529115 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/36f2dbb1-6370-4a38-8702-edf89c8b4668-kube-api-access-995cj" (OuterVolumeSpecName: "kube-api-access-995cj") pod "36f2dbb1-6370-4a38-8702-edf89c8b4668" (UID: "36f2dbb1-6370-4a38-8702-edf89c8b4668"). InnerVolumeSpecName "kube-api-access-995cj". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 03:53:14 crc kubenswrapper[4827]: I0131 03:53:14.581050 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/36f2dbb1-6370-4a38-8702-edf89c8b4668-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "36f2dbb1-6370-4a38-8702-edf89c8b4668" (UID: "36f2dbb1-6370-4a38-8702-edf89c8b4668"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 03:53:14 crc kubenswrapper[4827]: I0131 03:53:14.619372 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a4c93e4f-eac3-4794-a748-51adfd8b961c-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "a4c93e4f-eac3-4794-a748-51adfd8b961c" (UID: "a4c93e4f-eac3-4794-a748-51adfd8b961c"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 03:53:14 crc kubenswrapper[4827]: I0131 03:53:14.621776 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wfdxt\" (UniqueName: \"kubernetes.io/projected/e357c738-a2f2-49a3-b122-5fe5ab45b919-kube-api-access-wfdxt\") pod \"e357c738-a2f2-49a3-b122-5fe5ab45b919\" (UID: \"e357c738-a2f2-49a3-b122-5fe5ab45b919\") " Jan 31 03:53:14 crc kubenswrapper[4827]: I0131 03:53:14.621832 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e357c738-a2f2-49a3-b122-5fe5ab45b919-utilities\") pod \"e357c738-a2f2-49a3-b122-5fe5ab45b919\" (UID: \"e357c738-a2f2-49a3-b122-5fe5ab45b919\") " Jan 31 03:53:14 crc kubenswrapper[4827]: I0131 03:53:14.621867 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/cc0facf8-c192-4df4-bb9b-68f123fd7b21-marketplace-operator-metrics\") pod \"cc0facf8-c192-4df4-bb9b-68f123fd7b21\" (UID: 
\"cc0facf8-c192-4df4-bb9b-68f123fd7b21\") " Jan 31 03:53:14 crc kubenswrapper[4827]: I0131 03:53:14.621916 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e357c738-a2f2-49a3-b122-5fe5ab45b919-catalog-content\") pod \"e357c738-a2f2-49a3-b122-5fe5ab45b919\" (UID: \"e357c738-a2f2-49a3-b122-5fe5ab45b919\") " Jan 31 03:53:14 crc kubenswrapper[4827]: I0131 03:53:14.621948 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xqt67\" (UniqueName: \"kubernetes.io/projected/b53b07cf-d0d5-4774-89fe-89765537cc9b-kube-api-access-xqt67\") pod \"b53b07cf-d0d5-4774-89fe-89765537cc9b\" (UID: \"b53b07cf-d0d5-4774-89fe-89765537cc9b\") " Jan 31 03:53:14 crc kubenswrapper[4827]: I0131 03:53:14.621991 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b53b07cf-d0d5-4774-89fe-89765537cc9b-catalog-content\") pod \"b53b07cf-d0d5-4774-89fe-89765537cc9b\" (UID: \"b53b07cf-d0d5-4774-89fe-89765537cc9b\") " Jan 31 03:53:14 crc kubenswrapper[4827]: I0131 03:53:14.622117 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b53b07cf-d0d5-4774-89fe-89765537cc9b-utilities\") pod \"b53b07cf-d0d5-4774-89fe-89765537cc9b\" (UID: \"b53b07cf-d0d5-4774-89fe-89765537cc9b\") " Jan 31 03:53:14 crc kubenswrapper[4827]: I0131 03:53:14.622161 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/cc0facf8-c192-4df4-bb9b-68f123fd7b21-marketplace-trusted-ca\") pod \"cc0facf8-c192-4df4-bb9b-68f123fd7b21\" (UID: \"cc0facf8-c192-4df4-bb9b-68f123fd7b21\") " Jan 31 03:53:14 crc kubenswrapper[4827]: I0131 03:53:14.622189 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"kube-api-access-76bww\" (UniqueName: \"kubernetes.io/projected/cc0facf8-c192-4df4-bb9b-68f123fd7b21-kube-api-access-76bww\") pod \"cc0facf8-c192-4df4-bb9b-68f123fd7b21\" (UID: \"cc0facf8-c192-4df4-bb9b-68f123fd7b21\") " Jan 31 03:53:14 crc kubenswrapper[4827]: I0131 03:53:14.622462 4827 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/36f2dbb1-6370-4a38-8702-edf89c8b4668-utilities\") on node \"crc\" DevicePath \"\"" Jan 31 03:53:14 crc kubenswrapper[4827]: I0131 03:53:14.622486 4827 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a4c93e4f-eac3-4794-a748-51adfd8b961c-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 31 03:53:14 crc kubenswrapper[4827]: I0131 03:53:14.622501 4827 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-995cj\" (UniqueName: \"kubernetes.io/projected/36f2dbb1-6370-4a38-8702-edf89c8b4668-kube-api-access-995cj\") on node \"crc\" DevicePath \"\"" Jan 31 03:53:14 crc kubenswrapper[4827]: I0131 03:53:14.622513 4827 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a4c93e4f-eac3-4794-a748-51adfd8b961c-utilities\") on node \"crc\" DevicePath \"\"" Jan 31 03:53:14 crc kubenswrapper[4827]: I0131 03:53:14.622524 4827 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/36f2dbb1-6370-4a38-8702-edf89c8b4668-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 31 03:53:14 crc kubenswrapper[4827]: I0131 03:53:14.622535 4827 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-m7lw5\" (UniqueName: \"kubernetes.io/projected/a4c93e4f-eac3-4794-a748-51adfd8b961c-kube-api-access-m7lw5\") on node \"crc\" DevicePath \"\"" Jan 31 03:53:14 crc kubenswrapper[4827]: I0131 03:53:14.623327 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/empty-dir/e357c738-a2f2-49a3-b122-5fe5ab45b919-utilities" (OuterVolumeSpecName: "utilities") pod "e357c738-a2f2-49a3-b122-5fe5ab45b919" (UID: "e357c738-a2f2-49a3-b122-5fe5ab45b919"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 03:53:14 crc kubenswrapper[4827]: I0131 03:53:14.624245 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b53b07cf-d0d5-4774-89fe-89765537cc9b-utilities" (OuterVolumeSpecName: "utilities") pod "b53b07cf-d0d5-4774-89fe-89765537cc9b" (UID: "b53b07cf-d0d5-4774-89fe-89765537cc9b"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 03:53:14 crc kubenswrapper[4827]: I0131 03:53:14.624245 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cc0facf8-c192-4df4-bb9b-68f123fd7b21-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "cc0facf8-c192-4df4-bb9b-68f123fd7b21" (UID: "cc0facf8-c192-4df4-bb9b-68f123fd7b21"). InnerVolumeSpecName "marketplace-trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 03:53:14 crc kubenswrapper[4827]: I0131 03:53:14.626823 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e357c738-a2f2-49a3-b122-5fe5ab45b919-kube-api-access-wfdxt" (OuterVolumeSpecName: "kube-api-access-wfdxt") pod "e357c738-a2f2-49a3-b122-5fe5ab45b919" (UID: "e357c738-a2f2-49a3-b122-5fe5ab45b919"). InnerVolumeSpecName "kube-api-access-wfdxt". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 03:53:14 crc kubenswrapper[4827]: I0131 03:53:14.627082 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cc0facf8-c192-4df4-bb9b-68f123fd7b21-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "cc0facf8-c192-4df4-bb9b-68f123fd7b21" (UID: "cc0facf8-c192-4df4-bb9b-68f123fd7b21"). InnerVolumeSpecName "marketplace-operator-metrics". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 03:53:14 crc kubenswrapper[4827]: I0131 03:53:14.628233 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b53b07cf-d0d5-4774-89fe-89765537cc9b-kube-api-access-xqt67" (OuterVolumeSpecName: "kube-api-access-xqt67") pod "b53b07cf-d0d5-4774-89fe-89765537cc9b" (UID: "b53b07cf-d0d5-4774-89fe-89765537cc9b"). InnerVolumeSpecName "kube-api-access-xqt67". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 03:53:14 crc kubenswrapper[4827]: I0131 03:53:14.635617 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cc0facf8-c192-4df4-bb9b-68f123fd7b21-kube-api-access-76bww" (OuterVolumeSpecName: "kube-api-access-76bww") pod "cc0facf8-c192-4df4-bb9b-68f123fd7b21" (UID: "cc0facf8-c192-4df4-bb9b-68f123fd7b21"). InnerVolumeSpecName "kube-api-access-76bww". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 03:53:14 crc kubenswrapper[4827]: I0131 03:53:14.650174 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b53b07cf-d0d5-4774-89fe-89765537cc9b-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b53b07cf-d0d5-4774-89fe-89765537cc9b" (UID: "b53b07cf-d0d5-4774-89fe-89765537cc9b"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 03:53:14 crc kubenswrapper[4827]: I0131 03:53:14.664216 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-qg47f"] Jan 31 03:53:14 crc kubenswrapper[4827]: I0131 03:53:14.693131 4827 generic.go:334] "Generic (PLEG): container finished" podID="b53b07cf-d0d5-4774-89fe-89765537cc9b" containerID="4b002fe259d37c8f59d3b86dfee22328d2570f86e9d6b05b608ba4da5601526a" exitCode=0 Jan 31 03:53:14 crc kubenswrapper[4827]: I0131 03:53:14.693189 4827 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-jpjgt" Jan 31 03:53:14 crc kubenswrapper[4827]: I0131 03:53:14.693333 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-jpjgt" event={"ID":"b53b07cf-d0d5-4774-89fe-89765537cc9b","Type":"ContainerDied","Data":"4b002fe259d37c8f59d3b86dfee22328d2570f86e9d6b05b608ba4da5601526a"} Jan 31 03:53:14 crc kubenswrapper[4827]: I0131 03:53:14.693390 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-jpjgt" event={"ID":"b53b07cf-d0d5-4774-89fe-89765537cc9b","Type":"ContainerDied","Data":"f9c1c237840d075483018504809f3e35cdb3db93d8bde70dd1facf82863b0958"} Jan 31 03:53:14 crc kubenswrapper[4827]: I0131 03:53:14.693429 4827 scope.go:117] "RemoveContainer" containerID="4b002fe259d37c8f59d3b86dfee22328d2570f86e9d6b05b608ba4da5601526a" Jan 31 03:53:14 crc kubenswrapper[4827]: I0131 03:53:14.699382 4827 generic.go:334] "Generic (PLEG): container finished" podID="cc0facf8-c192-4df4-bb9b-68f123fd7b21" containerID="d740f1a78f7ff99bed83300155a58db8babf7a532fed3fc5bd7a9d2d7a117f0b" exitCode=0 Jan 31 03:53:14 crc kubenswrapper[4827]: I0131 03:53:14.699475 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-lrw5m" 
event={"ID":"cc0facf8-c192-4df4-bb9b-68f123fd7b21","Type":"ContainerDied","Data":"d740f1a78f7ff99bed83300155a58db8babf7a532fed3fc5bd7a9d2d7a117f0b"} Jan 31 03:53:14 crc kubenswrapper[4827]: I0131 03:53:14.699505 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-lrw5m" event={"ID":"cc0facf8-c192-4df4-bb9b-68f123fd7b21","Type":"ContainerDied","Data":"63222f319745a9995942369616315f9631733e9e4931e59290885351781d4d95"} Jan 31 03:53:14 crc kubenswrapper[4827]: I0131 03:53:14.699498 4827 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-lrw5m" Jan 31 03:53:14 crc kubenswrapper[4827]: I0131 03:53:14.701587 4827 generic.go:334] "Generic (PLEG): container finished" podID="a4c93e4f-eac3-4794-a748-51adfd8b961c" containerID="e8a7f5ce15813c0954ca71b668e53033e6c67721cea930e16b24f7ea94c5d62e" exitCode=0 Jan 31 03:53:14 crc kubenswrapper[4827]: I0131 03:53:14.701656 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-h54h5" event={"ID":"a4c93e4f-eac3-4794-a748-51adfd8b961c","Type":"ContainerDied","Data":"e8a7f5ce15813c0954ca71b668e53033e6c67721cea930e16b24f7ea94c5d62e"} Jan 31 03:53:14 crc kubenswrapper[4827]: I0131 03:53:14.701691 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-h54h5" event={"ID":"a4c93e4f-eac3-4794-a748-51adfd8b961c","Type":"ContainerDied","Data":"758e955d86979f68ce3ffd3e195b09f7d2743170d7cfd368ceb62603ea60681c"} Jan 31 03:53:14 crc kubenswrapper[4827]: I0131 03:53:14.701818 4827 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-h54h5" Jan 31 03:53:14 crc kubenswrapper[4827]: I0131 03:53:14.703791 4827 generic.go:334] "Generic (PLEG): container finished" podID="e357c738-a2f2-49a3-b122-5fe5ab45b919" containerID="c55390b54817fdcfb6ec196e28943d5d6cde97684ff2a4b0670d517e2419076e" exitCode=0 Jan 31 03:53:14 crc kubenswrapper[4827]: I0131 03:53:14.703846 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-8lngw" event={"ID":"e357c738-a2f2-49a3-b122-5fe5ab45b919","Type":"ContainerDied","Data":"c55390b54817fdcfb6ec196e28943d5d6cde97684ff2a4b0670d517e2419076e"} Jan 31 03:53:14 crc kubenswrapper[4827]: I0131 03:53:14.703872 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-8lngw" event={"ID":"e357c738-a2f2-49a3-b122-5fe5ab45b919","Type":"ContainerDied","Data":"b36482728ccf239567155a89ae212178c858aa59cf05d460d873a603eb6f6e38"} Jan 31 03:53:14 crc kubenswrapper[4827]: I0131 03:53:14.703952 4827 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-8lngw" Jan 31 03:53:14 crc kubenswrapper[4827]: I0131 03:53:14.707895 4827 generic.go:334] "Generic (PLEG): container finished" podID="36f2dbb1-6370-4a38-8702-edf89c8b4668" containerID="f18aea0bc5fef6afca2778b69d4c12ac7450de98d4c815c8ebee7c4750caeae8" exitCode=0 Jan 31 03:53:14 crc kubenswrapper[4827]: I0131 03:53:14.707992 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-g76wz" event={"ID":"36f2dbb1-6370-4a38-8702-edf89c8b4668","Type":"ContainerDied","Data":"f18aea0bc5fef6afca2778b69d4c12ac7450de98d4c815c8ebee7c4750caeae8"} Jan 31 03:53:14 crc kubenswrapper[4827]: I0131 03:53:14.708023 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-g76wz" event={"ID":"36f2dbb1-6370-4a38-8702-edf89c8b4668","Type":"ContainerDied","Data":"fd49f6fda1c3dddeb8bd8294f20d4fd7d13fae8241d815d50ed8d8a4172051f4"} Jan 31 03:53:14 crc kubenswrapper[4827]: I0131 03:53:14.708129 4827 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-g76wz" Jan 31 03:53:14 crc kubenswrapper[4827]: I0131 03:53:14.711297 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-qg47f" event={"ID":"14a103c0-b784-4634-9d0e-07cccc0795ef","Type":"ContainerStarted","Data":"9e9f4c1f8b407b72250ba4de7ba2f25f1f507badfde71ea7f0bb72614e1d726b"} Jan 31 03:53:14 crc kubenswrapper[4827]: I0131 03:53:14.711578 4827 scope.go:117] "RemoveContainer" containerID="a3b165593eca2c0b28e0f53f58266f9b90901ef98588e9b57aff0baddb04a7c5" Jan 31 03:53:14 crc kubenswrapper[4827]: I0131 03:53:14.733355 4827 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-jpjgt"] Jan 31 03:53:14 crc kubenswrapper[4827]: I0131 03:53:14.736122 4827 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wfdxt\" (UniqueName: \"kubernetes.io/projected/e357c738-a2f2-49a3-b122-5fe5ab45b919-kube-api-access-wfdxt\") on node \"crc\" DevicePath \"\"" Jan 31 03:53:14 crc kubenswrapper[4827]: I0131 03:53:14.736180 4827 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e357c738-a2f2-49a3-b122-5fe5ab45b919-utilities\") on node \"crc\" DevicePath \"\"" Jan 31 03:53:14 crc kubenswrapper[4827]: I0131 03:53:14.736199 4827 reconciler_common.go:293] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/cc0facf8-c192-4df4-bb9b-68f123fd7b21-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\"" Jan 31 03:53:14 crc kubenswrapper[4827]: I0131 03:53:14.736216 4827 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xqt67\" (UniqueName: \"kubernetes.io/projected/b53b07cf-d0d5-4774-89fe-89765537cc9b-kube-api-access-xqt67\") on node \"crc\" DevicePath \"\"" Jan 31 03:53:14 crc kubenswrapper[4827]: I0131 03:53:14.736229 4827 reconciler_common.go:293] "Volume detached for 
volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b53b07cf-d0d5-4774-89fe-89765537cc9b-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 31 03:53:14 crc kubenswrapper[4827]: I0131 03:53:14.736243 4827 reconciler_common.go:293] "Volume detached for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/cc0facf8-c192-4df4-bb9b-68f123fd7b21-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 31 03:53:14 crc kubenswrapper[4827]: I0131 03:53:14.736258 4827 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b53b07cf-d0d5-4774-89fe-89765537cc9b-utilities\") on node \"crc\" DevicePath \"\"" Jan 31 03:53:14 crc kubenswrapper[4827]: I0131 03:53:14.736271 4827 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-76bww\" (UniqueName: \"kubernetes.io/projected/cc0facf8-c192-4df4-bb9b-68f123fd7b21-kube-api-access-76bww\") on node \"crc\" DevicePath \"\"" Jan 31 03:53:14 crc kubenswrapper[4827]: I0131 03:53:14.742212 4827 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-jpjgt"] Jan 31 03:53:14 crc kubenswrapper[4827]: I0131 03:53:14.743083 4827 scope.go:117] "RemoveContainer" containerID="ee7bac41be598005dee07f1fd57a55f74ec8848805fe784d9c493917bc01c264" Jan 31 03:53:14 crc kubenswrapper[4827]: I0131 03:53:14.747177 4827 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-lrw5m"] Jan 31 03:53:14 crc kubenswrapper[4827]: I0131 03:53:14.752531 4827 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-lrw5m"] Jan 31 03:53:14 crc kubenswrapper[4827]: I0131 03:53:14.759262 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e357c738-a2f2-49a3-b122-5fe5ab45b919-catalog-content" (OuterVolumeSpecName: "catalog-content") pod 
"e357c738-a2f2-49a3-b122-5fe5ab45b919" (UID: "e357c738-a2f2-49a3-b122-5fe5ab45b919"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 31 03:53:14 crc kubenswrapper[4827]: I0131 03:53:14.762624 4827 scope.go:117] "RemoveContainer" containerID="4b002fe259d37c8f59d3b86dfee22328d2570f86e9d6b05b608ba4da5601526a"
Jan 31 03:53:14 crc kubenswrapper[4827]: E0131 03:53:14.763086 4827 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4b002fe259d37c8f59d3b86dfee22328d2570f86e9d6b05b608ba4da5601526a\": container with ID starting with 4b002fe259d37c8f59d3b86dfee22328d2570f86e9d6b05b608ba4da5601526a not found: ID does not exist" containerID="4b002fe259d37c8f59d3b86dfee22328d2570f86e9d6b05b608ba4da5601526a"
Jan 31 03:53:14 crc kubenswrapper[4827]: I0131 03:53:14.763115 4827 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4b002fe259d37c8f59d3b86dfee22328d2570f86e9d6b05b608ba4da5601526a"} err="failed to get container status \"4b002fe259d37c8f59d3b86dfee22328d2570f86e9d6b05b608ba4da5601526a\": rpc error: code = NotFound desc = could not find container \"4b002fe259d37c8f59d3b86dfee22328d2570f86e9d6b05b608ba4da5601526a\": container with ID starting with 4b002fe259d37c8f59d3b86dfee22328d2570f86e9d6b05b608ba4da5601526a not found: ID does not exist"
Jan 31 03:53:14 crc kubenswrapper[4827]: I0131 03:53:14.763136 4827 scope.go:117] "RemoveContainer" containerID="a3b165593eca2c0b28e0f53f58266f9b90901ef98588e9b57aff0baddb04a7c5"
Jan 31 03:53:14 crc kubenswrapper[4827]: E0131 03:53:14.763480 4827 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a3b165593eca2c0b28e0f53f58266f9b90901ef98588e9b57aff0baddb04a7c5\": container with ID starting with a3b165593eca2c0b28e0f53f58266f9b90901ef98588e9b57aff0baddb04a7c5 not found: ID does not exist" containerID="a3b165593eca2c0b28e0f53f58266f9b90901ef98588e9b57aff0baddb04a7c5"
Jan 31 03:53:14 crc kubenswrapper[4827]: I0131 03:53:14.763496 4827 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a3b165593eca2c0b28e0f53f58266f9b90901ef98588e9b57aff0baddb04a7c5"} err="failed to get container status \"a3b165593eca2c0b28e0f53f58266f9b90901ef98588e9b57aff0baddb04a7c5\": rpc error: code = NotFound desc = could not find container \"a3b165593eca2c0b28e0f53f58266f9b90901ef98588e9b57aff0baddb04a7c5\": container with ID starting with a3b165593eca2c0b28e0f53f58266f9b90901ef98588e9b57aff0baddb04a7c5 not found: ID does not exist"
Jan 31 03:53:14 crc kubenswrapper[4827]: I0131 03:53:14.763506 4827 scope.go:117] "RemoveContainer" containerID="ee7bac41be598005dee07f1fd57a55f74ec8848805fe784d9c493917bc01c264"
Jan 31 03:53:14 crc kubenswrapper[4827]: E0131 03:53:14.764473 4827 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ee7bac41be598005dee07f1fd57a55f74ec8848805fe784d9c493917bc01c264\": container with ID starting with ee7bac41be598005dee07f1fd57a55f74ec8848805fe784d9c493917bc01c264 not found: ID does not exist" containerID="ee7bac41be598005dee07f1fd57a55f74ec8848805fe784d9c493917bc01c264"
Jan 31 03:53:14 crc kubenswrapper[4827]: I0131 03:53:14.764499 4827 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ee7bac41be598005dee07f1fd57a55f74ec8848805fe784d9c493917bc01c264"} err="failed to get container status \"ee7bac41be598005dee07f1fd57a55f74ec8848805fe784d9c493917bc01c264\": rpc error: code = NotFound desc = could not find container \"ee7bac41be598005dee07f1fd57a55f74ec8848805fe784d9c493917bc01c264\": container with ID starting with ee7bac41be598005dee07f1fd57a55f74ec8848805fe784d9c493917bc01c264 not found: ID does not exist"
Jan 31 03:53:14 crc kubenswrapper[4827]: I0131 03:53:14.764519 4827 scope.go:117] "RemoveContainer" containerID="d740f1a78f7ff99bed83300155a58db8babf7a532fed3fc5bd7a9d2d7a117f0b"
Jan 31 03:53:14 crc kubenswrapper[4827]: I0131 03:53:14.778829 4827 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-g76wz"]
Jan 31 03:53:14 crc kubenswrapper[4827]: I0131 03:53:14.781651 4827 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-g76wz"]
Jan 31 03:53:14 crc kubenswrapper[4827]: I0131 03:53:14.789693 4827 scope.go:117] "RemoveContainer" containerID="e293ec72b63f1393cfb46546ee0e94f82827ea624e218e5e1cdf8e73c91516d7"
Jan 31 03:53:14 crc kubenswrapper[4827]: I0131 03:53:14.793151 4827 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-h54h5"]
Jan 31 03:53:14 crc kubenswrapper[4827]: I0131 03:53:14.795917 4827 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-h54h5"]
Jan 31 03:53:14 crc kubenswrapper[4827]: I0131 03:53:14.806087 4827 scope.go:117] "RemoveContainer" containerID="d740f1a78f7ff99bed83300155a58db8babf7a532fed3fc5bd7a9d2d7a117f0b"
Jan 31 03:53:14 crc kubenswrapper[4827]: E0131 03:53:14.806497 4827 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d740f1a78f7ff99bed83300155a58db8babf7a532fed3fc5bd7a9d2d7a117f0b\": container with ID starting with d740f1a78f7ff99bed83300155a58db8babf7a532fed3fc5bd7a9d2d7a117f0b not found: ID does not exist" containerID="d740f1a78f7ff99bed83300155a58db8babf7a532fed3fc5bd7a9d2d7a117f0b"
Jan 31 03:53:14 crc kubenswrapper[4827]: I0131 03:53:14.806540 4827 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d740f1a78f7ff99bed83300155a58db8babf7a532fed3fc5bd7a9d2d7a117f0b"} err="failed to get container status \"d740f1a78f7ff99bed83300155a58db8babf7a532fed3fc5bd7a9d2d7a117f0b\": rpc error: code = NotFound desc = could not find container \"d740f1a78f7ff99bed83300155a58db8babf7a532fed3fc5bd7a9d2d7a117f0b\": container with ID starting with d740f1a78f7ff99bed83300155a58db8babf7a532fed3fc5bd7a9d2d7a117f0b not found: ID does not exist"
Jan 31 03:53:14 crc kubenswrapper[4827]: I0131 03:53:14.806577 4827 scope.go:117] "RemoveContainer" containerID="e293ec72b63f1393cfb46546ee0e94f82827ea624e218e5e1cdf8e73c91516d7"
Jan 31 03:53:14 crc kubenswrapper[4827]: E0131 03:53:14.806864 4827 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e293ec72b63f1393cfb46546ee0e94f82827ea624e218e5e1cdf8e73c91516d7\": container with ID starting with e293ec72b63f1393cfb46546ee0e94f82827ea624e218e5e1cdf8e73c91516d7 not found: ID does not exist" containerID="e293ec72b63f1393cfb46546ee0e94f82827ea624e218e5e1cdf8e73c91516d7"
Jan 31 03:53:14 crc kubenswrapper[4827]: I0131 03:53:14.806911 4827 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e293ec72b63f1393cfb46546ee0e94f82827ea624e218e5e1cdf8e73c91516d7"} err="failed to get container status \"e293ec72b63f1393cfb46546ee0e94f82827ea624e218e5e1cdf8e73c91516d7\": rpc error: code = NotFound desc = could not find container \"e293ec72b63f1393cfb46546ee0e94f82827ea624e218e5e1cdf8e73c91516d7\": container with ID starting with e293ec72b63f1393cfb46546ee0e94f82827ea624e218e5e1cdf8e73c91516d7 not found: ID does not exist"
Jan 31 03:53:14 crc kubenswrapper[4827]: I0131 03:53:14.806949 4827 scope.go:117] "RemoveContainer" containerID="e8a7f5ce15813c0954ca71b668e53033e6c67721cea930e16b24f7ea94c5d62e"
Jan 31 03:53:14 crc kubenswrapper[4827]: I0131 03:53:14.818698 4827 scope.go:117] "RemoveContainer" containerID="cefb1f406a66db5d53501aee9ef5748c2f42b32ec3b198243876a8c5f0042e6c"
Jan 31 03:53:14 crc kubenswrapper[4827]: I0131 03:53:14.835114 4827 scope.go:117] "RemoveContainer" containerID="ca43bc891aac1130290b11b161fdc1b14a06c9f39b9676d52b481529a3572a03"
Jan 31 03:53:14 crc kubenswrapper[4827]: I0131 03:53:14.837112 4827 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e357c738-a2f2-49a3-b122-5fe5ab45b919-catalog-content\") on node \"crc\" DevicePath \"\""
Jan 31 03:53:14 crc kubenswrapper[4827]: I0131 03:53:14.853726 4827 scope.go:117] "RemoveContainer" containerID="e8a7f5ce15813c0954ca71b668e53033e6c67721cea930e16b24f7ea94c5d62e"
Jan 31 03:53:14 crc kubenswrapper[4827]: E0131 03:53:14.854651 4827 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e8a7f5ce15813c0954ca71b668e53033e6c67721cea930e16b24f7ea94c5d62e\": container with ID starting with e8a7f5ce15813c0954ca71b668e53033e6c67721cea930e16b24f7ea94c5d62e not found: ID does not exist" containerID="e8a7f5ce15813c0954ca71b668e53033e6c67721cea930e16b24f7ea94c5d62e"
Jan 31 03:53:14 crc kubenswrapper[4827]: I0131 03:53:14.854691 4827 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e8a7f5ce15813c0954ca71b668e53033e6c67721cea930e16b24f7ea94c5d62e"} err="failed to get container status \"e8a7f5ce15813c0954ca71b668e53033e6c67721cea930e16b24f7ea94c5d62e\": rpc error: code = NotFound desc = could not find container \"e8a7f5ce15813c0954ca71b668e53033e6c67721cea930e16b24f7ea94c5d62e\": container with ID starting with e8a7f5ce15813c0954ca71b668e53033e6c67721cea930e16b24f7ea94c5d62e not found: ID does not exist"
Jan 31 03:53:14 crc kubenswrapper[4827]: I0131 03:53:14.854718 4827 scope.go:117] "RemoveContainer" containerID="cefb1f406a66db5d53501aee9ef5748c2f42b32ec3b198243876a8c5f0042e6c"
Jan 31 03:53:14 crc kubenswrapper[4827]: E0131 03:53:14.855391 4827 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cefb1f406a66db5d53501aee9ef5748c2f42b32ec3b198243876a8c5f0042e6c\": container with ID starting with cefb1f406a66db5d53501aee9ef5748c2f42b32ec3b198243876a8c5f0042e6c not found: ID does not exist" containerID="cefb1f406a66db5d53501aee9ef5748c2f42b32ec3b198243876a8c5f0042e6c"
Jan 31 03:53:14 crc kubenswrapper[4827]: I0131 03:53:14.855518 4827 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cefb1f406a66db5d53501aee9ef5748c2f42b32ec3b198243876a8c5f0042e6c"} err="failed to get container status \"cefb1f406a66db5d53501aee9ef5748c2f42b32ec3b198243876a8c5f0042e6c\": rpc error: code = NotFound desc = could not find container \"cefb1f406a66db5d53501aee9ef5748c2f42b32ec3b198243876a8c5f0042e6c\": container with ID starting with cefb1f406a66db5d53501aee9ef5748c2f42b32ec3b198243876a8c5f0042e6c not found: ID does not exist"
Jan 31 03:53:14 crc kubenswrapper[4827]: I0131 03:53:14.855635 4827 scope.go:117] "RemoveContainer" containerID="ca43bc891aac1130290b11b161fdc1b14a06c9f39b9676d52b481529a3572a03"
Jan 31 03:53:14 crc kubenswrapper[4827]: E0131 03:53:14.856063 4827 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ca43bc891aac1130290b11b161fdc1b14a06c9f39b9676d52b481529a3572a03\": container with ID starting with ca43bc891aac1130290b11b161fdc1b14a06c9f39b9676d52b481529a3572a03 not found: ID does not exist" containerID="ca43bc891aac1130290b11b161fdc1b14a06c9f39b9676d52b481529a3572a03"
Jan 31 03:53:14 crc kubenswrapper[4827]: I0131 03:53:14.856189 4827 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ca43bc891aac1130290b11b161fdc1b14a06c9f39b9676d52b481529a3572a03"} err="failed to get container status \"ca43bc891aac1130290b11b161fdc1b14a06c9f39b9676d52b481529a3572a03\": rpc error: code = NotFound desc = could not find container \"ca43bc891aac1130290b11b161fdc1b14a06c9f39b9676d52b481529a3572a03\": container with ID starting with ca43bc891aac1130290b11b161fdc1b14a06c9f39b9676d52b481529a3572a03 not found: ID does not exist"
Jan 31 03:53:14 crc kubenswrapper[4827]: I0131 03:53:14.856285 4827 scope.go:117] "RemoveContainer" containerID="c55390b54817fdcfb6ec196e28943d5d6cde97684ff2a4b0670d517e2419076e"
Jan 31 03:53:14 crc kubenswrapper[4827]: I0131 03:53:14.871332 4827 scope.go:117] "RemoveContainer" containerID="1fb3b10337cb42607cc42f2506f9009dcf266e7d569d52e8986c9b2093dfd355"
Jan 31 03:53:14 crc kubenswrapper[4827]: I0131 03:53:14.893011 4827 scope.go:117] "RemoveContainer" containerID="fcd58d253e0b8db6249e4558617cad1b2d2180c78f508194665a8a8928e4eb88"
Jan 31 03:53:14 crc kubenswrapper[4827]: I0131 03:53:14.908405 4827 scope.go:117] "RemoveContainer" containerID="c55390b54817fdcfb6ec196e28943d5d6cde97684ff2a4b0670d517e2419076e"
Jan 31 03:53:14 crc kubenswrapper[4827]: E0131 03:53:14.908957 4827 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c55390b54817fdcfb6ec196e28943d5d6cde97684ff2a4b0670d517e2419076e\": container with ID starting with c55390b54817fdcfb6ec196e28943d5d6cde97684ff2a4b0670d517e2419076e not found: ID does not exist" containerID="c55390b54817fdcfb6ec196e28943d5d6cde97684ff2a4b0670d517e2419076e"
Jan 31 03:53:14 crc kubenswrapper[4827]: I0131 03:53:14.909000 4827 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c55390b54817fdcfb6ec196e28943d5d6cde97684ff2a4b0670d517e2419076e"} err="failed to get container status \"c55390b54817fdcfb6ec196e28943d5d6cde97684ff2a4b0670d517e2419076e\": rpc error: code = NotFound desc = could not find container \"c55390b54817fdcfb6ec196e28943d5d6cde97684ff2a4b0670d517e2419076e\": container with ID starting with c55390b54817fdcfb6ec196e28943d5d6cde97684ff2a4b0670d517e2419076e not found: ID does not exist"
Jan 31 03:53:14 crc kubenswrapper[4827]: I0131 03:53:14.909028 4827 scope.go:117] "RemoveContainer" containerID="1fb3b10337cb42607cc42f2506f9009dcf266e7d569d52e8986c9b2093dfd355"
Jan 31 03:53:14 crc kubenswrapper[4827]: E0131 03:53:14.909392 4827 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1fb3b10337cb42607cc42f2506f9009dcf266e7d569d52e8986c9b2093dfd355\": container with ID starting with 1fb3b10337cb42607cc42f2506f9009dcf266e7d569d52e8986c9b2093dfd355 not found: ID does not exist" containerID="1fb3b10337cb42607cc42f2506f9009dcf266e7d569d52e8986c9b2093dfd355"
Jan 31 03:53:14 crc kubenswrapper[4827]: I0131 03:53:14.909419 4827 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1fb3b10337cb42607cc42f2506f9009dcf266e7d569d52e8986c9b2093dfd355"} err="failed to get container status \"1fb3b10337cb42607cc42f2506f9009dcf266e7d569d52e8986c9b2093dfd355\": rpc error: code = NotFound desc = could not find container \"1fb3b10337cb42607cc42f2506f9009dcf266e7d569d52e8986c9b2093dfd355\": container with ID starting with 1fb3b10337cb42607cc42f2506f9009dcf266e7d569d52e8986c9b2093dfd355 not found: ID does not exist"
Jan 31 03:53:14 crc kubenswrapper[4827]: I0131 03:53:14.909440 4827 scope.go:117] "RemoveContainer" containerID="fcd58d253e0b8db6249e4558617cad1b2d2180c78f508194665a8a8928e4eb88"
Jan 31 03:53:14 crc kubenswrapper[4827]: E0131 03:53:14.909734 4827 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fcd58d253e0b8db6249e4558617cad1b2d2180c78f508194665a8a8928e4eb88\": container with ID starting with fcd58d253e0b8db6249e4558617cad1b2d2180c78f508194665a8a8928e4eb88 not found: ID does not exist" containerID="fcd58d253e0b8db6249e4558617cad1b2d2180c78f508194665a8a8928e4eb88"
Jan 31 03:53:14 crc kubenswrapper[4827]: I0131 03:53:14.909760 4827 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fcd58d253e0b8db6249e4558617cad1b2d2180c78f508194665a8a8928e4eb88"} err="failed to get container status \"fcd58d253e0b8db6249e4558617cad1b2d2180c78f508194665a8a8928e4eb88\": rpc error: code = NotFound desc = could not find container \"fcd58d253e0b8db6249e4558617cad1b2d2180c78f508194665a8a8928e4eb88\": container with ID starting with fcd58d253e0b8db6249e4558617cad1b2d2180c78f508194665a8a8928e4eb88 not found: ID does not exist"
Jan 31 03:53:14 crc kubenswrapper[4827]: I0131 03:53:14.909780 4827 scope.go:117] "RemoveContainer" containerID="f18aea0bc5fef6afca2778b69d4c12ac7450de98d4c815c8ebee7c4750caeae8"
Jan 31 03:53:14 crc kubenswrapper[4827]: I0131 03:53:14.928155 4827 scope.go:117] "RemoveContainer" containerID="0639eb7d7ec757bcd7bc78db440ef5affeb5d605f50bceb630b406f331b5cf35"
Jan 31 03:53:14 crc kubenswrapper[4827]: I0131 03:53:14.948548 4827 scope.go:117] "RemoveContainer" containerID="aab2e5746993f3c43392059b3ef93d34a7eed7e2e15d13dc0dd7c0daaae2c51d"
Jan 31 03:53:14 crc kubenswrapper[4827]: I0131 03:53:14.969074 4827 scope.go:117] "RemoveContainer" containerID="f18aea0bc5fef6afca2778b69d4c12ac7450de98d4c815c8ebee7c4750caeae8"
Jan 31 03:53:14 crc kubenswrapper[4827]: E0131 03:53:14.969551 4827 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f18aea0bc5fef6afca2778b69d4c12ac7450de98d4c815c8ebee7c4750caeae8\": container with ID starting with f18aea0bc5fef6afca2778b69d4c12ac7450de98d4c815c8ebee7c4750caeae8 not found: ID does not exist" containerID="f18aea0bc5fef6afca2778b69d4c12ac7450de98d4c815c8ebee7c4750caeae8"
Jan 31 03:53:14 crc kubenswrapper[4827]: I0131 03:53:14.969594 4827 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f18aea0bc5fef6afca2778b69d4c12ac7450de98d4c815c8ebee7c4750caeae8"} err="failed to get container status \"f18aea0bc5fef6afca2778b69d4c12ac7450de98d4c815c8ebee7c4750caeae8\": rpc error: code = NotFound desc = could not find container \"f18aea0bc5fef6afca2778b69d4c12ac7450de98d4c815c8ebee7c4750caeae8\": container with ID starting with f18aea0bc5fef6afca2778b69d4c12ac7450de98d4c815c8ebee7c4750caeae8 not found: ID does not exist"
Jan 31 03:53:14 crc kubenswrapper[4827]: I0131 03:53:14.969622 4827 scope.go:117] "RemoveContainer" containerID="0639eb7d7ec757bcd7bc78db440ef5affeb5d605f50bceb630b406f331b5cf35"
Jan 31 03:53:14 crc kubenswrapper[4827]: E0131 03:53:14.970704 4827 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0639eb7d7ec757bcd7bc78db440ef5affeb5d605f50bceb630b406f331b5cf35\": container with ID starting with 0639eb7d7ec757bcd7bc78db440ef5affeb5d605f50bceb630b406f331b5cf35 not found: ID does not exist" containerID="0639eb7d7ec757bcd7bc78db440ef5affeb5d605f50bceb630b406f331b5cf35"
Jan 31 03:53:14 crc kubenswrapper[4827]: I0131 03:53:14.970731 4827 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0639eb7d7ec757bcd7bc78db440ef5affeb5d605f50bceb630b406f331b5cf35"} err="failed to get container status \"0639eb7d7ec757bcd7bc78db440ef5affeb5d605f50bceb630b406f331b5cf35\": rpc error: code = NotFound desc = could not find container \"0639eb7d7ec757bcd7bc78db440ef5affeb5d605f50bceb630b406f331b5cf35\": container with ID starting with 0639eb7d7ec757bcd7bc78db440ef5affeb5d605f50bceb630b406f331b5cf35 not found: ID does not exist"
Jan 31 03:53:14 crc kubenswrapper[4827]: I0131 03:53:14.970750 4827 scope.go:117] "RemoveContainer" containerID="aab2e5746993f3c43392059b3ef93d34a7eed7e2e15d13dc0dd7c0daaae2c51d"
Jan 31 03:53:14 crc kubenswrapper[4827]: E0131 03:53:14.974481 4827 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"aab2e5746993f3c43392059b3ef93d34a7eed7e2e15d13dc0dd7c0daaae2c51d\": container with ID starting with aab2e5746993f3c43392059b3ef93d34a7eed7e2e15d13dc0dd7c0daaae2c51d not found: ID does not exist" containerID="aab2e5746993f3c43392059b3ef93d34a7eed7e2e15d13dc0dd7c0daaae2c51d"
Jan 31 03:53:14 crc kubenswrapper[4827]: I0131 03:53:14.974511 4827 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"aab2e5746993f3c43392059b3ef93d34a7eed7e2e15d13dc0dd7c0daaae2c51d"} err="failed to get container status \"aab2e5746993f3c43392059b3ef93d34a7eed7e2e15d13dc0dd7c0daaae2c51d\": rpc error: code = NotFound desc = could not find container \"aab2e5746993f3c43392059b3ef93d34a7eed7e2e15d13dc0dd7c0daaae2c51d\": container with ID starting with aab2e5746993f3c43392059b3ef93d34a7eed7e2e15d13dc0dd7c0daaae2c51d not found: ID does not exist"
Jan 31 03:53:14 crc kubenswrapper[4827]: I0131 03:53:14.980184 4827 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-66df7c8f76-vx7v2"
Jan 31 03:53:15 crc kubenswrapper[4827]: I0131 03:53:15.034340 4827 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-s7psb"]
Jan 31 03:53:15 crc kubenswrapper[4827]: I0131 03:53:15.070871 4827 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-8lngw"]
Jan 31 03:53:15 crc kubenswrapper[4827]: I0131 03:53:15.078815 4827 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-8lngw"]
Jan 31 03:53:15 crc kubenswrapper[4827]: I0131 03:53:15.373119 4827 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-7krl2"]
Jan 31 03:53:15 crc kubenswrapper[4827]: E0131 03:53:15.373693 4827 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cc0facf8-c192-4df4-bb9b-68f123fd7b21" containerName="marketplace-operator"
Jan 31 03:53:15 crc kubenswrapper[4827]: I0131 03:53:15.373709 4827 state_mem.go:107] "Deleted CPUSet assignment" podUID="cc0facf8-c192-4df4-bb9b-68f123fd7b21" containerName="marketplace-operator"
Jan 31 03:53:15 crc kubenswrapper[4827]: E0131 03:53:15.373720 4827 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e357c738-a2f2-49a3-b122-5fe5ab45b919" containerName="extract-utilities"
Jan 31 03:53:15 crc kubenswrapper[4827]: I0131 03:53:15.373728 4827 state_mem.go:107] "Deleted CPUSet assignment" podUID="e357c738-a2f2-49a3-b122-5fe5ab45b919" containerName="extract-utilities"
Jan 31 03:53:15 crc kubenswrapper[4827]: E0131 03:53:15.373745 4827 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e357c738-a2f2-49a3-b122-5fe5ab45b919" containerName="registry-server"
Jan 31 03:53:15 crc kubenswrapper[4827]: I0131 03:53:15.373755 4827 state_mem.go:107] "Deleted CPUSet assignment" podUID="e357c738-a2f2-49a3-b122-5fe5ab45b919" containerName="registry-server"
Jan 31 03:53:15 crc kubenswrapper[4827]: E0131 03:53:15.373770 4827 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="36f2dbb1-6370-4a38-8702-edf89c8b4668" containerName="registry-server"
Jan 31 03:53:15 crc kubenswrapper[4827]: I0131 03:53:15.373780 4827 state_mem.go:107] "Deleted CPUSet assignment" podUID="36f2dbb1-6370-4a38-8702-edf89c8b4668" containerName="registry-server"
Jan 31 03:53:15 crc kubenswrapper[4827]: E0131 03:53:15.373789 4827 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="36f2dbb1-6370-4a38-8702-edf89c8b4668" containerName="extract-utilities"
Jan 31 03:53:15 crc kubenswrapper[4827]: I0131 03:53:15.373797 4827 state_mem.go:107] "Deleted CPUSet assignment" podUID="36f2dbb1-6370-4a38-8702-edf89c8b4668" containerName="extract-utilities"
Jan 31 03:53:15 crc kubenswrapper[4827]: E0131 03:53:15.373809 4827 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b53b07cf-d0d5-4774-89fe-89765537cc9b" containerName="extract-utilities"
Jan 31 03:53:15 crc kubenswrapper[4827]: I0131 03:53:15.373817 4827 state_mem.go:107] "Deleted CPUSet assignment" podUID="b53b07cf-d0d5-4774-89fe-89765537cc9b" containerName="extract-utilities"
Jan 31 03:53:15 crc kubenswrapper[4827]: E0131 03:53:15.373827 4827 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e357c738-a2f2-49a3-b122-5fe5ab45b919" containerName="extract-content"
Jan 31 03:53:15 crc kubenswrapper[4827]: I0131 03:53:15.373835 4827 state_mem.go:107] "Deleted CPUSet assignment" podUID="e357c738-a2f2-49a3-b122-5fe5ab45b919" containerName="extract-content"
Jan 31 03:53:15 crc kubenswrapper[4827]: E0131 03:53:15.373852 4827 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a4c93e4f-eac3-4794-a748-51adfd8b961c" containerName="registry-server"
Jan 31 03:53:15 crc kubenswrapper[4827]: I0131 03:53:15.373860 4827 state_mem.go:107] "Deleted CPUSet assignment" podUID="a4c93e4f-eac3-4794-a748-51adfd8b961c" containerName="registry-server"
Jan 31 03:53:15 crc kubenswrapper[4827]: E0131 03:53:15.373869 4827 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="36f2dbb1-6370-4a38-8702-edf89c8b4668" containerName="extract-content"
Jan 31 03:53:15 crc kubenswrapper[4827]: I0131 03:53:15.373896 4827 state_mem.go:107] "Deleted CPUSet assignment" podUID="36f2dbb1-6370-4a38-8702-edf89c8b4668" containerName="extract-content"
Jan 31 03:53:15 crc kubenswrapper[4827]: E0131 03:53:15.373906 4827 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b53b07cf-d0d5-4774-89fe-89765537cc9b" containerName="extract-content"
Jan 31 03:53:15 crc kubenswrapper[4827]: I0131 03:53:15.373915 4827 state_mem.go:107] "Deleted CPUSet assignment" podUID="b53b07cf-d0d5-4774-89fe-89765537cc9b" containerName="extract-content"
Jan 31 03:53:15 crc kubenswrapper[4827]: E0131 03:53:15.373926 4827 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a4c93e4f-eac3-4794-a748-51adfd8b961c" containerName="extract-content"
Jan 31 03:53:15 crc kubenswrapper[4827]: I0131 03:53:15.373934 4827 state_mem.go:107] "Deleted CPUSet assignment" podUID="a4c93e4f-eac3-4794-a748-51adfd8b961c" containerName="extract-content"
Jan 31 03:53:15 crc kubenswrapper[4827]: E0131 03:53:15.373943 4827 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a4c93e4f-eac3-4794-a748-51adfd8b961c" containerName="extract-utilities"
Jan 31 03:53:15 crc kubenswrapper[4827]: I0131 03:53:15.373953 4827 state_mem.go:107] "Deleted CPUSet assignment" podUID="a4c93e4f-eac3-4794-a748-51adfd8b961c" containerName="extract-utilities"
Jan 31 03:53:15 crc kubenswrapper[4827]: E0131 03:53:15.373963 4827 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b53b07cf-d0d5-4774-89fe-89765537cc9b" containerName="registry-server"
Jan 31 03:53:15 crc kubenswrapper[4827]: I0131 03:53:15.373972 4827 state_mem.go:107] "Deleted CPUSet assignment" podUID="b53b07cf-d0d5-4774-89fe-89765537cc9b" containerName="registry-server"
Jan 31 03:53:15 crc kubenswrapper[4827]: I0131 03:53:15.374102 4827 memory_manager.go:354] "RemoveStaleState removing state" podUID="36f2dbb1-6370-4a38-8702-edf89c8b4668" containerName="registry-server"
Jan 31 03:53:15 crc kubenswrapper[4827]: I0131 03:53:15.374113 4827 memory_manager.go:354] "RemoveStaleState removing state" podUID="cc0facf8-c192-4df4-bb9b-68f123fd7b21" containerName="marketplace-operator"
Jan 31 03:53:15 crc kubenswrapper[4827]: I0131 03:53:15.374124 4827 memory_manager.go:354] "RemoveStaleState removing state" podUID="e357c738-a2f2-49a3-b122-5fe5ab45b919" containerName="registry-server"
Jan 31 03:53:15 crc kubenswrapper[4827]: I0131 03:53:15.374136 4827 memory_manager.go:354] "RemoveStaleState removing state" podUID="cc0facf8-c192-4df4-bb9b-68f123fd7b21" containerName="marketplace-operator"
Jan 31 03:53:15 crc kubenswrapper[4827]: I0131 03:53:15.374146 4827 memory_manager.go:354] "RemoveStaleState removing state" podUID="b53b07cf-d0d5-4774-89fe-89765537cc9b" containerName="registry-server"
Jan 31 03:53:15 crc kubenswrapper[4827]: I0131 03:53:15.374159 4827 memory_manager.go:354] "RemoveStaleState removing state" podUID="a4c93e4f-eac3-4794-a748-51adfd8b961c" containerName="registry-server"
Jan 31 03:53:15 crc kubenswrapper[4827]: E0131 03:53:15.374273 4827 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cc0facf8-c192-4df4-bb9b-68f123fd7b21" containerName="marketplace-operator"
Jan 31 03:53:15 crc kubenswrapper[4827]: I0131 03:53:15.374283 4827 state_mem.go:107] "Deleted CPUSet assignment" podUID="cc0facf8-c192-4df4-bb9b-68f123fd7b21" containerName="marketplace-operator"
Jan 31 03:53:15 crc kubenswrapper[4827]: I0131 03:53:15.375054 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-7krl2"
Jan 31 03:53:15 crc kubenswrapper[4827]: I0131 03:53:15.377809 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g"
Jan 31 03:53:15 crc kubenswrapper[4827]: I0131 03:53:15.392562 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-7krl2"]
Jan 31 03:53:15 crc kubenswrapper[4827]: I0131 03:53:15.546404 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mg8lc\" (UniqueName: \"kubernetes.io/projected/c69e48aa-b820-4027-a322-cca18339d441-kube-api-access-mg8lc\") pod \"certified-operators-7krl2\" (UID: \"c69e48aa-b820-4027-a322-cca18339d441\") " pod="openshift-marketplace/certified-operators-7krl2"
Jan 31 03:53:15 crc kubenswrapper[4827]: I0131 03:53:15.546472 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c69e48aa-b820-4027-a322-cca18339d441-catalog-content\") pod \"certified-operators-7krl2\" (UID: \"c69e48aa-b820-4027-a322-cca18339d441\") " pod="openshift-marketplace/certified-operators-7krl2"
Jan 31 03:53:15 crc kubenswrapper[4827]: I0131 03:53:15.546526 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c69e48aa-b820-4027-a322-cca18339d441-utilities\") pod \"certified-operators-7krl2\" (UID: \"c69e48aa-b820-4027-a322-cca18339d441\") " pod="openshift-marketplace/certified-operators-7krl2"
Jan 31 03:53:15 crc kubenswrapper[4827]: I0131 03:53:15.648816 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c69e48aa-b820-4027-a322-cca18339d441-catalog-content\") pod \"certified-operators-7krl2\" (UID: \"c69e48aa-b820-4027-a322-cca18339d441\") " pod="openshift-marketplace/certified-operators-7krl2"
Jan 31 03:53:15 crc kubenswrapper[4827]: I0131 03:53:15.649026 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c69e48aa-b820-4027-a322-cca18339d441-utilities\") pod \"certified-operators-7krl2\" (UID: \"c69e48aa-b820-4027-a322-cca18339d441\") " pod="openshift-marketplace/certified-operators-7krl2"
Jan 31 03:53:15 crc kubenswrapper[4827]: I0131 03:53:15.649184 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mg8lc\" (UniqueName: \"kubernetes.io/projected/c69e48aa-b820-4027-a322-cca18339d441-kube-api-access-mg8lc\") pod \"certified-operators-7krl2\" (UID: \"c69e48aa-b820-4027-a322-cca18339d441\") " pod="openshift-marketplace/certified-operators-7krl2"
Jan 31 03:53:15 crc kubenswrapper[4827]: I0131 03:53:15.649185 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c69e48aa-b820-4027-a322-cca18339d441-catalog-content\") pod \"certified-operators-7krl2\" (UID: \"c69e48aa-b820-4027-a322-cca18339d441\") " pod="openshift-marketplace/certified-operators-7krl2"
Jan 31 03:53:15 crc kubenswrapper[4827]: I0131 03:53:15.649404 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c69e48aa-b820-4027-a322-cca18339d441-utilities\") pod \"certified-operators-7krl2\" (UID: \"c69e48aa-b820-4027-a322-cca18339d441\") " pod="openshift-marketplace/certified-operators-7krl2"
Jan 31 03:53:15 crc kubenswrapper[4827]: I0131 03:53:15.679913 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mg8lc\" (UniqueName: \"kubernetes.io/projected/c69e48aa-b820-4027-a322-cca18339d441-kube-api-access-mg8lc\") pod \"certified-operators-7krl2\" (UID: \"c69e48aa-b820-4027-a322-cca18339d441\") " pod="openshift-marketplace/certified-operators-7krl2"
Jan 31 03:53:15 crc kubenswrapper[4827]: I0131 03:53:15.690326 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-7krl2"
Jan 31 03:53:15 crc kubenswrapper[4827]: I0131 03:53:15.728258 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-qg47f" event={"ID":"14a103c0-b784-4634-9d0e-07cccc0795ef","Type":"ContainerStarted","Data":"f26b61dcbe86b73c89023cc4793a68674004f9b41e18f6516ba1484598704274"}
Jan 31 03:53:15 crc kubenswrapper[4827]: I0131 03:53:15.729530 4827 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-qg47f"
Jan 31 03:53:15 crc kubenswrapper[4827]: I0131 03:53:15.736824 4827 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-qg47f"
Jan 31 03:53:15 crc kubenswrapper[4827]: I0131 03:53:15.750446 4827 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-79b997595-qg47f" podStartSLOduration=1.7504289960000001 podStartE2EDuration="1.750428996s" podCreationTimestamp="2026-01-31 03:53:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 03:53:15.745695714 +0000 UTC m=+388.432776173" watchObservedRunningTime="2026-01-31 03:53:15.750428996 +0000 UTC m=+388.437509465"
Jan 31 03:53:15 crc kubenswrapper[4827]: I0131 03:53:15.884270 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-7krl2"]
Jan 31 03:53:16 crc kubenswrapper[4827]: I0131 03:53:16.117665 4827 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="36f2dbb1-6370-4a38-8702-edf89c8b4668" path="/var/lib/kubelet/pods/36f2dbb1-6370-4a38-8702-edf89c8b4668/volumes"
Jan 31 03:53:16 crc kubenswrapper[4827]: I0131 03:53:16.118651 4827 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a4c93e4f-eac3-4794-a748-51adfd8b961c" path="/var/lib/kubelet/pods/a4c93e4f-eac3-4794-a748-51adfd8b961c/volumes"
Jan 31 03:53:16 crc kubenswrapper[4827]: I0131 03:53:16.119239 4827 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b53b07cf-d0d5-4774-89fe-89765537cc9b" path="/var/lib/kubelet/pods/b53b07cf-d0d5-4774-89fe-89765537cc9b/volumes"
Jan 31 03:53:16 crc kubenswrapper[4827]: I0131 03:53:16.120272 4827 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cc0facf8-c192-4df4-bb9b-68f123fd7b21" path="/var/lib/kubelet/pods/cc0facf8-c192-4df4-bb9b-68f123fd7b21/volumes"
Jan 31 03:53:16 crc kubenswrapper[4827]: I0131 03:53:16.120715 4827 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e357c738-a2f2-49a3-b122-5fe5ab45b919" path="/var/lib/kubelet/pods/e357c738-a2f2-49a3-b122-5fe5ab45b919/volumes"
Jan 31 03:53:16 crc kubenswrapper[4827]: I0131 03:53:16.741876 4827 generic.go:334] "Generic (PLEG): container finished" podID="c69e48aa-b820-4027-a322-cca18339d441" containerID="e14c298674c6b09dcf1354c109f8af56f600a5f54d459b67f9c7cd7808365298" exitCode=0
Jan 31 03:53:16 crc kubenswrapper[4827]: I0131 03:53:16.742134 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-7krl2" event={"ID":"c69e48aa-b820-4027-a322-cca18339d441","Type":"ContainerDied","Data":"e14c298674c6b09dcf1354c109f8af56f600a5f54d459b67f9c7cd7808365298"}
Jan 31 03:53:16 crc kubenswrapper[4827]: I0131 03:53:16.744055 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-7krl2" event={"ID":"c69e48aa-b820-4027-a322-cca18339d441","Type":"ContainerStarted","Data":"13818c96adc9fd19a3cb1411d4e58256d45c4840aa578263ce0aa36dc87b8888"}
Jan 31 03:53:17 crc kubenswrapper[4827]: I0131 03:53:17.184932 4827 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-sgpz2"]
Jan 31 03:53:17 crc kubenswrapper[4827]: I0131 03:53:17.186031 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-sgpz2"
Jan 31 03:53:17 crc kubenswrapper[4827]: I0131 03:53:17.189151 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb"
Jan 31 03:53:17 crc kubenswrapper[4827]: I0131 03:53:17.189804 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-sgpz2"]
Jan 31 03:53:17 crc kubenswrapper[4827]: I0131 03:53:17.285173 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5h9h6\" (UniqueName: \"kubernetes.io/projected/9405c6d0-837d-47f0-be6c-79518c22405d-kube-api-access-5h9h6\") pod \"redhat-marketplace-sgpz2\" (UID: \"9405c6d0-837d-47f0-be6c-79518c22405d\") " pod="openshift-marketplace/redhat-marketplace-sgpz2"
Jan 31 03:53:17 crc kubenswrapper[4827]: I0131 03:53:17.286091 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9405c6d0-837d-47f0-be6c-79518c22405d-catalog-content\") pod \"redhat-marketplace-sgpz2\" (UID: \"9405c6d0-837d-47f0-be6c-79518c22405d\") " pod="openshift-marketplace/redhat-marketplace-sgpz2"
Jan 31 03:53:17 crc kubenswrapper[4827]: I0131 03:53:17.286385 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9405c6d0-837d-47f0-be6c-79518c22405d-utilities\") pod \"redhat-marketplace-sgpz2\" (UID: \"9405c6d0-837d-47f0-be6c-79518c22405d\") " pod="openshift-marketplace/redhat-marketplace-sgpz2"
Jan 31 03:53:17 crc kubenswrapper[4827]: I0131 03:53:17.371066 4827 patch_prober.go:28] interesting pod/machine-config-daemon-jxh94 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 31 03:53:17 crc kubenswrapper[4827]: I0131 03:53:17.371116 4827 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-jxh94" podUID="e63dbb73-e1a2-4796-83c5-2a88e55566b5" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 31 03:53:17 crc kubenswrapper[4827]: I0131 03:53:17.387756 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5h9h6\" (UniqueName: \"kubernetes.io/projected/9405c6d0-837d-47f0-be6c-79518c22405d-kube-api-access-5h9h6\") pod \"redhat-marketplace-sgpz2\" (UID: \"9405c6d0-837d-47f0-be6c-79518c22405d\") " pod="openshift-marketplace/redhat-marketplace-sgpz2"
Jan 31 03:53:17 crc kubenswrapper[4827]: I0131 03:53:17.387832 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9405c6d0-837d-47f0-be6c-79518c22405d-catalog-content\") pod \"redhat-marketplace-sgpz2\" (UID: \"9405c6d0-837d-47f0-be6c-79518c22405d\") " pod="openshift-marketplace/redhat-marketplace-sgpz2"
Jan 31 03:53:17 crc kubenswrapper[4827]: I0131 03:53:17.387860 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9405c6d0-837d-47f0-be6c-79518c22405d-utilities\") pod \"redhat-marketplace-sgpz2\" (UID: \"9405c6d0-837d-47f0-be6c-79518c22405d\") " pod="openshift-marketplace/redhat-marketplace-sgpz2"
Jan 31 03:53:17 crc kubenswrapper[4827]: I0131 03:53:17.388283 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9405c6d0-837d-47f0-be6c-79518c22405d-utilities\") pod \"redhat-marketplace-sgpz2\" (UID: \"9405c6d0-837d-47f0-be6c-79518c22405d\") " pod="openshift-marketplace/redhat-marketplace-sgpz2"
Jan 31 03:53:17 crc kubenswrapper[4827]: I0131 03:53:17.388723 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9405c6d0-837d-47f0-be6c-79518c22405d-catalog-content\") pod \"redhat-marketplace-sgpz2\" (UID: \"9405c6d0-837d-47f0-be6c-79518c22405d\") " pod="openshift-marketplace/redhat-marketplace-sgpz2"
Jan 31 03:53:17 crc kubenswrapper[4827]: I0131 03:53:17.408378 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5h9h6\" (UniqueName: \"kubernetes.io/projected/9405c6d0-837d-47f0-be6c-79518c22405d-kube-api-access-5h9h6\") pod \"redhat-marketplace-sgpz2\" (UID: \"9405c6d0-837d-47f0-be6c-79518c22405d\") " pod="openshift-marketplace/redhat-marketplace-sgpz2"
Jan 31 03:53:17 crc kubenswrapper[4827]: I0131 03:53:17.503478 4827 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-sgpz2" Jan 31 03:53:17 crc kubenswrapper[4827]: I0131 03:53:17.751230 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-7krl2" event={"ID":"c69e48aa-b820-4027-a322-cca18339d441","Type":"ContainerStarted","Data":"c7d63976c9c9acd00b00c3522e5299a0620f10abfebbccdb1ab754915e5efd4b"} Jan 31 03:53:17 crc kubenswrapper[4827]: I0131 03:53:17.767275 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-sgpz2"] Jan 31 03:53:17 crc kubenswrapper[4827]: I0131 03:53:17.775918 4827 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-pnr9t"] Jan 31 03:53:17 crc kubenswrapper[4827]: I0131 03:53:17.778406 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-pnr9t" Jan 31 03:53:17 crc kubenswrapper[4827]: I0131 03:53:17.782345 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Jan 31 03:53:17 crc kubenswrapper[4827]: I0131 03:53:17.787526 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-pnr9t"] Jan 31 03:53:17 crc kubenswrapper[4827]: W0131 03:53:17.792356 4827 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9405c6d0_837d_47f0_be6c_79518c22405d.slice/crio-855b18b34bdbd43d8a3316f18f6c333c3473eb8c5a9c83011be1714d04e3d098 WatchSource:0}: Error finding container 855b18b34bdbd43d8a3316f18f6c333c3473eb8c5a9c83011be1714d04e3d098: Status 404 returned error can't find the container with id 855b18b34bdbd43d8a3316f18f6c333c3473eb8c5a9c83011be1714d04e3d098 Jan 31 03:53:17 crc kubenswrapper[4827]: I0131 03:53:17.893233 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/77ae4727-de92-4d11-b951-9b2a734acc65-catalog-content\") pod \"redhat-operators-pnr9t\" (UID: \"77ae4727-de92-4d11-b951-9b2a734acc65\") " pod="openshift-marketplace/redhat-operators-pnr9t" Jan 31 03:53:17 crc kubenswrapper[4827]: I0131 03:53:17.893280 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/77ae4727-de92-4d11-b951-9b2a734acc65-utilities\") pod \"redhat-operators-pnr9t\" (UID: \"77ae4727-de92-4d11-b951-9b2a734acc65\") " pod="openshift-marketplace/redhat-operators-pnr9t" Jan 31 03:53:17 crc kubenswrapper[4827]: I0131 03:53:17.893332 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-95bmt\" (UniqueName: \"kubernetes.io/projected/77ae4727-de92-4d11-b951-9b2a734acc65-kube-api-access-95bmt\") pod \"redhat-operators-pnr9t\" (UID: \"77ae4727-de92-4d11-b951-9b2a734acc65\") " pod="openshift-marketplace/redhat-operators-pnr9t" Jan 31 03:53:17 crc kubenswrapper[4827]: I0131 03:53:17.994867 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-95bmt\" (UniqueName: \"kubernetes.io/projected/77ae4727-de92-4d11-b951-9b2a734acc65-kube-api-access-95bmt\") pod \"redhat-operators-pnr9t\" (UID: \"77ae4727-de92-4d11-b951-9b2a734acc65\") " pod="openshift-marketplace/redhat-operators-pnr9t" Jan 31 03:53:17 crc kubenswrapper[4827]: I0131 03:53:17.994988 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/77ae4727-de92-4d11-b951-9b2a734acc65-catalog-content\") pod \"redhat-operators-pnr9t\" (UID: \"77ae4727-de92-4d11-b951-9b2a734acc65\") " pod="openshift-marketplace/redhat-operators-pnr9t" Jan 31 03:53:17 crc kubenswrapper[4827]: I0131 03:53:17.995028 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/77ae4727-de92-4d11-b951-9b2a734acc65-utilities\") pod \"redhat-operators-pnr9t\" (UID: \"77ae4727-de92-4d11-b951-9b2a734acc65\") " pod="openshift-marketplace/redhat-operators-pnr9t" Jan 31 03:53:17 crc kubenswrapper[4827]: I0131 03:53:17.995536 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/77ae4727-de92-4d11-b951-9b2a734acc65-utilities\") pod \"redhat-operators-pnr9t\" (UID: \"77ae4727-de92-4d11-b951-9b2a734acc65\") " pod="openshift-marketplace/redhat-operators-pnr9t" Jan 31 03:53:17 crc kubenswrapper[4827]: I0131 03:53:17.995637 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/77ae4727-de92-4d11-b951-9b2a734acc65-catalog-content\") pod \"redhat-operators-pnr9t\" (UID: \"77ae4727-de92-4d11-b951-9b2a734acc65\") " pod="openshift-marketplace/redhat-operators-pnr9t" Jan 31 03:53:18 crc kubenswrapper[4827]: I0131 03:53:18.019101 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-95bmt\" (UniqueName: \"kubernetes.io/projected/77ae4727-de92-4d11-b951-9b2a734acc65-kube-api-access-95bmt\") pod \"redhat-operators-pnr9t\" (UID: \"77ae4727-de92-4d11-b951-9b2a734acc65\") " pod="openshift-marketplace/redhat-operators-pnr9t" Jan 31 03:53:18 crc kubenswrapper[4827]: I0131 03:53:18.120395 4827 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-pnr9t" Jan 31 03:53:18 crc kubenswrapper[4827]: I0131 03:53:18.512308 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-pnr9t"] Jan 31 03:53:18 crc kubenswrapper[4827]: W0131 03:53:18.523157 4827 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod77ae4727_de92_4d11_b951_9b2a734acc65.slice/crio-0dd5fb3973d6c620bcf3528d64941cf99026c4260844763f4b2fddd50a2072ae WatchSource:0}: Error finding container 0dd5fb3973d6c620bcf3528d64941cf99026c4260844763f4b2fddd50a2072ae: Status 404 returned error can't find the container with id 0dd5fb3973d6c620bcf3528d64941cf99026c4260844763f4b2fddd50a2072ae Jan 31 03:53:18 crc kubenswrapper[4827]: I0131 03:53:18.758551 4827 generic.go:334] "Generic (PLEG): container finished" podID="c69e48aa-b820-4027-a322-cca18339d441" containerID="c7d63976c9c9acd00b00c3522e5299a0620f10abfebbccdb1ab754915e5efd4b" exitCode=0 Jan 31 03:53:18 crc kubenswrapper[4827]: I0131 03:53:18.758771 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-7krl2" event={"ID":"c69e48aa-b820-4027-a322-cca18339d441","Type":"ContainerDied","Data":"c7d63976c9c9acd00b00c3522e5299a0620f10abfebbccdb1ab754915e5efd4b"} Jan 31 03:53:18 crc kubenswrapper[4827]: I0131 03:53:18.763541 4827 generic.go:334] "Generic (PLEG): container finished" podID="9405c6d0-837d-47f0-be6c-79518c22405d" containerID="53eeaba116961737953b34270a4cde746c68751963023628f5e695653ad821e8" exitCode=0 Jan 31 03:53:18 crc kubenswrapper[4827]: I0131 03:53:18.763608 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-sgpz2" event={"ID":"9405c6d0-837d-47f0-be6c-79518c22405d","Type":"ContainerDied","Data":"53eeaba116961737953b34270a4cde746c68751963023628f5e695653ad821e8"} Jan 31 03:53:18 crc kubenswrapper[4827]: I0131 
03:53:18.763635 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-sgpz2" event={"ID":"9405c6d0-837d-47f0-be6c-79518c22405d","Type":"ContainerStarted","Data":"855b18b34bdbd43d8a3316f18f6c333c3473eb8c5a9c83011be1714d04e3d098"} Jan 31 03:53:18 crc kubenswrapper[4827]: I0131 03:53:18.771227 4827 generic.go:334] "Generic (PLEG): container finished" podID="77ae4727-de92-4d11-b951-9b2a734acc65" containerID="93ade24f4a64e840e7b64a76fe8c242bc68385c1e0f3929085360eb27a78455c" exitCode=0 Jan 31 03:53:18 crc kubenswrapper[4827]: I0131 03:53:18.771259 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-pnr9t" event={"ID":"77ae4727-de92-4d11-b951-9b2a734acc65","Type":"ContainerDied","Data":"93ade24f4a64e840e7b64a76fe8c242bc68385c1e0f3929085360eb27a78455c"} Jan 31 03:53:18 crc kubenswrapper[4827]: I0131 03:53:18.771279 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-pnr9t" event={"ID":"77ae4727-de92-4d11-b951-9b2a734acc65","Type":"ContainerStarted","Data":"0dd5fb3973d6c620bcf3528d64941cf99026c4260844763f4b2fddd50a2072ae"} Jan 31 03:53:19 crc kubenswrapper[4827]: I0131 03:53:19.574968 4827 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-ppx2z"] Jan 31 03:53:19 crc kubenswrapper[4827]: I0131 03:53:19.581506 4827 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-ppx2z" Jan 31 03:53:19 crc kubenswrapper[4827]: I0131 03:53:19.584505 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Jan 31 03:53:19 crc kubenswrapper[4827]: I0131 03:53:19.591244 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-ppx2z"] Jan 31 03:53:19 crc kubenswrapper[4827]: I0131 03:53:19.720910 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4cf906b5-5bd6-43ba-82b4-008d0b9f7b35-utilities\") pod \"community-operators-ppx2z\" (UID: \"4cf906b5-5bd6-43ba-82b4-008d0b9f7b35\") " pod="openshift-marketplace/community-operators-ppx2z" Jan 31 03:53:19 crc kubenswrapper[4827]: I0131 03:53:19.721018 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4cf906b5-5bd6-43ba-82b4-008d0b9f7b35-catalog-content\") pod \"community-operators-ppx2z\" (UID: \"4cf906b5-5bd6-43ba-82b4-008d0b9f7b35\") " pod="openshift-marketplace/community-operators-ppx2z" Jan 31 03:53:19 crc kubenswrapper[4827]: I0131 03:53:19.721072 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n4z72\" (UniqueName: \"kubernetes.io/projected/4cf906b5-5bd6-43ba-82b4-008d0b9f7b35-kube-api-access-n4z72\") pod \"community-operators-ppx2z\" (UID: \"4cf906b5-5bd6-43ba-82b4-008d0b9f7b35\") " pod="openshift-marketplace/community-operators-ppx2z" Jan 31 03:53:19 crc kubenswrapper[4827]: I0131 03:53:19.790608 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-7krl2" 
event={"ID":"c69e48aa-b820-4027-a322-cca18339d441","Type":"ContainerStarted","Data":"695894ed778b91528d1c5a08dc133882bc388a190e2514a0449c11591ecdcf87"} Jan 31 03:53:19 crc kubenswrapper[4827]: I0131 03:53:19.792581 4827 generic.go:334] "Generic (PLEG): container finished" podID="9405c6d0-837d-47f0-be6c-79518c22405d" containerID="3421d498abe75b99f7a4a0ecadad41c249191ccdc07ff3a4baab92a1a2c31042" exitCode=0 Jan 31 03:53:19 crc kubenswrapper[4827]: I0131 03:53:19.792634 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-sgpz2" event={"ID":"9405c6d0-837d-47f0-be6c-79518c22405d","Type":"ContainerDied","Data":"3421d498abe75b99f7a4a0ecadad41c249191ccdc07ff3a4baab92a1a2c31042"} Jan 31 03:53:19 crc kubenswrapper[4827]: I0131 03:53:19.795929 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-pnr9t" event={"ID":"77ae4727-de92-4d11-b951-9b2a734acc65","Type":"ContainerStarted","Data":"3b5e4e58db32f8a244b5ac027f5208d7c34d11859d21cd0e149b0f96eb76ac2f"} Jan 31 03:53:19 crc kubenswrapper[4827]: I0131 03:53:19.812620 4827 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-7krl2" podStartSLOduration=2.29787803 podStartE2EDuration="4.81260114s" podCreationTimestamp="2026-01-31 03:53:15 +0000 UTC" firstStartedPulling="2026-01-31 03:53:16.745533044 +0000 UTC m=+389.432613543" lastFinishedPulling="2026-01-31 03:53:19.260256194 +0000 UTC m=+391.947336653" observedRunningTime="2026-01-31 03:53:19.808369443 +0000 UTC m=+392.495449882" watchObservedRunningTime="2026-01-31 03:53:19.81260114 +0000 UTC m=+392.499681589" Jan 31 03:53:19 crc kubenswrapper[4827]: I0131 03:53:19.823052 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n4z72\" (UniqueName: \"kubernetes.io/projected/4cf906b5-5bd6-43ba-82b4-008d0b9f7b35-kube-api-access-n4z72\") pod \"community-operators-ppx2z\" (UID: 
\"4cf906b5-5bd6-43ba-82b4-008d0b9f7b35\") " pod="openshift-marketplace/community-operators-ppx2z" Jan 31 03:53:19 crc kubenswrapper[4827]: I0131 03:53:19.823141 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4cf906b5-5bd6-43ba-82b4-008d0b9f7b35-utilities\") pod \"community-operators-ppx2z\" (UID: \"4cf906b5-5bd6-43ba-82b4-008d0b9f7b35\") " pod="openshift-marketplace/community-operators-ppx2z" Jan 31 03:53:19 crc kubenswrapper[4827]: I0131 03:53:19.823174 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4cf906b5-5bd6-43ba-82b4-008d0b9f7b35-catalog-content\") pod \"community-operators-ppx2z\" (UID: \"4cf906b5-5bd6-43ba-82b4-008d0b9f7b35\") " pod="openshift-marketplace/community-operators-ppx2z" Jan 31 03:53:19 crc kubenswrapper[4827]: I0131 03:53:19.823603 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4cf906b5-5bd6-43ba-82b4-008d0b9f7b35-catalog-content\") pod \"community-operators-ppx2z\" (UID: \"4cf906b5-5bd6-43ba-82b4-008d0b9f7b35\") " pod="openshift-marketplace/community-operators-ppx2z" Jan 31 03:53:19 crc kubenswrapper[4827]: I0131 03:53:19.823725 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4cf906b5-5bd6-43ba-82b4-008d0b9f7b35-utilities\") pod \"community-operators-ppx2z\" (UID: \"4cf906b5-5bd6-43ba-82b4-008d0b9f7b35\") " pod="openshift-marketplace/community-operators-ppx2z" Jan 31 03:53:19 crc kubenswrapper[4827]: I0131 03:53:19.844057 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n4z72\" (UniqueName: \"kubernetes.io/projected/4cf906b5-5bd6-43ba-82b4-008d0b9f7b35-kube-api-access-n4z72\") pod \"community-operators-ppx2z\" (UID: \"4cf906b5-5bd6-43ba-82b4-008d0b9f7b35\") " 
pod="openshift-marketplace/community-operators-ppx2z" Jan 31 03:53:19 crc kubenswrapper[4827]: I0131 03:53:19.908409 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-ppx2z" Jan 31 03:53:20 crc kubenswrapper[4827]: I0131 03:53:20.374122 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-ppx2z"] Jan 31 03:53:20 crc kubenswrapper[4827]: I0131 03:53:20.802987 4827 generic.go:334] "Generic (PLEG): container finished" podID="77ae4727-de92-4d11-b951-9b2a734acc65" containerID="3b5e4e58db32f8a244b5ac027f5208d7c34d11859d21cd0e149b0f96eb76ac2f" exitCode=0 Jan 31 03:53:20 crc kubenswrapper[4827]: I0131 03:53:20.803186 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-pnr9t" event={"ID":"77ae4727-de92-4d11-b951-9b2a734acc65","Type":"ContainerDied","Data":"3b5e4e58db32f8a244b5ac027f5208d7c34d11859d21cd0e149b0f96eb76ac2f"} Jan 31 03:53:20 crc kubenswrapper[4827]: I0131 03:53:20.805278 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-ppx2z" event={"ID":"4cf906b5-5bd6-43ba-82b4-008d0b9f7b35","Type":"ContainerStarted","Data":"10fc92015305e743c3faf6b8269873b4c9e7784a0fce86de23488820305c3756"} Jan 31 03:53:21 crc kubenswrapper[4827]: I0131 03:53:21.814122 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-sgpz2" event={"ID":"9405c6d0-837d-47f0-be6c-79518c22405d","Type":"ContainerStarted","Data":"234ac0a7c1df282ce4936a99e83b5d4bef5f9f20a5c8a362e27900f90a9b54b6"} Jan 31 03:53:21 crc kubenswrapper[4827]: I0131 03:53:21.816461 4827 generic.go:334] "Generic (PLEG): container finished" podID="4cf906b5-5bd6-43ba-82b4-008d0b9f7b35" containerID="ba12cc34a90a3bfa8f51352ff20b047143708989568664839fc7c77b33ed6985" exitCode=0 Jan 31 03:53:21 crc kubenswrapper[4827]: I0131 03:53:21.816521 4827 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="openshift-marketplace/community-operators-ppx2z" event={"ID":"4cf906b5-5bd6-43ba-82b4-008d0b9f7b35","Type":"ContainerDied","Data":"ba12cc34a90a3bfa8f51352ff20b047143708989568664839fc7c77b33ed6985"} Jan 31 03:53:21 crc kubenswrapper[4827]: I0131 03:53:21.862622 4827 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-sgpz2" podStartSLOduration=3.001777952 podStartE2EDuration="4.862602303s" podCreationTimestamp="2026-01-31 03:53:17 +0000 UTC" firstStartedPulling="2026-01-31 03:53:18.770602388 +0000 UTC m=+391.457682837" lastFinishedPulling="2026-01-31 03:53:20.631426739 +0000 UTC m=+393.318507188" observedRunningTime="2026-01-31 03:53:21.841382874 +0000 UTC m=+394.528463343" watchObservedRunningTime="2026-01-31 03:53:21.862602303 +0000 UTC m=+394.549682762" Jan 31 03:53:22 crc kubenswrapper[4827]: I0131 03:53:22.823724 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-pnr9t" event={"ID":"77ae4727-de92-4d11-b951-9b2a734acc65","Type":"ContainerStarted","Data":"29f7677bd88f2b021d401cf6487cb4f1391deecc0762696ca7e29ace0c05a494"} Jan 31 03:53:22 crc kubenswrapper[4827]: I0131 03:53:22.849586 4827 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-pnr9t" podStartSLOduration=3.106766752 podStartE2EDuration="5.849566812s" podCreationTimestamp="2026-01-31 03:53:17 +0000 UTC" firstStartedPulling="2026-01-31 03:53:18.775068181 +0000 UTC m=+391.462148670" lastFinishedPulling="2026-01-31 03:53:21.517868281 +0000 UTC m=+394.204948730" observedRunningTime="2026-01-31 03:53:22.848316563 +0000 UTC m=+395.535397022" watchObservedRunningTime="2026-01-31 03:53:22.849566812 +0000 UTC m=+395.536647261" Jan 31 03:53:23 crc kubenswrapper[4827]: I0131 03:53:23.843100 4827 generic.go:334] "Generic (PLEG): container finished" podID="4cf906b5-5bd6-43ba-82b4-008d0b9f7b35" 
containerID="843b562b6b06399e97404de7bb1d4d6027d1be04bb97b07c7c2486912cd54ec9" exitCode=0 Jan 31 03:53:23 crc kubenswrapper[4827]: I0131 03:53:23.843281 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-ppx2z" event={"ID":"4cf906b5-5bd6-43ba-82b4-008d0b9f7b35","Type":"ContainerDied","Data":"843b562b6b06399e97404de7bb1d4d6027d1be04bb97b07c7c2486912cd54ec9"} Jan 31 03:53:24 crc kubenswrapper[4827]: I0131 03:53:24.853896 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-ppx2z" event={"ID":"4cf906b5-5bd6-43ba-82b4-008d0b9f7b35","Type":"ContainerStarted","Data":"499ad22aee283ad690e5b3d0ea7d163ea0cc50ab475e1cb0f8c4ff8dd10d519f"} Jan 31 03:53:24 crc kubenswrapper[4827]: I0131 03:53:24.882333 4827 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-ppx2z" podStartSLOduration=3.4453584360000002 podStartE2EDuration="5.882304976s" podCreationTimestamp="2026-01-31 03:53:19 +0000 UTC" firstStartedPulling="2026-01-31 03:53:21.818248487 +0000 UTC m=+394.505328946" lastFinishedPulling="2026-01-31 03:53:24.255194997 +0000 UTC m=+396.942275486" observedRunningTime="2026-01-31 03:53:24.877763395 +0000 UTC m=+397.564843894" watchObservedRunningTime="2026-01-31 03:53:24.882304976 +0000 UTC m=+397.569385415" Jan 31 03:53:25 crc kubenswrapper[4827]: I0131 03:53:25.691550 4827 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-7krl2" Jan 31 03:53:25 crc kubenswrapper[4827]: I0131 03:53:25.692107 4827 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-7krl2" Jan 31 03:53:25 crc kubenswrapper[4827]: I0131 03:53:25.736251 4827 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-7krl2" Jan 31 03:53:25 crc kubenswrapper[4827]: I0131 
03:53:25.901395 4827 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-7krl2" Jan 31 03:53:27 crc kubenswrapper[4827]: I0131 03:53:27.503969 4827 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-sgpz2" Jan 31 03:53:27 crc kubenswrapper[4827]: I0131 03:53:27.504465 4827 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-sgpz2" Jan 31 03:53:27 crc kubenswrapper[4827]: I0131 03:53:27.565183 4827 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-sgpz2" Jan 31 03:53:27 crc kubenswrapper[4827]: I0131 03:53:27.937419 4827 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-sgpz2" Jan 31 03:53:28 crc kubenswrapper[4827]: I0131 03:53:28.121820 4827 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-pnr9t" Jan 31 03:53:28 crc kubenswrapper[4827]: I0131 03:53:28.121868 4827 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-pnr9t" Jan 31 03:53:28 crc kubenswrapper[4827]: I0131 03:53:28.187291 4827 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-pnr9t" Jan 31 03:53:28 crc kubenswrapper[4827]: I0131 03:53:28.974089 4827 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-pnr9t" Jan 31 03:53:29 crc kubenswrapper[4827]: I0131 03:53:29.909288 4827 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-ppx2z" Jan 31 03:53:29 crc kubenswrapper[4827]: I0131 03:53:29.910436 4827 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openshift-marketplace/community-operators-ppx2z" Jan 31 03:53:29 crc kubenswrapper[4827]: I0131 03:53:29.959831 4827 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-ppx2z" Jan 31 03:53:30 crc kubenswrapper[4827]: I0131 03:53:30.948505 4827 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-ppx2z" Jan 31 03:53:40 crc kubenswrapper[4827]: I0131 03:53:40.085138 4827 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-image-registry/image-registry-697d97f7c8-s7psb" podUID="04eac770-8ff7-453b-a1da-b028636b909c" containerName="registry" containerID="cri-o://9e72f39420d4238f46b07310e10fa60284a81c8bc89f943ac5a01b34249092c9" gracePeriod=30 Jan 31 03:53:40 crc kubenswrapper[4827]: I0131 03:53:40.453365 4827 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-s7psb" Jan 31 03:53:40 crc kubenswrapper[4827]: I0131 03:53:40.535113 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/04eac770-8ff7-453b-a1da-b028636b909c-trusted-ca\") pod \"04eac770-8ff7-453b-a1da-b028636b909c\" (UID: \"04eac770-8ff7-453b-a1da-b028636b909c\") " Jan 31 03:53:40 crc kubenswrapper[4827]: I0131 03:53:40.535166 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/04eac770-8ff7-453b-a1da-b028636b909c-registry-tls\") pod \"04eac770-8ff7-453b-a1da-b028636b909c\" (UID: \"04eac770-8ff7-453b-a1da-b028636b909c\") " Jan 31 03:53:40 crc kubenswrapper[4827]: I0131 03:53:40.535193 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/04eac770-8ff7-453b-a1da-b028636b909c-registry-certificates\") pod 
\"04eac770-8ff7-453b-a1da-b028636b909c\" (UID: \"04eac770-8ff7-453b-a1da-b028636b909c\") " Jan 31 03:53:40 crc kubenswrapper[4827]: I0131 03:53:40.535254 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/04eac770-8ff7-453b-a1da-b028636b909c-installation-pull-secrets\") pod \"04eac770-8ff7-453b-a1da-b028636b909c\" (UID: \"04eac770-8ff7-453b-a1da-b028636b909c\") " Jan 31 03:53:40 crc kubenswrapper[4827]: I0131 03:53:40.535286 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/04eac770-8ff7-453b-a1da-b028636b909c-bound-sa-token\") pod \"04eac770-8ff7-453b-a1da-b028636b909c\" (UID: \"04eac770-8ff7-453b-a1da-b028636b909c\") " Jan 31 03:53:40 crc kubenswrapper[4827]: I0131 03:53:40.535311 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wxz9b\" (UniqueName: \"kubernetes.io/projected/04eac770-8ff7-453b-a1da-b028636b909c-kube-api-access-wxz9b\") pod \"04eac770-8ff7-453b-a1da-b028636b909c\" (UID: \"04eac770-8ff7-453b-a1da-b028636b909c\") " Jan 31 03:53:40 crc kubenswrapper[4827]: I0131 03:53:40.535561 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-storage\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"04eac770-8ff7-453b-a1da-b028636b909c\" (UID: \"04eac770-8ff7-453b-a1da-b028636b909c\") " Jan 31 03:53:40 crc kubenswrapper[4827]: I0131 03:53:40.535648 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/04eac770-8ff7-453b-a1da-b028636b909c-ca-trust-extracted\") pod \"04eac770-8ff7-453b-a1da-b028636b909c\" (UID: \"04eac770-8ff7-453b-a1da-b028636b909c\") " Jan 31 03:53:40 crc kubenswrapper[4827]: I0131 03:53:40.536795 4827 
operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/04eac770-8ff7-453b-a1da-b028636b909c-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "04eac770-8ff7-453b-a1da-b028636b909c" (UID: "04eac770-8ff7-453b-a1da-b028636b909c"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 03:53:40 crc kubenswrapper[4827]: I0131 03:53:40.536916 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/04eac770-8ff7-453b-a1da-b028636b909c-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "04eac770-8ff7-453b-a1da-b028636b909c" (UID: "04eac770-8ff7-453b-a1da-b028636b909c"). InnerVolumeSpecName "registry-certificates". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 03:53:40 crc kubenswrapper[4827]: I0131 03:53:40.543339 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/04eac770-8ff7-453b-a1da-b028636b909c-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "04eac770-8ff7-453b-a1da-b028636b909c" (UID: "04eac770-8ff7-453b-a1da-b028636b909c"). InnerVolumeSpecName "registry-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 03:53:40 crc kubenswrapper[4827]: I0131 03:53:40.544723 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/04eac770-8ff7-453b-a1da-b028636b909c-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "04eac770-8ff7-453b-a1da-b028636b909c" (UID: "04eac770-8ff7-453b-a1da-b028636b909c"). InnerVolumeSpecName "installation-pull-secrets". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 03:53:40 crc kubenswrapper[4827]: I0131 03:53:40.546120 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/04eac770-8ff7-453b-a1da-b028636b909c-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "04eac770-8ff7-453b-a1da-b028636b909c" (UID: "04eac770-8ff7-453b-a1da-b028636b909c"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 03:53:40 crc kubenswrapper[4827]: I0131 03:53:40.546136 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/04eac770-8ff7-453b-a1da-b028636b909c-kube-api-access-wxz9b" (OuterVolumeSpecName: "kube-api-access-wxz9b") pod "04eac770-8ff7-453b-a1da-b028636b909c" (UID: "04eac770-8ff7-453b-a1da-b028636b909c"). InnerVolumeSpecName "kube-api-access-wxz9b". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 03:53:40 crc kubenswrapper[4827]: I0131 03:53:40.549219 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (OuterVolumeSpecName: "registry-storage") pod "04eac770-8ff7-453b-a1da-b028636b909c" (UID: "04eac770-8ff7-453b-a1da-b028636b909c"). InnerVolumeSpecName "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8". PluginName "kubernetes.io/csi", VolumeGidValue "" Jan 31 03:53:40 crc kubenswrapper[4827]: I0131 03:53:40.557669 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/04eac770-8ff7-453b-a1da-b028636b909c-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "04eac770-8ff7-453b-a1da-b028636b909c" (UID: "04eac770-8ff7-453b-a1da-b028636b909c"). InnerVolumeSpecName "ca-trust-extracted". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 03:53:40 crc kubenswrapper[4827]: I0131 03:53:40.637377 4827 reconciler_common.go:293] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/04eac770-8ff7-453b-a1da-b028636b909c-ca-trust-extracted\") on node \"crc\" DevicePath \"\"" Jan 31 03:53:40 crc kubenswrapper[4827]: I0131 03:53:40.637426 4827 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/04eac770-8ff7-453b-a1da-b028636b909c-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 31 03:53:40 crc kubenswrapper[4827]: I0131 03:53:40.637448 4827 reconciler_common.go:293] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/04eac770-8ff7-453b-a1da-b028636b909c-registry-tls\") on node \"crc\" DevicePath \"\"" Jan 31 03:53:40 crc kubenswrapper[4827]: I0131 03:53:40.637472 4827 reconciler_common.go:293] "Volume detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/04eac770-8ff7-453b-a1da-b028636b909c-registry-certificates\") on node \"crc\" DevicePath \"\"" Jan 31 03:53:40 crc kubenswrapper[4827]: I0131 03:53:40.637493 4827 reconciler_common.go:293] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/04eac770-8ff7-453b-a1da-b028636b909c-installation-pull-secrets\") on node \"crc\" DevicePath \"\"" Jan 31 03:53:40 crc kubenswrapper[4827]: I0131 03:53:40.637511 4827 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/04eac770-8ff7-453b-a1da-b028636b909c-bound-sa-token\") on node \"crc\" DevicePath \"\"" Jan 31 03:53:40 crc kubenswrapper[4827]: I0131 03:53:40.637577 4827 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wxz9b\" (UniqueName: \"kubernetes.io/projected/04eac770-8ff7-453b-a1da-b028636b909c-kube-api-access-wxz9b\") on node \"crc\" DevicePath \"\"" Jan 31 03:53:40 crc 
kubenswrapper[4827]: I0131 03:53:40.967378 4827 generic.go:334] "Generic (PLEG): container finished" podID="04eac770-8ff7-453b-a1da-b028636b909c" containerID="9e72f39420d4238f46b07310e10fa60284a81c8bc89f943ac5a01b34249092c9" exitCode=0 Jan 31 03:53:40 crc kubenswrapper[4827]: I0131 03:53:40.967491 4827 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-s7psb" Jan 31 03:53:40 crc kubenswrapper[4827]: I0131 03:53:40.967475 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-s7psb" event={"ID":"04eac770-8ff7-453b-a1da-b028636b909c","Type":"ContainerDied","Data":"9e72f39420d4238f46b07310e10fa60284a81c8bc89f943ac5a01b34249092c9"} Jan 31 03:53:40 crc kubenswrapper[4827]: I0131 03:53:40.967590 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-s7psb" event={"ID":"04eac770-8ff7-453b-a1da-b028636b909c","Type":"ContainerDied","Data":"bbc795f00e60e11f74cfe9af05a677d8fe31b784b77d37b8acdfcf976338658b"} Jan 31 03:53:40 crc kubenswrapper[4827]: I0131 03:53:40.967632 4827 scope.go:117] "RemoveContainer" containerID="9e72f39420d4238f46b07310e10fa60284a81c8bc89f943ac5a01b34249092c9" Jan 31 03:53:40 crc kubenswrapper[4827]: I0131 03:53:40.999911 4827 scope.go:117] "RemoveContainer" containerID="9e72f39420d4238f46b07310e10fa60284a81c8bc89f943ac5a01b34249092c9" Jan 31 03:53:41 crc kubenswrapper[4827]: E0131 03:53:41.000563 4827 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9e72f39420d4238f46b07310e10fa60284a81c8bc89f943ac5a01b34249092c9\": container with ID starting with 9e72f39420d4238f46b07310e10fa60284a81c8bc89f943ac5a01b34249092c9 not found: ID does not exist" containerID="9e72f39420d4238f46b07310e10fa60284a81c8bc89f943ac5a01b34249092c9" Jan 31 03:53:41 crc kubenswrapper[4827]: I0131 03:53:41.000641 4827 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9e72f39420d4238f46b07310e10fa60284a81c8bc89f943ac5a01b34249092c9"} err="failed to get container status \"9e72f39420d4238f46b07310e10fa60284a81c8bc89f943ac5a01b34249092c9\": rpc error: code = NotFound desc = could not find container \"9e72f39420d4238f46b07310e10fa60284a81c8bc89f943ac5a01b34249092c9\": container with ID starting with 9e72f39420d4238f46b07310e10fa60284a81c8bc89f943ac5a01b34249092c9 not found: ID does not exist" Jan 31 03:53:41 crc kubenswrapper[4827]: I0131 03:53:41.025275 4827 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-s7psb"] Jan 31 03:53:41 crc kubenswrapper[4827]: I0131 03:53:41.032730 4827 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-s7psb"] Jan 31 03:53:42 crc kubenswrapper[4827]: I0131 03:53:42.121942 4827 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="04eac770-8ff7-453b-a1da-b028636b909c" path="/var/lib/kubelet/pods/04eac770-8ff7-453b-a1da-b028636b909c/volumes" Jan 31 03:53:47 crc kubenswrapper[4827]: I0131 03:53:47.371726 4827 patch_prober.go:28] interesting pod/machine-config-daemon-jxh94 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 31 03:53:47 crc kubenswrapper[4827]: I0131 03:53:47.372229 4827 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-jxh94" podUID="e63dbb73-e1a2-4796-83c5-2a88e55566b5" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 31 03:53:47 crc kubenswrapper[4827]: I0131 03:53:47.372293 4827 kubelet.go:2542] "SyncLoop (probe)" 
probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-jxh94" Jan 31 03:53:47 crc kubenswrapper[4827]: I0131 03:53:47.373157 4827 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"7e027f81b11efb72a8912d312a7f93e437919ca268f3acdeab9e714aa0b8ebaf"} pod="openshift-machine-config-operator/machine-config-daemon-jxh94" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 31 03:53:47 crc kubenswrapper[4827]: I0131 03:53:47.373275 4827 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-jxh94" podUID="e63dbb73-e1a2-4796-83c5-2a88e55566b5" containerName="machine-config-daemon" containerID="cri-o://7e027f81b11efb72a8912d312a7f93e437919ca268f3acdeab9e714aa0b8ebaf" gracePeriod=600 Jan 31 03:53:48 crc kubenswrapper[4827]: I0131 03:53:48.019862 4827 generic.go:334] "Generic (PLEG): container finished" podID="e63dbb73-e1a2-4796-83c5-2a88e55566b5" containerID="7e027f81b11efb72a8912d312a7f93e437919ca268f3acdeab9e714aa0b8ebaf" exitCode=0 Jan 31 03:53:48 crc kubenswrapper[4827]: I0131 03:53:48.019969 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-jxh94" event={"ID":"e63dbb73-e1a2-4796-83c5-2a88e55566b5","Type":"ContainerDied","Data":"7e027f81b11efb72a8912d312a7f93e437919ca268f3acdeab9e714aa0b8ebaf"} Jan 31 03:53:48 crc kubenswrapper[4827]: I0131 03:53:48.021075 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-jxh94" event={"ID":"e63dbb73-e1a2-4796-83c5-2a88e55566b5","Type":"ContainerStarted","Data":"ff1002b4326b60d6728a9f4939c5459cec6a294d7d2af6d13663a334c8cece05"} Jan 31 03:53:48 crc kubenswrapper[4827]: I0131 03:53:48.021172 4827 scope.go:117] "RemoveContainer" 
containerID="00b8856a5712a7470f8bc58fd07644c58113fde5a273faa89586f36381942c50" Jan 31 03:55:47 crc kubenswrapper[4827]: I0131 03:55:47.371012 4827 patch_prober.go:28] interesting pod/machine-config-daemon-jxh94 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 31 03:55:47 crc kubenswrapper[4827]: I0131 03:55:47.371731 4827 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-jxh94" podUID="e63dbb73-e1a2-4796-83c5-2a88e55566b5" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 31 03:55:54 crc kubenswrapper[4827]: I0131 03:55:54.289630 4827 scope.go:117] "RemoveContainer" containerID="d306765b467d8f21b503a07a80398acb56f84225397694be9390048043d6fca9" Jan 31 03:56:17 crc kubenswrapper[4827]: I0131 03:56:17.372344 4827 patch_prober.go:28] interesting pod/machine-config-daemon-jxh94 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 31 03:56:17 crc kubenswrapper[4827]: I0131 03:56:17.373509 4827 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-jxh94" podUID="e63dbb73-e1a2-4796-83c5-2a88e55566b5" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 31 03:56:47 crc kubenswrapper[4827]: I0131 03:56:47.371667 4827 patch_prober.go:28] interesting pod/machine-config-daemon-jxh94 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure 
output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 31 03:56:47 crc kubenswrapper[4827]: I0131 03:56:47.372389 4827 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-jxh94" podUID="e63dbb73-e1a2-4796-83c5-2a88e55566b5" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 31 03:56:47 crc kubenswrapper[4827]: I0131 03:56:47.372455 4827 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-jxh94" Jan 31 03:56:47 crc kubenswrapper[4827]: I0131 03:56:47.373281 4827 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"ff1002b4326b60d6728a9f4939c5459cec6a294d7d2af6d13663a334c8cece05"} pod="openshift-machine-config-operator/machine-config-daemon-jxh94" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 31 03:56:47 crc kubenswrapper[4827]: I0131 03:56:47.373378 4827 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-jxh94" podUID="e63dbb73-e1a2-4796-83c5-2a88e55566b5" containerName="machine-config-daemon" containerID="cri-o://ff1002b4326b60d6728a9f4939c5459cec6a294d7d2af6d13663a334c8cece05" gracePeriod=600 Jan 31 03:56:48 crc kubenswrapper[4827]: I0131 03:56:48.297722 4827 generic.go:334] "Generic (PLEG): container finished" podID="e63dbb73-e1a2-4796-83c5-2a88e55566b5" containerID="ff1002b4326b60d6728a9f4939c5459cec6a294d7d2af6d13663a334c8cece05" exitCode=0 Jan 31 03:56:48 crc kubenswrapper[4827]: I0131 03:56:48.297778 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-jxh94" 
event={"ID":"e63dbb73-e1a2-4796-83c5-2a88e55566b5","Type":"ContainerDied","Data":"ff1002b4326b60d6728a9f4939c5459cec6a294d7d2af6d13663a334c8cece05"} Jan 31 03:56:48 crc kubenswrapper[4827]: I0131 03:56:48.298175 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-jxh94" event={"ID":"e63dbb73-e1a2-4796-83c5-2a88e55566b5","Type":"ContainerStarted","Data":"bfaefcdaba61a9df67ef38340b2b8e90d41a85b4a9bee50aad5651159c3ae7f7"} Jan 31 03:56:48 crc kubenswrapper[4827]: I0131 03:56:48.298196 4827 scope.go:117] "RemoveContainer" containerID="7e027f81b11efb72a8912d312a7f93e437919ca268f3acdeab9e714aa0b8ebaf" Jan 31 03:57:47 crc kubenswrapper[4827]: I0131 03:57:47.060196 4827 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-cainjector-cf98fcc89-bdz52"] Jan 31 03:57:47 crc kubenswrapper[4827]: E0131 03:57:47.061163 4827 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="04eac770-8ff7-453b-a1da-b028636b909c" containerName="registry" Jan 31 03:57:47 crc kubenswrapper[4827]: I0131 03:57:47.061181 4827 state_mem.go:107] "Deleted CPUSet assignment" podUID="04eac770-8ff7-453b-a1da-b028636b909c" containerName="registry" Jan 31 03:57:47 crc kubenswrapper[4827]: I0131 03:57:47.061294 4827 memory_manager.go:354] "RemoveStaleState removing state" podUID="04eac770-8ff7-453b-a1da-b028636b909c" containerName="registry" Jan 31 03:57:47 crc kubenswrapper[4827]: I0131 03:57:47.061686 4827 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-cainjector-cf98fcc89-bdz52" Jan 31 03:57:47 crc kubenswrapper[4827]: I0131 03:57:47.063557 4827 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-cainjector-dockercfg-b9lnv" Jan 31 03:57:47 crc kubenswrapper[4827]: I0131 03:57:47.063864 4827 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager"/"kube-root-ca.crt" Jan 31 03:57:47 crc kubenswrapper[4827]: I0131 03:57:47.067892 4827 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-858654f9db-9hxtf"] Jan 31 03:57:47 crc kubenswrapper[4827]: I0131 03:57:47.068774 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-858654f9db-9hxtf" Jan 31 03:57:47 crc kubenswrapper[4827]: I0131 03:57:47.070127 4827 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager"/"openshift-service-ca.crt" Jan 31 03:57:47 crc kubenswrapper[4827]: I0131 03:57:47.070340 4827 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-dockercfg-rsdl2" Jan 31 03:57:47 crc kubenswrapper[4827]: I0131 03:57:47.075280 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-cf98fcc89-bdz52"] Jan 31 03:57:47 crc kubenswrapper[4827]: I0131 03:57:47.089247 4827 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-webhook-687f57d79b-lg7rt"] Jan 31 03:57:47 crc kubenswrapper[4827]: I0131 03:57:47.090166 4827 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-webhook-687f57d79b-lg7rt" Jan 31 03:57:47 crc kubenswrapper[4827]: I0131 03:57:47.092202 4827 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-webhook-dockercfg-nmcvq" Jan 31 03:57:47 crc kubenswrapper[4827]: I0131 03:57:47.100804 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-858654f9db-9hxtf"] Jan 31 03:57:47 crc kubenswrapper[4827]: I0131 03:57:47.107030 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-687f57d79b-lg7rt"] Jan 31 03:57:47 crc kubenswrapper[4827]: I0131 03:57:47.208751 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2khnh\" (UniqueName: \"kubernetes.io/projected/8f78df48-021e-4d81-afac-ae4dc1b7f932-kube-api-access-2khnh\") pod \"cert-manager-webhook-687f57d79b-lg7rt\" (UID: \"8f78df48-021e-4d81-afac-ae4dc1b7f932\") " pod="cert-manager/cert-manager-webhook-687f57d79b-lg7rt" Jan 31 03:57:47 crc kubenswrapper[4827]: I0131 03:57:47.208812 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9jzww\" (UniqueName: \"kubernetes.io/projected/3f73fff6-a495-43d2-b063-ed9792fa2526-kube-api-access-9jzww\") pod \"cert-manager-cainjector-cf98fcc89-bdz52\" (UID: \"3f73fff6-a495-43d2-b063-ed9792fa2526\") " pod="cert-manager/cert-manager-cainjector-cf98fcc89-bdz52" Jan 31 03:57:47 crc kubenswrapper[4827]: I0131 03:57:47.208948 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dkt9z\" (UniqueName: \"kubernetes.io/projected/bef48f94-220d-4244-8412-0fbb3c3a08a6-kube-api-access-dkt9z\") pod \"cert-manager-858654f9db-9hxtf\" (UID: \"bef48f94-220d-4244-8412-0fbb3c3a08a6\") " pod="cert-manager/cert-manager-858654f9db-9hxtf" Jan 31 03:57:47 crc kubenswrapper[4827]: I0131 03:57:47.310165 4827 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dkt9z\" (UniqueName: \"kubernetes.io/projected/bef48f94-220d-4244-8412-0fbb3c3a08a6-kube-api-access-dkt9z\") pod \"cert-manager-858654f9db-9hxtf\" (UID: \"bef48f94-220d-4244-8412-0fbb3c3a08a6\") " pod="cert-manager/cert-manager-858654f9db-9hxtf" Jan 31 03:57:47 crc kubenswrapper[4827]: I0131 03:57:47.310218 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2khnh\" (UniqueName: \"kubernetes.io/projected/8f78df48-021e-4d81-afac-ae4dc1b7f932-kube-api-access-2khnh\") pod \"cert-manager-webhook-687f57d79b-lg7rt\" (UID: \"8f78df48-021e-4d81-afac-ae4dc1b7f932\") " pod="cert-manager/cert-manager-webhook-687f57d79b-lg7rt" Jan 31 03:57:47 crc kubenswrapper[4827]: I0131 03:57:47.310252 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9jzww\" (UniqueName: \"kubernetes.io/projected/3f73fff6-a495-43d2-b063-ed9792fa2526-kube-api-access-9jzww\") pod \"cert-manager-cainjector-cf98fcc89-bdz52\" (UID: \"3f73fff6-a495-43d2-b063-ed9792fa2526\") " pod="cert-manager/cert-manager-cainjector-cf98fcc89-bdz52" Jan 31 03:57:47 crc kubenswrapper[4827]: I0131 03:57:47.342049 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2khnh\" (UniqueName: \"kubernetes.io/projected/8f78df48-021e-4d81-afac-ae4dc1b7f932-kube-api-access-2khnh\") pod \"cert-manager-webhook-687f57d79b-lg7rt\" (UID: \"8f78df48-021e-4d81-afac-ae4dc1b7f932\") " pod="cert-manager/cert-manager-webhook-687f57d79b-lg7rt" Jan 31 03:57:47 crc kubenswrapper[4827]: I0131 03:57:47.343289 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dkt9z\" (UniqueName: \"kubernetes.io/projected/bef48f94-220d-4244-8412-0fbb3c3a08a6-kube-api-access-dkt9z\") pod \"cert-manager-858654f9db-9hxtf\" (UID: \"bef48f94-220d-4244-8412-0fbb3c3a08a6\") " 
pod="cert-manager/cert-manager-858654f9db-9hxtf" Jan 31 03:57:47 crc kubenswrapper[4827]: I0131 03:57:47.346481 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9jzww\" (UniqueName: \"kubernetes.io/projected/3f73fff6-a495-43d2-b063-ed9792fa2526-kube-api-access-9jzww\") pod \"cert-manager-cainjector-cf98fcc89-bdz52\" (UID: \"3f73fff6-a495-43d2-b063-ed9792fa2526\") " pod="cert-manager/cert-manager-cainjector-cf98fcc89-bdz52" Jan 31 03:57:47 crc kubenswrapper[4827]: I0131 03:57:47.388128 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-cainjector-cf98fcc89-bdz52" Jan 31 03:57:47 crc kubenswrapper[4827]: I0131 03:57:47.399458 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-858654f9db-9hxtf" Jan 31 03:57:47 crc kubenswrapper[4827]: I0131 03:57:47.411174 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-webhook-687f57d79b-lg7rt" Jan 31 03:57:47 crc kubenswrapper[4827]: I0131 03:57:47.631462 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-cf98fcc89-bdz52"] Jan 31 03:57:47 crc kubenswrapper[4827]: I0131 03:57:47.642166 4827 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 31 03:57:47 crc kubenswrapper[4827]: I0131 03:57:47.694037 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-cf98fcc89-bdz52" event={"ID":"3f73fff6-a495-43d2-b063-ed9792fa2526","Type":"ContainerStarted","Data":"7a5d4dbc4eee5103c95d10e590b0fa984e6fd22276c3facc6c3b1436a599e25b"} Jan 31 03:57:47 crc kubenswrapper[4827]: I0131 03:57:47.925918 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-858654f9db-9hxtf"] Jan 31 03:57:47 crc kubenswrapper[4827]: W0131 03:57:47.927663 4827 manager.go:1169] Failed to process watch 
event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podbef48f94_220d_4244_8412_0fbb3c3a08a6.slice/crio-d95f0c2c8f5b39e7b842cc5554ed7544b90992470174ec80876a5dabdec2a0f3 WatchSource:0}: Error finding container d95f0c2c8f5b39e7b842cc5554ed7544b90992470174ec80876a5dabdec2a0f3: Status 404 returned error can't find the container with id d95f0c2c8f5b39e7b842cc5554ed7544b90992470174ec80876a5dabdec2a0f3 Jan 31 03:57:47 crc kubenswrapper[4827]: W0131 03:57:47.931812 4827 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8f78df48_021e_4d81_afac_ae4dc1b7f932.slice/crio-048ddacd7891fb33ba3ca04e13511d44002a35c899aa2f459ef40f786650a9b6 WatchSource:0}: Error finding container 048ddacd7891fb33ba3ca04e13511d44002a35c899aa2f459ef40f786650a9b6: Status 404 returned error can't find the container with id 048ddacd7891fb33ba3ca04e13511d44002a35c899aa2f459ef40f786650a9b6 Jan 31 03:57:47 crc kubenswrapper[4827]: I0131 03:57:47.932346 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-687f57d79b-lg7rt"] Jan 31 03:57:48 crc kubenswrapper[4827]: I0131 03:57:48.703915 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-858654f9db-9hxtf" event={"ID":"bef48f94-220d-4244-8412-0fbb3c3a08a6","Type":"ContainerStarted","Data":"d95f0c2c8f5b39e7b842cc5554ed7544b90992470174ec80876a5dabdec2a0f3"} Jan 31 03:57:48 crc kubenswrapper[4827]: I0131 03:57:48.705275 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-687f57d79b-lg7rt" event={"ID":"8f78df48-021e-4d81-afac-ae4dc1b7f932","Type":"ContainerStarted","Data":"048ddacd7891fb33ba3ca04e13511d44002a35c899aa2f459ef40f786650a9b6"} Jan 31 03:57:50 crc kubenswrapper[4827]: I0131 03:57:50.717190 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-cf98fcc89-bdz52" 
event={"ID":"3f73fff6-a495-43d2-b063-ed9792fa2526","Type":"ContainerStarted","Data":"191e11d750780c4458a27a806e23780077cc6f9e26abc430ca5158443de198ba"} Jan 31 03:57:50 crc kubenswrapper[4827]: I0131 03:57:50.740830 4827 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-cainjector-cf98fcc89-bdz52" podStartSLOduration=1.560189924 podStartE2EDuration="3.740804569s" podCreationTimestamp="2026-01-31 03:57:47 +0000 UTC" firstStartedPulling="2026-01-31 03:57:47.641949312 +0000 UTC m=+660.329029761" lastFinishedPulling="2026-01-31 03:57:49.822563957 +0000 UTC m=+662.509644406" observedRunningTime="2026-01-31 03:57:50.733543561 +0000 UTC m=+663.420624030" watchObservedRunningTime="2026-01-31 03:57:50.740804569 +0000 UTC m=+663.427885018" Jan 31 03:57:52 crc kubenswrapper[4827]: I0131 03:57:52.734537 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-858654f9db-9hxtf" event={"ID":"bef48f94-220d-4244-8412-0fbb3c3a08a6","Type":"ContainerStarted","Data":"3e54af782cca4fbc563c53c1cd6780f32d85528540e20f07f31e1c2141986730"} Jan 31 03:57:52 crc kubenswrapper[4827]: I0131 03:57:52.736333 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-687f57d79b-lg7rt" event={"ID":"8f78df48-021e-4d81-afac-ae4dc1b7f932","Type":"ContainerStarted","Data":"0bf3d14dbc941add1f2a21f1a511f5a71d0e656444a43f854c3e81e9bf11c9bd"} Jan 31 03:57:52 crc kubenswrapper[4827]: I0131 03:57:52.736513 4827 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="cert-manager/cert-manager-webhook-687f57d79b-lg7rt" Jan 31 03:57:53 crc kubenswrapper[4827]: I0131 03:57:53.011699 4827 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-858654f9db-9hxtf" podStartSLOduration=2.3375774480000002 podStartE2EDuration="6.011681986s" podCreationTimestamp="2026-01-31 03:57:47 +0000 UTC" firstStartedPulling="2026-01-31 03:57:47.930298559 +0000 UTC 
m=+660.617379048" lastFinishedPulling="2026-01-31 03:57:51.604403127 +0000 UTC m=+664.291483586" observedRunningTime="2026-01-31 03:57:53.010718088 +0000 UTC m=+665.697798537" watchObservedRunningTime="2026-01-31 03:57:53.011681986 +0000 UTC m=+665.698762435" Jan 31 03:57:57 crc kubenswrapper[4827]: I0131 03:57:57.122787 4827 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-webhook-687f57d79b-lg7rt" podStartSLOduration=6.539781549 podStartE2EDuration="10.122758382s" podCreationTimestamp="2026-01-31 03:57:47 +0000 UTC" firstStartedPulling="2026-01-31 03:57:47.938435541 +0000 UTC m=+660.625515990" lastFinishedPulling="2026-01-31 03:57:51.521412354 +0000 UTC m=+664.208492823" observedRunningTime="2026-01-31 03:57:53.040425177 +0000 UTC m=+665.727505656" watchObservedRunningTime="2026-01-31 03:57:57.122758382 +0000 UTC m=+669.809838861" Jan 31 03:57:57 crc kubenswrapper[4827]: I0131 03:57:57.124410 4827 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-hj2zw"] Jan 31 03:57:57 crc kubenswrapper[4827]: I0131 03:57:57.125010 4827 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-hj2zw" podUID="da9e7773-a24b-4e8d-b479-97e2594db0d4" containerName="ovn-controller" containerID="cri-o://ed1f4882b03bd25a6073ed3ddeb73d5a0da61c8a8af3ebfab5c90a83ce6af80f" gracePeriod=30 Jan 31 03:57:57 crc kubenswrapper[4827]: I0131 03:57:57.125082 4827 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-hj2zw" podUID="da9e7773-a24b-4e8d-b479-97e2594db0d4" containerName="northd" containerID="cri-o://32e69930c4476bd3a2f767f012fb4a2e00de7d46da936c6f0d31029994108e48" gracePeriod=30 Jan 31 03:57:57 crc kubenswrapper[4827]: I0131 03:57:57.125095 4827 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-hj2zw" 
podUID="da9e7773-a24b-4e8d-b479-97e2594db0d4" containerName="sbdb" containerID="cri-o://2a0cc10519a103d1d49acd3e239636f7ac9b3a9daa533ce1f485b26ccfffad3d" gracePeriod=30 Jan 31 03:57:57 crc kubenswrapper[4827]: I0131 03:57:57.125167 4827 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-hj2zw" podUID="da9e7773-a24b-4e8d-b479-97e2594db0d4" containerName="ovn-acl-logging" containerID="cri-o://5ae8d3d61d2579c0afddd6c9728935c463fec8760a9fc830473c905de5f40c44" gracePeriod=30 Jan 31 03:57:57 crc kubenswrapper[4827]: I0131 03:57:57.125222 4827 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-hj2zw" podUID="da9e7773-a24b-4e8d-b479-97e2594db0d4" containerName="kube-rbac-proxy-ovn-metrics" containerID="cri-o://2485bd5d17713b9b9184c6b56b1dfda19a229476eabd904d3eda350c30892753" gracePeriod=30 Jan 31 03:57:57 crc kubenswrapper[4827]: I0131 03:57:57.125268 4827 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-hj2zw" podUID="da9e7773-a24b-4e8d-b479-97e2594db0d4" containerName="kube-rbac-proxy-node" containerID="cri-o://bb9ced0bc1beedf1c5779410e90fa61227f5df3de7fa950ab69e1ef44b4a4918" gracePeriod=30 Jan 31 03:57:57 crc kubenswrapper[4827]: I0131 03:57:57.125253 4827 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-hj2zw" podUID="da9e7773-a24b-4e8d-b479-97e2594db0d4" containerName="nbdb" containerID="cri-o://70200918240e4bc9d8f7fcb1d95be3721faa9394058a1a9671da44d9b7a915e6" gracePeriod=30 Jan 31 03:57:57 crc kubenswrapper[4827]: I0131 03:57:57.168995 4827 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-hj2zw" podUID="da9e7773-a24b-4e8d-b479-97e2594db0d4" containerName="ovnkube-controller" 
containerID="cri-o://cea8767bf7ec05e1a8900fe5b7ec188cd3109073dc3bbf6571d53b501934370c" gracePeriod=30 Jan 31 03:57:57 crc kubenswrapper[4827]: I0131 03:57:57.416187 4827 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="cert-manager/cert-manager-webhook-687f57d79b-lg7rt" Jan 31 03:57:57 crc kubenswrapper[4827]: I0131 03:57:57.463039 4827 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-hj2zw_da9e7773-a24b-4e8d-b479-97e2594db0d4/ovnkube-controller/3.log" Jan 31 03:57:57 crc kubenswrapper[4827]: I0131 03:57:57.465863 4827 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-hj2zw_da9e7773-a24b-4e8d-b479-97e2594db0d4/ovn-acl-logging/0.log" Jan 31 03:57:57 crc kubenswrapper[4827]: I0131 03:57:57.466625 4827 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-hj2zw_da9e7773-a24b-4e8d-b479-97e2594db0d4/ovn-controller/0.log" Jan 31 03:57:57 crc kubenswrapper[4827]: I0131 03:57:57.467141 4827 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-hj2zw" Jan 31 03:57:57 crc kubenswrapper[4827]: I0131 03:57:57.478976 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/da9e7773-a24b-4e8d-b479-97e2594db0d4-systemd-units\") pod \"da9e7773-a24b-4e8d-b479-97e2594db0d4\" (UID: \"da9e7773-a24b-4e8d-b479-97e2594db0d4\") " Jan 31 03:57:57 crc kubenswrapper[4827]: I0131 03:57:57.479027 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/da9e7773-a24b-4e8d-b479-97e2594db0d4-ovnkube-script-lib\") pod \"da9e7773-a24b-4e8d-b479-97e2594db0d4\" (UID: \"da9e7773-a24b-4e8d-b479-97e2594db0d4\") " Jan 31 03:57:57 crc kubenswrapper[4827]: I0131 03:57:57.479065 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/da9e7773-a24b-4e8d-b479-97e2594db0d4-host-slash\") pod \"da9e7773-a24b-4e8d-b479-97e2594db0d4\" (UID: \"da9e7773-a24b-4e8d-b479-97e2594db0d4\") " Jan 31 03:57:57 crc kubenswrapper[4827]: I0131 03:57:57.479089 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/da9e7773-a24b-4e8d-b479-97e2594db0d4-host-cni-bin\") pod \"da9e7773-a24b-4e8d-b479-97e2594db0d4\" (UID: \"da9e7773-a24b-4e8d-b479-97e2594db0d4\") " Jan 31 03:57:57 crc kubenswrapper[4827]: I0131 03:57:57.479118 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mt4z5\" (UniqueName: \"kubernetes.io/projected/da9e7773-a24b-4e8d-b479-97e2594db0d4-kube-api-access-mt4z5\") pod \"da9e7773-a24b-4e8d-b479-97e2594db0d4\" (UID: \"da9e7773-a24b-4e8d-b479-97e2594db0d4\") " Jan 31 03:57:57 crc kubenswrapper[4827]: I0131 03:57:57.480054 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"log-socket\" (UniqueName: \"kubernetes.io/host-path/da9e7773-a24b-4e8d-b479-97e2594db0d4-log-socket\") pod \"da9e7773-a24b-4e8d-b479-97e2594db0d4\" (UID: \"da9e7773-a24b-4e8d-b479-97e2594db0d4\") " Jan 31 03:57:57 crc kubenswrapper[4827]: I0131 03:57:57.480092 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/da9e7773-a24b-4e8d-b479-97e2594db0d4-host-kubelet\") pod \"da9e7773-a24b-4e8d-b479-97e2594db0d4\" (UID: \"da9e7773-a24b-4e8d-b479-97e2594db0d4\") " Jan 31 03:57:57 crc kubenswrapper[4827]: I0131 03:57:57.480135 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/da9e7773-a24b-4e8d-b479-97e2594db0d4-run-ovn\") pod \"da9e7773-a24b-4e8d-b479-97e2594db0d4\" (UID: \"da9e7773-a24b-4e8d-b479-97e2594db0d4\") " Jan 31 03:57:57 crc kubenswrapper[4827]: I0131 03:57:57.479118 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/da9e7773-a24b-4e8d-b479-97e2594db0d4-systemd-units" (OuterVolumeSpecName: "systemd-units") pod "da9e7773-a24b-4e8d-b479-97e2594db0d4" (UID: "da9e7773-a24b-4e8d-b479-97e2594db0d4"). InnerVolumeSpecName "systemd-units". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 31 03:57:57 crc kubenswrapper[4827]: I0131 03:57:57.479155 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/da9e7773-a24b-4e8d-b479-97e2594db0d4-host-cni-bin" (OuterVolumeSpecName: "host-cni-bin") pod "da9e7773-a24b-4e8d-b479-97e2594db0d4" (UID: "da9e7773-a24b-4e8d-b479-97e2594db0d4"). InnerVolumeSpecName "host-cni-bin". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 31 03:57:57 crc kubenswrapper[4827]: I0131 03:57:57.479241 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/da9e7773-a24b-4e8d-b479-97e2594db0d4-host-slash" (OuterVolumeSpecName: "host-slash") pod "da9e7773-a24b-4e8d-b479-97e2594db0d4" (UID: "da9e7773-a24b-4e8d-b479-97e2594db0d4"). InnerVolumeSpecName "host-slash". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 31 03:57:57 crc kubenswrapper[4827]: I0131 03:57:57.479645 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/da9e7773-a24b-4e8d-b479-97e2594db0d4-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "da9e7773-a24b-4e8d-b479-97e2594db0d4" (UID: "da9e7773-a24b-4e8d-b479-97e2594db0d4"). InnerVolumeSpecName "ovnkube-script-lib". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 03:57:57 crc kubenswrapper[4827]: I0131 03:57:57.480165 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/da9e7773-a24b-4e8d-b479-97e2594db0d4-log-socket" (OuterVolumeSpecName: "log-socket") pod "da9e7773-a24b-4e8d-b479-97e2594db0d4" (UID: "da9e7773-a24b-4e8d-b479-97e2594db0d4"). InnerVolumeSpecName "log-socket". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 31 03:57:57 crc kubenswrapper[4827]: I0131 03:57:57.480196 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/da9e7773-a24b-4e8d-b479-97e2594db0d4-var-lib-openvswitch" (OuterVolumeSpecName: "var-lib-openvswitch") pod "da9e7773-a24b-4e8d-b479-97e2594db0d4" (UID: "da9e7773-a24b-4e8d-b479-97e2594db0d4"). InnerVolumeSpecName "var-lib-openvswitch". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 31 03:57:57 crc kubenswrapper[4827]: I0131 03:57:57.480216 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/da9e7773-a24b-4e8d-b479-97e2594db0d4-host-kubelet" (OuterVolumeSpecName: "host-kubelet") pod "da9e7773-a24b-4e8d-b479-97e2594db0d4" (UID: "da9e7773-a24b-4e8d-b479-97e2594db0d4"). InnerVolumeSpecName "host-kubelet". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 31 03:57:57 crc kubenswrapper[4827]: I0131 03:57:57.480169 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/da9e7773-a24b-4e8d-b479-97e2594db0d4-var-lib-openvswitch\") pod \"da9e7773-a24b-4e8d-b479-97e2594db0d4\" (UID: \"da9e7773-a24b-4e8d-b479-97e2594db0d4\") " Jan 31 03:57:57 crc kubenswrapper[4827]: I0131 03:57:57.480272 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/da9e7773-a24b-4e8d-b479-97e2594db0d4-etc-openvswitch\") pod \"da9e7773-a24b-4e8d-b479-97e2594db0d4\" (UID: \"da9e7773-a24b-4e8d-b479-97e2594db0d4\") " Jan 31 03:57:57 crc kubenswrapper[4827]: I0131 03:57:57.480233 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/da9e7773-a24b-4e8d-b479-97e2594db0d4-run-ovn" (OuterVolumeSpecName: "run-ovn") pod "da9e7773-a24b-4e8d-b479-97e2594db0d4" (UID: "da9e7773-a24b-4e8d-b479-97e2594db0d4"). InnerVolumeSpecName "run-ovn". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 31 03:57:57 crc kubenswrapper[4827]: I0131 03:57:57.480311 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/da9e7773-a24b-4e8d-b479-97e2594db0d4-host-cni-netd\") pod \"da9e7773-a24b-4e8d-b479-97e2594db0d4\" (UID: \"da9e7773-a24b-4e8d-b479-97e2594db0d4\") " Jan 31 03:57:57 crc kubenswrapper[4827]: I0131 03:57:57.480336 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/da9e7773-a24b-4e8d-b479-97e2594db0d4-etc-openvswitch" (OuterVolumeSpecName: "etc-openvswitch") pod "da9e7773-a24b-4e8d-b479-97e2594db0d4" (UID: "da9e7773-a24b-4e8d-b479-97e2594db0d4"). InnerVolumeSpecName "etc-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 31 03:57:57 crc kubenswrapper[4827]: I0131 03:57:57.480344 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/da9e7773-a24b-4e8d-b479-97e2594db0d4-ovn-node-metrics-cert\") pod \"da9e7773-a24b-4e8d-b479-97e2594db0d4\" (UID: \"da9e7773-a24b-4e8d-b479-97e2594db0d4\") " Jan 31 03:57:57 crc kubenswrapper[4827]: I0131 03:57:57.480363 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/da9e7773-a24b-4e8d-b479-97e2594db0d4-host-cni-netd" (OuterVolumeSpecName: "host-cni-netd") pod "da9e7773-a24b-4e8d-b479-97e2594db0d4" (UID: "da9e7773-a24b-4e8d-b479-97e2594db0d4"). InnerVolumeSpecName "host-cni-netd". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 31 03:57:57 crc kubenswrapper[4827]: I0131 03:57:57.480370 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/da9e7773-a24b-4e8d-b479-97e2594db0d4-host-var-lib-cni-networks-ovn-kubernetes\") pod \"da9e7773-a24b-4e8d-b479-97e2594db0d4\" (UID: \"da9e7773-a24b-4e8d-b479-97e2594db0d4\") " Jan 31 03:57:57 crc kubenswrapper[4827]: I0131 03:57:57.480399 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/da9e7773-a24b-4e8d-b479-97e2594db0d4-host-run-ovn-kubernetes\") pod \"da9e7773-a24b-4e8d-b479-97e2594db0d4\" (UID: \"da9e7773-a24b-4e8d-b479-97e2594db0d4\") " Jan 31 03:57:57 crc kubenswrapper[4827]: I0131 03:57:57.480427 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/da9e7773-a24b-4e8d-b479-97e2594db0d4-node-log\") pod \"da9e7773-a24b-4e8d-b479-97e2594db0d4\" (UID: \"da9e7773-a24b-4e8d-b479-97e2594db0d4\") " Jan 31 03:57:57 crc kubenswrapper[4827]: I0131 03:57:57.480451 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/da9e7773-a24b-4e8d-b479-97e2594db0d4-host-run-netns\") pod \"da9e7773-a24b-4e8d-b479-97e2594db0d4\" (UID: \"da9e7773-a24b-4e8d-b479-97e2594db0d4\") " Jan 31 03:57:57 crc kubenswrapper[4827]: I0131 03:57:57.480479 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/da9e7773-a24b-4e8d-b479-97e2594db0d4-env-overrides\") pod \"da9e7773-a24b-4e8d-b479-97e2594db0d4\" (UID: \"da9e7773-a24b-4e8d-b479-97e2594db0d4\") " Jan 31 03:57:57 crc kubenswrapper[4827]: I0131 03:57:57.480517 4827 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/da9e7773-a24b-4e8d-b479-97e2594db0d4-ovnkube-config\") pod \"da9e7773-a24b-4e8d-b479-97e2594db0d4\" (UID: \"da9e7773-a24b-4e8d-b479-97e2594db0d4\") " Jan 31 03:57:57 crc kubenswrapper[4827]: I0131 03:57:57.480539 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/da9e7773-a24b-4e8d-b479-97e2594db0d4-run-systemd\") pod \"da9e7773-a24b-4e8d-b479-97e2594db0d4\" (UID: \"da9e7773-a24b-4e8d-b479-97e2594db0d4\") " Jan 31 03:57:57 crc kubenswrapper[4827]: I0131 03:57:57.480576 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/da9e7773-a24b-4e8d-b479-97e2594db0d4-run-openvswitch\") pod \"da9e7773-a24b-4e8d-b479-97e2594db0d4\" (UID: \"da9e7773-a24b-4e8d-b479-97e2594db0d4\") " Jan 31 03:57:57 crc kubenswrapper[4827]: I0131 03:57:57.480738 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/da9e7773-a24b-4e8d-b479-97e2594db0d4-host-run-netns" (OuterVolumeSpecName: "host-run-netns") pod "da9e7773-a24b-4e8d-b479-97e2594db0d4" (UID: "da9e7773-a24b-4e8d-b479-97e2594db0d4"). InnerVolumeSpecName "host-run-netns". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 31 03:57:57 crc kubenswrapper[4827]: I0131 03:57:57.481172 4827 reconciler_common.go:293] "Volume detached for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/da9e7773-a24b-4e8d-b479-97e2594db0d4-var-lib-openvswitch\") on node \"crc\" DevicePath \"\"" Jan 31 03:57:57 crc kubenswrapper[4827]: I0131 03:57:57.481207 4827 reconciler_common.go:293] "Volume detached for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/da9e7773-a24b-4e8d-b479-97e2594db0d4-etc-openvswitch\") on node \"crc\" DevicePath \"\"" Jan 31 03:57:57 crc kubenswrapper[4827]: I0131 03:57:57.481229 4827 reconciler_common.go:293] "Volume detached for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/da9e7773-a24b-4e8d-b479-97e2594db0d4-host-cni-netd\") on node \"crc\" DevicePath \"\"" Jan 31 03:57:57 crc kubenswrapper[4827]: I0131 03:57:57.481248 4827 reconciler_common.go:293] "Volume detached for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/da9e7773-a24b-4e8d-b479-97e2594db0d4-host-run-netns\") on node \"crc\" DevicePath \"\"" Jan 31 03:57:57 crc kubenswrapper[4827]: I0131 03:57:57.481265 4827 reconciler_common.go:293] "Volume detached for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/da9e7773-a24b-4e8d-b479-97e2594db0d4-systemd-units\") on node \"crc\" DevicePath \"\"" Jan 31 03:57:57 crc kubenswrapper[4827]: I0131 03:57:57.481282 4827 reconciler_common.go:293] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/da9e7773-a24b-4e8d-b479-97e2594db0d4-ovnkube-script-lib\") on node \"crc\" DevicePath \"\"" Jan 31 03:57:57 crc kubenswrapper[4827]: I0131 03:57:57.481300 4827 reconciler_common.go:293] "Volume detached for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/da9e7773-a24b-4e8d-b479-97e2594db0d4-host-slash\") on node \"crc\" DevicePath \"\"" Jan 31 03:57:57 crc kubenswrapper[4827]: I0131 
03:57:57.481317 4827 reconciler_common.go:293] "Volume detached for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/da9e7773-a24b-4e8d-b479-97e2594db0d4-host-cni-bin\") on node \"crc\" DevicePath \"\"" Jan 31 03:57:57 crc kubenswrapper[4827]: I0131 03:57:57.481334 4827 reconciler_common.go:293] "Volume detached for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/da9e7773-a24b-4e8d-b479-97e2594db0d4-log-socket\") on node \"crc\" DevicePath \"\"" Jan 31 03:57:57 crc kubenswrapper[4827]: I0131 03:57:57.481352 4827 reconciler_common.go:293] "Volume detached for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/da9e7773-a24b-4e8d-b479-97e2594db0d4-host-kubelet\") on node \"crc\" DevicePath \"\"" Jan 31 03:57:57 crc kubenswrapper[4827]: I0131 03:57:57.481369 4827 reconciler_common.go:293] "Volume detached for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/da9e7773-a24b-4e8d-b479-97e2594db0d4-run-ovn\") on node \"crc\" DevicePath \"\"" Jan 31 03:57:57 crc kubenswrapper[4827]: I0131 03:57:57.481430 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/da9e7773-a24b-4e8d-b479-97e2594db0d4-host-var-lib-cni-networks-ovn-kubernetes" (OuterVolumeSpecName: "host-var-lib-cni-networks-ovn-kubernetes") pod "da9e7773-a24b-4e8d-b479-97e2594db0d4" (UID: "da9e7773-a24b-4e8d-b479-97e2594db0d4"). InnerVolumeSpecName "host-var-lib-cni-networks-ovn-kubernetes". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 31 03:57:57 crc kubenswrapper[4827]: I0131 03:57:57.481488 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/da9e7773-a24b-4e8d-b479-97e2594db0d4-host-run-ovn-kubernetes" (OuterVolumeSpecName: "host-run-ovn-kubernetes") pod "da9e7773-a24b-4e8d-b479-97e2594db0d4" (UID: "da9e7773-a24b-4e8d-b479-97e2594db0d4"). InnerVolumeSpecName "host-run-ovn-kubernetes". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 31 03:57:57 crc kubenswrapper[4827]: I0131 03:57:57.481499 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/da9e7773-a24b-4e8d-b479-97e2594db0d4-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "da9e7773-a24b-4e8d-b479-97e2594db0d4" (UID: "da9e7773-a24b-4e8d-b479-97e2594db0d4"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 03:57:57 crc kubenswrapper[4827]: I0131 03:57:57.481533 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/da9e7773-a24b-4e8d-b479-97e2594db0d4-node-log" (OuterVolumeSpecName: "node-log") pod "da9e7773-a24b-4e8d-b479-97e2594db0d4" (UID: "da9e7773-a24b-4e8d-b479-97e2594db0d4"). InnerVolumeSpecName "node-log". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 31 03:57:57 crc kubenswrapper[4827]: I0131 03:57:57.481634 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/da9e7773-a24b-4e8d-b479-97e2594db0d4-run-openvswitch" (OuterVolumeSpecName: "run-openvswitch") pod "da9e7773-a24b-4e8d-b479-97e2594db0d4" (UID: "da9e7773-a24b-4e8d-b479-97e2594db0d4"). InnerVolumeSpecName "run-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 31 03:57:57 crc kubenswrapper[4827]: I0131 03:57:57.481845 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/da9e7773-a24b-4e8d-b479-97e2594db0d4-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "da9e7773-a24b-4e8d-b479-97e2594db0d4" (UID: "da9e7773-a24b-4e8d-b479-97e2594db0d4"). InnerVolumeSpecName "ovnkube-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 03:57:57 crc kubenswrapper[4827]: I0131 03:57:57.485379 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/da9e7773-a24b-4e8d-b479-97e2594db0d4-kube-api-access-mt4z5" (OuterVolumeSpecName: "kube-api-access-mt4z5") pod "da9e7773-a24b-4e8d-b479-97e2594db0d4" (UID: "da9e7773-a24b-4e8d-b479-97e2594db0d4"). InnerVolumeSpecName "kube-api-access-mt4z5". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 03:57:57 crc kubenswrapper[4827]: I0131 03:57:57.486001 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/da9e7773-a24b-4e8d-b479-97e2594db0d4-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "da9e7773-a24b-4e8d-b479-97e2594db0d4" (UID: "da9e7773-a24b-4e8d-b479-97e2594db0d4"). InnerVolumeSpecName "ovn-node-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 03:57:57 crc kubenswrapper[4827]: I0131 03:57:57.495271 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/da9e7773-a24b-4e8d-b479-97e2594db0d4-run-systemd" (OuterVolumeSpecName: "run-systemd") pod "da9e7773-a24b-4e8d-b479-97e2594db0d4" (UID: "da9e7773-a24b-4e8d-b479-97e2594db0d4"). InnerVolumeSpecName "run-systemd". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 31 03:57:57 crc kubenswrapper[4827]: I0131 03:57:57.537674 4827 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-jrz9x"] Jan 31 03:57:57 crc kubenswrapper[4827]: E0131 03:57:57.537935 4827 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="da9e7773-a24b-4e8d-b479-97e2594db0d4" containerName="ovn-controller" Jan 31 03:57:57 crc kubenswrapper[4827]: I0131 03:57:57.537952 4827 state_mem.go:107] "Deleted CPUSet assignment" podUID="da9e7773-a24b-4e8d-b479-97e2594db0d4" containerName="ovn-controller" Jan 31 03:57:57 crc kubenswrapper[4827]: E0131 03:57:57.537969 4827 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="da9e7773-a24b-4e8d-b479-97e2594db0d4" containerName="sbdb" Jan 31 03:57:57 crc kubenswrapper[4827]: I0131 03:57:57.537977 4827 state_mem.go:107] "Deleted CPUSet assignment" podUID="da9e7773-a24b-4e8d-b479-97e2594db0d4" containerName="sbdb" Jan 31 03:57:57 crc kubenswrapper[4827]: E0131 03:57:57.537989 4827 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="da9e7773-a24b-4e8d-b479-97e2594db0d4" containerName="ovnkube-controller" Jan 31 03:57:57 crc kubenswrapper[4827]: I0131 03:57:57.537996 4827 state_mem.go:107] "Deleted CPUSet assignment" podUID="da9e7773-a24b-4e8d-b479-97e2594db0d4" containerName="ovnkube-controller" Jan 31 03:57:57 crc kubenswrapper[4827]: E0131 03:57:57.538007 4827 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="da9e7773-a24b-4e8d-b479-97e2594db0d4" containerName="ovn-acl-logging" Jan 31 03:57:57 crc kubenswrapper[4827]: I0131 03:57:57.538015 4827 state_mem.go:107] "Deleted CPUSet assignment" podUID="da9e7773-a24b-4e8d-b479-97e2594db0d4" containerName="ovn-acl-logging" Jan 31 03:57:57 crc kubenswrapper[4827]: E0131 03:57:57.538032 4827 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="da9e7773-a24b-4e8d-b479-97e2594db0d4" containerName="nbdb" Jan 31 
03:57:57 crc kubenswrapper[4827]: I0131 03:57:57.538041 4827 state_mem.go:107] "Deleted CPUSet assignment" podUID="da9e7773-a24b-4e8d-b479-97e2594db0d4" containerName="nbdb" Jan 31 03:57:57 crc kubenswrapper[4827]: E0131 03:57:57.538053 4827 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="da9e7773-a24b-4e8d-b479-97e2594db0d4" containerName="kube-rbac-proxy-node" Jan 31 03:57:57 crc kubenswrapper[4827]: I0131 03:57:57.538060 4827 state_mem.go:107] "Deleted CPUSet assignment" podUID="da9e7773-a24b-4e8d-b479-97e2594db0d4" containerName="kube-rbac-proxy-node" Jan 31 03:57:57 crc kubenswrapper[4827]: E0131 03:57:57.538069 4827 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="da9e7773-a24b-4e8d-b479-97e2594db0d4" containerName="kube-rbac-proxy-ovn-metrics" Jan 31 03:57:57 crc kubenswrapper[4827]: I0131 03:57:57.538076 4827 state_mem.go:107] "Deleted CPUSet assignment" podUID="da9e7773-a24b-4e8d-b479-97e2594db0d4" containerName="kube-rbac-proxy-ovn-metrics" Jan 31 03:57:57 crc kubenswrapper[4827]: E0131 03:57:57.538084 4827 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="da9e7773-a24b-4e8d-b479-97e2594db0d4" containerName="ovnkube-controller" Jan 31 03:57:57 crc kubenswrapper[4827]: I0131 03:57:57.538091 4827 state_mem.go:107] "Deleted CPUSet assignment" podUID="da9e7773-a24b-4e8d-b479-97e2594db0d4" containerName="ovnkube-controller" Jan 31 03:57:57 crc kubenswrapper[4827]: E0131 03:57:57.538099 4827 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="da9e7773-a24b-4e8d-b479-97e2594db0d4" containerName="kubecfg-setup" Jan 31 03:57:57 crc kubenswrapper[4827]: I0131 03:57:57.538106 4827 state_mem.go:107] "Deleted CPUSet assignment" podUID="da9e7773-a24b-4e8d-b479-97e2594db0d4" containerName="kubecfg-setup" Jan 31 03:57:57 crc kubenswrapper[4827]: E0131 03:57:57.538112 4827 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="da9e7773-a24b-4e8d-b479-97e2594db0d4" 
containerName="ovnkube-controller" Jan 31 03:57:57 crc kubenswrapper[4827]: I0131 03:57:57.538118 4827 state_mem.go:107] "Deleted CPUSet assignment" podUID="da9e7773-a24b-4e8d-b479-97e2594db0d4" containerName="ovnkube-controller" Jan 31 03:57:57 crc kubenswrapper[4827]: E0131 03:57:57.538127 4827 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="da9e7773-a24b-4e8d-b479-97e2594db0d4" containerName="northd" Jan 31 03:57:57 crc kubenswrapper[4827]: I0131 03:57:57.538133 4827 state_mem.go:107] "Deleted CPUSet assignment" podUID="da9e7773-a24b-4e8d-b479-97e2594db0d4" containerName="northd" Jan 31 03:57:57 crc kubenswrapper[4827]: E0131 03:57:57.538142 4827 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="da9e7773-a24b-4e8d-b479-97e2594db0d4" containerName="ovnkube-controller" Jan 31 03:57:57 crc kubenswrapper[4827]: I0131 03:57:57.538150 4827 state_mem.go:107] "Deleted CPUSet assignment" podUID="da9e7773-a24b-4e8d-b479-97e2594db0d4" containerName="ovnkube-controller" Jan 31 03:57:57 crc kubenswrapper[4827]: I0131 03:57:57.538258 4827 memory_manager.go:354] "RemoveStaleState removing state" podUID="da9e7773-a24b-4e8d-b479-97e2594db0d4" containerName="ovnkube-controller" Jan 31 03:57:57 crc kubenswrapper[4827]: I0131 03:57:57.538272 4827 memory_manager.go:354] "RemoveStaleState removing state" podUID="da9e7773-a24b-4e8d-b479-97e2594db0d4" containerName="kube-rbac-proxy-node" Jan 31 03:57:57 crc kubenswrapper[4827]: I0131 03:57:57.538284 4827 memory_manager.go:354] "RemoveStaleState removing state" podUID="da9e7773-a24b-4e8d-b479-97e2594db0d4" containerName="kube-rbac-proxy-ovn-metrics" Jan 31 03:57:57 crc kubenswrapper[4827]: I0131 03:57:57.538291 4827 memory_manager.go:354] "RemoveStaleState removing state" podUID="da9e7773-a24b-4e8d-b479-97e2594db0d4" containerName="northd" Jan 31 03:57:57 crc kubenswrapper[4827]: I0131 03:57:57.538297 4827 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="da9e7773-a24b-4e8d-b479-97e2594db0d4" containerName="sbdb" Jan 31 03:57:57 crc kubenswrapper[4827]: I0131 03:57:57.538307 4827 memory_manager.go:354] "RemoveStaleState removing state" podUID="da9e7773-a24b-4e8d-b479-97e2594db0d4" containerName="ovn-acl-logging" Jan 31 03:57:57 crc kubenswrapper[4827]: I0131 03:57:57.538314 4827 memory_manager.go:354] "RemoveStaleState removing state" podUID="da9e7773-a24b-4e8d-b479-97e2594db0d4" containerName="nbdb" Jan 31 03:57:57 crc kubenswrapper[4827]: I0131 03:57:57.538322 4827 memory_manager.go:354] "RemoveStaleState removing state" podUID="da9e7773-a24b-4e8d-b479-97e2594db0d4" containerName="ovnkube-controller" Jan 31 03:57:57 crc kubenswrapper[4827]: I0131 03:57:57.538329 4827 memory_manager.go:354] "RemoveStaleState removing state" podUID="da9e7773-a24b-4e8d-b479-97e2594db0d4" containerName="ovnkube-controller" Jan 31 03:57:57 crc kubenswrapper[4827]: I0131 03:57:57.538338 4827 memory_manager.go:354] "RemoveStaleState removing state" podUID="da9e7773-a24b-4e8d-b479-97e2594db0d4" containerName="ovn-controller" Jan 31 03:57:57 crc kubenswrapper[4827]: I0131 03:57:57.538344 4827 memory_manager.go:354] "RemoveStaleState removing state" podUID="da9e7773-a24b-4e8d-b479-97e2594db0d4" containerName="ovnkube-controller" Jan 31 03:57:57 crc kubenswrapper[4827]: E0131 03:57:57.538454 4827 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="da9e7773-a24b-4e8d-b479-97e2594db0d4" containerName="ovnkube-controller" Jan 31 03:57:57 crc kubenswrapper[4827]: I0131 03:57:57.538462 4827 state_mem.go:107] "Deleted CPUSet assignment" podUID="da9e7773-a24b-4e8d-b479-97e2594db0d4" containerName="ovnkube-controller" Jan 31 03:57:57 crc kubenswrapper[4827]: I0131 03:57:57.539563 4827 memory_manager.go:354] "RemoveStaleState removing state" podUID="da9e7773-a24b-4e8d-b479-97e2594db0d4" containerName="ovnkube-controller" Jan 31 03:57:57 crc kubenswrapper[4827]: I0131 03:57:57.541337 4827 util.go:30] "No sandbox for pod can be 
found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-jrz9x" Jan 31 03:57:57 crc kubenswrapper[4827]: I0131 03:57:57.583342 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/2c275c73-42b4-4d12-98ff-15f27b3dbf1c-run-ovn\") pod \"ovnkube-node-jrz9x\" (UID: \"2c275c73-42b4-4d12-98ff-15f27b3dbf1c\") " pod="openshift-ovn-kubernetes/ovnkube-node-jrz9x" Jan 31 03:57:57 crc kubenswrapper[4827]: I0131 03:57:57.583399 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/2c275c73-42b4-4d12-98ff-15f27b3dbf1c-log-socket\") pod \"ovnkube-node-jrz9x\" (UID: \"2c275c73-42b4-4d12-98ff-15f27b3dbf1c\") " pod="openshift-ovn-kubernetes/ovnkube-node-jrz9x" Jan 31 03:57:57 crc kubenswrapper[4827]: I0131 03:57:57.583422 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/2c275c73-42b4-4d12-98ff-15f27b3dbf1c-systemd-units\") pod \"ovnkube-node-jrz9x\" (UID: \"2c275c73-42b4-4d12-98ff-15f27b3dbf1c\") " pod="openshift-ovn-kubernetes/ovnkube-node-jrz9x" Jan 31 03:57:57 crc kubenswrapper[4827]: I0131 03:57:57.583455 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/2c275c73-42b4-4d12-98ff-15f27b3dbf1c-host-run-ovn-kubernetes\") pod \"ovnkube-node-jrz9x\" (UID: \"2c275c73-42b4-4d12-98ff-15f27b3dbf1c\") " pod="openshift-ovn-kubernetes/ovnkube-node-jrz9x" Jan 31 03:57:57 crc kubenswrapper[4827]: I0131 03:57:57.583483 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/2c275c73-42b4-4d12-98ff-15f27b3dbf1c-ovnkube-config\") pod \"ovnkube-node-jrz9x\" (UID: 
\"2c275c73-42b4-4d12-98ff-15f27b3dbf1c\") " pod="openshift-ovn-kubernetes/ovnkube-node-jrz9x" Jan 31 03:57:57 crc kubenswrapper[4827]: I0131 03:57:57.583500 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/2c275c73-42b4-4d12-98ff-15f27b3dbf1c-var-lib-openvswitch\") pod \"ovnkube-node-jrz9x\" (UID: \"2c275c73-42b4-4d12-98ff-15f27b3dbf1c\") " pod="openshift-ovn-kubernetes/ovnkube-node-jrz9x" Jan 31 03:57:57 crc kubenswrapper[4827]: I0131 03:57:57.583523 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/2c275c73-42b4-4d12-98ff-15f27b3dbf1c-host-cni-bin\") pod \"ovnkube-node-jrz9x\" (UID: \"2c275c73-42b4-4d12-98ff-15f27b3dbf1c\") " pod="openshift-ovn-kubernetes/ovnkube-node-jrz9x" Jan 31 03:57:57 crc kubenswrapper[4827]: I0131 03:57:57.583550 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/2c275c73-42b4-4d12-98ff-15f27b3dbf1c-run-openvswitch\") pod \"ovnkube-node-jrz9x\" (UID: \"2c275c73-42b4-4d12-98ff-15f27b3dbf1c\") " pod="openshift-ovn-kubernetes/ovnkube-node-jrz9x" Jan 31 03:57:57 crc kubenswrapper[4827]: I0131 03:57:57.583572 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/2c275c73-42b4-4d12-98ff-15f27b3dbf1c-ovn-node-metrics-cert\") pod \"ovnkube-node-jrz9x\" (UID: \"2c275c73-42b4-4d12-98ff-15f27b3dbf1c\") " pod="openshift-ovn-kubernetes/ovnkube-node-jrz9x" Jan 31 03:57:57 crc kubenswrapper[4827]: I0131 03:57:57.583612 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: 
\"kubernetes.io/host-path/2c275c73-42b4-4d12-98ff-15f27b3dbf1c-etc-openvswitch\") pod \"ovnkube-node-jrz9x\" (UID: \"2c275c73-42b4-4d12-98ff-15f27b3dbf1c\") " pod="openshift-ovn-kubernetes/ovnkube-node-jrz9x" Jan 31 03:57:57 crc kubenswrapper[4827]: I0131 03:57:57.583638 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/2c275c73-42b4-4d12-98ff-15f27b3dbf1c-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-jrz9x\" (UID: \"2c275c73-42b4-4d12-98ff-15f27b3dbf1c\") " pod="openshift-ovn-kubernetes/ovnkube-node-jrz9x" Jan 31 03:57:57 crc kubenswrapper[4827]: I0131 03:57:57.583661 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fmpjj\" (UniqueName: \"kubernetes.io/projected/2c275c73-42b4-4d12-98ff-15f27b3dbf1c-kube-api-access-fmpjj\") pod \"ovnkube-node-jrz9x\" (UID: \"2c275c73-42b4-4d12-98ff-15f27b3dbf1c\") " pod="openshift-ovn-kubernetes/ovnkube-node-jrz9x" Jan 31 03:57:57 crc kubenswrapper[4827]: I0131 03:57:57.583679 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/2c275c73-42b4-4d12-98ff-15f27b3dbf1c-run-systemd\") pod \"ovnkube-node-jrz9x\" (UID: \"2c275c73-42b4-4d12-98ff-15f27b3dbf1c\") " pod="openshift-ovn-kubernetes/ovnkube-node-jrz9x" Jan 31 03:57:57 crc kubenswrapper[4827]: I0131 03:57:57.583700 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/2c275c73-42b4-4d12-98ff-15f27b3dbf1c-ovnkube-script-lib\") pod \"ovnkube-node-jrz9x\" (UID: \"2c275c73-42b4-4d12-98ff-15f27b3dbf1c\") " pod="openshift-ovn-kubernetes/ovnkube-node-jrz9x" Jan 31 03:57:57 crc kubenswrapper[4827]: I0131 03:57:57.583735 4827 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/2c275c73-42b4-4d12-98ff-15f27b3dbf1c-host-run-netns\") pod \"ovnkube-node-jrz9x\" (UID: \"2c275c73-42b4-4d12-98ff-15f27b3dbf1c\") " pod="openshift-ovn-kubernetes/ovnkube-node-jrz9x" Jan 31 03:57:57 crc kubenswrapper[4827]: I0131 03:57:57.583761 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/2c275c73-42b4-4d12-98ff-15f27b3dbf1c-host-kubelet\") pod \"ovnkube-node-jrz9x\" (UID: \"2c275c73-42b4-4d12-98ff-15f27b3dbf1c\") " pod="openshift-ovn-kubernetes/ovnkube-node-jrz9x" Jan 31 03:57:57 crc kubenswrapper[4827]: I0131 03:57:57.583782 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/2c275c73-42b4-4d12-98ff-15f27b3dbf1c-env-overrides\") pod \"ovnkube-node-jrz9x\" (UID: \"2c275c73-42b4-4d12-98ff-15f27b3dbf1c\") " pod="openshift-ovn-kubernetes/ovnkube-node-jrz9x" Jan 31 03:57:57 crc kubenswrapper[4827]: I0131 03:57:57.583806 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/2c275c73-42b4-4d12-98ff-15f27b3dbf1c-node-log\") pod \"ovnkube-node-jrz9x\" (UID: \"2c275c73-42b4-4d12-98ff-15f27b3dbf1c\") " pod="openshift-ovn-kubernetes/ovnkube-node-jrz9x" Jan 31 03:57:57 crc kubenswrapper[4827]: I0131 03:57:57.583824 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/2c275c73-42b4-4d12-98ff-15f27b3dbf1c-host-cni-netd\") pod \"ovnkube-node-jrz9x\" (UID: \"2c275c73-42b4-4d12-98ff-15f27b3dbf1c\") " pod="openshift-ovn-kubernetes/ovnkube-node-jrz9x" Jan 31 03:57:57 crc kubenswrapper[4827]: I0131 03:57:57.583838 4827 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/2c275c73-42b4-4d12-98ff-15f27b3dbf1c-host-slash\") pod \"ovnkube-node-jrz9x\" (UID: \"2c275c73-42b4-4d12-98ff-15f27b3dbf1c\") " pod="openshift-ovn-kubernetes/ovnkube-node-jrz9x" Jan 31 03:57:57 crc kubenswrapper[4827]: I0131 03:57:57.583891 4827 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mt4z5\" (UniqueName: \"kubernetes.io/projected/da9e7773-a24b-4e8d-b479-97e2594db0d4-kube-api-access-mt4z5\") on node \"crc\" DevicePath \"\"" Jan 31 03:57:57 crc kubenswrapper[4827]: I0131 03:57:57.583908 4827 reconciler_common.go:293] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/da9e7773-a24b-4e8d-b479-97e2594db0d4-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\"" Jan 31 03:57:57 crc kubenswrapper[4827]: I0131 03:57:57.583921 4827 reconciler_common.go:293] "Volume detached for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/da9e7773-a24b-4e8d-b479-97e2594db0d4-host-var-lib-cni-networks-ovn-kubernetes\") on node \"crc\" DevicePath \"\"" Jan 31 03:57:57 crc kubenswrapper[4827]: I0131 03:57:57.583933 4827 reconciler_common.go:293] "Volume detached for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/da9e7773-a24b-4e8d-b479-97e2594db0d4-host-run-ovn-kubernetes\") on node \"crc\" DevicePath \"\"" Jan 31 03:57:57 crc kubenswrapper[4827]: I0131 03:57:57.583947 4827 reconciler_common.go:293] "Volume detached for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/da9e7773-a24b-4e8d-b479-97e2594db0d4-node-log\") on node \"crc\" DevicePath \"\"" Jan 31 03:57:57 crc kubenswrapper[4827]: I0131 03:57:57.583956 4827 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/da9e7773-a24b-4e8d-b479-97e2594db0d4-env-overrides\") on node \"crc\" DevicePath \"\"" Jan 31 
03:57:57 crc kubenswrapper[4827]: I0131 03:57:57.583966 4827 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/da9e7773-a24b-4e8d-b479-97e2594db0d4-ovnkube-config\") on node \"crc\" DevicePath \"\"" Jan 31 03:57:57 crc kubenswrapper[4827]: I0131 03:57:57.583975 4827 reconciler_common.go:293] "Volume detached for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/da9e7773-a24b-4e8d-b479-97e2594db0d4-run-systemd\") on node \"crc\" DevicePath \"\"" Jan 31 03:57:57 crc kubenswrapper[4827]: I0131 03:57:57.583985 4827 reconciler_common.go:293] "Volume detached for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/da9e7773-a24b-4e8d-b479-97e2594db0d4-run-openvswitch\") on node \"crc\" DevicePath \"\"" Jan 31 03:57:57 crc kubenswrapper[4827]: I0131 03:57:57.685349 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/2c275c73-42b4-4d12-98ff-15f27b3dbf1c-etc-openvswitch\") pod \"ovnkube-node-jrz9x\" (UID: \"2c275c73-42b4-4d12-98ff-15f27b3dbf1c\") " pod="openshift-ovn-kubernetes/ovnkube-node-jrz9x" Jan 31 03:57:57 crc kubenswrapper[4827]: I0131 03:57:57.685624 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/2c275c73-42b4-4d12-98ff-15f27b3dbf1c-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-jrz9x\" (UID: \"2c275c73-42b4-4d12-98ff-15f27b3dbf1c\") " pod="openshift-ovn-kubernetes/ovnkube-node-jrz9x" Jan 31 03:57:57 crc kubenswrapper[4827]: I0131 03:57:57.685483 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/2c275c73-42b4-4d12-98ff-15f27b3dbf1c-etc-openvswitch\") pod \"ovnkube-node-jrz9x\" (UID: \"2c275c73-42b4-4d12-98ff-15f27b3dbf1c\") " pod="openshift-ovn-kubernetes/ovnkube-node-jrz9x" Jan 31 
03:57:57 crc kubenswrapper[4827]: I0131 03:57:57.685694 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fmpjj\" (UniqueName: \"kubernetes.io/projected/2c275c73-42b4-4d12-98ff-15f27b3dbf1c-kube-api-access-fmpjj\") pod \"ovnkube-node-jrz9x\" (UID: \"2c275c73-42b4-4d12-98ff-15f27b3dbf1c\") " pod="openshift-ovn-kubernetes/ovnkube-node-jrz9x" Jan 31 03:57:57 crc kubenswrapper[4827]: I0131 03:57:57.685794 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/2c275c73-42b4-4d12-98ff-15f27b3dbf1c-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-jrz9x\" (UID: \"2c275c73-42b4-4d12-98ff-15f27b3dbf1c\") " pod="openshift-ovn-kubernetes/ovnkube-node-jrz9x" Jan 31 03:57:57 crc kubenswrapper[4827]: I0131 03:57:57.685807 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/2c275c73-42b4-4d12-98ff-15f27b3dbf1c-run-systemd\") pod \"ovnkube-node-jrz9x\" (UID: \"2c275c73-42b4-4d12-98ff-15f27b3dbf1c\") " pod="openshift-ovn-kubernetes/ovnkube-node-jrz9x" Jan 31 03:57:57 crc kubenswrapper[4827]: I0131 03:57:57.685899 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/2c275c73-42b4-4d12-98ff-15f27b3dbf1c-ovnkube-script-lib\") pod \"ovnkube-node-jrz9x\" (UID: \"2c275c73-42b4-4d12-98ff-15f27b3dbf1c\") " pod="openshift-ovn-kubernetes/ovnkube-node-jrz9x" Jan 31 03:57:57 crc kubenswrapper[4827]: I0131 03:57:57.685924 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/2c275c73-42b4-4d12-98ff-15f27b3dbf1c-host-run-netns\") pod \"ovnkube-node-jrz9x\" (UID: \"2c275c73-42b4-4d12-98ff-15f27b3dbf1c\") " pod="openshift-ovn-kubernetes/ovnkube-node-jrz9x" Jan 31 03:57:57 crc 
kubenswrapper[4827]: I0131 03:57:57.685982 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/2c275c73-42b4-4d12-98ff-15f27b3dbf1c-host-kubelet\") pod \"ovnkube-node-jrz9x\" (UID: \"2c275c73-42b4-4d12-98ff-15f27b3dbf1c\") " pod="openshift-ovn-kubernetes/ovnkube-node-jrz9x" Jan 31 03:57:57 crc kubenswrapper[4827]: I0131 03:57:57.686010 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/2c275c73-42b4-4d12-98ff-15f27b3dbf1c-env-overrides\") pod \"ovnkube-node-jrz9x\" (UID: \"2c275c73-42b4-4d12-98ff-15f27b3dbf1c\") " pod="openshift-ovn-kubernetes/ovnkube-node-jrz9x" Jan 31 03:57:57 crc kubenswrapper[4827]: I0131 03:57:57.686063 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/2c275c73-42b4-4d12-98ff-15f27b3dbf1c-node-log\") pod \"ovnkube-node-jrz9x\" (UID: \"2c275c73-42b4-4d12-98ff-15f27b3dbf1c\") " pod="openshift-ovn-kubernetes/ovnkube-node-jrz9x" Jan 31 03:57:57 crc kubenswrapper[4827]: I0131 03:57:57.686089 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/2c275c73-42b4-4d12-98ff-15f27b3dbf1c-host-kubelet\") pod \"ovnkube-node-jrz9x\" (UID: \"2c275c73-42b4-4d12-98ff-15f27b3dbf1c\") " pod="openshift-ovn-kubernetes/ovnkube-node-jrz9x" Jan 31 03:57:57 crc kubenswrapper[4827]: I0131 03:57:57.686094 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/2c275c73-42b4-4d12-98ff-15f27b3dbf1c-host-slash\") pod \"ovnkube-node-jrz9x\" (UID: \"2c275c73-42b4-4d12-98ff-15f27b3dbf1c\") " pod="openshift-ovn-kubernetes/ovnkube-node-jrz9x" Jan 31 03:57:57 crc kubenswrapper[4827]: I0131 03:57:57.686116 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"host-slash\" (UniqueName: \"kubernetes.io/host-path/2c275c73-42b4-4d12-98ff-15f27b3dbf1c-host-slash\") pod \"ovnkube-node-jrz9x\" (UID: \"2c275c73-42b4-4d12-98ff-15f27b3dbf1c\") " pod="openshift-ovn-kubernetes/ovnkube-node-jrz9x" Jan 31 03:57:57 crc kubenswrapper[4827]: I0131 03:57:57.686125 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/2c275c73-42b4-4d12-98ff-15f27b3dbf1c-host-cni-netd\") pod \"ovnkube-node-jrz9x\" (UID: \"2c275c73-42b4-4d12-98ff-15f27b3dbf1c\") " pod="openshift-ovn-kubernetes/ovnkube-node-jrz9x" Jan 31 03:57:57 crc kubenswrapper[4827]: I0131 03:57:57.686143 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/2c275c73-42b4-4d12-98ff-15f27b3dbf1c-node-log\") pod \"ovnkube-node-jrz9x\" (UID: \"2c275c73-42b4-4d12-98ff-15f27b3dbf1c\") " pod="openshift-ovn-kubernetes/ovnkube-node-jrz9x" Jan 31 03:57:57 crc kubenswrapper[4827]: I0131 03:57:57.686064 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/2c275c73-42b4-4d12-98ff-15f27b3dbf1c-run-systemd\") pod \"ovnkube-node-jrz9x\" (UID: \"2c275c73-42b4-4d12-98ff-15f27b3dbf1c\") " pod="openshift-ovn-kubernetes/ovnkube-node-jrz9x" Jan 31 03:57:57 crc kubenswrapper[4827]: I0131 03:57:57.686159 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/2c275c73-42b4-4d12-98ff-15f27b3dbf1c-run-ovn\") pod \"ovnkube-node-jrz9x\" (UID: \"2c275c73-42b4-4d12-98ff-15f27b3dbf1c\") " pod="openshift-ovn-kubernetes/ovnkube-node-jrz9x" Jan 31 03:57:57 crc kubenswrapper[4827]: I0131 03:57:57.686176 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/2c275c73-42b4-4d12-98ff-15f27b3dbf1c-log-socket\") pod \"ovnkube-node-jrz9x\" (UID: 
\"2c275c73-42b4-4d12-98ff-15f27b3dbf1c\") " pod="openshift-ovn-kubernetes/ovnkube-node-jrz9x" Jan 31 03:57:57 crc kubenswrapper[4827]: I0131 03:57:57.686179 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/2c275c73-42b4-4d12-98ff-15f27b3dbf1c-host-run-netns\") pod \"ovnkube-node-jrz9x\" (UID: \"2c275c73-42b4-4d12-98ff-15f27b3dbf1c\") " pod="openshift-ovn-kubernetes/ovnkube-node-jrz9x" Jan 31 03:57:57 crc kubenswrapper[4827]: I0131 03:57:57.686191 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/2c275c73-42b4-4d12-98ff-15f27b3dbf1c-systemd-units\") pod \"ovnkube-node-jrz9x\" (UID: \"2c275c73-42b4-4d12-98ff-15f27b3dbf1c\") " pod="openshift-ovn-kubernetes/ovnkube-node-jrz9x" Jan 31 03:57:57 crc kubenswrapper[4827]: I0131 03:57:57.686205 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/2c275c73-42b4-4d12-98ff-15f27b3dbf1c-run-ovn\") pod \"ovnkube-node-jrz9x\" (UID: \"2c275c73-42b4-4d12-98ff-15f27b3dbf1c\") " pod="openshift-ovn-kubernetes/ovnkube-node-jrz9x" Jan 31 03:57:57 crc kubenswrapper[4827]: I0131 03:57:57.686213 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/2c275c73-42b4-4d12-98ff-15f27b3dbf1c-host-run-ovn-kubernetes\") pod \"ovnkube-node-jrz9x\" (UID: \"2c275c73-42b4-4d12-98ff-15f27b3dbf1c\") " pod="openshift-ovn-kubernetes/ovnkube-node-jrz9x" Jan 31 03:57:57 crc kubenswrapper[4827]: I0131 03:57:57.686230 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/2c275c73-42b4-4d12-98ff-15f27b3dbf1c-ovnkube-config\") pod \"ovnkube-node-jrz9x\" (UID: \"2c275c73-42b4-4d12-98ff-15f27b3dbf1c\") " pod="openshift-ovn-kubernetes/ovnkube-node-jrz9x" 
Jan 31 03:57:57 crc kubenswrapper[4827]: I0131 03:57:57.686229 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/2c275c73-42b4-4d12-98ff-15f27b3dbf1c-host-cni-netd\") pod \"ovnkube-node-jrz9x\" (UID: \"2c275c73-42b4-4d12-98ff-15f27b3dbf1c\") " pod="openshift-ovn-kubernetes/ovnkube-node-jrz9x" Jan 31 03:57:57 crc kubenswrapper[4827]: I0131 03:57:57.686243 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/2c275c73-42b4-4d12-98ff-15f27b3dbf1c-var-lib-openvswitch\") pod \"ovnkube-node-jrz9x\" (UID: \"2c275c73-42b4-4d12-98ff-15f27b3dbf1c\") " pod="openshift-ovn-kubernetes/ovnkube-node-jrz9x" Jan 31 03:57:57 crc kubenswrapper[4827]: I0131 03:57:57.686256 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/2c275c73-42b4-4d12-98ff-15f27b3dbf1c-systemd-units\") pod \"ovnkube-node-jrz9x\" (UID: \"2c275c73-42b4-4d12-98ff-15f27b3dbf1c\") " pod="openshift-ovn-kubernetes/ovnkube-node-jrz9x" Jan 31 03:57:57 crc kubenswrapper[4827]: I0131 03:57:57.686264 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/2c275c73-42b4-4d12-98ff-15f27b3dbf1c-host-cni-bin\") pod \"ovnkube-node-jrz9x\" (UID: \"2c275c73-42b4-4d12-98ff-15f27b3dbf1c\") " pod="openshift-ovn-kubernetes/ovnkube-node-jrz9x" Jan 31 03:57:57 crc kubenswrapper[4827]: I0131 03:57:57.686283 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/2c275c73-42b4-4d12-98ff-15f27b3dbf1c-run-openvswitch\") pod \"ovnkube-node-jrz9x\" (UID: \"2c275c73-42b4-4d12-98ff-15f27b3dbf1c\") " pod="openshift-ovn-kubernetes/ovnkube-node-jrz9x" Jan 31 03:57:57 crc kubenswrapper[4827]: I0131 03:57:57.686282 4827 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/2c275c73-42b4-4d12-98ff-15f27b3dbf1c-log-socket\") pod \"ovnkube-node-jrz9x\" (UID: \"2c275c73-42b4-4d12-98ff-15f27b3dbf1c\") " pod="openshift-ovn-kubernetes/ovnkube-node-jrz9x" Jan 31 03:57:57 crc kubenswrapper[4827]: I0131 03:57:57.686302 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/2c275c73-42b4-4d12-98ff-15f27b3dbf1c-ovn-node-metrics-cert\") pod \"ovnkube-node-jrz9x\" (UID: \"2c275c73-42b4-4d12-98ff-15f27b3dbf1c\") " pod="openshift-ovn-kubernetes/ovnkube-node-jrz9x" Jan 31 03:57:57 crc kubenswrapper[4827]: I0131 03:57:57.686716 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/2c275c73-42b4-4d12-98ff-15f27b3dbf1c-ovnkube-script-lib\") pod \"ovnkube-node-jrz9x\" (UID: \"2c275c73-42b4-4d12-98ff-15f27b3dbf1c\") " pod="openshift-ovn-kubernetes/ovnkube-node-jrz9x" Jan 31 03:57:57 crc kubenswrapper[4827]: I0131 03:57:57.686808 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/2c275c73-42b4-4d12-98ff-15f27b3dbf1c-host-cni-bin\") pod \"ovnkube-node-jrz9x\" (UID: \"2c275c73-42b4-4d12-98ff-15f27b3dbf1c\") " pod="openshift-ovn-kubernetes/ovnkube-node-jrz9x" Jan 31 03:57:57 crc kubenswrapper[4827]: I0131 03:57:57.686848 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/2c275c73-42b4-4d12-98ff-15f27b3dbf1c-var-lib-openvswitch\") pod \"ovnkube-node-jrz9x\" (UID: \"2c275c73-42b4-4d12-98ff-15f27b3dbf1c\") " pod="openshift-ovn-kubernetes/ovnkube-node-jrz9x" Jan 31 03:57:57 crc kubenswrapper[4827]: I0131 03:57:57.686893 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" 
(UniqueName: \"kubernetes.io/configmap/2c275c73-42b4-4d12-98ff-15f27b3dbf1c-ovnkube-config\") pod \"ovnkube-node-jrz9x\" (UID: \"2c275c73-42b4-4d12-98ff-15f27b3dbf1c\") " pod="openshift-ovn-kubernetes/ovnkube-node-jrz9x" Jan 31 03:57:57 crc kubenswrapper[4827]: I0131 03:57:57.686899 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/2c275c73-42b4-4d12-98ff-15f27b3dbf1c-run-openvswitch\") pod \"ovnkube-node-jrz9x\" (UID: \"2c275c73-42b4-4d12-98ff-15f27b3dbf1c\") " pod="openshift-ovn-kubernetes/ovnkube-node-jrz9x" Jan 31 03:57:57 crc kubenswrapper[4827]: I0131 03:57:57.686932 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/2c275c73-42b4-4d12-98ff-15f27b3dbf1c-host-run-ovn-kubernetes\") pod \"ovnkube-node-jrz9x\" (UID: \"2c275c73-42b4-4d12-98ff-15f27b3dbf1c\") " pod="openshift-ovn-kubernetes/ovnkube-node-jrz9x" Jan 31 03:57:57 crc kubenswrapper[4827]: I0131 03:57:57.687278 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/2c275c73-42b4-4d12-98ff-15f27b3dbf1c-env-overrides\") pod \"ovnkube-node-jrz9x\" (UID: \"2c275c73-42b4-4d12-98ff-15f27b3dbf1c\") " pod="openshift-ovn-kubernetes/ovnkube-node-jrz9x" Jan 31 03:57:57 crc kubenswrapper[4827]: I0131 03:57:57.690363 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/2c275c73-42b4-4d12-98ff-15f27b3dbf1c-ovn-node-metrics-cert\") pod \"ovnkube-node-jrz9x\" (UID: \"2c275c73-42b4-4d12-98ff-15f27b3dbf1c\") " pod="openshift-ovn-kubernetes/ovnkube-node-jrz9x" Jan 31 03:57:57 crc kubenswrapper[4827]: I0131 03:57:57.703149 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fmpjj\" (UniqueName: 
\"kubernetes.io/projected/2c275c73-42b4-4d12-98ff-15f27b3dbf1c-kube-api-access-fmpjj\") pod \"ovnkube-node-jrz9x\" (UID: \"2c275c73-42b4-4d12-98ff-15f27b3dbf1c\") " pod="openshift-ovn-kubernetes/ovnkube-node-jrz9x" Jan 31 03:57:57 crc kubenswrapper[4827]: I0131 03:57:57.771590 4827 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-hj2zw_da9e7773-a24b-4e8d-b479-97e2594db0d4/ovnkube-controller/3.log" Jan 31 03:57:57 crc kubenswrapper[4827]: I0131 03:57:57.774387 4827 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-hj2zw_da9e7773-a24b-4e8d-b479-97e2594db0d4/ovn-acl-logging/0.log" Jan 31 03:57:57 crc kubenswrapper[4827]: I0131 03:57:57.774926 4827 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-hj2zw_da9e7773-a24b-4e8d-b479-97e2594db0d4/ovn-controller/0.log" Jan 31 03:57:57 crc kubenswrapper[4827]: I0131 03:57:57.775341 4827 generic.go:334] "Generic (PLEG): container finished" podID="da9e7773-a24b-4e8d-b479-97e2594db0d4" containerID="cea8767bf7ec05e1a8900fe5b7ec188cd3109073dc3bbf6571d53b501934370c" exitCode=0 Jan 31 03:57:57 crc kubenswrapper[4827]: I0131 03:57:57.775370 4827 generic.go:334] "Generic (PLEG): container finished" podID="da9e7773-a24b-4e8d-b479-97e2594db0d4" containerID="2a0cc10519a103d1d49acd3e239636f7ac9b3a9daa533ce1f485b26ccfffad3d" exitCode=0 Jan 31 03:57:57 crc kubenswrapper[4827]: I0131 03:57:57.775380 4827 generic.go:334] "Generic (PLEG): container finished" podID="da9e7773-a24b-4e8d-b479-97e2594db0d4" containerID="70200918240e4bc9d8f7fcb1d95be3721faa9394058a1a9671da44d9b7a915e6" exitCode=0 Jan 31 03:57:57 crc kubenswrapper[4827]: I0131 03:57:57.775389 4827 generic.go:334] "Generic (PLEG): container finished" podID="da9e7773-a24b-4e8d-b479-97e2594db0d4" containerID="32e69930c4476bd3a2f767f012fb4a2e00de7d46da936c6f0d31029994108e48" exitCode=0 Jan 31 03:57:57 crc kubenswrapper[4827]: I0131 03:57:57.775396 
4827 generic.go:334] "Generic (PLEG): container finished" podID="da9e7773-a24b-4e8d-b479-97e2594db0d4" containerID="2485bd5d17713b9b9184c6b56b1dfda19a229476eabd904d3eda350c30892753" exitCode=0 Jan 31 03:57:57 crc kubenswrapper[4827]: I0131 03:57:57.775404 4827 generic.go:334] "Generic (PLEG): container finished" podID="da9e7773-a24b-4e8d-b479-97e2594db0d4" containerID="bb9ced0bc1beedf1c5779410e90fa61227f5df3de7fa950ab69e1ef44b4a4918" exitCode=0 Jan 31 03:57:57 crc kubenswrapper[4827]: I0131 03:57:57.775411 4827 generic.go:334] "Generic (PLEG): container finished" podID="da9e7773-a24b-4e8d-b479-97e2594db0d4" containerID="5ae8d3d61d2579c0afddd6c9728935c463fec8760a9fc830473c905de5f40c44" exitCode=143 Jan 31 03:57:57 crc kubenswrapper[4827]: I0131 03:57:57.775420 4827 generic.go:334] "Generic (PLEG): container finished" podID="da9e7773-a24b-4e8d-b479-97e2594db0d4" containerID="ed1f4882b03bd25a6073ed3ddeb73d5a0da61c8a8af3ebfab5c90a83ce6af80f" exitCode=143 Jan 31 03:57:57 crc kubenswrapper[4827]: I0131 03:57:57.775479 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hj2zw" event={"ID":"da9e7773-a24b-4e8d-b479-97e2594db0d4","Type":"ContainerDied","Data":"cea8767bf7ec05e1a8900fe5b7ec188cd3109073dc3bbf6571d53b501934370c"} Jan 31 03:57:57 crc kubenswrapper[4827]: I0131 03:57:57.775507 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hj2zw" event={"ID":"da9e7773-a24b-4e8d-b479-97e2594db0d4","Type":"ContainerDied","Data":"2a0cc10519a103d1d49acd3e239636f7ac9b3a9daa533ce1f485b26ccfffad3d"} Jan 31 03:57:57 crc kubenswrapper[4827]: I0131 03:57:57.775521 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hj2zw" event={"ID":"da9e7773-a24b-4e8d-b479-97e2594db0d4","Type":"ContainerDied","Data":"70200918240e4bc9d8f7fcb1d95be3721faa9394058a1a9671da44d9b7a915e6"} Jan 31 03:57:57 crc kubenswrapper[4827]: I0131 03:57:57.775526 4827 util.go:48] "No 
ready sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-hj2zw" Jan 31 03:57:57 crc kubenswrapper[4827]: I0131 03:57:57.775532 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hj2zw" event={"ID":"da9e7773-a24b-4e8d-b479-97e2594db0d4","Type":"ContainerDied","Data":"32e69930c4476bd3a2f767f012fb4a2e00de7d46da936c6f0d31029994108e48"} Jan 31 03:57:57 crc kubenswrapper[4827]: I0131 03:57:57.775655 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hj2zw" event={"ID":"da9e7773-a24b-4e8d-b479-97e2594db0d4","Type":"ContainerDied","Data":"2485bd5d17713b9b9184c6b56b1dfda19a229476eabd904d3eda350c30892753"} Jan 31 03:57:57 crc kubenswrapper[4827]: I0131 03:57:57.775668 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hj2zw" event={"ID":"da9e7773-a24b-4e8d-b479-97e2594db0d4","Type":"ContainerDied","Data":"bb9ced0bc1beedf1c5779410e90fa61227f5df3de7fa950ab69e1ef44b4a4918"} Jan 31 03:57:57 crc kubenswrapper[4827]: I0131 03:57:57.775679 4827 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"c4e52d1c48e767ecfcf769c9cb24d896cb851fb3a791c2f80e30c3746e971ad4"} Jan 31 03:57:57 crc kubenswrapper[4827]: I0131 03:57:57.775690 4827 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"2a0cc10519a103d1d49acd3e239636f7ac9b3a9daa533ce1f485b26ccfffad3d"} Jan 31 03:57:57 crc kubenswrapper[4827]: I0131 03:57:57.775696 4827 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"70200918240e4bc9d8f7fcb1d95be3721faa9394058a1a9671da44d9b7a915e6"} Jan 31 03:57:57 crc kubenswrapper[4827]: I0131 03:57:57.775702 4827 pod_container_deletor.go:114] "Failed to issue the request to remove container" 
containerID={"Type":"cri-o","ID":"32e69930c4476bd3a2f767f012fb4a2e00de7d46da936c6f0d31029994108e48"} Jan 31 03:57:57 crc kubenswrapper[4827]: I0131 03:57:57.775707 4827 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"2485bd5d17713b9b9184c6b56b1dfda19a229476eabd904d3eda350c30892753"} Jan 31 03:57:57 crc kubenswrapper[4827]: I0131 03:57:57.775713 4827 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"bb9ced0bc1beedf1c5779410e90fa61227f5df3de7fa950ab69e1ef44b4a4918"} Jan 31 03:57:57 crc kubenswrapper[4827]: I0131 03:57:57.775719 4827 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"5ae8d3d61d2579c0afddd6c9728935c463fec8760a9fc830473c905de5f40c44"} Jan 31 03:57:57 crc kubenswrapper[4827]: I0131 03:57:57.775724 4827 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"ed1f4882b03bd25a6073ed3ddeb73d5a0da61c8a8af3ebfab5c90a83ce6af80f"} Jan 31 03:57:57 crc kubenswrapper[4827]: I0131 03:57:57.775730 4827 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"96ea399662528dfc8c4bac4c2b710685cb5897e67739704548ca01353d72672c"} Jan 31 03:57:57 crc kubenswrapper[4827]: I0131 03:57:57.775738 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hj2zw" event={"ID":"da9e7773-a24b-4e8d-b479-97e2594db0d4","Type":"ContainerDied","Data":"5ae8d3d61d2579c0afddd6c9728935c463fec8760a9fc830473c905de5f40c44"} Jan 31 03:57:57 crc kubenswrapper[4827]: I0131 03:57:57.775747 4827 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"cea8767bf7ec05e1a8900fe5b7ec188cd3109073dc3bbf6571d53b501934370c"} Jan 31 03:57:57 crc kubenswrapper[4827]: I0131 03:57:57.775754 4827 
pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"c4e52d1c48e767ecfcf769c9cb24d896cb851fb3a791c2f80e30c3746e971ad4"} Jan 31 03:57:57 crc kubenswrapper[4827]: I0131 03:57:57.775760 4827 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"2a0cc10519a103d1d49acd3e239636f7ac9b3a9daa533ce1f485b26ccfffad3d"} Jan 31 03:57:57 crc kubenswrapper[4827]: I0131 03:57:57.775768 4827 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"70200918240e4bc9d8f7fcb1d95be3721faa9394058a1a9671da44d9b7a915e6"} Jan 31 03:57:57 crc kubenswrapper[4827]: I0131 03:57:57.775775 4827 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"32e69930c4476bd3a2f767f012fb4a2e00de7d46da936c6f0d31029994108e48"} Jan 31 03:57:57 crc kubenswrapper[4827]: I0131 03:57:57.775781 4827 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"2485bd5d17713b9b9184c6b56b1dfda19a229476eabd904d3eda350c30892753"} Jan 31 03:57:57 crc kubenswrapper[4827]: I0131 03:57:57.775788 4827 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"bb9ced0bc1beedf1c5779410e90fa61227f5df3de7fa950ab69e1ef44b4a4918"} Jan 31 03:57:57 crc kubenswrapper[4827]: I0131 03:57:57.775795 4827 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"5ae8d3d61d2579c0afddd6c9728935c463fec8760a9fc830473c905de5f40c44"} Jan 31 03:57:57 crc kubenswrapper[4827]: I0131 03:57:57.775801 4827 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"ed1f4882b03bd25a6073ed3ddeb73d5a0da61c8a8af3ebfab5c90a83ce6af80f"} Jan 31 03:57:57 crc kubenswrapper[4827]: I0131 03:57:57.775809 4827 
pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"96ea399662528dfc8c4bac4c2b710685cb5897e67739704548ca01353d72672c"} Jan 31 03:57:57 crc kubenswrapper[4827]: I0131 03:57:57.775817 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hj2zw" event={"ID":"da9e7773-a24b-4e8d-b479-97e2594db0d4","Type":"ContainerDied","Data":"ed1f4882b03bd25a6073ed3ddeb73d5a0da61c8a8af3ebfab5c90a83ce6af80f"} Jan 31 03:57:57 crc kubenswrapper[4827]: I0131 03:57:57.775828 4827 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"cea8767bf7ec05e1a8900fe5b7ec188cd3109073dc3bbf6571d53b501934370c"} Jan 31 03:57:57 crc kubenswrapper[4827]: I0131 03:57:57.775836 4827 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"c4e52d1c48e767ecfcf769c9cb24d896cb851fb3a791c2f80e30c3746e971ad4"} Jan 31 03:57:57 crc kubenswrapper[4827]: I0131 03:57:57.775842 4827 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"2a0cc10519a103d1d49acd3e239636f7ac9b3a9daa533ce1f485b26ccfffad3d"} Jan 31 03:57:57 crc kubenswrapper[4827]: I0131 03:57:57.775849 4827 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"70200918240e4bc9d8f7fcb1d95be3721faa9394058a1a9671da44d9b7a915e6"} Jan 31 03:57:57 crc kubenswrapper[4827]: I0131 03:57:57.775855 4827 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"32e69930c4476bd3a2f767f012fb4a2e00de7d46da936c6f0d31029994108e48"} Jan 31 03:57:57 crc kubenswrapper[4827]: I0131 03:57:57.775862 4827 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"2485bd5d17713b9b9184c6b56b1dfda19a229476eabd904d3eda350c30892753"} Jan 31 
03:57:57 crc kubenswrapper[4827]: I0131 03:57:57.775869 4827 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"bb9ced0bc1beedf1c5779410e90fa61227f5df3de7fa950ab69e1ef44b4a4918"} Jan 31 03:57:57 crc kubenswrapper[4827]: I0131 03:57:57.775890 4827 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"5ae8d3d61d2579c0afddd6c9728935c463fec8760a9fc830473c905de5f40c44"} Jan 31 03:57:57 crc kubenswrapper[4827]: I0131 03:57:57.775897 4827 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"ed1f4882b03bd25a6073ed3ddeb73d5a0da61c8a8af3ebfab5c90a83ce6af80f"} Jan 31 03:57:57 crc kubenswrapper[4827]: I0131 03:57:57.775903 4827 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"96ea399662528dfc8c4bac4c2b710685cb5897e67739704548ca01353d72672c"} Jan 31 03:57:57 crc kubenswrapper[4827]: I0131 03:57:57.775913 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hj2zw" event={"ID":"da9e7773-a24b-4e8d-b479-97e2594db0d4","Type":"ContainerDied","Data":"0c444b232675ca762b9f3eab59ec84cb7f6dfaa929886cbc5df072a80133ff73"} Jan 31 03:57:57 crc kubenswrapper[4827]: I0131 03:57:57.775924 4827 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"cea8767bf7ec05e1a8900fe5b7ec188cd3109073dc3bbf6571d53b501934370c"} Jan 31 03:57:57 crc kubenswrapper[4827]: I0131 03:57:57.775934 4827 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"c4e52d1c48e767ecfcf769c9cb24d896cb851fb3a791c2f80e30c3746e971ad4"} Jan 31 03:57:57 crc kubenswrapper[4827]: I0131 03:57:57.775941 4827 pod_container_deletor.go:114] "Failed to issue the request to remove container" 
containerID={"Type":"cri-o","ID":"2a0cc10519a103d1d49acd3e239636f7ac9b3a9daa533ce1f485b26ccfffad3d"} Jan 31 03:57:57 crc kubenswrapper[4827]: I0131 03:57:57.775950 4827 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"70200918240e4bc9d8f7fcb1d95be3721faa9394058a1a9671da44d9b7a915e6"} Jan 31 03:57:57 crc kubenswrapper[4827]: I0131 03:57:57.775957 4827 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"32e69930c4476bd3a2f767f012fb4a2e00de7d46da936c6f0d31029994108e48"} Jan 31 03:57:57 crc kubenswrapper[4827]: I0131 03:57:57.775964 4827 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"2485bd5d17713b9b9184c6b56b1dfda19a229476eabd904d3eda350c30892753"} Jan 31 03:57:57 crc kubenswrapper[4827]: I0131 03:57:57.775971 4827 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"bb9ced0bc1beedf1c5779410e90fa61227f5df3de7fa950ab69e1ef44b4a4918"} Jan 31 03:57:57 crc kubenswrapper[4827]: I0131 03:57:57.775977 4827 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"5ae8d3d61d2579c0afddd6c9728935c463fec8760a9fc830473c905de5f40c44"} Jan 31 03:57:57 crc kubenswrapper[4827]: I0131 03:57:57.775983 4827 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"ed1f4882b03bd25a6073ed3ddeb73d5a0da61c8a8af3ebfab5c90a83ce6af80f"} Jan 31 03:57:57 crc kubenswrapper[4827]: I0131 03:57:57.775989 4827 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"96ea399662528dfc8c4bac4c2b710685cb5897e67739704548ca01353d72672c"} Jan 31 03:57:57 crc kubenswrapper[4827]: I0131 03:57:57.775547 4827 scope.go:117] "RemoveContainer" 
containerID="cea8767bf7ec05e1a8900fe5b7ec188cd3109073dc3bbf6571d53b501934370c" Jan 31 03:57:57 crc kubenswrapper[4827]: I0131 03:57:57.778231 4827 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-q9q8q_a696063c-4553-4032-8038-9900f09d4031/kube-multus/2.log" Jan 31 03:57:57 crc kubenswrapper[4827]: I0131 03:57:57.778812 4827 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-q9q8q_a696063c-4553-4032-8038-9900f09d4031/kube-multus/1.log" Jan 31 03:57:57 crc kubenswrapper[4827]: I0131 03:57:57.778851 4827 generic.go:334] "Generic (PLEG): container finished" podID="a696063c-4553-4032-8038-9900f09d4031" containerID="ece84892ef77f9e6974ebeca6ed9ded8a17232182f0b1775f9230aea9422d6c1" exitCode=2 Jan 31 03:57:57 crc kubenswrapper[4827]: I0131 03:57:57.779008 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-q9q8q" event={"ID":"a696063c-4553-4032-8038-9900f09d4031","Type":"ContainerDied","Data":"ece84892ef77f9e6974ebeca6ed9ded8a17232182f0b1775f9230aea9422d6c1"} Jan 31 03:57:57 crc kubenswrapper[4827]: I0131 03:57:57.779041 4827 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"b6b5307b128e69f814b56eb826859eb4b02d6645d1665c1f5d205492590135ef"} Jan 31 03:57:57 crc kubenswrapper[4827]: I0131 03:57:57.779455 4827 scope.go:117] "RemoveContainer" containerID="ece84892ef77f9e6974ebeca6ed9ded8a17232182f0b1775f9230aea9422d6c1" Jan 31 03:57:57 crc kubenswrapper[4827]: E0131 03:57:57.779614 4827 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-multus pod=multus-q9q8q_openshift-multus(a696063c-4553-4032-8038-9900f09d4031)\"" pod="openshift-multus/multus-q9q8q" podUID="a696063c-4553-4032-8038-9900f09d4031" Jan 31 03:57:57 crc kubenswrapper[4827]: I0131 03:57:57.796465 4827 scope.go:117] "RemoveContainer" 
containerID="c4e52d1c48e767ecfcf769c9cb24d896cb851fb3a791c2f80e30c3746e971ad4" Jan 31 03:57:57 crc kubenswrapper[4827]: I0131 03:57:57.812639 4827 scope.go:117] "RemoveContainer" containerID="2a0cc10519a103d1d49acd3e239636f7ac9b3a9daa533ce1f485b26ccfffad3d" Jan 31 03:57:57 crc kubenswrapper[4827]: I0131 03:57:57.822982 4827 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-hj2zw"] Jan 31 03:57:57 crc kubenswrapper[4827]: I0131 03:57:57.825937 4827 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-hj2zw"] Jan 31 03:57:57 crc kubenswrapper[4827]: I0131 03:57:57.833826 4827 scope.go:117] "RemoveContainer" containerID="70200918240e4bc9d8f7fcb1d95be3721faa9394058a1a9671da44d9b7a915e6" Jan 31 03:57:57 crc kubenswrapper[4827]: I0131 03:57:57.845129 4827 scope.go:117] "RemoveContainer" containerID="32e69930c4476bd3a2f767f012fb4a2e00de7d46da936c6f0d31029994108e48" Jan 31 03:57:57 crc kubenswrapper[4827]: I0131 03:57:57.864873 4827 scope.go:117] "RemoveContainer" containerID="2485bd5d17713b9b9184c6b56b1dfda19a229476eabd904d3eda350c30892753" Jan 31 03:57:57 crc kubenswrapper[4827]: I0131 03:57:57.875226 4827 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-jrz9x" Jan 31 03:57:57 crc kubenswrapper[4827]: I0131 03:57:57.882561 4827 scope.go:117] "RemoveContainer" containerID="bb9ced0bc1beedf1c5779410e90fa61227f5df3de7fa950ab69e1ef44b4a4918" Jan 31 03:57:57 crc kubenswrapper[4827]: I0131 03:57:57.905640 4827 scope.go:117] "RemoveContainer" containerID="5ae8d3d61d2579c0afddd6c9728935c463fec8760a9fc830473c905de5f40c44" Jan 31 03:57:57 crc kubenswrapper[4827]: I0131 03:57:57.931701 4827 scope.go:117] "RemoveContainer" containerID="ed1f4882b03bd25a6073ed3ddeb73d5a0da61c8a8af3ebfab5c90a83ce6af80f" Jan 31 03:57:57 crc kubenswrapper[4827]: I0131 03:57:57.953312 4827 scope.go:117] "RemoveContainer" containerID="96ea399662528dfc8c4bac4c2b710685cb5897e67739704548ca01353d72672c" Jan 31 03:57:57 crc kubenswrapper[4827]: I0131 03:57:57.971452 4827 scope.go:117] "RemoveContainer" containerID="cea8767bf7ec05e1a8900fe5b7ec188cd3109073dc3bbf6571d53b501934370c" Jan 31 03:57:57 crc kubenswrapper[4827]: E0131 03:57:57.972001 4827 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cea8767bf7ec05e1a8900fe5b7ec188cd3109073dc3bbf6571d53b501934370c\": container with ID starting with cea8767bf7ec05e1a8900fe5b7ec188cd3109073dc3bbf6571d53b501934370c not found: ID does not exist" containerID="cea8767bf7ec05e1a8900fe5b7ec188cd3109073dc3bbf6571d53b501934370c" Jan 31 03:57:57 crc kubenswrapper[4827]: I0131 03:57:57.972035 4827 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cea8767bf7ec05e1a8900fe5b7ec188cd3109073dc3bbf6571d53b501934370c"} err="failed to get container status \"cea8767bf7ec05e1a8900fe5b7ec188cd3109073dc3bbf6571d53b501934370c\": rpc error: code = NotFound desc = could not find container \"cea8767bf7ec05e1a8900fe5b7ec188cd3109073dc3bbf6571d53b501934370c\": container with ID starting with cea8767bf7ec05e1a8900fe5b7ec188cd3109073dc3bbf6571d53b501934370c 
not found: ID does not exist" Jan 31 03:57:57 crc kubenswrapper[4827]: I0131 03:57:57.972058 4827 scope.go:117] "RemoveContainer" containerID="c4e52d1c48e767ecfcf769c9cb24d896cb851fb3a791c2f80e30c3746e971ad4" Jan 31 03:57:57 crc kubenswrapper[4827]: E0131 03:57:57.972504 4827 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c4e52d1c48e767ecfcf769c9cb24d896cb851fb3a791c2f80e30c3746e971ad4\": container with ID starting with c4e52d1c48e767ecfcf769c9cb24d896cb851fb3a791c2f80e30c3746e971ad4 not found: ID does not exist" containerID="c4e52d1c48e767ecfcf769c9cb24d896cb851fb3a791c2f80e30c3746e971ad4" Jan 31 03:57:57 crc kubenswrapper[4827]: I0131 03:57:57.972536 4827 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c4e52d1c48e767ecfcf769c9cb24d896cb851fb3a791c2f80e30c3746e971ad4"} err="failed to get container status \"c4e52d1c48e767ecfcf769c9cb24d896cb851fb3a791c2f80e30c3746e971ad4\": rpc error: code = NotFound desc = could not find container \"c4e52d1c48e767ecfcf769c9cb24d896cb851fb3a791c2f80e30c3746e971ad4\": container with ID starting with c4e52d1c48e767ecfcf769c9cb24d896cb851fb3a791c2f80e30c3746e971ad4 not found: ID does not exist" Jan 31 03:57:57 crc kubenswrapper[4827]: I0131 03:57:57.972553 4827 scope.go:117] "RemoveContainer" containerID="2a0cc10519a103d1d49acd3e239636f7ac9b3a9daa533ce1f485b26ccfffad3d" Jan 31 03:57:57 crc kubenswrapper[4827]: E0131 03:57:57.972814 4827 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2a0cc10519a103d1d49acd3e239636f7ac9b3a9daa533ce1f485b26ccfffad3d\": container with ID starting with 2a0cc10519a103d1d49acd3e239636f7ac9b3a9daa533ce1f485b26ccfffad3d not found: ID does not exist" containerID="2a0cc10519a103d1d49acd3e239636f7ac9b3a9daa533ce1f485b26ccfffad3d" Jan 31 03:57:57 crc kubenswrapper[4827]: I0131 03:57:57.972842 4827 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2a0cc10519a103d1d49acd3e239636f7ac9b3a9daa533ce1f485b26ccfffad3d"} err="failed to get container status \"2a0cc10519a103d1d49acd3e239636f7ac9b3a9daa533ce1f485b26ccfffad3d\": rpc error: code = NotFound desc = could not find container \"2a0cc10519a103d1d49acd3e239636f7ac9b3a9daa533ce1f485b26ccfffad3d\": container with ID starting with 2a0cc10519a103d1d49acd3e239636f7ac9b3a9daa533ce1f485b26ccfffad3d not found: ID does not exist" Jan 31 03:57:57 crc kubenswrapper[4827]: I0131 03:57:57.972858 4827 scope.go:117] "RemoveContainer" containerID="70200918240e4bc9d8f7fcb1d95be3721faa9394058a1a9671da44d9b7a915e6" Jan 31 03:57:57 crc kubenswrapper[4827]: E0131 03:57:57.974218 4827 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"70200918240e4bc9d8f7fcb1d95be3721faa9394058a1a9671da44d9b7a915e6\": container with ID starting with 70200918240e4bc9d8f7fcb1d95be3721faa9394058a1a9671da44d9b7a915e6 not found: ID does not exist" containerID="70200918240e4bc9d8f7fcb1d95be3721faa9394058a1a9671da44d9b7a915e6" Jan 31 03:57:57 crc kubenswrapper[4827]: I0131 03:57:57.974244 4827 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"70200918240e4bc9d8f7fcb1d95be3721faa9394058a1a9671da44d9b7a915e6"} err="failed to get container status \"70200918240e4bc9d8f7fcb1d95be3721faa9394058a1a9671da44d9b7a915e6\": rpc error: code = NotFound desc = could not find container \"70200918240e4bc9d8f7fcb1d95be3721faa9394058a1a9671da44d9b7a915e6\": container with ID starting with 70200918240e4bc9d8f7fcb1d95be3721faa9394058a1a9671da44d9b7a915e6 not found: ID does not exist" Jan 31 03:57:57 crc kubenswrapper[4827]: I0131 03:57:57.974259 4827 scope.go:117] "RemoveContainer" containerID="32e69930c4476bd3a2f767f012fb4a2e00de7d46da936c6f0d31029994108e48" Jan 31 03:57:57 crc kubenswrapper[4827]: E0131 
03:57:57.974507 4827 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"32e69930c4476bd3a2f767f012fb4a2e00de7d46da936c6f0d31029994108e48\": container with ID starting with 32e69930c4476bd3a2f767f012fb4a2e00de7d46da936c6f0d31029994108e48 not found: ID does not exist" containerID="32e69930c4476bd3a2f767f012fb4a2e00de7d46da936c6f0d31029994108e48" Jan 31 03:57:57 crc kubenswrapper[4827]: I0131 03:57:57.974530 4827 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"32e69930c4476bd3a2f767f012fb4a2e00de7d46da936c6f0d31029994108e48"} err="failed to get container status \"32e69930c4476bd3a2f767f012fb4a2e00de7d46da936c6f0d31029994108e48\": rpc error: code = NotFound desc = could not find container \"32e69930c4476bd3a2f767f012fb4a2e00de7d46da936c6f0d31029994108e48\": container with ID starting with 32e69930c4476bd3a2f767f012fb4a2e00de7d46da936c6f0d31029994108e48 not found: ID does not exist" Jan 31 03:57:57 crc kubenswrapper[4827]: I0131 03:57:57.974545 4827 scope.go:117] "RemoveContainer" containerID="2485bd5d17713b9b9184c6b56b1dfda19a229476eabd904d3eda350c30892753" Jan 31 03:57:57 crc kubenswrapper[4827]: E0131 03:57:57.974928 4827 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2485bd5d17713b9b9184c6b56b1dfda19a229476eabd904d3eda350c30892753\": container with ID starting with 2485bd5d17713b9b9184c6b56b1dfda19a229476eabd904d3eda350c30892753 not found: ID does not exist" containerID="2485bd5d17713b9b9184c6b56b1dfda19a229476eabd904d3eda350c30892753" Jan 31 03:57:57 crc kubenswrapper[4827]: I0131 03:57:57.974952 4827 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2485bd5d17713b9b9184c6b56b1dfda19a229476eabd904d3eda350c30892753"} err="failed to get container status \"2485bd5d17713b9b9184c6b56b1dfda19a229476eabd904d3eda350c30892753\": rpc 
error: code = NotFound desc = could not find container \"2485bd5d17713b9b9184c6b56b1dfda19a229476eabd904d3eda350c30892753\": container with ID starting with 2485bd5d17713b9b9184c6b56b1dfda19a229476eabd904d3eda350c30892753 not found: ID does not exist" Jan 31 03:57:57 crc kubenswrapper[4827]: I0131 03:57:57.974967 4827 scope.go:117] "RemoveContainer" containerID="bb9ced0bc1beedf1c5779410e90fa61227f5df3de7fa950ab69e1ef44b4a4918" Jan 31 03:57:57 crc kubenswrapper[4827]: E0131 03:57:57.975684 4827 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bb9ced0bc1beedf1c5779410e90fa61227f5df3de7fa950ab69e1ef44b4a4918\": container with ID starting with bb9ced0bc1beedf1c5779410e90fa61227f5df3de7fa950ab69e1ef44b4a4918 not found: ID does not exist" containerID="bb9ced0bc1beedf1c5779410e90fa61227f5df3de7fa950ab69e1ef44b4a4918" Jan 31 03:57:57 crc kubenswrapper[4827]: I0131 03:57:57.975705 4827 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bb9ced0bc1beedf1c5779410e90fa61227f5df3de7fa950ab69e1ef44b4a4918"} err="failed to get container status \"bb9ced0bc1beedf1c5779410e90fa61227f5df3de7fa950ab69e1ef44b4a4918\": rpc error: code = NotFound desc = could not find container \"bb9ced0bc1beedf1c5779410e90fa61227f5df3de7fa950ab69e1ef44b4a4918\": container with ID starting with bb9ced0bc1beedf1c5779410e90fa61227f5df3de7fa950ab69e1ef44b4a4918 not found: ID does not exist" Jan 31 03:57:57 crc kubenswrapper[4827]: I0131 03:57:57.975723 4827 scope.go:117] "RemoveContainer" containerID="5ae8d3d61d2579c0afddd6c9728935c463fec8760a9fc830473c905de5f40c44" Jan 31 03:57:57 crc kubenswrapper[4827]: E0131 03:57:57.976027 4827 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5ae8d3d61d2579c0afddd6c9728935c463fec8760a9fc830473c905de5f40c44\": container with ID starting with 
5ae8d3d61d2579c0afddd6c9728935c463fec8760a9fc830473c905de5f40c44 not found: ID does not exist" containerID="5ae8d3d61d2579c0afddd6c9728935c463fec8760a9fc830473c905de5f40c44" Jan 31 03:57:57 crc kubenswrapper[4827]: I0131 03:57:57.976052 4827 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5ae8d3d61d2579c0afddd6c9728935c463fec8760a9fc830473c905de5f40c44"} err="failed to get container status \"5ae8d3d61d2579c0afddd6c9728935c463fec8760a9fc830473c905de5f40c44\": rpc error: code = NotFound desc = could not find container \"5ae8d3d61d2579c0afddd6c9728935c463fec8760a9fc830473c905de5f40c44\": container with ID starting with 5ae8d3d61d2579c0afddd6c9728935c463fec8760a9fc830473c905de5f40c44 not found: ID does not exist" Jan 31 03:57:57 crc kubenswrapper[4827]: I0131 03:57:57.976066 4827 scope.go:117] "RemoveContainer" containerID="ed1f4882b03bd25a6073ed3ddeb73d5a0da61c8a8af3ebfab5c90a83ce6af80f" Jan 31 03:57:57 crc kubenswrapper[4827]: E0131 03:57:57.976344 4827 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ed1f4882b03bd25a6073ed3ddeb73d5a0da61c8a8af3ebfab5c90a83ce6af80f\": container with ID starting with ed1f4882b03bd25a6073ed3ddeb73d5a0da61c8a8af3ebfab5c90a83ce6af80f not found: ID does not exist" containerID="ed1f4882b03bd25a6073ed3ddeb73d5a0da61c8a8af3ebfab5c90a83ce6af80f" Jan 31 03:57:57 crc kubenswrapper[4827]: I0131 03:57:57.976371 4827 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ed1f4882b03bd25a6073ed3ddeb73d5a0da61c8a8af3ebfab5c90a83ce6af80f"} err="failed to get container status \"ed1f4882b03bd25a6073ed3ddeb73d5a0da61c8a8af3ebfab5c90a83ce6af80f\": rpc error: code = NotFound desc = could not find container \"ed1f4882b03bd25a6073ed3ddeb73d5a0da61c8a8af3ebfab5c90a83ce6af80f\": container with ID starting with ed1f4882b03bd25a6073ed3ddeb73d5a0da61c8a8af3ebfab5c90a83ce6af80f not found: ID does not 
exist" Jan 31 03:57:57 crc kubenswrapper[4827]: I0131 03:57:57.976386 4827 scope.go:117] "RemoveContainer" containerID="96ea399662528dfc8c4bac4c2b710685cb5897e67739704548ca01353d72672c" Jan 31 03:57:57 crc kubenswrapper[4827]: E0131 03:57:57.976640 4827 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"96ea399662528dfc8c4bac4c2b710685cb5897e67739704548ca01353d72672c\": container with ID starting with 96ea399662528dfc8c4bac4c2b710685cb5897e67739704548ca01353d72672c not found: ID does not exist" containerID="96ea399662528dfc8c4bac4c2b710685cb5897e67739704548ca01353d72672c" Jan 31 03:57:57 crc kubenswrapper[4827]: I0131 03:57:57.976750 4827 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"96ea399662528dfc8c4bac4c2b710685cb5897e67739704548ca01353d72672c"} err="failed to get container status \"96ea399662528dfc8c4bac4c2b710685cb5897e67739704548ca01353d72672c\": rpc error: code = NotFound desc = could not find container \"96ea399662528dfc8c4bac4c2b710685cb5897e67739704548ca01353d72672c\": container with ID starting with 96ea399662528dfc8c4bac4c2b710685cb5897e67739704548ca01353d72672c not found: ID does not exist" Jan 31 03:57:57 crc kubenswrapper[4827]: I0131 03:57:57.976848 4827 scope.go:117] "RemoveContainer" containerID="cea8767bf7ec05e1a8900fe5b7ec188cd3109073dc3bbf6571d53b501934370c" Jan 31 03:57:57 crc kubenswrapper[4827]: I0131 03:57:57.977325 4827 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cea8767bf7ec05e1a8900fe5b7ec188cd3109073dc3bbf6571d53b501934370c"} err="failed to get container status \"cea8767bf7ec05e1a8900fe5b7ec188cd3109073dc3bbf6571d53b501934370c\": rpc error: code = NotFound desc = could not find container \"cea8767bf7ec05e1a8900fe5b7ec188cd3109073dc3bbf6571d53b501934370c\": container with ID starting with cea8767bf7ec05e1a8900fe5b7ec188cd3109073dc3bbf6571d53b501934370c not found: ID 
does not exist" Jan 31 03:57:57 crc kubenswrapper[4827]: I0131 03:57:57.977349 4827 scope.go:117] "RemoveContainer" containerID="c4e52d1c48e767ecfcf769c9cb24d896cb851fb3a791c2f80e30c3746e971ad4" Jan 31 03:57:57 crc kubenswrapper[4827]: I0131 03:57:57.977608 4827 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c4e52d1c48e767ecfcf769c9cb24d896cb851fb3a791c2f80e30c3746e971ad4"} err="failed to get container status \"c4e52d1c48e767ecfcf769c9cb24d896cb851fb3a791c2f80e30c3746e971ad4\": rpc error: code = NotFound desc = could not find container \"c4e52d1c48e767ecfcf769c9cb24d896cb851fb3a791c2f80e30c3746e971ad4\": container with ID starting with c4e52d1c48e767ecfcf769c9cb24d896cb851fb3a791c2f80e30c3746e971ad4 not found: ID does not exist" Jan 31 03:57:57 crc kubenswrapper[4827]: I0131 03:57:57.977628 4827 scope.go:117] "RemoveContainer" containerID="2a0cc10519a103d1d49acd3e239636f7ac9b3a9daa533ce1f485b26ccfffad3d" Jan 31 03:57:57 crc kubenswrapper[4827]: I0131 03:57:57.978347 4827 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2a0cc10519a103d1d49acd3e239636f7ac9b3a9daa533ce1f485b26ccfffad3d"} err="failed to get container status \"2a0cc10519a103d1d49acd3e239636f7ac9b3a9daa533ce1f485b26ccfffad3d\": rpc error: code = NotFound desc = could not find container \"2a0cc10519a103d1d49acd3e239636f7ac9b3a9daa533ce1f485b26ccfffad3d\": container with ID starting with 2a0cc10519a103d1d49acd3e239636f7ac9b3a9daa533ce1f485b26ccfffad3d not found: ID does not exist" Jan 31 03:57:57 crc kubenswrapper[4827]: I0131 03:57:57.978375 4827 scope.go:117] "RemoveContainer" containerID="70200918240e4bc9d8f7fcb1d95be3721faa9394058a1a9671da44d9b7a915e6" Jan 31 03:57:57 crc kubenswrapper[4827]: I0131 03:57:57.978680 4827 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"70200918240e4bc9d8f7fcb1d95be3721faa9394058a1a9671da44d9b7a915e6"} err="failed to get container 
status \"70200918240e4bc9d8f7fcb1d95be3721faa9394058a1a9671da44d9b7a915e6\": rpc error: code = NotFound desc = could not find container \"70200918240e4bc9d8f7fcb1d95be3721faa9394058a1a9671da44d9b7a915e6\": container with ID starting with 70200918240e4bc9d8f7fcb1d95be3721faa9394058a1a9671da44d9b7a915e6 not found: ID does not exist" Jan 31 03:57:57 crc kubenswrapper[4827]: I0131 03:57:57.978786 4827 scope.go:117] "RemoveContainer" containerID="32e69930c4476bd3a2f767f012fb4a2e00de7d46da936c6f0d31029994108e48" Jan 31 03:57:57 crc kubenswrapper[4827]: I0131 03:57:57.979331 4827 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"32e69930c4476bd3a2f767f012fb4a2e00de7d46da936c6f0d31029994108e48"} err="failed to get container status \"32e69930c4476bd3a2f767f012fb4a2e00de7d46da936c6f0d31029994108e48\": rpc error: code = NotFound desc = could not find container \"32e69930c4476bd3a2f767f012fb4a2e00de7d46da936c6f0d31029994108e48\": container with ID starting with 32e69930c4476bd3a2f767f012fb4a2e00de7d46da936c6f0d31029994108e48 not found: ID does not exist" Jan 31 03:57:57 crc kubenswrapper[4827]: I0131 03:57:57.979362 4827 scope.go:117] "RemoveContainer" containerID="2485bd5d17713b9b9184c6b56b1dfda19a229476eabd904d3eda350c30892753" Jan 31 03:57:57 crc kubenswrapper[4827]: I0131 03:57:57.979727 4827 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2485bd5d17713b9b9184c6b56b1dfda19a229476eabd904d3eda350c30892753"} err="failed to get container status \"2485bd5d17713b9b9184c6b56b1dfda19a229476eabd904d3eda350c30892753\": rpc error: code = NotFound desc = could not find container \"2485bd5d17713b9b9184c6b56b1dfda19a229476eabd904d3eda350c30892753\": container with ID starting with 2485bd5d17713b9b9184c6b56b1dfda19a229476eabd904d3eda350c30892753 not found: ID does not exist" Jan 31 03:57:57 crc kubenswrapper[4827]: I0131 03:57:57.979751 4827 scope.go:117] "RemoveContainer" 
containerID="bb9ced0bc1beedf1c5779410e90fa61227f5df3de7fa950ab69e1ef44b4a4918" Jan 31 03:57:57 crc kubenswrapper[4827]: I0131 03:57:57.980140 4827 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bb9ced0bc1beedf1c5779410e90fa61227f5df3de7fa950ab69e1ef44b4a4918"} err="failed to get container status \"bb9ced0bc1beedf1c5779410e90fa61227f5df3de7fa950ab69e1ef44b4a4918\": rpc error: code = NotFound desc = could not find container \"bb9ced0bc1beedf1c5779410e90fa61227f5df3de7fa950ab69e1ef44b4a4918\": container with ID starting with bb9ced0bc1beedf1c5779410e90fa61227f5df3de7fa950ab69e1ef44b4a4918 not found: ID does not exist" Jan 31 03:57:57 crc kubenswrapper[4827]: I0131 03:57:57.980257 4827 scope.go:117] "RemoveContainer" containerID="5ae8d3d61d2579c0afddd6c9728935c463fec8760a9fc830473c905de5f40c44" Jan 31 03:57:57 crc kubenswrapper[4827]: I0131 03:57:57.980614 4827 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5ae8d3d61d2579c0afddd6c9728935c463fec8760a9fc830473c905de5f40c44"} err="failed to get container status \"5ae8d3d61d2579c0afddd6c9728935c463fec8760a9fc830473c905de5f40c44\": rpc error: code = NotFound desc = could not find container \"5ae8d3d61d2579c0afddd6c9728935c463fec8760a9fc830473c905de5f40c44\": container with ID starting with 5ae8d3d61d2579c0afddd6c9728935c463fec8760a9fc830473c905de5f40c44 not found: ID does not exist" Jan 31 03:57:57 crc kubenswrapper[4827]: I0131 03:57:57.980638 4827 scope.go:117] "RemoveContainer" containerID="ed1f4882b03bd25a6073ed3ddeb73d5a0da61c8a8af3ebfab5c90a83ce6af80f" Jan 31 03:57:57 crc kubenswrapper[4827]: I0131 03:57:57.980914 4827 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ed1f4882b03bd25a6073ed3ddeb73d5a0da61c8a8af3ebfab5c90a83ce6af80f"} err="failed to get container status \"ed1f4882b03bd25a6073ed3ddeb73d5a0da61c8a8af3ebfab5c90a83ce6af80f\": rpc error: code = NotFound desc = could 
not find container \"ed1f4882b03bd25a6073ed3ddeb73d5a0da61c8a8af3ebfab5c90a83ce6af80f\": container with ID starting with ed1f4882b03bd25a6073ed3ddeb73d5a0da61c8a8af3ebfab5c90a83ce6af80f not found: ID does not exist" Jan 31 03:57:57 crc kubenswrapper[4827]: I0131 03:57:57.981062 4827 scope.go:117] "RemoveContainer" containerID="96ea399662528dfc8c4bac4c2b710685cb5897e67739704548ca01353d72672c" Jan 31 03:57:57 crc kubenswrapper[4827]: I0131 03:57:57.981431 4827 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"96ea399662528dfc8c4bac4c2b710685cb5897e67739704548ca01353d72672c"} err="failed to get container status \"96ea399662528dfc8c4bac4c2b710685cb5897e67739704548ca01353d72672c\": rpc error: code = NotFound desc = could not find container \"96ea399662528dfc8c4bac4c2b710685cb5897e67739704548ca01353d72672c\": container with ID starting with 96ea399662528dfc8c4bac4c2b710685cb5897e67739704548ca01353d72672c not found: ID does not exist" Jan 31 03:57:57 crc kubenswrapper[4827]: I0131 03:57:57.981462 4827 scope.go:117] "RemoveContainer" containerID="cea8767bf7ec05e1a8900fe5b7ec188cd3109073dc3bbf6571d53b501934370c" Jan 31 03:57:57 crc kubenswrapper[4827]: I0131 03:57:57.981696 4827 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cea8767bf7ec05e1a8900fe5b7ec188cd3109073dc3bbf6571d53b501934370c"} err="failed to get container status \"cea8767bf7ec05e1a8900fe5b7ec188cd3109073dc3bbf6571d53b501934370c\": rpc error: code = NotFound desc = could not find container \"cea8767bf7ec05e1a8900fe5b7ec188cd3109073dc3bbf6571d53b501934370c\": container with ID starting with cea8767bf7ec05e1a8900fe5b7ec188cd3109073dc3bbf6571d53b501934370c not found: ID does not exist" Jan 31 03:57:57 crc kubenswrapper[4827]: I0131 03:57:57.981724 4827 scope.go:117] "RemoveContainer" containerID="c4e52d1c48e767ecfcf769c9cb24d896cb851fb3a791c2f80e30c3746e971ad4" Jan 31 03:57:57 crc kubenswrapper[4827]: I0131 
03:57:57.982073 4827 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c4e52d1c48e767ecfcf769c9cb24d896cb851fb3a791c2f80e30c3746e971ad4"} err="failed to get container status \"c4e52d1c48e767ecfcf769c9cb24d896cb851fb3a791c2f80e30c3746e971ad4\": rpc error: code = NotFound desc = could not find container \"c4e52d1c48e767ecfcf769c9cb24d896cb851fb3a791c2f80e30c3746e971ad4\": container with ID starting with c4e52d1c48e767ecfcf769c9cb24d896cb851fb3a791c2f80e30c3746e971ad4 not found: ID does not exist" Jan 31 03:57:57 crc kubenswrapper[4827]: I0131 03:57:57.982094 4827 scope.go:117] "RemoveContainer" containerID="2a0cc10519a103d1d49acd3e239636f7ac9b3a9daa533ce1f485b26ccfffad3d" Jan 31 03:57:57 crc kubenswrapper[4827]: I0131 03:57:57.982394 4827 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2a0cc10519a103d1d49acd3e239636f7ac9b3a9daa533ce1f485b26ccfffad3d"} err="failed to get container status \"2a0cc10519a103d1d49acd3e239636f7ac9b3a9daa533ce1f485b26ccfffad3d\": rpc error: code = NotFound desc = could not find container \"2a0cc10519a103d1d49acd3e239636f7ac9b3a9daa533ce1f485b26ccfffad3d\": container with ID starting with 2a0cc10519a103d1d49acd3e239636f7ac9b3a9daa533ce1f485b26ccfffad3d not found: ID does not exist" Jan 31 03:57:57 crc kubenswrapper[4827]: I0131 03:57:57.982423 4827 scope.go:117] "RemoveContainer" containerID="70200918240e4bc9d8f7fcb1d95be3721faa9394058a1a9671da44d9b7a915e6" Jan 31 03:57:57 crc kubenswrapper[4827]: I0131 03:57:57.982683 4827 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"70200918240e4bc9d8f7fcb1d95be3721faa9394058a1a9671da44d9b7a915e6"} err="failed to get container status \"70200918240e4bc9d8f7fcb1d95be3721faa9394058a1a9671da44d9b7a915e6\": rpc error: code = NotFound desc = could not find container \"70200918240e4bc9d8f7fcb1d95be3721faa9394058a1a9671da44d9b7a915e6\": container with ID starting with 
70200918240e4bc9d8f7fcb1d95be3721faa9394058a1a9671da44d9b7a915e6 not found: ID does not exist" Jan 31 03:57:57 crc kubenswrapper[4827]: I0131 03:57:57.982707 4827 scope.go:117] "RemoveContainer" containerID="32e69930c4476bd3a2f767f012fb4a2e00de7d46da936c6f0d31029994108e48" Jan 31 03:57:57 crc kubenswrapper[4827]: I0131 03:57:57.983094 4827 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"32e69930c4476bd3a2f767f012fb4a2e00de7d46da936c6f0d31029994108e48"} err="failed to get container status \"32e69930c4476bd3a2f767f012fb4a2e00de7d46da936c6f0d31029994108e48\": rpc error: code = NotFound desc = could not find container \"32e69930c4476bd3a2f767f012fb4a2e00de7d46da936c6f0d31029994108e48\": container with ID starting with 32e69930c4476bd3a2f767f012fb4a2e00de7d46da936c6f0d31029994108e48 not found: ID does not exist" Jan 31 03:57:57 crc kubenswrapper[4827]: I0131 03:57:57.983114 4827 scope.go:117] "RemoveContainer" containerID="2485bd5d17713b9b9184c6b56b1dfda19a229476eabd904d3eda350c30892753" Jan 31 03:57:57 crc kubenswrapper[4827]: I0131 03:57:57.983645 4827 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2485bd5d17713b9b9184c6b56b1dfda19a229476eabd904d3eda350c30892753"} err="failed to get container status \"2485bd5d17713b9b9184c6b56b1dfda19a229476eabd904d3eda350c30892753\": rpc error: code = NotFound desc = could not find container \"2485bd5d17713b9b9184c6b56b1dfda19a229476eabd904d3eda350c30892753\": container with ID starting with 2485bd5d17713b9b9184c6b56b1dfda19a229476eabd904d3eda350c30892753 not found: ID does not exist" Jan 31 03:57:57 crc kubenswrapper[4827]: I0131 03:57:57.983670 4827 scope.go:117] "RemoveContainer" containerID="bb9ced0bc1beedf1c5779410e90fa61227f5df3de7fa950ab69e1ef44b4a4918" Jan 31 03:57:57 crc kubenswrapper[4827]: I0131 03:57:57.983917 4827 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"bb9ced0bc1beedf1c5779410e90fa61227f5df3de7fa950ab69e1ef44b4a4918"} err="failed to get container status \"bb9ced0bc1beedf1c5779410e90fa61227f5df3de7fa950ab69e1ef44b4a4918\": rpc error: code = NotFound desc = could not find container \"bb9ced0bc1beedf1c5779410e90fa61227f5df3de7fa950ab69e1ef44b4a4918\": container with ID starting with bb9ced0bc1beedf1c5779410e90fa61227f5df3de7fa950ab69e1ef44b4a4918 not found: ID does not exist" Jan 31 03:57:57 crc kubenswrapper[4827]: I0131 03:57:57.984214 4827 scope.go:117] "RemoveContainer" containerID="5ae8d3d61d2579c0afddd6c9728935c463fec8760a9fc830473c905de5f40c44" Jan 31 03:57:57 crc kubenswrapper[4827]: I0131 03:57:57.985105 4827 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5ae8d3d61d2579c0afddd6c9728935c463fec8760a9fc830473c905de5f40c44"} err="failed to get container status \"5ae8d3d61d2579c0afddd6c9728935c463fec8760a9fc830473c905de5f40c44\": rpc error: code = NotFound desc = could not find container \"5ae8d3d61d2579c0afddd6c9728935c463fec8760a9fc830473c905de5f40c44\": container with ID starting with 5ae8d3d61d2579c0afddd6c9728935c463fec8760a9fc830473c905de5f40c44 not found: ID does not exist" Jan 31 03:57:57 crc kubenswrapper[4827]: I0131 03:57:57.985217 4827 scope.go:117] "RemoveContainer" containerID="ed1f4882b03bd25a6073ed3ddeb73d5a0da61c8a8af3ebfab5c90a83ce6af80f" Jan 31 03:57:57 crc kubenswrapper[4827]: I0131 03:57:57.985514 4827 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ed1f4882b03bd25a6073ed3ddeb73d5a0da61c8a8af3ebfab5c90a83ce6af80f"} err="failed to get container status \"ed1f4882b03bd25a6073ed3ddeb73d5a0da61c8a8af3ebfab5c90a83ce6af80f\": rpc error: code = NotFound desc = could not find container \"ed1f4882b03bd25a6073ed3ddeb73d5a0da61c8a8af3ebfab5c90a83ce6af80f\": container with ID starting with ed1f4882b03bd25a6073ed3ddeb73d5a0da61c8a8af3ebfab5c90a83ce6af80f not found: ID does not 
exist" Jan 31 03:57:57 crc kubenswrapper[4827]: I0131 03:57:57.985612 4827 scope.go:117] "RemoveContainer" containerID="96ea399662528dfc8c4bac4c2b710685cb5897e67739704548ca01353d72672c" Jan 31 03:57:57 crc kubenswrapper[4827]: I0131 03:57:57.988943 4827 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"96ea399662528dfc8c4bac4c2b710685cb5897e67739704548ca01353d72672c"} err="failed to get container status \"96ea399662528dfc8c4bac4c2b710685cb5897e67739704548ca01353d72672c\": rpc error: code = NotFound desc = could not find container \"96ea399662528dfc8c4bac4c2b710685cb5897e67739704548ca01353d72672c\": container with ID starting with 96ea399662528dfc8c4bac4c2b710685cb5897e67739704548ca01353d72672c not found: ID does not exist" Jan 31 03:57:57 crc kubenswrapper[4827]: I0131 03:57:57.989037 4827 scope.go:117] "RemoveContainer" containerID="cea8767bf7ec05e1a8900fe5b7ec188cd3109073dc3bbf6571d53b501934370c" Jan 31 03:57:57 crc kubenswrapper[4827]: I0131 03:57:57.989590 4827 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cea8767bf7ec05e1a8900fe5b7ec188cd3109073dc3bbf6571d53b501934370c"} err="failed to get container status \"cea8767bf7ec05e1a8900fe5b7ec188cd3109073dc3bbf6571d53b501934370c\": rpc error: code = NotFound desc = could not find container \"cea8767bf7ec05e1a8900fe5b7ec188cd3109073dc3bbf6571d53b501934370c\": container with ID starting with cea8767bf7ec05e1a8900fe5b7ec188cd3109073dc3bbf6571d53b501934370c not found: ID does not exist" Jan 31 03:57:57 crc kubenswrapper[4827]: I0131 03:57:57.989721 4827 scope.go:117] "RemoveContainer" containerID="c4e52d1c48e767ecfcf769c9cb24d896cb851fb3a791c2f80e30c3746e971ad4" Jan 31 03:57:57 crc kubenswrapper[4827]: I0131 03:57:57.990177 4827 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c4e52d1c48e767ecfcf769c9cb24d896cb851fb3a791c2f80e30c3746e971ad4"} err="failed to get container status 
\"c4e52d1c48e767ecfcf769c9cb24d896cb851fb3a791c2f80e30c3746e971ad4\": rpc error: code = NotFound desc = could not find container \"c4e52d1c48e767ecfcf769c9cb24d896cb851fb3a791c2f80e30c3746e971ad4\": container with ID starting with c4e52d1c48e767ecfcf769c9cb24d896cb851fb3a791c2f80e30c3746e971ad4 not found: ID does not exist" Jan 31 03:57:57 crc kubenswrapper[4827]: I0131 03:57:57.990238 4827 scope.go:117] "RemoveContainer" containerID="2a0cc10519a103d1d49acd3e239636f7ac9b3a9daa533ce1f485b26ccfffad3d" Jan 31 03:57:57 crc kubenswrapper[4827]: I0131 03:57:57.990634 4827 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2a0cc10519a103d1d49acd3e239636f7ac9b3a9daa533ce1f485b26ccfffad3d"} err="failed to get container status \"2a0cc10519a103d1d49acd3e239636f7ac9b3a9daa533ce1f485b26ccfffad3d\": rpc error: code = NotFound desc = could not find container \"2a0cc10519a103d1d49acd3e239636f7ac9b3a9daa533ce1f485b26ccfffad3d\": container with ID starting with 2a0cc10519a103d1d49acd3e239636f7ac9b3a9daa533ce1f485b26ccfffad3d not found: ID does not exist" Jan 31 03:57:57 crc kubenswrapper[4827]: I0131 03:57:57.990670 4827 scope.go:117] "RemoveContainer" containerID="70200918240e4bc9d8f7fcb1d95be3721faa9394058a1a9671da44d9b7a915e6" Jan 31 03:57:57 crc kubenswrapper[4827]: I0131 03:57:57.990960 4827 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"70200918240e4bc9d8f7fcb1d95be3721faa9394058a1a9671da44d9b7a915e6"} err="failed to get container status \"70200918240e4bc9d8f7fcb1d95be3721faa9394058a1a9671da44d9b7a915e6\": rpc error: code = NotFound desc = could not find container \"70200918240e4bc9d8f7fcb1d95be3721faa9394058a1a9671da44d9b7a915e6\": container with ID starting with 70200918240e4bc9d8f7fcb1d95be3721faa9394058a1a9671da44d9b7a915e6 not found: ID does not exist" Jan 31 03:57:57 crc kubenswrapper[4827]: I0131 03:57:57.990991 4827 scope.go:117] "RemoveContainer" 
containerID="32e69930c4476bd3a2f767f012fb4a2e00de7d46da936c6f0d31029994108e48" Jan 31 03:57:57 crc kubenswrapper[4827]: I0131 03:57:57.991313 4827 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"32e69930c4476bd3a2f767f012fb4a2e00de7d46da936c6f0d31029994108e48"} err="failed to get container status \"32e69930c4476bd3a2f767f012fb4a2e00de7d46da936c6f0d31029994108e48\": rpc error: code = NotFound desc = could not find container \"32e69930c4476bd3a2f767f012fb4a2e00de7d46da936c6f0d31029994108e48\": container with ID starting with 32e69930c4476bd3a2f767f012fb4a2e00de7d46da936c6f0d31029994108e48 not found: ID does not exist" Jan 31 03:57:57 crc kubenswrapper[4827]: I0131 03:57:57.991382 4827 scope.go:117] "RemoveContainer" containerID="2485bd5d17713b9b9184c6b56b1dfda19a229476eabd904d3eda350c30892753" Jan 31 03:57:57 crc kubenswrapper[4827]: I0131 03:57:57.991735 4827 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2485bd5d17713b9b9184c6b56b1dfda19a229476eabd904d3eda350c30892753"} err="failed to get container status \"2485bd5d17713b9b9184c6b56b1dfda19a229476eabd904d3eda350c30892753\": rpc error: code = NotFound desc = could not find container \"2485bd5d17713b9b9184c6b56b1dfda19a229476eabd904d3eda350c30892753\": container with ID starting with 2485bd5d17713b9b9184c6b56b1dfda19a229476eabd904d3eda350c30892753 not found: ID does not exist" Jan 31 03:57:57 crc kubenswrapper[4827]: I0131 03:57:57.991768 4827 scope.go:117] "RemoveContainer" containerID="bb9ced0bc1beedf1c5779410e90fa61227f5df3de7fa950ab69e1ef44b4a4918" Jan 31 03:57:57 crc kubenswrapper[4827]: I0131 03:57:57.992020 4827 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bb9ced0bc1beedf1c5779410e90fa61227f5df3de7fa950ab69e1ef44b4a4918"} err="failed to get container status \"bb9ced0bc1beedf1c5779410e90fa61227f5df3de7fa950ab69e1ef44b4a4918\": rpc error: code = NotFound desc = could 
not find container \"bb9ced0bc1beedf1c5779410e90fa61227f5df3de7fa950ab69e1ef44b4a4918\": container with ID starting with bb9ced0bc1beedf1c5779410e90fa61227f5df3de7fa950ab69e1ef44b4a4918 not found: ID does not exist" Jan 31 03:57:57 crc kubenswrapper[4827]: I0131 03:57:57.992047 4827 scope.go:117] "RemoveContainer" containerID="5ae8d3d61d2579c0afddd6c9728935c463fec8760a9fc830473c905de5f40c44" Jan 31 03:57:57 crc kubenswrapper[4827]: I0131 03:57:57.992243 4827 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5ae8d3d61d2579c0afddd6c9728935c463fec8760a9fc830473c905de5f40c44"} err="failed to get container status \"5ae8d3d61d2579c0afddd6c9728935c463fec8760a9fc830473c905de5f40c44\": rpc error: code = NotFound desc = could not find container \"5ae8d3d61d2579c0afddd6c9728935c463fec8760a9fc830473c905de5f40c44\": container with ID starting with 5ae8d3d61d2579c0afddd6c9728935c463fec8760a9fc830473c905de5f40c44 not found: ID does not exist" Jan 31 03:57:57 crc kubenswrapper[4827]: I0131 03:57:57.992271 4827 scope.go:117] "RemoveContainer" containerID="ed1f4882b03bd25a6073ed3ddeb73d5a0da61c8a8af3ebfab5c90a83ce6af80f" Jan 31 03:57:57 crc kubenswrapper[4827]: I0131 03:57:57.992560 4827 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ed1f4882b03bd25a6073ed3ddeb73d5a0da61c8a8af3ebfab5c90a83ce6af80f"} err="failed to get container status \"ed1f4882b03bd25a6073ed3ddeb73d5a0da61c8a8af3ebfab5c90a83ce6af80f\": rpc error: code = NotFound desc = could not find container \"ed1f4882b03bd25a6073ed3ddeb73d5a0da61c8a8af3ebfab5c90a83ce6af80f\": container with ID starting with ed1f4882b03bd25a6073ed3ddeb73d5a0da61c8a8af3ebfab5c90a83ce6af80f not found: ID does not exist" Jan 31 03:57:57 crc kubenswrapper[4827]: I0131 03:57:57.992589 4827 scope.go:117] "RemoveContainer" containerID="96ea399662528dfc8c4bac4c2b710685cb5897e67739704548ca01353d72672c" Jan 31 03:57:57 crc kubenswrapper[4827]: I0131 
03:57:57.992811 4827 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"96ea399662528dfc8c4bac4c2b710685cb5897e67739704548ca01353d72672c"} err="failed to get container status \"96ea399662528dfc8c4bac4c2b710685cb5897e67739704548ca01353d72672c\": rpc error: code = NotFound desc = could not find container \"96ea399662528dfc8c4bac4c2b710685cb5897e67739704548ca01353d72672c\": container with ID starting with 96ea399662528dfc8c4bac4c2b710685cb5897e67739704548ca01353d72672c not found: ID does not exist" Jan 31 03:57:57 crc kubenswrapper[4827]: I0131 03:57:57.992862 4827 scope.go:117] "RemoveContainer" containerID="cea8767bf7ec05e1a8900fe5b7ec188cd3109073dc3bbf6571d53b501934370c" Jan 31 03:57:57 crc kubenswrapper[4827]: I0131 03:57:57.993088 4827 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cea8767bf7ec05e1a8900fe5b7ec188cd3109073dc3bbf6571d53b501934370c"} err="failed to get container status \"cea8767bf7ec05e1a8900fe5b7ec188cd3109073dc3bbf6571d53b501934370c\": rpc error: code = NotFound desc = could not find container \"cea8767bf7ec05e1a8900fe5b7ec188cd3109073dc3bbf6571d53b501934370c\": container with ID starting with cea8767bf7ec05e1a8900fe5b7ec188cd3109073dc3bbf6571d53b501934370c not found: ID does not exist" Jan 31 03:57:58 crc kubenswrapper[4827]: I0131 03:57:58.118853 4827 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="da9e7773-a24b-4e8d-b479-97e2594db0d4" path="/var/lib/kubelet/pods/da9e7773-a24b-4e8d-b479-97e2594db0d4/volumes" Jan 31 03:57:58 crc kubenswrapper[4827]: I0131 03:57:58.786399 4827 generic.go:334] "Generic (PLEG): container finished" podID="2c275c73-42b4-4d12-98ff-15f27b3dbf1c" containerID="f47ba5b68287d39d2c45f79e9fcd16d3888c06f7bec56405f649053593700874" exitCode=0 Jan 31 03:57:58 crc kubenswrapper[4827]: I0131 03:57:58.786508 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jrz9x" 
event={"ID":"2c275c73-42b4-4d12-98ff-15f27b3dbf1c","Type":"ContainerDied","Data":"f47ba5b68287d39d2c45f79e9fcd16d3888c06f7bec56405f649053593700874"} Jan 31 03:57:58 crc kubenswrapper[4827]: I0131 03:57:58.786815 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jrz9x" event={"ID":"2c275c73-42b4-4d12-98ff-15f27b3dbf1c","Type":"ContainerStarted","Data":"0bad7177c73db9fbadefb22d9739f572470f8ffc3565f722e87c465cadceac36"} Jan 31 03:57:59 crc kubenswrapper[4827]: I0131 03:57:59.799829 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jrz9x" event={"ID":"2c275c73-42b4-4d12-98ff-15f27b3dbf1c","Type":"ContainerStarted","Data":"860ebef3f19381bd989ecfbff7065ff29de40b6dfc323dc0fae247a56c322cf3"} Jan 31 03:57:59 crc kubenswrapper[4827]: I0131 03:57:59.800298 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jrz9x" event={"ID":"2c275c73-42b4-4d12-98ff-15f27b3dbf1c","Type":"ContainerStarted","Data":"82d43529939b8c758f4d742dd4935d7f5468b357867ef02835b0293612913445"} Jan 31 03:57:59 crc kubenswrapper[4827]: I0131 03:57:59.800313 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jrz9x" event={"ID":"2c275c73-42b4-4d12-98ff-15f27b3dbf1c","Type":"ContainerStarted","Data":"becdfb318b335d9ef216bdd0379c7bce09380a9ee22455c3bcc426634526686b"} Jan 31 03:57:59 crc kubenswrapper[4827]: I0131 03:57:59.800325 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jrz9x" event={"ID":"2c275c73-42b4-4d12-98ff-15f27b3dbf1c","Type":"ContainerStarted","Data":"7fe2572304e1e817f67507e2f35bdb616989e80f9b254ae24deeacba0e715220"} Jan 31 03:57:59 crc kubenswrapper[4827]: I0131 03:57:59.800338 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jrz9x" 
event={"ID":"2c275c73-42b4-4d12-98ff-15f27b3dbf1c","Type":"ContainerStarted","Data":"b1dbd40b8cb50cb5737cc72be57b2eab74a0301baa1d6529b031a3e210ada6e5"} Jan 31 03:57:59 crc kubenswrapper[4827]: I0131 03:57:59.800350 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jrz9x" event={"ID":"2c275c73-42b4-4d12-98ff-15f27b3dbf1c","Type":"ContainerStarted","Data":"d0c134b8921528bf59a0cfb1f3639cf7bc39cf874f710b4f0c77e7960424b95d"} Jan 31 03:58:02 crc kubenswrapper[4827]: I0131 03:58:02.818999 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jrz9x" event={"ID":"2c275c73-42b4-4d12-98ff-15f27b3dbf1c","Type":"ContainerStarted","Data":"e6c0d860fdd98901e7d0482891f8b2f5dd696b925f1ff3e5ab0e89979bce18a5"} Jan 31 03:58:04 crc kubenswrapper[4827]: I0131 03:58:04.834396 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jrz9x" event={"ID":"2c275c73-42b4-4d12-98ff-15f27b3dbf1c","Type":"ContainerStarted","Data":"210fd1ab4ece0cd37bfc6aa79c42c31302a3231d5207f3991cf62673b8527c37"} Jan 31 03:58:04 crc kubenswrapper[4827]: I0131 03:58:04.835947 4827 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-jrz9x" Jan 31 03:58:04 crc kubenswrapper[4827]: I0131 03:58:04.835970 4827 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-jrz9x" Jan 31 03:58:04 crc kubenswrapper[4827]: I0131 03:58:04.836015 4827 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-jrz9x" Jan 31 03:58:04 crc kubenswrapper[4827]: I0131 03:58:04.867689 4827 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-jrz9x" Jan 31 03:58:04 crc kubenswrapper[4827]: I0131 03:58:04.871459 4827 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openshift-ovn-kubernetes/ovnkube-node-jrz9x" podStartSLOduration=7.871434623 podStartE2EDuration="7.871434623s" podCreationTimestamp="2026-01-31 03:57:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 03:58:04.86713158 +0000 UTC m=+677.554212039" watchObservedRunningTime="2026-01-31 03:58:04.871434623 +0000 UTC m=+677.558515092" Jan 31 03:58:04 crc kubenswrapper[4827]: I0131 03:58:04.880431 4827 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-jrz9x" Jan 31 03:58:09 crc kubenswrapper[4827]: I0131 03:58:09.110268 4827 scope.go:117] "RemoveContainer" containerID="ece84892ef77f9e6974ebeca6ed9ded8a17232182f0b1775f9230aea9422d6c1" Jan 31 03:58:09 crc kubenswrapper[4827]: E0131 03:58:09.112310 4827 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-multus pod=multus-q9q8q_openshift-multus(a696063c-4553-4032-8038-9900f09d4031)\"" pod="openshift-multus/multus-q9q8q" podUID="a696063c-4553-4032-8038-9900f09d4031" Jan 31 03:58:22 crc kubenswrapper[4827]: I0131 03:58:22.110259 4827 scope.go:117] "RemoveContainer" containerID="ece84892ef77f9e6974ebeca6ed9ded8a17232182f0b1775f9230aea9422d6c1" Jan 31 03:58:22 crc kubenswrapper[4827]: I0131 03:58:22.953782 4827 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-q9q8q_a696063c-4553-4032-8038-9900f09d4031/kube-multus/2.log" Jan 31 03:58:22 crc kubenswrapper[4827]: I0131 03:58:22.954736 4827 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-q9q8q_a696063c-4553-4032-8038-9900f09d4031/kube-multus/1.log" Jan 31 03:58:22 crc kubenswrapper[4827]: I0131 03:58:22.955025 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-q9q8q" 
event={"ID":"a696063c-4553-4032-8038-9900f09d4031","Type":"ContainerStarted","Data":"d5a66ea53fd48995eead42d003be2d5f57d092667bf146fbc6436db993c29fc8"} Jan 31 03:58:27 crc kubenswrapper[4827]: I0131 03:58:27.895014 4827 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-jrz9x" Jan 31 03:58:35 crc kubenswrapper[4827]: I0131 03:58:35.838946 4827 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713dbfg4"] Jan 31 03:58:35 crc kubenswrapper[4827]: I0131 03:58:35.841414 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713dbfg4" Jan 31 03:58:35 crc kubenswrapper[4827]: I0131 03:58:35.844250 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Jan 31 03:58:35 crc kubenswrapper[4827]: I0131 03:58:35.852247 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713dbfg4"] Jan 31 03:58:35 crc kubenswrapper[4827]: I0131 03:58:35.980647 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/3bd71c58-cce1-40f3-b951-8b414eec7cd6-util\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713dbfg4\" (UID: \"3bd71c58-cce1-40f3-b951-8b414eec7cd6\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713dbfg4" Jan 31 03:58:35 crc kubenswrapper[4827]: I0131 03:58:35.980755 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/3bd71c58-cce1-40f3-b951-8b414eec7cd6-bundle\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713dbfg4\" (UID: 
\"3bd71c58-cce1-40f3-b951-8b414eec7cd6\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713dbfg4" Jan 31 03:58:35 crc kubenswrapper[4827]: I0131 03:58:35.980819 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8hlbh\" (UniqueName: \"kubernetes.io/projected/3bd71c58-cce1-40f3-b951-8b414eec7cd6-kube-api-access-8hlbh\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713dbfg4\" (UID: \"3bd71c58-cce1-40f3-b951-8b414eec7cd6\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713dbfg4" Jan 31 03:58:36 crc kubenswrapper[4827]: I0131 03:58:36.082275 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/3bd71c58-cce1-40f3-b951-8b414eec7cd6-bundle\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713dbfg4\" (UID: \"3bd71c58-cce1-40f3-b951-8b414eec7cd6\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713dbfg4" Jan 31 03:58:36 crc kubenswrapper[4827]: I0131 03:58:36.082347 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8hlbh\" (UniqueName: \"kubernetes.io/projected/3bd71c58-cce1-40f3-b951-8b414eec7cd6-kube-api-access-8hlbh\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713dbfg4\" (UID: \"3bd71c58-cce1-40f3-b951-8b414eec7cd6\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713dbfg4" Jan 31 03:58:36 crc kubenswrapper[4827]: I0131 03:58:36.082390 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/3bd71c58-cce1-40f3-b951-8b414eec7cd6-util\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713dbfg4\" (UID: \"3bd71c58-cce1-40f3-b951-8b414eec7cd6\") " 
pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713dbfg4" Jan 31 03:58:36 crc kubenswrapper[4827]: I0131 03:58:36.082793 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/3bd71c58-cce1-40f3-b951-8b414eec7cd6-bundle\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713dbfg4\" (UID: \"3bd71c58-cce1-40f3-b951-8b414eec7cd6\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713dbfg4" Jan 31 03:58:36 crc kubenswrapper[4827]: I0131 03:58:36.082856 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/3bd71c58-cce1-40f3-b951-8b414eec7cd6-util\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713dbfg4\" (UID: \"3bd71c58-cce1-40f3-b951-8b414eec7cd6\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713dbfg4" Jan 31 03:58:36 crc kubenswrapper[4827]: I0131 03:58:36.112566 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8hlbh\" (UniqueName: \"kubernetes.io/projected/3bd71c58-cce1-40f3-b951-8b414eec7cd6-kube-api-access-8hlbh\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713dbfg4\" (UID: \"3bd71c58-cce1-40f3-b951-8b414eec7cd6\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713dbfg4" Jan 31 03:58:36 crc kubenswrapper[4827]: I0131 03:58:36.180150 4827 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713dbfg4" Jan 31 03:58:36 crc kubenswrapper[4827]: I0131 03:58:36.468818 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713dbfg4"] Jan 31 03:58:37 crc kubenswrapper[4827]: I0131 03:58:37.063847 4827 generic.go:334] "Generic (PLEG): container finished" podID="3bd71c58-cce1-40f3-b951-8b414eec7cd6" containerID="a388f0fd7a6031822ac2b4c26295e86546b3ba942cde74441a1def444771786f" exitCode=0 Jan 31 03:58:37 crc kubenswrapper[4827]: I0131 03:58:37.064013 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713dbfg4" event={"ID":"3bd71c58-cce1-40f3-b951-8b414eec7cd6","Type":"ContainerDied","Data":"a388f0fd7a6031822ac2b4c26295e86546b3ba942cde74441a1def444771786f"} Jan 31 03:58:37 crc kubenswrapper[4827]: I0131 03:58:37.064045 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713dbfg4" event={"ID":"3bd71c58-cce1-40f3-b951-8b414eec7cd6","Type":"ContainerStarted","Data":"22dc3a8246f0ede98efefe4d42996c46115436f9c6fd9d9f990ca051e4ba2d01"} Jan 31 03:58:39 crc kubenswrapper[4827]: I0131 03:58:39.077682 4827 generic.go:334] "Generic (PLEG): container finished" podID="3bd71c58-cce1-40f3-b951-8b414eec7cd6" containerID="a241680a63bc211a2459c40268b13369755c3a3c9b6b45a5b8d42e1956f1aa62" exitCode=0 Jan 31 03:58:39 crc kubenswrapper[4827]: I0131 03:58:39.077773 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713dbfg4" event={"ID":"3bd71c58-cce1-40f3-b951-8b414eec7cd6","Type":"ContainerDied","Data":"a241680a63bc211a2459c40268b13369755c3a3c9b6b45a5b8d42e1956f1aa62"} Jan 31 03:58:40 crc kubenswrapper[4827]: I0131 03:58:40.088995 4827 
generic.go:334] "Generic (PLEG): container finished" podID="3bd71c58-cce1-40f3-b951-8b414eec7cd6" containerID="f487fb4877e25c758111912a6a194f1aad8eb94a71422d184510eb682b79de4e" exitCode=0 Jan 31 03:58:40 crc kubenswrapper[4827]: I0131 03:58:40.089057 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713dbfg4" event={"ID":"3bd71c58-cce1-40f3-b951-8b414eec7cd6","Type":"ContainerDied","Data":"f487fb4877e25c758111912a6a194f1aad8eb94a71422d184510eb682b79de4e"} Jan 31 03:58:41 crc kubenswrapper[4827]: I0131 03:58:41.409314 4827 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713dbfg4" Jan 31 03:58:41 crc kubenswrapper[4827]: I0131 03:58:41.460090 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/3bd71c58-cce1-40f3-b951-8b414eec7cd6-util\") pod \"3bd71c58-cce1-40f3-b951-8b414eec7cd6\" (UID: \"3bd71c58-cce1-40f3-b951-8b414eec7cd6\") " Jan 31 03:58:41 crc kubenswrapper[4827]: I0131 03:58:41.476131 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3bd71c58-cce1-40f3-b951-8b414eec7cd6-util" (OuterVolumeSpecName: "util") pod "3bd71c58-cce1-40f3-b951-8b414eec7cd6" (UID: "3bd71c58-cce1-40f3-b951-8b414eec7cd6"). InnerVolumeSpecName "util". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 03:58:41 crc kubenswrapper[4827]: I0131 03:58:41.560946 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/3bd71c58-cce1-40f3-b951-8b414eec7cd6-bundle\") pod \"3bd71c58-cce1-40f3-b951-8b414eec7cd6\" (UID: \"3bd71c58-cce1-40f3-b951-8b414eec7cd6\") " Jan 31 03:58:41 crc kubenswrapper[4827]: I0131 03:58:41.561004 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8hlbh\" (UniqueName: \"kubernetes.io/projected/3bd71c58-cce1-40f3-b951-8b414eec7cd6-kube-api-access-8hlbh\") pod \"3bd71c58-cce1-40f3-b951-8b414eec7cd6\" (UID: \"3bd71c58-cce1-40f3-b951-8b414eec7cd6\") " Jan 31 03:58:41 crc kubenswrapper[4827]: I0131 03:58:41.561248 4827 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/3bd71c58-cce1-40f3-b951-8b414eec7cd6-util\") on node \"crc\" DevicePath \"\"" Jan 31 03:58:41 crc kubenswrapper[4827]: I0131 03:58:41.562667 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3bd71c58-cce1-40f3-b951-8b414eec7cd6-bundle" (OuterVolumeSpecName: "bundle") pod "3bd71c58-cce1-40f3-b951-8b414eec7cd6" (UID: "3bd71c58-cce1-40f3-b951-8b414eec7cd6"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 03:58:41 crc kubenswrapper[4827]: I0131 03:58:41.572283 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3bd71c58-cce1-40f3-b951-8b414eec7cd6-kube-api-access-8hlbh" (OuterVolumeSpecName: "kube-api-access-8hlbh") pod "3bd71c58-cce1-40f3-b951-8b414eec7cd6" (UID: "3bd71c58-cce1-40f3-b951-8b414eec7cd6"). InnerVolumeSpecName "kube-api-access-8hlbh". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 03:58:41 crc kubenswrapper[4827]: I0131 03:58:41.662312 4827 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/3bd71c58-cce1-40f3-b951-8b414eec7cd6-bundle\") on node \"crc\" DevicePath \"\"" Jan 31 03:58:41 crc kubenswrapper[4827]: I0131 03:58:41.662352 4827 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8hlbh\" (UniqueName: \"kubernetes.io/projected/3bd71c58-cce1-40f3-b951-8b414eec7cd6-kube-api-access-8hlbh\") on node \"crc\" DevicePath \"\"" Jan 31 03:58:42 crc kubenswrapper[4827]: I0131 03:58:42.106973 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713dbfg4" event={"ID":"3bd71c58-cce1-40f3-b951-8b414eec7cd6","Type":"ContainerDied","Data":"22dc3a8246f0ede98efefe4d42996c46115436f9c6fd9d9f990ca051e4ba2d01"} Jan 31 03:58:42 crc kubenswrapper[4827]: I0131 03:58:42.107026 4827 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="22dc3a8246f0ede98efefe4d42996c46115436f9c6fd9d9f990ca051e4ba2d01" Jan 31 03:58:42 crc kubenswrapper[4827]: I0131 03:58:42.107060 4827 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713dbfg4" Jan 31 03:58:45 crc kubenswrapper[4827]: I0131 03:58:45.034210 4827 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-operator-646758c888-g5wcr"] Jan 31 03:58:45 crc kubenswrapper[4827]: E0131 03:58:45.034855 4827 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3bd71c58-cce1-40f3-b951-8b414eec7cd6" containerName="extract" Jan 31 03:58:45 crc kubenswrapper[4827]: I0131 03:58:45.034873 4827 state_mem.go:107] "Deleted CPUSet assignment" podUID="3bd71c58-cce1-40f3-b951-8b414eec7cd6" containerName="extract" Jan 31 03:58:45 crc kubenswrapper[4827]: E0131 03:58:45.034925 4827 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3bd71c58-cce1-40f3-b951-8b414eec7cd6" containerName="pull" Jan 31 03:58:45 crc kubenswrapper[4827]: I0131 03:58:45.034934 4827 state_mem.go:107] "Deleted CPUSet assignment" podUID="3bd71c58-cce1-40f3-b951-8b414eec7cd6" containerName="pull" Jan 31 03:58:45 crc kubenswrapper[4827]: E0131 03:58:45.034946 4827 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3bd71c58-cce1-40f3-b951-8b414eec7cd6" containerName="util" Jan 31 03:58:45 crc kubenswrapper[4827]: I0131 03:58:45.034954 4827 state_mem.go:107] "Deleted CPUSet assignment" podUID="3bd71c58-cce1-40f3-b951-8b414eec7cd6" containerName="util" Jan 31 03:58:45 crc kubenswrapper[4827]: I0131 03:58:45.035112 4827 memory_manager.go:354] "RemoveStaleState removing state" podUID="3bd71c58-cce1-40f3-b951-8b414eec7cd6" containerName="extract" Jan 31 03:58:45 crc kubenswrapper[4827]: I0131 03:58:45.035593 4827 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-operator-646758c888-g5wcr" Jan 31 03:58:45 crc kubenswrapper[4827]: I0131 03:58:45.038045 4827 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"kube-root-ca.crt" Jan 31 03:58:45 crc kubenswrapper[4827]: I0131 03:58:45.039156 4827 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"openshift-service-ca.crt" Jan 31 03:58:45 crc kubenswrapper[4827]: I0131 03:58:45.046039 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-operator-646758c888-g5wcr"] Jan 31 03:58:45 crc kubenswrapper[4827]: I0131 03:58:45.050501 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"nmstate-operator-dockercfg-zprkm" Jan 31 03:58:45 crc kubenswrapper[4827]: I0131 03:58:45.114674 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-trzcz\" (UniqueName: \"kubernetes.io/projected/5899df86-4812-4477-92fb-bcd326c34f2a-kube-api-access-trzcz\") pod \"nmstate-operator-646758c888-g5wcr\" (UID: \"5899df86-4812-4477-92fb-bcd326c34f2a\") " pod="openshift-nmstate/nmstate-operator-646758c888-g5wcr" Jan 31 03:58:45 crc kubenswrapper[4827]: I0131 03:58:45.216531 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-trzcz\" (UniqueName: \"kubernetes.io/projected/5899df86-4812-4477-92fb-bcd326c34f2a-kube-api-access-trzcz\") pod \"nmstate-operator-646758c888-g5wcr\" (UID: \"5899df86-4812-4477-92fb-bcd326c34f2a\") " pod="openshift-nmstate/nmstate-operator-646758c888-g5wcr" Jan 31 03:58:45 crc kubenswrapper[4827]: I0131 03:58:45.238298 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-trzcz\" (UniqueName: \"kubernetes.io/projected/5899df86-4812-4477-92fb-bcd326c34f2a-kube-api-access-trzcz\") pod \"nmstate-operator-646758c888-g5wcr\" (UID: 
\"5899df86-4812-4477-92fb-bcd326c34f2a\") " pod="openshift-nmstate/nmstate-operator-646758c888-g5wcr"
Jan 31 03:58:45 crc kubenswrapper[4827]: I0131 03:58:45.355654 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-operator-646758c888-g5wcr"
Jan 31 03:58:45 crc kubenswrapper[4827]: I0131 03:58:45.587700 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-operator-646758c888-g5wcr"]
Jan 31 03:58:46 crc kubenswrapper[4827]: I0131 03:58:46.126624 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-operator-646758c888-g5wcr" event={"ID":"5899df86-4812-4477-92fb-bcd326c34f2a","Type":"ContainerStarted","Data":"1df1c5e254cd38b277b7a9ce756d78594f437d5006a44024aa1fc97698550766"}
Jan 31 03:58:47 crc kubenswrapper[4827]: I0131 03:58:47.371082 4827 patch_prober.go:28] interesting pod/machine-config-daemon-jxh94 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 31 03:58:47 crc kubenswrapper[4827]: I0131 03:58:47.371505 4827 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-jxh94" podUID="e63dbb73-e1a2-4796-83c5-2a88e55566b5" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 31 03:58:48 crc kubenswrapper[4827]: I0131 03:58:48.149220 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-operator-646758c888-g5wcr" event={"ID":"5899df86-4812-4477-92fb-bcd326c34f2a","Type":"ContainerStarted","Data":"98e847c4df22c280b3839fcb3bb85848a760a6aca922edbb9b63f96b513abbb2"}
Jan 31 03:58:48 crc kubenswrapper[4827]: I0131 03:58:48.178987 4827 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-operator-646758c888-g5wcr" podStartSLOduration=1.086652313 podStartE2EDuration="3.178876235s" podCreationTimestamp="2026-01-31 03:58:45 +0000 UTC" firstStartedPulling="2026-01-31 03:58:45.597061731 +0000 UTC m=+718.284142180" lastFinishedPulling="2026-01-31 03:58:47.689285653 +0000 UTC m=+720.376366102" observedRunningTime="2026-01-31 03:58:48.169473835 +0000 UTC m=+720.856554324" watchObservedRunningTime="2026-01-31 03:58:48.178876235 +0000 UTC m=+720.865956714"
Jan 31 03:58:54 crc kubenswrapper[4827]: I0131 03:58:54.360268 4827 scope.go:117] "RemoveContainer" containerID="b6b5307b128e69f814b56eb826859eb4b02d6645d1665c1f5d205492590135ef"
Jan 31 03:58:55 crc kubenswrapper[4827]: I0131 03:58:55.200412 4827 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-q9q8q_a696063c-4553-4032-8038-9900f09d4031/kube-multus/2.log"
Jan 31 03:58:55 crc kubenswrapper[4827]: I0131 03:58:55.902336 4827 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-metrics-54757c584b-vxb8z"]
Jan 31 03:58:55 crc kubenswrapper[4827]: I0131 03:58:55.903280 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-metrics-54757c584b-vxb8z"
Jan 31 03:58:55 crc kubenswrapper[4827]: I0131 03:58:55.905708 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"nmstate-handler-dockercfg-zdnpw"
Jan 31 03:58:55 crc kubenswrapper[4827]: I0131 03:58:55.912263 4827 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-webhook-8474b5b9d8-blkw2"]
Jan 31 03:58:55 crc kubenswrapper[4827]: I0131 03:58:55.913114 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-blkw2"
Jan 31 03:58:55 crc kubenswrapper[4827]: I0131 03:58:55.915650 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"openshift-nmstate-webhook"
Jan 31 03:58:55 crc kubenswrapper[4827]: I0131 03:58:55.936291 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-webhook-8474b5b9d8-blkw2"]
Jan 31 03:58:55 crc kubenswrapper[4827]: I0131 03:58:55.951331 4827 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-handler-b7ttc"]
Jan 31 03:58:55 crc kubenswrapper[4827]: I0131 03:58:55.952667 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-handler-b7ttc"
Jan 31 03:58:55 crc kubenswrapper[4827]: I0131 03:58:55.966652 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-metrics-54757c584b-vxb8z"]
Jan 31 03:58:56 crc kubenswrapper[4827]: I0131 03:58:56.023697 4827 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-console-plugin-7754f76f8b-8c97f"]
Jan 31 03:58:56 crc kubenswrapper[4827]: I0131 03:58:56.024551 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-8c97f"
Jan 31 03:58:56 crc kubenswrapper[4827]: I0131 03:58:56.031107 4827 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"nginx-conf"
Jan 31 03:58:56 crc kubenswrapper[4827]: I0131 03:58:56.031134 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"default-dockercfg-zxn24"
Jan 31 03:58:56 crc kubenswrapper[4827]: I0131 03:58:56.031119 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"plugin-serving-cert"
Jan 31 03:58:56 crc kubenswrapper[4827]: I0131 03:58:56.049942 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-console-plugin-7754f76f8b-8c97f"]
Jan 31 03:58:56 crc kubenswrapper[4827]: I0131 03:58:56.061929 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nh9kr\" (UniqueName: \"kubernetes.io/projected/d7ef57c6-3ae9-4573-8ba2-ec11f418a0b6-kube-api-access-nh9kr\") pod \"nmstate-handler-b7ttc\" (UID: \"d7ef57c6-3ae9-4573-8ba2-ec11f418a0b6\") " pod="openshift-nmstate/nmstate-handler-b7ttc"
Jan 31 03:58:56 crc kubenswrapper[4827]: I0131 03:58:56.062027 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/0abf6fbb-878e-4f5f-99ef-969e12458804-tls-key-pair\") pod \"nmstate-webhook-8474b5b9d8-blkw2\" (UID: \"0abf6fbb-878e-4f5f-99ef-969e12458804\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-blkw2"
Jan 31 03:58:56 crc kubenswrapper[4827]: I0131 03:58:56.062049 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/d7ef57c6-3ae9-4573-8ba2-ec11f418a0b6-ovs-socket\") pod \"nmstate-handler-b7ttc\" (UID: \"d7ef57c6-3ae9-4573-8ba2-ec11f418a0b6\") " pod="openshift-nmstate/nmstate-handler-b7ttc"
Jan 31 03:58:56 crc kubenswrapper[4827]: I0131 03:58:56.062067 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4bqvt\" (UniqueName: \"kubernetes.io/projected/259273b1-36c1-4c94-846c-dd21b325059d-kube-api-access-4bqvt\") pod \"nmstate-metrics-54757c584b-vxb8z\" (UID: \"259273b1-36c1-4c94-846c-dd21b325059d\") " pod="openshift-nmstate/nmstate-metrics-54757c584b-vxb8z"
Jan 31 03:58:56 crc kubenswrapper[4827]: I0131 03:58:56.062089 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/d7ef57c6-3ae9-4573-8ba2-ec11f418a0b6-nmstate-lock\") pod \"nmstate-handler-b7ttc\" (UID: \"d7ef57c6-3ae9-4573-8ba2-ec11f418a0b6\") " pod="openshift-nmstate/nmstate-handler-b7ttc"
Jan 31 03:58:56 crc kubenswrapper[4827]: I0131 03:58:56.062103 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/d7ef57c6-3ae9-4573-8ba2-ec11f418a0b6-dbus-socket\") pod \"nmstate-handler-b7ttc\" (UID: \"d7ef57c6-3ae9-4573-8ba2-ec11f418a0b6\") " pod="openshift-nmstate/nmstate-handler-b7ttc"
Jan 31 03:58:56 crc kubenswrapper[4827]: I0131 03:58:56.062131 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fjs4g\" (UniqueName: \"kubernetes.io/projected/0abf6fbb-878e-4f5f-99ef-969e12458804-kube-api-access-fjs4g\") pod \"nmstate-webhook-8474b5b9d8-blkw2\" (UID: \"0abf6fbb-878e-4f5f-99ef-969e12458804\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-blkw2"
Jan 31 03:58:56 crc kubenswrapper[4827]: I0131 03:58:56.163700 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/d7ef57c6-3ae9-4573-8ba2-ec11f418a0b6-dbus-socket\") pod \"nmstate-handler-b7ttc\" (UID: \"d7ef57c6-3ae9-4573-8ba2-ec11f418a0b6\") " pod="openshift-nmstate/nmstate-handler-b7ttc"
Jan 31 03:58:56 crc kubenswrapper[4827]: I0131 03:58:56.163750 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fjs4g\" (UniqueName: \"kubernetes.io/projected/0abf6fbb-878e-4f5f-99ef-969e12458804-kube-api-access-fjs4g\") pod \"nmstate-webhook-8474b5b9d8-blkw2\" (UID: \"0abf6fbb-878e-4f5f-99ef-969e12458804\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-blkw2"
Jan 31 03:58:56 crc kubenswrapper[4827]: I0131 03:58:56.163789 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nh9kr\" (UniqueName: \"kubernetes.io/projected/d7ef57c6-3ae9-4573-8ba2-ec11f418a0b6-kube-api-access-nh9kr\") pod \"nmstate-handler-b7ttc\" (UID: \"d7ef57c6-3ae9-4573-8ba2-ec11f418a0b6\") " pod="openshift-nmstate/nmstate-handler-b7ttc"
Jan 31 03:58:56 crc kubenswrapper[4827]: I0131 03:58:56.164014 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/fd972f7a-fbf4-449b-b1d2-59d0dbe4aa64-plugin-serving-cert\") pod \"nmstate-console-plugin-7754f76f8b-8c97f\" (UID: \"fd972f7a-fbf4-449b-b1d2-59d0dbe4aa64\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-8c97f"
Jan 31 03:58:56 crc kubenswrapper[4827]: I0131 03:58:56.164154 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/0abf6fbb-878e-4f5f-99ef-969e12458804-tls-key-pair\") pod \"nmstate-webhook-8474b5b9d8-blkw2\" (UID: \"0abf6fbb-878e-4f5f-99ef-969e12458804\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-blkw2"
Jan 31 03:58:56 crc kubenswrapper[4827]: I0131 03:58:56.164164 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/d7ef57c6-3ae9-4573-8ba2-ec11f418a0b6-dbus-socket\") pod \"nmstate-handler-b7ttc\" (UID: \"d7ef57c6-3ae9-4573-8ba2-ec11f418a0b6\") " pod="openshift-nmstate/nmstate-handler-b7ttc"
Jan 31 03:58:56 crc kubenswrapper[4827]: I0131 03:58:56.164188 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/fd972f7a-fbf4-449b-b1d2-59d0dbe4aa64-nginx-conf\") pod \"nmstate-console-plugin-7754f76f8b-8c97f\" (UID: \"fd972f7a-fbf4-449b-b1d2-59d0dbe4aa64\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-8c97f"
Jan 31 03:58:56 crc kubenswrapper[4827]: I0131 03:58:56.164230 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/d7ef57c6-3ae9-4573-8ba2-ec11f418a0b6-ovs-socket\") pod \"nmstate-handler-b7ttc\" (UID: \"d7ef57c6-3ae9-4573-8ba2-ec11f418a0b6\") " pod="openshift-nmstate/nmstate-handler-b7ttc"
Jan 31 03:58:56 crc kubenswrapper[4827]: I0131 03:58:56.164255 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/d7ef57c6-3ae9-4573-8ba2-ec11f418a0b6-ovs-socket\") pod \"nmstate-handler-b7ttc\" (UID: \"d7ef57c6-3ae9-4573-8ba2-ec11f418a0b6\") " pod="openshift-nmstate/nmstate-handler-b7ttc"
Jan 31 03:58:56 crc kubenswrapper[4827]: I0131 03:58:56.164270 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4bqvt\" (UniqueName: \"kubernetes.io/projected/259273b1-36c1-4c94-846c-dd21b325059d-kube-api-access-4bqvt\") pod \"nmstate-metrics-54757c584b-vxb8z\" (UID: \"259273b1-36c1-4c94-846c-dd21b325059d\") " pod="openshift-nmstate/nmstate-metrics-54757c584b-vxb8z"
Jan 31 03:58:56 crc kubenswrapper[4827]: I0131 03:58:56.164311 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d7hh7\" (UniqueName: \"kubernetes.io/projected/fd972f7a-fbf4-449b-b1d2-59d0dbe4aa64-kube-api-access-d7hh7\") pod \"nmstate-console-plugin-7754f76f8b-8c97f\" (UID: \"fd972f7a-fbf4-449b-b1d2-59d0dbe4aa64\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-8c97f"
Jan 31 03:58:56 crc kubenswrapper[4827]: I0131 03:58:56.164335 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/d7ef57c6-3ae9-4573-8ba2-ec11f418a0b6-nmstate-lock\") pod \"nmstate-handler-b7ttc\" (UID: \"d7ef57c6-3ae9-4573-8ba2-ec11f418a0b6\") " pod="openshift-nmstate/nmstate-handler-b7ttc"
Jan 31 03:58:56 crc kubenswrapper[4827]: I0131 03:58:56.164361 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/d7ef57c6-3ae9-4573-8ba2-ec11f418a0b6-nmstate-lock\") pod \"nmstate-handler-b7ttc\" (UID: \"d7ef57c6-3ae9-4573-8ba2-ec11f418a0b6\") " pod="openshift-nmstate/nmstate-handler-b7ttc"
Jan 31 03:58:56 crc kubenswrapper[4827]: E0131 03:58:56.164554 4827 secret.go:188] Couldn't get secret openshift-nmstate/openshift-nmstate-webhook: secret "openshift-nmstate-webhook" not found
Jan 31 03:58:56 crc kubenswrapper[4827]: E0131 03:58:56.164686 4827 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/0abf6fbb-878e-4f5f-99ef-969e12458804-tls-key-pair podName:0abf6fbb-878e-4f5f-99ef-969e12458804 nodeName:}" failed. No retries permitted until 2026-01-31 03:58:56.664666572 +0000 UTC m=+729.351747021 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "tls-key-pair" (UniqueName: "kubernetes.io/secret/0abf6fbb-878e-4f5f-99ef-969e12458804-tls-key-pair") pod "nmstate-webhook-8474b5b9d8-blkw2" (UID: "0abf6fbb-878e-4f5f-99ef-969e12458804") : secret "openshift-nmstate-webhook" not found
Jan 31 03:58:56 crc kubenswrapper[4827]: I0131 03:58:56.186890 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nh9kr\" (UniqueName: \"kubernetes.io/projected/d7ef57c6-3ae9-4573-8ba2-ec11f418a0b6-kube-api-access-nh9kr\") pod \"nmstate-handler-b7ttc\" (UID: \"d7ef57c6-3ae9-4573-8ba2-ec11f418a0b6\") " pod="openshift-nmstate/nmstate-handler-b7ttc"
Jan 31 03:58:56 crc kubenswrapper[4827]: I0131 03:58:56.193674 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fjs4g\" (UniqueName: \"kubernetes.io/projected/0abf6fbb-878e-4f5f-99ef-969e12458804-kube-api-access-fjs4g\") pod \"nmstate-webhook-8474b5b9d8-blkw2\" (UID: \"0abf6fbb-878e-4f5f-99ef-969e12458804\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-blkw2"
Jan 31 03:58:56 crc kubenswrapper[4827]: I0131 03:58:56.204767 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4bqvt\" (UniqueName: \"kubernetes.io/projected/259273b1-36c1-4c94-846c-dd21b325059d-kube-api-access-4bqvt\") pod \"nmstate-metrics-54757c584b-vxb8z\" (UID: \"259273b1-36c1-4c94-846c-dd21b325059d\") " pod="openshift-nmstate/nmstate-metrics-54757c584b-vxb8z"
Jan 31 03:58:56 crc kubenswrapper[4827]: I0131 03:58:56.218193 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-metrics-54757c584b-vxb8z"
Jan 31 03:58:56 crc kubenswrapper[4827]: I0131 03:58:56.241388 4827 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-674895f487-vxmxh"]
Jan 31 03:58:56 crc kubenswrapper[4827]: I0131 03:58:56.242072 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-674895f487-vxmxh"
Jan 31 03:58:56 crc kubenswrapper[4827]: I0131 03:58:56.264161 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-674895f487-vxmxh"]
Jan 31 03:58:56 crc kubenswrapper[4827]: I0131 03:58:56.267058 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/fd972f7a-fbf4-449b-b1d2-59d0dbe4aa64-plugin-serving-cert\") pod \"nmstate-console-plugin-7754f76f8b-8c97f\" (UID: \"fd972f7a-fbf4-449b-b1d2-59d0dbe4aa64\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-8c97f"
Jan 31 03:58:56 crc kubenswrapper[4827]: I0131 03:58:56.267129 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/fd972f7a-fbf4-449b-b1d2-59d0dbe4aa64-nginx-conf\") pod \"nmstate-console-plugin-7754f76f8b-8c97f\" (UID: \"fd972f7a-fbf4-449b-b1d2-59d0dbe4aa64\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-8c97f"
Jan 31 03:58:56 crc kubenswrapper[4827]: I0131 03:58:56.267163 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d7hh7\" (UniqueName: \"kubernetes.io/projected/fd972f7a-fbf4-449b-b1d2-59d0dbe4aa64-kube-api-access-d7hh7\") pod \"nmstate-console-plugin-7754f76f8b-8c97f\" (UID: \"fd972f7a-fbf4-449b-b1d2-59d0dbe4aa64\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-8c97f"
Jan 31 03:58:56 crc kubenswrapper[4827]: I0131 03:58:56.268260 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/fd972f7a-fbf4-449b-b1d2-59d0dbe4aa64-nginx-conf\") pod \"nmstate-console-plugin-7754f76f8b-8c97f\" (UID: \"fd972f7a-fbf4-449b-b1d2-59d0dbe4aa64\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-8c97f"
Jan 31 03:58:56 crc kubenswrapper[4827]: I0131 03:58:56.269108 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-handler-b7ttc"
Jan 31 03:58:56 crc kubenswrapper[4827]: I0131 03:58:56.279175 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/fd972f7a-fbf4-449b-b1d2-59d0dbe4aa64-plugin-serving-cert\") pod \"nmstate-console-plugin-7754f76f8b-8c97f\" (UID: \"fd972f7a-fbf4-449b-b1d2-59d0dbe4aa64\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-8c97f"
Jan 31 03:58:56 crc kubenswrapper[4827]: I0131 03:58:56.299091 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d7hh7\" (UniqueName: \"kubernetes.io/projected/fd972f7a-fbf4-449b-b1d2-59d0dbe4aa64-kube-api-access-d7hh7\") pod \"nmstate-console-plugin-7754f76f8b-8c97f\" (UID: \"fd972f7a-fbf4-449b-b1d2-59d0dbe4aa64\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-8c97f"
Jan 31 03:58:56 crc kubenswrapper[4827]: I0131 03:58:56.339742 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-8c97f"
Jan 31 03:58:56 crc kubenswrapper[4827]: I0131 03:58:56.368378 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8jcmq\" (UniqueName: \"kubernetes.io/projected/ebc2c054-1444-4e01-aff0-18d7d3a23d03-kube-api-access-8jcmq\") pod \"console-674895f487-vxmxh\" (UID: \"ebc2c054-1444-4e01-aff0-18d7d3a23d03\") " pod="openshift-console/console-674895f487-vxmxh"
Jan 31 03:58:56 crc kubenswrapper[4827]: I0131 03:58:56.368428 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/ebc2c054-1444-4e01-aff0-18d7d3a23d03-console-serving-cert\") pod \"console-674895f487-vxmxh\" (UID: \"ebc2c054-1444-4e01-aff0-18d7d3a23d03\") " pod="openshift-console/console-674895f487-vxmxh"
Jan 31 03:58:56 crc kubenswrapper[4827]: I0131 03:58:56.368451 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ebc2c054-1444-4e01-aff0-18d7d3a23d03-trusted-ca-bundle\") pod \"console-674895f487-vxmxh\" (UID: \"ebc2c054-1444-4e01-aff0-18d7d3a23d03\") " pod="openshift-console/console-674895f487-vxmxh"
Jan 31 03:58:56 crc kubenswrapper[4827]: I0131 03:58:56.368470 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/ebc2c054-1444-4e01-aff0-18d7d3a23d03-console-config\") pod \"console-674895f487-vxmxh\" (UID: \"ebc2c054-1444-4e01-aff0-18d7d3a23d03\") " pod="openshift-console/console-674895f487-vxmxh"
Jan 31 03:58:56 crc kubenswrapper[4827]: I0131 03:58:56.368727 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/ebc2c054-1444-4e01-aff0-18d7d3a23d03-console-oauth-config\") pod \"console-674895f487-vxmxh\" (UID: \"ebc2c054-1444-4e01-aff0-18d7d3a23d03\") " pod="openshift-console/console-674895f487-vxmxh"
Jan 31 03:58:56 crc kubenswrapper[4827]: I0131 03:58:56.368775 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/ebc2c054-1444-4e01-aff0-18d7d3a23d03-service-ca\") pod \"console-674895f487-vxmxh\" (UID: \"ebc2c054-1444-4e01-aff0-18d7d3a23d03\") " pod="openshift-console/console-674895f487-vxmxh"
Jan 31 03:58:56 crc kubenswrapper[4827]: I0131 03:58:56.368919 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/ebc2c054-1444-4e01-aff0-18d7d3a23d03-oauth-serving-cert\") pod \"console-674895f487-vxmxh\" (UID: \"ebc2c054-1444-4e01-aff0-18d7d3a23d03\") " pod="openshift-console/console-674895f487-vxmxh"
Jan 31 03:58:56 crc kubenswrapper[4827]: I0131 03:58:56.427038 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-metrics-54757c584b-vxb8z"]
Jan 31 03:58:56 crc kubenswrapper[4827]: W0131 03:58:56.432784 4827 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod259273b1_36c1_4c94_846c_dd21b325059d.slice/crio-56b656aab7beeb7c5fdd75f832e4d07246c131644a664bf44f418e84f6e0b1c6 WatchSource:0}: Error finding container 56b656aab7beeb7c5fdd75f832e4d07246c131644a664bf44f418e84f6e0b1c6: Status 404 returned error can't find the container with id 56b656aab7beeb7c5fdd75f832e4d07246c131644a664bf44f418e84f6e0b1c6
Jan 31 03:58:56 crc kubenswrapper[4827]: I0131 03:58:56.470738 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/ebc2c054-1444-4e01-aff0-18d7d3a23d03-console-oauth-config\") pod \"console-674895f487-vxmxh\" (UID: \"ebc2c054-1444-4e01-aff0-18d7d3a23d03\") " pod="openshift-console/console-674895f487-vxmxh"
Jan 31 03:58:56 crc kubenswrapper[4827]: I0131 03:58:56.470788 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/ebc2c054-1444-4e01-aff0-18d7d3a23d03-service-ca\") pod \"console-674895f487-vxmxh\" (UID: \"ebc2c054-1444-4e01-aff0-18d7d3a23d03\") " pod="openshift-console/console-674895f487-vxmxh"
Jan 31 03:58:56 crc kubenswrapper[4827]: I0131 03:58:56.470825 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/ebc2c054-1444-4e01-aff0-18d7d3a23d03-oauth-serving-cert\") pod \"console-674895f487-vxmxh\" (UID: \"ebc2c054-1444-4e01-aff0-18d7d3a23d03\") " pod="openshift-console/console-674895f487-vxmxh"
Jan 31 03:58:56 crc kubenswrapper[4827]: I0131 03:58:56.470895 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8jcmq\" (UniqueName: \"kubernetes.io/projected/ebc2c054-1444-4e01-aff0-18d7d3a23d03-kube-api-access-8jcmq\") pod \"console-674895f487-vxmxh\" (UID: \"ebc2c054-1444-4e01-aff0-18d7d3a23d03\") " pod="openshift-console/console-674895f487-vxmxh"
Jan 31 03:58:56 crc kubenswrapper[4827]: I0131 03:58:56.470940 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/ebc2c054-1444-4e01-aff0-18d7d3a23d03-console-serving-cert\") pod \"console-674895f487-vxmxh\" (UID: \"ebc2c054-1444-4e01-aff0-18d7d3a23d03\") " pod="openshift-console/console-674895f487-vxmxh"
Jan 31 03:58:56 crc kubenswrapper[4827]: I0131 03:58:56.470970 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ebc2c054-1444-4e01-aff0-18d7d3a23d03-trusted-ca-bundle\") pod \"console-674895f487-vxmxh\" (UID: \"ebc2c054-1444-4e01-aff0-18d7d3a23d03\") " pod="openshift-console/console-674895f487-vxmxh"
Jan 31 03:58:56 crc kubenswrapper[4827]: I0131 03:58:56.470996 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/ebc2c054-1444-4e01-aff0-18d7d3a23d03-console-config\") pod \"console-674895f487-vxmxh\" (UID: \"ebc2c054-1444-4e01-aff0-18d7d3a23d03\") " pod="openshift-console/console-674895f487-vxmxh"
Jan 31 03:58:56 crc kubenswrapper[4827]: I0131 03:58:56.472338 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/ebc2c054-1444-4e01-aff0-18d7d3a23d03-console-config\") pod \"console-674895f487-vxmxh\" (UID: \"ebc2c054-1444-4e01-aff0-18d7d3a23d03\") " pod="openshift-console/console-674895f487-vxmxh"
Jan 31 03:58:56 crc kubenswrapper[4827]: I0131 03:58:56.472351 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ebc2c054-1444-4e01-aff0-18d7d3a23d03-trusted-ca-bundle\") pod \"console-674895f487-vxmxh\" (UID: \"ebc2c054-1444-4e01-aff0-18d7d3a23d03\") " pod="openshift-console/console-674895f487-vxmxh"
Jan 31 03:58:56 crc kubenswrapper[4827]: I0131 03:58:56.472373 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/ebc2c054-1444-4e01-aff0-18d7d3a23d03-service-ca\") pod \"console-674895f487-vxmxh\" (UID: \"ebc2c054-1444-4e01-aff0-18d7d3a23d03\") " pod="openshift-console/console-674895f487-vxmxh"
Jan 31 03:58:56 crc kubenswrapper[4827]: I0131 03:58:56.473515 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/ebc2c054-1444-4e01-aff0-18d7d3a23d03-oauth-serving-cert\") pod \"console-674895f487-vxmxh\" (UID: \"ebc2c054-1444-4e01-aff0-18d7d3a23d03\") " pod="openshift-console/console-674895f487-vxmxh"
Jan 31 03:58:56 crc kubenswrapper[4827]: I0131 03:58:56.474996 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/ebc2c054-1444-4e01-aff0-18d7d3a23d03-console-serving-cert\") pod \"console-674895f487-vxmxh\" (UID: \"ebc2c054-1444-4e01-aff0-18d7d3a23d03\") " pod="openshift-console/console-674895f487-vxmxh"
Jan 31 03:58:56 crc kubenswrapper[4827]: I0131 03:58:56.476274 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/ebc2c054-1444-4e01-aff0-18d7d3a23d03-console-oauth-config\") pod \"console-674895f487-vxmxh\" (UID: \"ebc2c054-1444-4e01-aff0-18d7d3a23d03\") " pod="openshift-console/console-674895f487-vxmxh"
Jan 31 03:58:56 crc kubenswrapper[4827]: I0131 03:58:56.490750 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8jcmq\" (UniqueName: \"kubernetes.io/projected/ebc2c054-1444-4e01-aff0-18d7d3a23d03-kube-api-access-8jcmq\") pod \"console-674895f487-vxmxh\" (UID: \"ebc2c054-1444-4e01-aff0-18d7d3a23d03\") " pod="openshift-console/console-674895f487-vxmxh"
Jan 31 03:58:56 crc kubenswrapper[4827]: I0131 03:58:56.542752 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-console-plugin-7754f76f8b-8c97f"]
Jan 31 03:58:56 crc kubenswrapper[4827]: W0131 03:58:56.547850 4827 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podfd972f7a_fbf4_449b_b1d2_59d0dbe4aa64.slice/crio-40e7171a033250d601fa77b1204c98bcc81b5f2a1a62c3a11e1451fe39dd47e6 WatchSource:0}: Error finding container 40e7171a033250d601fa77b1204c98bcc81b5f2a1a62c3a11e1451fe39dd47e6: Status 404 returned error can't find the container with id 40e7171a033250d601fa77b1204c98bcc81b5f2a1a62c3a11e1451fe39dd47e6
Jan 31 03:58:56 crc kubenswrapper[4827]: I0131 03:58:56.602262 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-674895f487-vxmxh"
Jan 31 03:58:56 crc kubenswrapper[4827]: I0131 03:58:56.673828 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/0abf6fbb-878e-4f5f-99ef-969e12458804-tls-key-pair\") pod \"nmstate-webhook-8474b5b9d8-blkw2\" (UID: \"0abf6fbb-878e-4f5f-99ef-969e12458804\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-blkw2"
Jan 31 03:58:56 crc kubenswrapper[4827]: I0131 03:58:56.676554 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/0abf6fbb-878e-4f5f-99ef-969e12458804-tls-key-pair\") pod \"nmstate-webhook-8474b5b9d8-blkw2\" (UID: \"0abf6fbb-878e-4f5f-99ef-969e12458804\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-blkw2"
Jan 31 03:58:56 crc kubenswrapper[4827]: I0131 03:58:56.776113 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-674895f487-vxmxh"]
Jan 31 03:58:56 crc kubenswrapper[4827]: I0131 03:58:56.827213 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-blkw2"
Jan 31 03:58:57 crc kubenswrapper[4827]: I0131 03:58:57.068509 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-webhook-8474b5b9d8-blkw2"]
Jan 31 03:58:57 crc kubenswrapper[4827]: W0131 03:58:57.076181 4827 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0abf6fbb_878e_4f5f_99ef_969e12458804.slice/crio-61cae748c5757d0a92835e1faaf564c6b3a8d293cb73ab181c9a6476971ee3d8 WatchSource:0}: Error finding container 61cae748c5757d0a92835e1faaf564c6b3a8d293cb73ab181c9a6476971ee3d8: Status 404 returned error can't find the container with id 61cae748c5757d0a92835e1faaf564c6b3a8d293cb73ab181c9a6476971ee3d8
Jan 31 03:58:57 crc kubenswrapper[4827]: I0131 03:58:57.209266 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-handler-b7ttc" event={"ID":"d7ef57c6-3ae9-4573-8ba2-ec11f418a0b6","Type":"ContainerStarted","Data":"a2a4d03e55935e31987985e671f742f5324a0d4aa9e243608091998704af5ffc"}
Jan 31 03:58:57 crc kubenswrapper[4827]: I0131 03:58:57.210728 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-54757c584b-vxb8z" event={"ID":"259273b1-36c1-4c94-846c-dd21b325059d","Type":"ContainerStarted","Data":"56b656aab7beeb7c5fdd75f832e4d07246c131644a664bf44f418e84f6e0b1c6"}
Jan 31 03:58:57 crc kubenswrapper[4827]: I0131 03:58:57.212440 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-8c97f" event={"ID":"fd972f7a-fbf4-449b-b1d2-59d0dbe4aa64","Type":"ContainerStarted","Data":"40e7171a033250d601fa77b1204c98bcc81b5f2a1a62c3a11e1451fe39dd47e6"}
Jan 31 03:58:57 crc kubenswrapper[4827]: I0131 03:58:57.213868 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-blkw2" event={"ID":"0abf6fbb-878e-4f5f-99ef-969e12458804","Type":"ContainerStarted","Data":"61cae748c5757d0a92835e1faaf564c6b3a8d293cb73ab181c9a6476971ee3d8"}
Jan 31 03:58:57 crc kubenswrapper[4827]: I0131 03:58:57.215592 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-674895f487-vxmxh" event={"ID":"ebc2c054-1444-4e01-aff0-18d7d3a23d03","Type":"ContainerStarted","Data":"72cc123854e9833314402a81c3ef49d506ad15f2670bcf70f8ac3a90213d8070"}
Jan 31 03:58:57 crc kubenswrapper[4827]: I0131 03:58:57.215623 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-674895f487-vxmxh" event={"ID":"ebc2c054-1444-4e01-aff0-18d7d3a23d03","Type":"ContainerStarted","Data":"0c6fe92fc50429c0c4f18abbf5e0c52590abed4ede17c2734d75f37b74fc484c"}
Jan 31 03:58:57 crc kubenswrapper[4827]: I0131 03:58:57.232480 4827 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-674895f487-vxmxh" podStartSLOduration=1.232461638 podStartE2EDuration="1.232461638s" podCreationTimestamp="2026-01-31 03:58:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 03:58:57.230723065 +0000 UTC m=+729.917803524" watchObservedRunningTime="2026-01-31 03:58:57.232461638 +0000 UTC m=+729.919542087"
Jan 31 03:59:00 crc kubenswrapper[4827]: I0131 03:59:00.240738 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-handler-b7ttc" event={"ID":"d7ef57c6-3ae9-4573-8ba2-ec11f418a0b6","Type":"ContainerStarted","Data":"81a7b87c616a838b178449d6caa0141667efb7f74489ff2f77da862d9869abf7"}
Jan 31 03:59:00 crc kubenswrapper[4827]: I0131 03:59:00.241418 4827 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-nmstate/nmstate-handler-b7ttc"
Jan 31 03:59:00 crc kubenswrapper[4827]: I0131 03:59:00.242890 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-54757c584b-vxb8z" event={"ID":"259273b1-36c1-4c94-846c-dd21b325059d","Type":"ContainerStarted","Data":"d2181e5d88c377a2712e88e454282a7562ee7349f6de79d78346085b130f17cd"}
Jan 31 03:59:00 crc kubenswrapper[4827]: I0131 03:59:00.244385 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-8c97f" event={"ID":"fd972f7a-fbf4-449b-b1d2-59d0dbe4aa64","Type":"ContainerStarted","Data":"62a2268441fe1427486dd340d8731ae32b8e99e74cfefefb51a4077794bfc615"}
Jan 31 03:59:00 crc kubenswrapper[4827]: I0131 03:59:00.245958 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-blkw2" event={"ID":"0abf6fbb-878e-4f5f-99ef-969e12458804","Type":"ContainerStarted","Data":"919c804e7d9247f283137f9773b66010c8a19a980a89edf7ee8e47a231c7a61b"}
Jan 31 03:59:00 crc kubenswrapper[4827]: I0131 03:59:00.246461 4827 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-blkw2"
Jan 31 03:59:00 crc kubenswrapper[4827]: I0131 03:59:00.264087 4827 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-handler-b7ttc" podStartSLOduration=2.123939402 podStartE2EDuration="5.264069649s" podCreationTimestamp="2026-01-31 03:58:55 +0000 UTC" firstStartedPulling="2026-01-31 03:58:56.293208413 +0000 UTC m=+728.980288862" lastFinishedPulling="2026-01-31 03:58:59.43333866 +0000 UTC m=+732.120419109" observedRunningTime="2026-01-31 03:59:00.258454666 +0000 UTC m=+732.945535145" watchObservedRunningTime="2026-01-31 03:59:00.264069649 +0000 UTC m=+732.951150098"
Jan 31 03:59:00 crc kubenswrapper[4827]: I0131 03:59:00.279599 4827 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-blkw2" podStartSLOduration=2.9354110589999998 podStartE2EDuration="5.279577076s" podCreationTimestamp="2026-01-31 03:58:55 +0000 UTC" firstStartedPulling="2026-01-31 03:58:57.079339311 +0000 UTC m=+729.766419770" lastFinishedPulling="2026-01-31 03:58:59.423505328 +0000 UTC m=+732.110585787" observedRunningTime="2026-01-31 03:59:00.272788507 +0000 UTC m=+732.959868966" watchObservedRunningTime="2026-01-31 03:59:00.279577076 +0000 UTC m=+732.966657525"
Jan 31 03:59:00 crc kubenswrapper[4827]: I0131 03:59:00.288575 4827 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-8c97f" podStartSLOduration=1.415423223 podStartE2EDuration="4.288546762s" podCreationTimestamp="2026-01-31 03:58:56 +0000 UTC" firstStartedPulling="2026-01-31 03:58:56.550281116 +0000 UTC m=+729.237361565" lastFinishedPulling="2026-01-31 03:58:59.423404635 +0000 UTC m=+732.110485104" observedRunningTime="2026-01-31 03:59:00.286796258 +0000 UTC m=+732.973876727" watchObservedRunningTime="2026-01-31 03:59:00.288546762 +0000 UTC m=+732.975627251"
Jan 31 03:59:02 crc kubenswrapper[4827]: I0131 03:59:02.259048 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-54757c584b-vxb8z" event={"ID":"259273b1-36c1-4c94-846c-dd21b325059d","Type":"ContainerStarted","Data":"467fabac5ff9db42953640c4a160eb174e18cddaa1fdd030821138d5e8282d60"}
Jan 31 03:59:02 crc kubenswrapper[4827]: I0131 03:59:02.287260 4827 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-metrics-54757c584b-vxb8z" podStartSLOduration=2.158494496 podStartE2EDuration="7.287223038s" podCreationTimestamp="2026-01-31 03:58:55 +0000 UTC" firstStartedPulling="2026-01-31 03:58:56.435056034 +0000 UTC m=+729.122136483" lastFinishedPulling="2026-01-31 03:59:01.563784566 +0000 UTC m=+734.250865025" observedRunningTime="2026-01-31 03:59:02.279363335 +0000 UTC m=+734.966443784" watchObservedRunningTime="2026-01-31 03:59:02.287223038 +0000 UTC m=+734.974303527"
Jan 31 03:59:06 crc kubenswrapper[4827]: I0131 03:59:06.307682 4827 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-nmstate/nmstate-handler-b7ttc"
Jan 31 03:59:06 crc kubenswrapper[4827]: I0131 03:59:06.603190 4827 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-674895f487-vxmxh"
Jan 31 03:59:06 crc kubenswrapper[4827]: I0131 03:59:06.603251 4827 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-674895f487-vxmxh"
Jan 31 03:59:06 crc kubenswrapper[4827]: I0131 03:59:06.608959 4827 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-674895f487-vxmxh"
Jan 31 03:59:07 crc kubenswrapper[4827]: I0131 03:59:07.298115 4827 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-674895f487-vxmxh"
Jan 31 03:59:07 crc kubenswrapper[4827]: I0131 03:59:07.369289 4827 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-f9d7485db-q4hqs"]
Jan 31 03:59:16 crc kubenswrapper[4827]: I0131 03:59:16.836957 4827 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-blkw2"
Jan 31 03:59:17 crc kubenswrapper[4827]: I0131 03:59:17.371312 4827 patch_prober.go:28] interesting pod/machine-config-daemon-jxh94 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 31 03:59:17 crc kubenswrapper[4827]: I0131 03:59:17.371404 4827 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-jxh94" podUID="e63dbb73-e1a2-4796-83c5-2a88e55566b5" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 31 03:59:27 crc
kubenswrapper[4827]: I0131 03:59:27.143533 4827 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Jan 31 03:59:29 crc kubenswrapper[4827]: I0131 03:59:29.932125 4827 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcgtsxs"] Jan 31 03:59:29 crc kubenswrapper[4827]: I0131 03:59:29.934959 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcgtsxs" Jan 31 03:59:29 crc kubenswrapper[4827]: I0131 03:59:29.937633 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Jan 31 03:59:29 crc kubenswrapper[4827]: I0131 03:59:29.965019 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcgtsxs"] Jan 31 03:59:30 crc kubenswrapper[4827]: I0131 03:59:30.080629 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lfzd8\" (UniqueName: \"kubernetes.io/projected/4f7eae5f-3ee4-478f-928c-ee25fab2d488-kube-api-access-lfzd8\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcgtsxs\" (UID: \"4f7eae5f-3ee4-478f-928c-ee25fab2d488\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcgtsxs" Jan 31 03:59:30 crc kubenswrapper[4827]: I0131 03:59:30.080699 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/4f7eae5f-3ee4-478f-928c-ee25fab2d488-bundle\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcgtsxs\" (UID: \"4f7eae5f-3ee4-478f-928c-ee25fab2d488\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcgtsxs" Jan 31 03:59:30 
crc kubenswrapper[4827]: I0131 03:59:30.080741 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/4f7eae5f-3ee4-478f-928c-ee25fab2d488-util\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcgtsxs\" (UID: \"4f7eae5f-3ee4-478f-928c-ee25fab2d488\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcgtsxs" Jan 31 03:59:30 crc kubenswrapper[4827]: I0131 03:59:30.181725 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/4f7eae5f-3ee4-478f-928c-ee25fab2d488-util\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcgtsxs\" (UID: \"4f7eae5f-3ee4-478f-928c-ee25fab2d488\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcgtsxs" Jan 31 03:59:30 crc kubenswrapper[4827]: I0131 03:59:30.181808 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lfzd8\" (UniqueName: \"kubernetes.io/projected/4f7eae5f-3ee4-478f-928c-ee25fab2d488-kube-api-access-lfzd8\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcgtsxs\" (UID: \"4f7eae5f-3ee4-478f-928c-ee25fab2d488\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcgtsxs" Jan 31 03:59:30 crc kubenswrapper[4827]: I0131 03:59:30.181836 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/4f7eae5f-3ee4-478f-928c-ee25fab2d488-bundle\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcgtsxs\" (UID: \"4f7eae5f-3ee4-478f-928c-ee25fab2d488\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcgtsxs" Jan 31 03:59:30 crc kubenswrapper[4827]: I0131 03:59:30.182231 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" 
(UniqueName: \"kubernetes.io/empty-dir/4f7eae5f-3ee4-478f-928c-ee25fab2d488-bundle\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcgtsxs\" (UID: \"4f7eae5f-3ee4-478f-928c-ee25fab2d488\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcgtsxs" Jan 31 03:59:30 crc kubenswrapper[4827]: I0131 03:59:30.182271 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/4f7eae5f-3ee4-478f-928c-ee25fab2d488-util\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcgtsxs\" (UID: \"4f7eae5f-3ee4-478f-928c-ee25fab2d488\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcgtsxs" Jan 31 03:59:30 crc kubenswrapper[4827]: I0131 03:59:30.201029 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lfzd8\" (UniqueName: \"kubernetes.io/projected/4f7eae5f-3ee4-478f-928c-ee25fab2d488-kube-api-access-lfzd8\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcgtsxs\" (UID: \"4f7eae5f-3ee4-478f-928c-ee25fab2d488\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcgtsxs" Jan 31 03:59:30 crc kubenswrapper[4827]: I0131 03:59:30.253258 4827 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcgtsxs" Jan 31 03:59:30 crc kubenswrapper[4827]: I0131 03:59:30.644380 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcgtsxs"] Jan 31 03:59:31 crc kubenswrapper[4827]: I0131 03:59:31.450284 4827 generic.go:334] "Generic (PLEG): container finished" podID="4f7eae5f-3ee4-478f-928c-ee25fab2d488" containerID="b56af283e36f22a9ddca10c5f0c69d038be80247d8c955e931db45b26a34fe2e" exitCode=0 Jan 31 03:59:31 crc kubenswrapper[4827]: I0131 03:59:31.450345 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcgtsxs" event={"ID":"4f7eae5f-3ee4-478f-928c-ee25fab2d488","Type":"ContainerDied","Data":"b56af283e36f22a9ddca10c5f0c69d038be80247d8c955e931db45b26a34fe2e"} Jan 31 03:59:31 crc kubenswrapper[4827]: I0131 03:59:31.450382 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcgtsxs" event={"ID":"4f7eae5f-3ee4-478f-928c-ee25fab2d488","Type":"ContainerStarted","Data":"4052b6cb0563045e9370848d08c7302d84b473242d89e30e994925f71e5618ce"} Jan 31 03:59:32 crc kubenswrapper[4827]: I0131 03:59:32.283479 4827 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-vpg89"] Jan 31 03:59:32 crc kubenswrapper[4827]: I0131 03:59:32.286411 4827 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-vpg89" Jan 31 03:59:32 crc kubenswrapper[4827]: I0131 03:59:32.300158 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-vpg89"] Jan 31 03:59:32 crc kubenswrapper[4827]: I0131 03:59:32.419056 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hnt6s\" (UniqueName: \"kubernetes.io/projected/aa2d2648-f614-4da5-a5e6-f085c896d851-kube-api-access-hnt6s\") pod \"redhat-operators-vpg89\" (UID: \"aa2d2648-f614-4da5-a5e6-f085c896d851\") " pod="openshift-marketplace/redhat-operators-vpg89" Jan 31 03:59:32 crc kubenswrapper[4827]: I0131 03:59:32.419148 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/aa2d2648-f614-4da5-a5e6-f085c896d851-catalog-content\") pod \"redhat-operators-vpg89\" (UID: \"aa2d2648-f614-4da5-a5e6-f085c896d851\") " pod="openshift-marketplace/redhat-operators-vpg89" Jan 31 03:59:32 crc kubenswrapper[4827]: I0131 03:59:32.419160 4827 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-console/console-f9d7485db-q4hqs" podUID="a2a52a00-75ce-4094-bab7-913d6fbab1dc" containerName="console" containerID="cri-o://0301e9ef087e36a3339397dfb581970f85739dafaf3b34ae619f3b8b63d4cbbf" gracePeriod=15 Jan 31 03:59:32 crc kubenswrapper[4827]: I0131 03:59:32.419241 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/aa2d2648-f614-4da5-a5e6-f085c896d851-utilities\") pod \"redhat-operators-vpg89\" (UID: \"aa2d2648-f614-4da5-a5e6-f085c896d851\") " pod="openshift-marketplace/redhat-operators-vpg89" Jan 31 03:59:32 crc kubenswrapper[4827]: I0131 03:59:32.519969 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" 
(UniqueName: \"kubernetes.io/empty-dir/aa2d2648-f614-4da5-a5e6-f085c896d851-catalog-content\") pod \"redhat-operators-vpg89\" (UID: \"aa2d2648-f614-4da5-a5e6-f085c896d851\") " pod="openshift-marketplace/redhat-operators-vpg89" Jan 31 03:59:32 crc kubenswrapper[4827]: I0131 03:59:32.520073 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/aa2d2648-f614-4da5-a5e6-f085c896d851-utilities\") pod \"redhat-operators-vpg89\" (UID: \"aa2d2648-f614-4da5-a5e6-f085c896d851\") " pod="openshift-marketplace/redhat-operators-vpg89" Jan 31 03:59:32 crc kubenswrapper[4827]: I0131 03:59:32.520125 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hnt6s\" (UniqueName: \"kubernetes.io/projected/aa2d2648-f614-4da5-a5e6-f085c896d851-kube-api-access-hnt6s\") pod \"redhat-operators-vpg89\" (UID: \"aa2d2648-f614-4da5-a5e6-f085c896d851\") " pod="openshift-marketplace/redhat-operators-vpg89" Jan 31 03:59:32 crc kubenswrapper[4827]: I0131 03:59:32.520619 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/aa2d2648-f614-4da5-a5e6-f085c896d851-utilities\") pod \"redhat-operators-vpg89\" (UID: \"aa2d2648-f614-4da5-a5e6-f085c896d851\") " pod="openshift-marketplace/redhat-operators-vpg89" Jan 31 03:59:32 crc kubenswrapper[4827]: I0131 03:59:32.520714 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/aa2d2648-f614-4da5-a5e6-f085c896d851-catalog-content\") pod \"redhat-operators-vpg89\" (UID: \"aa2d2648-f614-4da5-a5e6-f085c896d851\") " pod="openshift-marketplace/redhat-operators-vpg89" Jan 31 03:59:32 crc kubenswrapper[4827]: I0131 03:59:32.545435 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hnt6s\" (UniqueName: 
\"kubernetes.io/projected/aa2d2648-f614-4da5-a5e6-f085c896d851-kube-api-access-hnt6s\") pod \"redhat-operators-vpg89\" (UID: \"aa2d2648-f614-4da5-a5e6-f085c896d851\") " pod="openshift-marketplace/redhat-operators-vpg89" Jan 31 03:59:32 crc kubenswrapper[4827]: I0131 03:59:32.612686 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-vpg89" Jan 31 03:59:32 crc kubenswrapper[4827]: I0131 03:59:32.788701 4827 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-f9d7485db-q4hqs_a2a52a00-75ce-4094-bab7-913d6fbab1dc/console/0.log" Jan 31 03:59:32 crc kubenswrapper[4827]: I0131 03:59:32.789027 4827 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-q4hqs" Jan 31 03:59:32 crc kubenswrapper[4827]: I0131 03:59:32.851064 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-vpg89"] Jan 31 03:59:32 crc kubenswrapper[4827]: I0131 03:59:32.927179 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rj7b4\" (UniqueName: \"kubernetes.io/projected/a2a52a00-75ce-4094-bab7-913d6fbab1dc-kube-api-access-rj7b4\") pod \"a2a52a00-75ce-4094-bab7-913d6fbab1dc\" (UID: \"a2a52a00-75ce-4094-bab7-913d6fbab1dc\") " Jan 31 03:59:32 crc kubenswrapper[4827]: I0131 03:59:32.927251 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/a2a52a00-75ce-4094-bab7-913d6fbab1dc-console-oauth-config\") pod \"a2a52a00-75ce-4094-bab7-913d6fbab1dc\" (UID: \"a2a52a00-75ce-4094-bab7-913d6fbab1dc\") " Jan 31 03:59:32 crc kubenswrapper[4827]: I0131 03:59:32.927343 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a2a52a00-75ce-4094-bab7-913d6fbab1dc-trusted-ca-bundle\") pod 
\"a2a52a00-75ce-4094-bab7-913d6fbab1dc\" (UID: \"a2a52a00-75ce-4094-bab7-913d6fbab1dc\") " Jan 31 03:59:32 crc kubenswrapper[4827]: I0131 03:59:32.927412 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/a2a52a00-75ce-4094-bab7-913d6fbab1dc-console-serving-cert\") pod \"a2a52a00-75ce-4094-bab7-913d6fbab1dc\" (UID: \"a2a52a00-75ce-4094-bab7-913d6fbab1dc\") " Jan 31 03:59:32 crc kubenswrapper[4827]: I0131 03:59:32.927460 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/a2a52a00-75ce-4094-bab7-913d6fbab1dc-service-ca\") pod \"a2a52a00-75ce-4094-bab7-913d6fbab1dc\" (UID: \"a2a52a00-75ce-4094-bab7-913d6fbab1dc\") " Jan 31 03:59:32 crc kubenswrapper[4827]: I0131 03:59:32.927486 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/a2a52a00-75ce-4094-bab7-913d6fbab1dc-oauth-serving-cert\") pod \"a2a52a00-75ce-4094-bab7-913d6fbab1dc\" (UID: \"a2a52a00-75ce-4094-bab7-913d6fbab1dc\") " Jan 31 03:59:32 crc kubenswrapper[4827]: I0131 03:59:32.927527 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/a2a52a00-75ce-4094-bab7-913d6fbab1dc-console-config\") pod \"a2a52a00-75ce-4094-bab7-913d6fbab1dc\" (UID: \"a2a52a00-75ce-4094-bab7-913d6fbab1dc\") " Jan 31 03:59:32 crc kubenswrapper[4827]: I0131 03:59:32.928549 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a2a52a00-75ce-4094-bab7-913d6fbab1dc-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "a2a52a00-75ce-4094-bab7-913d6fbab1dc" (UID: "a2a52a00-75ce-4094-bab7-913d6fbab1dc"). InnerVolumeSpecName "oauth-serving-cert". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 03:59:32 crc kubenswrapper[4827]: I0131 03:59:32.928568 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a2a52a00-75ce-4094-bab7-913d6fbab1dc-service-ca" (OuterVolumeSpecName: "service-ca") pod "a2a52a00-75ce-4094-bab7-913d6fbab1dc" (UID: "a2a52a00-75ce-4094-bab7-913d6fbab1dc"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 03:59:32 crc kubenswrapper[4827]: I0131 03:59:32.928589 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a2a52a00-75ce-4094-bab7-913d6fbab1dc-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "a2a52a00-75ce-4094-bab7-913d6fbab1dc" (UID: "a2a52a00-75ce-4094-bab7-913d6fbab1dc"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 03:59:32 crc kubenswrapper[4827]: I0131 03:59:32.930828 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a2a52a00-75ce-4094-bab7-913d6fbab1dc-console-config" (OuterVolumeSpecName: "console-config") pod "a2a52a00-75ce-4094-bab7-913d6fbab1dc" (UID: "a2a52a00-75ce-4094-bab7-913d6fbab1dc"). InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 03:59:32 crc kubenswrapper[4827]: I0131 03:59:32.932267 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a2a52a00-75ce-4094-bab7-913d6fbab1dc-kube-api-access-rj7b4" (OuterVolumeSpecName: "kube-api-access-rj7b4") pod "a2a52a00-75ce-4094-bab7-913d6fbab1dc" (UID: "a2a52a00-75ce-4094-bab7-913d6fbab1dc"). InnerVolumeSpecName "kube-api-access-rj7b4". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 03:59:32 crc kubenswrapper[4827]: I0131 03:59:32.932621 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a2a52a00-75ce-4094-bab7-913d6fbab1dc-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "a2a52a00-75ce-4094-bab7-913d6fbab1dc" (UID: "a2a52a00-75ce-4094-bab7-913d6fbab1dc"). InnerVolumeSpecName "console-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 03:59:32 crc kubenswrapper[4827]: I0131 03:59:32.933894 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a2a52a00-75ce-4094-bab7-913d6fbab1dc-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "a2a52a00-75ce-4094-bab7-913d6fbab1dc" (UID: "a2a52a00-75ce-4094-bab7-913d6fbab1dc"). InnerVolumeSpecName "console-oauth-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 03:59:33 crc kubenswrapper[4827]: I0131 03:59:33.028615 4827 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/a2a52a00-75ce-4094-bab7-913d6fbab1dc-service-ca\") on node \"crc\" DevicePath \"\"" Jan 31 03:59:33 crc kubenswrapper[4827]: I0131 03:59:33.028652 4827 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/a2a52a00-75ce-4094-bab7-913d6fbab1dc-oauth-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 31 03:59:33 crc kubenswrapper[4827]: I0131 03:59:33.028664 4827 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/a2a52a00-75ce-4094-bab7-913d6fbab1dc-console-config\") on node \"crc\" DevicePath \"\"" Jan 31 03:59:33 crc kubenswrapper[4827]: I0131 03:59:33.028674 4827 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rj7b4\" (UniqueName: 
\"kubernetes.io/projected/a2a52a00-75ce-4094-bab7-913d6fbab1dc-kube-api-access-rj7b4\") on node \"crc\" DevicePath \"\"" Jan 31 03:59:33 crc kubenswrapper[4827]: I0131 03:59:33.028684 4827 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/a2a52a00-75ce-4094-bab7-913d6fbab1dc-console-oauth-config\") on node \"crc\" DevicePath \"\"" Jan 31 03:59:33 crc kubenswrapper[4827]: I0131 03:59:33.028692 4827 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a2a52a00-75ce-4094-bab7-913d6fbab1dc-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 31 03:59:33 crc kubenswrapper[4827]: I0131 03:59:33.028700 4827 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/a2a52a00-75ce-4094-bab7-913d6fbab1dc-console-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 31 03:59:33 crc kubenswrapper[4827]: I0131 03:59:33.462391 4827 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-f9d7485db-q4hqs_a2a52a00-75ce-4094-bab7-913d6fbab1dc/console/0.log" Jan 31 03:59:33 crc kubenswrapper[4827]: I0131 03:59:33.462748 4827 generic.go:334] "Generic (PLEG): container finished" podID="a2a52a00-75ce-4094-bab7-913d6fbab1dc" containerID="0301e9ef087e36a3339397dfb581970f85739dafaf3b34ae619f3b8b63d4cbbf" exitCode=2 Jan 31 03:59:33 crc kubenswrapper[4827]: I0131 03:59:33.462802 4827 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-f9d7485db-q4hqs" Jan 31 03:59:33 crc kubenswrapper[4827]: I0131 03:59:33.462867 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-q4hqs" event={"ID":"a2a52a00-75ce-4094-bab7-913d6fbab1dc","Type":"ContainerDied","Data":"0301e9ef087e36a3339397dfb581970f85739dafaf3b34ae619f3b8b63d4cbbf"} Jan 31 03:59:33 crc kubenswrapper[4827]: I0131 03:59:33.462972 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-q4hqs" event={"ID":"a2a52a00-75ce-4094-bab7-913d6fbab1dc","Type":"ContainerDied","Data":"6473b08890af801202ef6d128d62500bf5cf49e294b26fb8f3a068b47f11ac35"} Jan 31 03:59:33 crc kubenswrapper[4827]: I0131 03:59:33.462994 4827 scope.go:117] "RemoveContainer" containerID="0301e9ef087e36a3339397dfb581970f85739dafaf3b34ae619f3b8b63d4cbbf" Jan 31 03:59:33 crc kubenswrapper[4827]: I0131 03:59:33.465273 4827 generic.go:334] "Generic (PLEG): container finished" podID="4f7eae5f-3ee4-478f-928c-ee25fab2d488" containerID="b2bc1eb0b482593369ebf127c9094e6229bfe19393d484805413ea8ee83ca968" exitCode=0 Jan 31 03:59:33 crc kubenswrapper[4827]: I0131 03:59:33.465342 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcgtsxs" event={"ID":"4f7eae5f-3ee4-478f-928c-ee25fab2d488","Type":"ContainerDied","Data":"b2bc1eb0b482593369ebf127c9094e6229bfe19393d484805413ea8ee83ca968"} Jan 31 03:59:33 crc kubenswrapper[4827]: I0131 03:59:33.467169 4827 generic.go:334] "Generic (PLEG): container finished" podID="aa2d2648-f614-4da5-a5e6-f085c896d851" containerID="9eae97a0da488780c2b4fe7b1916a176dc614d64fb33c6aaf238677b6d09d070" exitCode=0 Jan 31 03:59:33 crc kubenswrapper[4827]: I0131 03:59:33.467209 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-vpg89" 
event={"ID":"aa2d2648-f614-4da5-a5e6-f085c896d851","Type":"ContainerDied","Data":"9eae97a0da488780c2b4fe7b1916a176dc614d64fb33c6aaf238677b6d09d070"} Jan 31 03:59:33 crc kubenswrapper[4827]: I0131 03:59:33.467239 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-vpg89" event={"ID":"aa2d2648-f614-4da5-a5e6-f085c896d851","Type":"ContainerStarted","Data":"95fc87c3f71efdedc8e14f50052ba2619904eed27403f42220b8a5e689116e12"} Jan 31 03:59:33 crc kubenswrapper[4827]: I0131 03:59:33.497173 4827 scope.go:117] "RemoveContainer" containerID="0301e9ef087e36a3339397dfb581970f85739dafaf3b34ae619f3b8b63d4cbbf" Jan 31 03:59:33 crc kubenswrapper[4827]: E0131 03:59:33.497706 4827 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0301e9ef087e36a3339397dfb581970f85739dafaf3b34ae619f3b8b63d4cbbf\": container with ID starting with 0301e9ef087e36a3339397dfb581970f85739dafaf3b34ae619f3b8b63d4cbbf not found: ID does not exist" containerID="0301e9ef087e36a3339397dfb581970f85739dafaf3b34ae619f3b8b63d4cbbf" Jan 31 03:59:33 crc kubenswrapper[4827]: I0131 03:59:33.497751 4827 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0301e9ef087e36a3339397dfb581970f85739dafaf3b34ae619f3b8b63d4cbbf"} err="failed to get container status \"0301e9ef087e36a3339397dfb581970f85739dafaf3b34ae619f3b8b63d4cbbf\": rpc error: code = NotFound desc = could not find container \"0301e9ef087e36a3339397dfb581970f85739dafaf3b34ae619f3b8b63d4cbbf\": container with ID starting with 0301e9ef087e36a3339397dfb581970f85739dafaf3b34ae619f3b8b63d4cbbf not found: ID does not exist" Jan 31 03:59:33 crc kubenswrapper[4827]: I0131 03:59:33.539928 4827 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-f9d7485db-q4hqs"] Jan 31 03:59:33 crc kubenswrapper[4827]: I0131 03:59:33.544124 4827 kubelet.go:2431] "SyncLoop REMOVE" source="api" 
pods=["openshift-console/console-f9d7485db-q4hqs"] Jan 31 03:59:34 crc kubenswrapper[4827]: I0131 03:59:34.118687 4827 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a2a52a00-75ce-4094-bab7-913d6fbab1dc" path="/var/lib/kubelet/pods/a2a52a00-75ce-4094-bab7-913d6fbab1dc/volumes" Jan 31 03:59:34 crc kubenswrapper[4827]: I0131 03:59:34.481517 4827 generic.go:334] "Generic (PLEG): container finished" podID="4f7eae5f-3ee4-478f-928c-ee25fab2d488" containerID="afd49b87ac12f4d6d4101660c68b59f58e092d7ce7081dcc72dc34d83ea9dc05" exitCode=0 Jan 31 03:59:34 crc kubenswrapper[4827]: I0131 03:59:34.482326 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcgtsxs" event={"ID":"4f7eae5f-3ee4-478f-928c-ee25fab2d488","Type":"ContainerDied","Data":"afd49b87ac12f4d6d4101660c68b59f58e092d7ce7081dcc72dc34d83ea9dc05"} Jan 31 03:59:34 crc kubenswrapper[4827]: I0131 03:59:34.484526 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-vpg89" event={"ID":"aa2d2648-f614-4da5-a5e6-f085c896d851","Type":"ContainerStarted","Data":"b08c3f0c69c4b2a08ba46470e607b8e041404e7a52f6a0e638212c18308c4feb"} Jan 31 03:59:35 crc kubenswrapper[4827]: I0131 03:59:35.492946 4827 generic.go:334] "Generic (PLEG): container finished" podID="aa2d2648-f614-4da5-a5e6-f085c896d851" containerID="b08c3f0c69c4b2a08ba46470e607b8e041404e7a52f6a0e638212c18308c4feb" exitCode=0 Jan 31 03:59:35 crc kubenswrapper[4827]: I0131 03:59:35.492999 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-vpg89" event={"ID":"aa2d2648-f614-4da5-a5e6-f085c896d851","Type":"ContainerDied","Data":"b08c3f0c69c4b2a08ba46470e607b8e041404e7a52f6a0e638212c18308c4feb"} Jan 31 03:59:35 crc kubenswrapper[4827]: I0131 03:59:35.796287 4827 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcgtsxs" Jan 31 03:59:35 crc kubenswrapper[4827]: I0131 03:59:35.971381 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/4f7eae5f-3ee4-478f-928c-ee25fab2d488-bundle\") pod \"4f7eae5f-3ee4-478f-928c-ee25fab2d488\" (UID: \"4f7eae5f-3ee4-478f-928c-ee25fab2d488\") " Jan 31 03:59:35 crc kubenswrapper[4827]: I0131 03:59:35.971825 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lfzd8\" (UniqueName: \"kubernetes.io/projected/4f7eae5f-3ee4-478f-928c-ee25fab2d488-kube-api-access-lfzd8\") pod \"4f7eae5f-3ee4-478f-928c-ee25fab2d488\" (UID: \"4f7eae5f-3ee4-478f-928c-ee25fab2d488\") " Jan 31 03:59:35 crc kubenswrapper[4827]: I0131 03:59:35.971903 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/4f7eae5f-3ee4-478f-928c-ee25fab2d488-util\") pod \"4f7eae5f-3ee4-478f-928c-ee25fab2d488\" (UID: \"4f7eae5f-3ee4-478f-928c-ee25fab2d488\") " Jan 31 03:59:35 crc kubenswrapper[4827]: I0131 03:59:35.973499 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4f7eae5f-3ee4-478f-928c-ee25fab2d488-bundle" (OuterVolumeSpecName: "bundle") pod "4f7eae5f-3ee4-478f-928c-ee25fab2d488" (UID: "4f7eae5f-3ee4-478f-928c-ee25fab2d488"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 03:59:35 crc kubenswrapper[4827]: I0131 03:59:35.978576 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4f7eae5f-3ee4-478f-928c-ee25fab2d488-kube-api-access-lfzd8" (OuterVolumeSpecName: "kube-api-access-lfzd8") pod "4f7eae5f-3ee4-478f-928c-ee25fab2d488" (UID: "4f7eae5f-3ee4-478f-928c-ee25fab2d488"). InnerVolumeSpecName "kube-api-access-lfzd8". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 03:59:35 crc kubenswrapper[4827]: I0131 03:59:35.996913 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4f7eae5f-3ee4-478f-928c-ee25fab2d488-util" (OuterVolumeSpecName: "util") pod "4f7eae5f-3ee4-478f-928c-ee25fab2d488" (UID: "4f7eae5f-3ee4-478f-928c-ee25fab2d488"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 03:59:36 crc kubenswrapper[4827]: I0131 03:59:36.073201 4827 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/4f7eae5f-3ee4-478f-928c-ee25fab2d488-bundle\") on node \"crc\" DevicePath \"\"" Jan 31 03:59:36 crc kubenswrapper[4827]: I0131 03:59:36.073267 4827 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lfzd8\" (UniqueName: \"kubernetes.io/projected/4f7eae5f-3ee4-478f-928c-ee25fab2d488-kube-api-access-lfzd8\") on node \"crc\" DevicePath \"\"" Jan 31 03:59:36 crc kubenswrapper[4827]: I0131 03:59:36.073283 4827 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/4f7eae5f-3ee4-478f-928c-ee25fab2d488-util\") on node \"crc\" DevicePath \"\"" Jan 31 03:59:36 crc kubenswrapper[4827]: I0131 03:59:36.507141 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcgtsxs" event={"ID":"4f7eae5f-3ee4-478f-928c-ee25fab2d488","Type":"ContainerDied","Data":"4052b6cb0563045e9370848d08c7302d84b473242d89e30e994925f71e5618ce"} Jan 31 03:59:36 crc kubenswrapper[4827]: I0131 03:59:36.507207 4827 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcgtsxs" Jan 31 03:59:36 crc kubenswrapper[4827]: I0131 03:59:36.507221 4827 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4052b6cb0563045e9370848d08c7302d84b473242d89e30e994925f71e5618ce" Jan 31 03:59:36 crc kubenswrapper[4827]: I0131 03:59:36.510946 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-vpg89" event={"ID":"aa2d2648-f614-4da5-a5e6-f085c896d851","Type":"ContainerStarted","Data":"f5e7f88259fba084faeeb2ba7a56c3ddf064972b0b187a68505c2fe50e93f4f8"} Jan 31 03:59:36 crc kubenswrapper[4827]: I0131 03:59:36.547966 4827 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-vpg89" podStartSLOduration=2.076203657 podStartE2EDuration="4.547940305s" podCreationTimestamp="2026-01-31 03:59:32 +0000 UTC" firstStartedPulling="2026-01-31 03:59:33.46837635 +0000 UTC m=+766.155456809" lastFinishedPulling="2026-01-31 03:59:35.940113008 +0000 UTC m=+768.627193457" observedRunningTime="2026-01-31 03:59:36.53804049 +0000 UTC m=+769.225121019" watchObservedRunningTime="2026-01-31 03:59:36.547940305 +0000 UTC m=+769.235020784" Jan 31 03:59:42 crc kubenswrapper[4827]: I0131 03:59:42.613775 4827 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-vpg89" Jan 31 03:59:42 crc kubenswrapper[4827]: I0131 03:59:42.616359 4827 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-vpg89" Jan 31 03:59:43 crc kubenswrapper[4827]: I0131 03:59:43.671755 4827 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-vpg89" podUID="aa2d2648-f614-4da5-a5e6-f085c896d851" containerName="registry-server" probeResult="failure" output=< Jan 31 03:59:43 crc kubenswrapper[4827]: timeout: failed to 
connect service ":50051" within 1s Jan 31 03:59:43 crc kubenswrapper[4827]: > Jan 31 03:59:45 crc kubenswrapper[4827]: I0131 03:59:45.350198 4827 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/metallb-operator-controller-manager-5dfffc88b-rknwp"] Jan 31 03:59:45 crc kubenswrapper[4827]: E0131 03:59:45.350423 4827 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4f7eae5f-3ee4-478f-928c-ee25fab2d488" containerName="extract" Jan 31 03:59:45 crc kubenswrapper[4827]: I0131 03:59:45.350435 4827 state_mem.go:107] "Deleted CPUSet assignment" podUID="4f7eae5f-3ee4-478f-928c-ee25fab2d488" containerName="extract" Jan 31 03:59:45 crc kubenswrapper[4827]: E0131 03:59:45.350449 4827 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4f7eae5f-3ee4-478f-928c-ee25fab2d488" containerName="util" Jan 31 03:59:45 crc kubenswrapper[4827]: I0131 03:59:45.350455 4827 state_mem.go:107] "Deleted CPUSet assignment" podUID="4f7eae5f-3ee4-478f-928c-ee25fab2d488" containerName="util" Jan 31 03:59:45 crc kubenswrapper[4827]: E0131 03:59:45.350464 4827 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a2a52a00-75ce-4094-bab7-913d6fbab1dc" containerName="console" Jan 31 03:59:45 crc kubenswrapper[4827]: I0131 03:59:45.350470 4827 state_mem.go:107] "Deleted CPUSet assignment" podUID="a2a52a00-75ce-4094-bab7-913d6fbab1dc" containerName="console" Jan 31 03:59:45 crc kubenswrapper[4827]: E0131 03:59:45.350480 4827 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4f7eae5f-3ee4-478f-928c-ee25fab2d488" containerName="pull" Jan 31 03:59:45 crc kubenswrapper[4827]: I0131 03:59:45.350486 4827 state_mem.go:107] "Deleted CPUSet assignment" podUID="4f7eae5f-3ee4-478f-928c-ee25fab2d488" containerName="pull" Jan 31 03:59:45 crc kubenswrapper[4827]: I0131 03:59:45.350579 4827 memory_manager.go:354] "RemoveStaleState removing state" podUID="a2a52a00-75ce-4094-bab7-913d6fbab1dc" containerName="console" Jan 31 03:59:45 crc 
kubenswrapper[4827]: I0131 03:59:45.350591 4827 memory_manager.go:354] "RemoveStaleState removing state" podUID="4f7eae5f-3ee4-478f-928c-ee25fab2d488" containerName="extract" Jan 31 03:59:45 crc kubenswrapper[4827]: I0131 03:59:45.351044 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-controller-manager-5dfffc88b-rknwp" Jan 31 03:59:45 crc kubenswrapper[4827]: I0131 03:59:45.353554 4827 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-webhook-server-cert" Jan 31 03:59:45 crc kubenswrapper[4827]: I0131 03:59:45.353616 4827 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-controller-manager-service-cert" Jan 31 03:59:45 crc kubenswrapper[4827]: I0131 03:59:45.356343 4827 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"openshift-service-ca.crt" Jan 31 03:59:45 crc kubenswrapper[4827]: I0131 03:59:45.356673 4827 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"manager-account-dockercfg-7xj7q" Jan 31 03:59:45 crc kubenswrapper[4827]: I0131 03:59:45.357099 4827 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"kube-root-ca.crt" Jan 31 03:59:45 crc kubenswrapper[4827]: I0131 03:59:45.369781 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-controller-manager-5dfffc88b-rknwp"] Jan 31 03:59:45 crc kubenswrapper[4827]: I0131 03:59:45.506123 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/ab6e6231-c7d2-4c65-89d2-bd6771c99585-apiservice-cert\") pod \"metallb-operator-controller-manager-5dfffc88b-rknwp\" (UID: \"ab6e6231-c7d2-4c65-89d2-bd6771c99585\") " pod="metallb-system/metallb-operator-controller-manager-5dfffc88b-rknwp" Jan 31 03:59:45 crc kubenswrapper[4827]: I0131 
03:59:45.506179 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ab6e6231-c7d2-4c65-89d2-bd6771c99585-webhook-cert\") pod \"metallb-operator-controller-manager-5dfffc88b-rknwp\" (UID: \"ab6e6231-c7d2-4c65-89d2-bd6771c99585\") " pod="metallb-system/metallb-operator-controller-manager-5dfffc88b-rknwp" Jan 31 03:59:45 crc kubenswrapper[4827]: I0131 03:59:45.506302 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-th64z\" (UniqueName: \"kubernetes.io/projected/ab6e6231-c7d2-4c65-89d2-bd6771c99585-kube-api-access-th64z\") pod \"metallb-operator-controller-manager-5dfffc88b-rknwp\" (UID: \"ab6e6231-c7d2-4c65-89d2-bd6771c99585\") " pod="metallb-system/metallb-operator-controller-manager-5dfffc88b-rknwp" Jan 31 03:59:45 crc kubenswrapper[4827]: I0131 03:59:45.593809 4827 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/metallb-operator-webhook-server-6997fd6b6c-rxw9p"] Jan 31 03:59:45 crc kubenswrapper[4827]: I0131 03:59:45.594644 4827 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/metallb-operator-webhook-server-6997fd6b6c-rxw9p" Jan 31 03:59:45 crc kubenswrapper[4827]: I0131 03:59:45.598556 4827 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-webhook-server-service-cert" Jan 31 03:59:45 crc kubenswrapper[4827]: I0131 03:59:45.598583 4827 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-webhook-cert" Jan 31 03:59:45 crc kubenswrapper[4827]: I0131 03:59:45.598957 4827 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"controller-dockercfg-pkh52" Jan 31 03:59:45 crc kubenswrapper[4827]: I0131 03:59:45.607265 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-th64z\" (UniqueName: \"kubernetes.io/projected/ab6e6231-c7d2-4c65-89d2-bd6771c99585-kube-api-access-th64z\") pod \"metallb-operator-controller-manager-5dfffc88b-rknwp\" (UID: \"ab6e6231-c7d2-4c65-89d2-bd6771c99585\") " pod="metallb-system/metallb-operator-controller-manager-5dfffc88b-rknwp" Jan 31 03:59:45 crc kubenswrapper[4827]: I0131 03:59:45.607323 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d798k\" (UniqueName: \"kubernetes.io/projected/cac01594-063e-4099-b7fc-11e5d034cd2c-kube-api-access-d798k\") pod \"metallb-operator-webhook-server-6997fd6b6c-rxw9p\" (UID: \"cac01594-063e-4099-b7fc-11e5d034cd2c\") " pod="metallb-system/metallb-operator-webhook-server-6997fd6b6c-rxw9p" Jan 31 03:59:45 crc kubenswrapper[4827]: I0131 03:59:45.607350 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/cac01594-063e-4099-b7fc-11e5d034cd2c-apiservice-cert\") pod \"metallb-operator-webhook-server-6997fd6b6c-rxw9p\" (UID: \"cac01594-063e-4099-b7fc-11e5d034cd2c\") " 
pod="metallb-system/metallb-operator-webhook-server-6997fd6b6c-rxw9p" Jan 31 03:59:45 crc kubenswrapper[4827]: I0131 03:59:45.607414 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/ab6e6231-c7d2-4c65-89d2-bd6771c99585-apiservice-cert\") pod \"metallb-operator-controller-manager-5dfffc88b-rknwp\" (UID: \"ab6e6231-c7d2-4c65-89d2-bd6771c99585\") " pod="metallb-system/metallb-operator-controller-manager-5dfffc88b-rknwp" Jan 31 03:59:45 crc kubenswrapper[4827]: I0131 03:59:45.607439 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/cac01594-063e-4099-b7fc-11e5d034cd2c-webhook-cert\") pod \"metallb-operator-webhook-server-6997fd6b6c-rxw9p\" (UID: \"cac01594-063e-4099-b7fc-11e5d034cd2c\") " pod="metallb-system/metallb-operator-webhook-server-6997fd6b6c-rxw9p" Jan 31 03:59:45 crc kubenswrapper[4827]: I0131 03:59:45.607469 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ab6e6231-c7d2-4c65-89d2-bd6771c99585-webhook-cert\") pod \"metallb-operator-controller-manager-5dfffc88b-rknwp\" (UID: \"ab6e6231-c7d2-4c65-89d2-bd6771c99585\") " pod="metallb-system/metallb-operator-controller-manager-5dfffc88b-rknwp" Jan 31 03:59:45 crc kubenswrapper[4827]: I0131 03:59:45.615586 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ab6e6231-c7d2-4c65-89d2-bd6771c99585-webhook-cert\") pod \"metallb-operator-controller-manager-5dfffc88b-rknwp\" (UID: \"ab6e6231-c7d2-4c65-89d2-bd6771c99585\") " pod="metallb-system/metallb-operator-controller-manager-5dfffc88b-rknwp" Jan 31 03:59:45 crc kubenswrapper[4827]: I0131 03:59:45.616348 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: 
\"kubernetes.io/secret/ab6e6231-c7d2-4c65-89d2-bd6771c99585-apiservice-cert\") pod \"metallb-operator-controller-manager-5dfffc88b-rknwp\" (UID: \"ab6e6231-c7d2-4c65-89d2-bd6771c99585\") " pod="metallb-system/metallb-operator-controller-manager-5dfffc88b-rknwp" Jan 31 03:59:45 crc kubenswrapper[4827]: I0131 03:59:45.622061 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-webhook-server-6997fd6b6c-rxw9p"] Jan 31 03:59:45 crc kubenswrapper[4827]: I0131 03:59:45.629116 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-th64z\" (UniqueName: \"kubernetes.io/projected/ab6e6231-c7d2-4c65-89d2-bd6771c99585-kube-api-access-th64z\") pod \"metallb-operator-controller-manager-5dfffc88b-rknwp\" (UID: \"ab6e6231-c7d2-4c65-89d2-bd6771c99585\") " pod="metallb-system/metallb-operator-controller-manager-5dfffc88b-rknwp" Jan 31 03:59:45 crc kubenswrapper[4827]: I0131 03:59:45.668611 4827 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/metallb-operator-controller-manager-5dfffc88b-rknwp" Jan 31 03:59:45 crc kubenswrapper[4827]: I0131 03:59:45.708045 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/cac01594-063e-4099-b7fc-11e5d034cd2c-webhook-cert\") pod \"metallb-operator-webhook-server-6997fd6b6c-rxw9p\" (UID: \"cac01594-063e-4099-b7fc-11e5d034cd2c\") " pod="metallb-system/metallb-operator-webhook-server-6997fd6b6c-rxw9p" Jan 31 03:59:45 crc kubenswrapper[4827]: I0131 03:59:45.708710 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d798k\" (UniqueName: \"kubernetes.io/projected/cac01594-063e-4099-b7fc-11e5d034cd2c-kube-api-access-d798k\") pod \"metallb-operator-webhook-server-6997fd6b6c-rxw9p\" (UID: \"cac01594-063e-4099-b7fc-11e5d034cd2c\") " pod="metallb-system/metallb-operator-webhook-server-6997fd6b6c-rxw9p" Jan 31 03:59:45 crc kubenswrapper[4827]: I0131 03:59:45.708732 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/cac01594-063e-4099-b7fc-11e5d034cd2c-apiservice-cert\") pod \"metallb-operator-webhook-server-6997fd6b6c-rxw9p\" (UID: \"cac01594-063e-4099-b7fc-11e5d034cd2c\") " pod="metallb-system/metallb-operator-webhook-server-6997fd6b6c-rxw9p" Jan 31 03:59:45 crc kubenswrapper[4827]: I0131 03:59:45.719808 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/cac01594-063e-4099-b7fc-11e5d034cd2c-apiservice-cert\") pod \"metallb-operator-webhook-server-6997fd6b6c-rxw9p\" (UID: \"cac01594-063e-4099-b7fc-11e5d034cd2c\") " pod="metallb-system/metallb-operator-webhook-server-6997fd6b6c-rxw9p" Jan 31 03:59:45 crc kubenswrapper[4827]: I0131 03:59:45.728000 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: 
\"kubernetes.io/secret/cac01594-063e-4099-b7fc-11e5d034cd2c-webhook-cert\") pod \"metallb-operator-webhook-server-6997fd6b6c-rxw9p\" (UID: \"cac01594-063e-4099-b7fc-11e5d034cd2c\") " pod="metallb-system/metallb-operator-webhook-server-6997fd6b6c-rxw9p" Jan 31 03:59:45 crc kubenswrapper[4827]: I0131 03:59:45.738865 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d798k\" (UniqueName: \"kubernetes.io/projected/cac01594-063e-4099-b7fc-11e5d034cd2c-kube-api-access-d798k\") pod \"metallb-operator-webhook-server-6997fd6b6c-rxw9p\" (UID: \"cac01594-063e-4099-b7fc-11e5d034cd2c\") " pod="metallb-system/metallb-operator-webhook-server-6997fd6b6c-rxw9p" Jan 31 03:59:45 crc kubenswrapper[4827]: I0131 03:59:45.919794 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-webhook-server-6997fd6b6c-rxw9p" Jan 31 03:59:46 crc kubenswrapper[4827]: I0131 03:59:46.190329 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-controller-manager-5dfffc88b-rknwp"] Jan 31 03:59:46 crc kubenswrapper[4827]: W0131 03:59:46.192807 4827 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podab6e6231_c7d2_4c65_89d2_bd6771c99585.slice/crio-b625005c8a871484ffa706e6e174ee080ea87dc48aa4d0eae094e83a62009ea7 WatchSource:0}: Error finding container b625005c8a871484ffa706e6e174ee080ea87dc48aa4d0eae094e83a62009ea7: Status 404 returned error can't find the container with id b625005c8a871484ffa706e6e174ee080ea87dc48aa4d0eae094e83a62009ea7 Jan 31 03:59:46 crc kubenswrapper[4827]: I0131 03:59:46.206548 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-webhook-server-6997fd6b6c-rxw9p"] Jan 31 03:59:46 crc kubenswrapper[4827]: W0131 03:59:46.212260 4827 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podcac01594_063e_4099_b7fc_11e5d034cd2c.slice/crio-a2063d13e6bfff1eda5cfab0b6a5d2ce8743636352f6194ff5c679e1a694598e WatchSource:0}: Error finding container a2063d13e6bfff1eda5cfab0b6a5d2ce8743636352f6194ff5c679e1a694598e: Status 404 returned error can't find the container with id a2063d13e6bfff1eda5cfab0b6a5d2ce8743636352f6194ff5c679e1a694598e Jan 31 03:59:46 crc kubenswrapper[4827]: I0131 03:59:46.571702 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-webhook-server-6997fd6b6c-rxw9p" event={"ID":"cac01594-063e-4099-b7fc-11e5d034cd2c","Type":"ContainerStarted","Data":"a2063d13e6bfff1eda5cfab0b6a5d2ce8743636352f6194ff5c679e1a694598e"} Jan 31 03:59:46 crc kubenswrapper[4827]: I0131 03:59:46.572781 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-controller-manager-5dfffc88b-rknwp" event={"ID":"ab6e6231-c7d2-4c65-89d2-bd6771c99585","Type":"ContainerStarted","Data":"b625005c8a871484ffa706e6e174ee080ea87dc48aa4d0eae094e83a62009ea7"} Jan 31 03:59:47 crc kubenswrapper[4827]: I0131 03:59:47.371075 4827 patch_prober.go:28] interesting pod/machine-config-daemon-jxh94 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 31 03:59:47 crc kubenswrapper[4827]: I0131 03:59:47.371136 4827 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-jxh94" podUID="e63dbb73-e1a2-4796-83c5-2a88e55566b5" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 31 03:59:47 crc kubenswrapper[4827]: I0131 03:59:47.371191 4827 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" 
pod="openshift-machine-config-operator/machine-config-daemon-jxh94" Jan 31 03:59:47 crc kubenswrapper[4827]: I0131 03:59:47.371767 4827 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"bfaefcdaba61a9df67ef38340b2b8e90d41a85b4a9bee50aad5651159c3ae7f7"} pod="openshift-machine-config-operator/machine-config-daemon-jxh94" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 31 03:59:47 crc kubenswrapper[4827]: I0131 03:59:47.371841 4827 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-jxh94" podUID="e63dbb73-e1a2-4796-83c5-2a88e55566b5" containerName="machine-config-daemon" containerID="cri-o://bfaefcdaba61a9df67ef38340b2b8e90d41a85b4a9bee50aad5651159c3ae7f7" gracePeriod=600 Jan 31 03:59:47 crc kubenswrapper[4827]: I0131 03:59:47.587587 4827 generic.go:334] "Generic (PLEG): container finished" podID="e63dbb73-e1a2-4796-83c5-2a88e55566b5" containerID="bfaefcdaba61a9df67ef38340b2b8e90d41a85b4a9bee50aad5651159c3ae7f7" exitCode=0 Jan 31 03:59:47 crc kubenswrapper[4827]: I0131 03:59:47.587696 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-jxh94" event={"ID":"e63dbb73-e1a2-4796-83c5-2a88e55566b5","Type":"ContainerDied","Data":"bfaefcdaba61a9df67ef38340b2b8e90d41a85b4a9bee50aad5651159c3ae7f7"} Jan 31 03:59:47 crc kubenswrapper[4827]: I0131 03:59:47.588152 4827 scope.go:117] "RemoveContainer" containerID="ff1002b4326b60d6728a9f4939c5459cec6a294d7d2af6d13663a334c8cece05" Jan 31 03:59:48 crc kubenswrapper[4827]: I0131 03:59:48.596510 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-jxh94" 
event={"ID":"e63dbb73-e1a2-4796-83c5-2a88e55566b5","Type":"ContainerStarted","Data":"b3f2ce1bddb590379802c11a41342b77994eb27a657cdaa9086c8e7edd46b860"} Jan 31 03:59:52 crc kubenswrapper[4827]: I0131 03:59:52.620991 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-webhook-server-6997fd6b6c-rxw9p" event={"ID":"cac01594-063e-4099-b7fc-11e5d034cd2c","Type":"ContainerStarted","Data":"58b4ae42caeb17cca449337b7dd16ab855d6ed0d18951304597acd680d8f6790"} Jan 31 03:59:52 crc kubenswrapper[4827]: I0131 03:59:52.621474 4827 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/metallb-operator-webhook-server-6997fd6b6c-rxw9p" Jan 31 03:59:52 crc kubenswrapper[4827]: I0131 03:59:52.645224 4827 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/metallb-operator-webhook-server-6997fd6b6c-rxw9p" podStartSLOduration=1.738072836 podStartE2EDuration="7.645204849s" podCreationTimestamp="2026-01-31 03:59:45 +0000 UTC" firstStartedPulling="2026-01-31 03:59:46.215311149 +0000 UTC m=+778.902391598" lastFinishedPulling="2026-01-31 03:59:52.122443162 +0000 UTC m=+784.809523611" observedRunningTime="2026-01-31 03:59:52.644113016 +0000 UTC m=+785.331193475" watchObservedRunningTime="2026-01-31 03:59:52.645204849 +0000 UTC m=+785.332285298" Jan 31 03:59:52 crc kubenswrapper[4827]: I0131 03:59:52.661123 4827 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-vpg89" Jan 31 03:59:52 crc kubenswrapper[4827]: I0131 03:59:52.707093 4827 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-vpg89" Jan 31 03:59:52 crc kubenswrapper[4827]: I0131 03:59:52.894390 4827 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-vpg89"] Jan 31 03:59:54 crc kubenswrapper[4827]: I0131 03:59:54.636486 4827 kuberuntime_container.go:808] "Killing 
container with a grace period" pod="openshift-marketplace/redhat-operators-vpg89" podUID="aa2d2648-f614-4da5-a5e6-f085c896d851" containerName="registry-server" containerID="cri-o://f5e7f88259fba084faeeb2ba7a56c3ddf064972b0b187a68505c2fe50e93f4f8" gracePeriod=2 Jan 31 03:59:55 crc kubenswrapper[4827]: I0131 03:59:55.058354 4827 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-vpg89" Jan 31 03:59:55 crc kubenswrapper[4827]: I0131 03:59:55.193838 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/aa2d2648-f614-4da5-a5e6-f085c896d851-utilities\") pod \"aa2d2648-f614-4da5-a5e6-f085c896d851\" (UID: \"aa2d2648-f614-4da5-a5e6-f085c896d851\") " Jan 31 03:59:55 crc kubenswrapper[4827]: I0131 03:59:55.193913 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/aa2d2648-f614-4da5-a5e6-f085c896d851-catalog-content\") pod \"aa2d2648-f614-4da5-a5e6-f085c896d851\" (UID: \"aa2d2648-f614-4da5-a5e6-f085c896d851\") " Jan 31 03:59:55 crc kubenswrapper[4827]: I0131 03:59:55.193958 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hnt6s\" (UniqueName: \"kubernetes.io/projected/aa2d2648-f614-4da5-a5e6-f085c896d851-kube-api-access-hnt6s\") pod \"aa2d2648-f614-4da5-a5e6-f085c896d851\" (UID: \"aa2d2648-f614-4da5-a5e6-f085c896d851\") " Jan 31 03:59:55 crc kubenswrapper[4827]: I0131 03:59:55.194802 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/aa2d2648-f614-4da5-a5e6-f085c896d851-utilities" (OuterVolumeSpecName: "utilities") pod "aa2d2648-f614-4da5-a5e6-f085c896d851" (UID: "aa2d2648-f614-4da5-a5e6-f085c896d851"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 03:59:55 crc kubenswrapper[4827]: I0131 03:59:55.204052 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/aa2d2648-f614-4da5-a5e6-f085c896d851-kube-api-access-hnt6s" (OuterVolumeSpecName: "kube-api-access-hnt6s") pod "aa2d2648-f614-4da5-a5e6-f085c896d851" (UID: "aa2d2648-f614-4da5-a5e6-f085c896d851"). InnerVolumeSpecName "kube-api-access-hnt6s". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 03:59:55 crc kubenswrapper[4827]: I0131 03:59:55.296245 4827 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/aa2d2648-f614-4da5-a5e6-f085c896d851-utilities\") on node \"crc\" DevicePath \"\"" Jan 31 03:59:55 crc kubenswrapper[4827]: I0131 03:59:55.296787 4827 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hnt6s\" (UniqueName: \"kubernetes.io/projected/aa2d2648-f614-4da5-a5e6-f085c896d851-kube-api-access-hnt6s\") on node \"crc\" DevicePath \"\"" Jan 31 03:59:55 crc kubenswrapper[4827]: I0131 03:59:55.335978 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/aa2d2648-f614-4da5-a5e6-f085c896d851-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "aa2d2648-f614-4da5-a5e6-f085c896d851" (UID: "aa2d2648-f614-4da5-a5e6-f085c896d851"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 03:59:55 crc kubenswrapper[4827]: I0131 03:59:55.398221 4827 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/aa2d2648-f614-4da5-a5e6-f085c896d851-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 31 03:59:55 crc kubenswrapper[4827]: I0131 03:59:55.643844 4827 generic.go:334] "Generic (PLEG): container finished" podID="aa2d2648-f614-4da5-a5e6-f085c896d851" containerID="f5e7f88259fba084faeeb2ba7a56c3ddf064972b0b187a68505c2fe50e93f4f8" exitCode=0 Jan 31 03:59:55 crc kubenswrapper[4827]: I0131 03:59:55.643917 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-vpg89" event={"ID":"aa2d2648-f614-4da5-a5e6-f085c896d851","Type":"ContainerDied","Data":"f5e7f88259fba084faeeb2ba7a56c3ddf064972b0b187a68505c2fe50e93f4f8"} Jan 31 03:59:55 crc kubenswrapper[4827]: I0131 03:59:55.644192 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-vpg89" event={"ID":"aa2d2648-f614-4da5-a5e6-f085c896d851","Type":"ContainerDied","Data":"95fc87c3f71efdedc8e14f50052ba2619904eed27403f42220b8a5e689116e12"} Jan 31 03:59:55 crc kubenswrapper[4827]: I0131 03:59:55.644213 4827 scope.go:117] "RemoveContainer" containerID="f5e7f88259fba084faeeb2ba7a56c3ddf064972b0b187a68505c2fe50e93f4f8" Jan 31 03:59:55 crc kubenswrapper[4827]: I0131 03:59:55.643981 4827 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-vpg89" Jan 31 03:59:55 crc kubenswrapper[4827]: I0131 03:59:55.659311 4827 scope.go:117] "RemoveContainer" containerID="b08c3f0c69c4b2a08ba46470e607b8e041404e7a52f6a0e638212c18308c4feb" Jan 31 03:59:55 crc kubenswrapper[4827]: I0131 03:59:55.681869 4827 scope.go:117] "RemoveContainer" containerID="9eae97a0da488780c2b4fe7b1916a176dc614d64fb33c6aaf238677b6d09d070" Jan 31 03:59:55 crc kubenswrapper[4827]: I0131 03:59:55.699804 4827 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-vpg89"] Jan 31 03:59:55 crc kubenswrapper[4827]: I0131 03:59:55.709966 4827 scope.go:117] "RemoveContainer" containerID="f5e7f88259fba084faeeb2ba7a56c3ddf064972b0b187a68505c2fe50e93f4f8" Jan 31 03:59:55 crc kubenswrapper[4827]: E0131 03:59:55.710968 4827 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f5e7f88259fba084faeeb2ba7a56c3ddf064972b0b187a68505c2fe50e93f4f8\": container with ID starting with f5e7f88259fba084faeeb2ba7a56c3ddf064972b0b187a68505c2fe50e93f4f8 not found: ID does not exist" containerID="f5e7f88259fba084faeeb2ba7a56c3ddf064972b0b187a68505c2fe50e93f4f8" Jan 31 03:59:55 crc kubenswrapper[4827]: I0131 03:59:55.711035 4827 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f5e7f88259fba084faeeb2ba7a56c3ddf064972b0b187a68505c2fe50e93f4f8"} err="failed to get container status \"f5e7f88259fba084faeeb2ba7a56c3ddf064972b0b187a68505c2fe50e93f4f8\": rpc error: code = NotFound desc = could not find container \"f5e7f88259fba084faeeb2ba7a56c3ddf064972b0b187a68505c2fe50e93f4f8\": container with ID starting with f5e7f88259fba084faeeb2ba7a56c3ddf064972b0b187a68505c2fe50e93f4f8 not found: ID does not exist" Jan 31 03:59:55 crc kubenswrapper[4827]: I0131 03:59:55.711076 4827 scope.go:117] "RemoveContainer" 
containerID="b08c3f0c69c4b2a08ba46470e607b8e041404e7a52f6a0e638212c18308c4feb" Jan 31 03:59:55 crc kubenswrapper[4827]: E0131 03:59:55.711612 4827 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b08c3f0c69c4b2a08ba46470e607b8e041404e7a52f6a0e638212c18308c4feb\": container with ID starting with b08c3f0c69c4b2a08ba46470e607b8e041404e7a52f6a0e638212c18308c4feb not found: ID does not exist" containerID="b08c3f0c69c4b2a08ba46470e607b8e041404e7a52f6a0e638212c18308c4feb" Jan 31 03:59:55 crc kubenswrapper[4827]: I0131 03:59:55.711711 4827 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b08c3f0c69c4b2a08ba46470e607b8e041404e7a52f6a0e638212c18308c4feb"} err="failed to get container status \"b08c3f0c69c4b2a08ba46470e607b8e041404e7a52f6a0e638212c18308c4feb\": rpc error: code = NotFound desc = could not find container \"b08c3f0c69c4b2a08ba46470e607b8e041404e7a52f6a0e638212c18308c4feb\": container with ID starting with b08c3f0c69c4b2a08ba46470e607b8e041404e7a52f6a0e638212c18308c4feb not found: ID does not exist" Jan 31 03:59:55 crc kubenswrapper[4827]: I0131 03:59:55.711825 4827 scope.go:117] "RemoveContainer" containerID="9eae97a0da488780c2b4fe7b1916a176dc614d64fb33c6aaf238677b6d09d070" Jan 31 03:59:55 crc kubenswrapper[4827]: I0131 03:59:55.712587 4827 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-vpg89"] Jan 31 03:59:55 crc kubenswrapper[4827]: E0131 03:59:55.714635 4827 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9eae97a0da488780c2b4fe7b1916a176dc614d64fb33c6aaf238677b6d09d070\": container with ID starting with 9eae97a0da488780c2b4fe7b1916a176dc614d64fb33c6aaf238677b6d09d070 not found: ID does not exist" containerID="9eae97a0da488780c2b4fe7b1916a176dc614d64fb33c6aaf238677b6d09d070" Jan 31 03:59:55 crc kubenswrapper[4827]: I0131 
03:59:55.714675 4827 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9eae97a0da488780c2b4fe7b1916a176dc614d64fb33c6aaf238677b6d09d070"} err="failed to get container status \"9eae97a0da488780c2b4fe7b1916a176dc614d64fb33c6aaf238677b6d09d070\": rpc error: code = NotFound desc = could not find container \"9eae97a0da488780c2b4fe7b1916a176dc614d64fb33c6aaf238677b6d09d070\": container with ID starting with 9eae97a0da488780c2b4fe7b1916a176dc614d64fb33c6aaf238677b6d09d070 not found: ID does not exist" Jan 31 03:59:56 crc kubenswrapper[4827]: I0131 03:59:56.118198 4827 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="aa2d2648-f614-4da5-a5e6-f085c896d851" path="/var/lib/kubelet/pods/aa2d2648-f614-4da5-a5e6-f085c896d851/volumes" Jan 31 03:59:59 crc kubenswrapper[4827]: I0131 03:59:59.696710 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-controller-manager-5dfffc88b-rknwp" event={"ID":"ab6e6231-c7d2-4c65-89d2-bd6771c99585","Type":"ContainerStarted","Data":"cd0a9dc9265c0d3a57f120dc6f6e96bf1856af3041896c5ee42bdf405f70b2fb"} Jan 31 03:59:59 crc kubenswrapper[4827]: I0131 03:59:59.697529 4827 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/metallb-operator-controller-manager-5dfffc88b-rknwp" Jan 31 03:59:59 crc kubenswrapper[4827]: I0131 03:59:59.725137 4827 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/metallb-operator-controller-manager-5dfffc88b-rknwp" podStartSLOduration=2.145639381 podStartE2EDuration="14.725107481s" podCreationTimestamp="2026-01-31 03:59:45 +0000 UTC" firstStartedPulling="2026-01-31 03:59:46.196683866 +0000 UTC m=+778.883764315" lastFinishedPulling="2026-01-31 03:59:58.776151966 +0000 UTC m=+791.463232415" observedRunningTime="2026-01-31 03:59:59.71902066 +0000 UTC m=+792.406101119" watchObservedRunningTime="2026-01-31 03:59:59.725107481 +0000 UTC m=+792.412187940" Jan 
31 04:00:00 crc kubenswrapper[4827]: I0131 04:00:00.160110 4827 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29497200-khhqw"] Jan 31 04:00:00 crc kubenswrapper[4827]: E0131 04:00:00.160358 4827 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="aa2d2648-f614-4da5-a5e6-f085c896d851" containerName="registry-server" Jan 31 04:00:00 crc kubenswrapper[4827]: I0131 04:00:00.160372 4827 state_mem.go:107] "Deleted CPUSet assignment" podUID="aa2d2648-f614-4da5-a5e6-f085c896d851" containerName="registry-server" Jan 31 04:00:00 crc kubenswrapper[4827]: E0131 04:00:00.160388 4827 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="aa2d2648-f614-4da5-a5e6-f085c896d851" containerName="extract-utilities" Jan 31 04:00:00 crc kubenswrapper[4827]: I0131 04:00:00.160399 4827 state_mem.go:107] "Deleted CPUSet assignment" podUID="aa2d2648-f614-4da5-a5e6-f085c896d851" containerName="extract-utilities" Jan 31 04:00:00 crc kubenswrapper[4827]: E0131 04:00:00.160414 4827 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="aa2d2648-f614-4da5-a5e6-f085c896d851" containerName="extract-content" Jan 31 04:00:00 crc kubenswrapper[4827]: I0131 04:00:00.160422 4827 state_mem.go:107] "Deleted CPUSet assignment" podUID="aa2d2648-f614-4da5-a5e6-f085c896d851" containerName="extract-content" Jan 31 04:00:00 crc kubenswrapper[4827]: I0131 04:00:00.160553 4827 memory_manager.go:354] "RemoveStaleState removing state" podUID="aa2d2648-f614-4da5-a5e6-f085c896d851" containerName="registry-server" Jan 31 04:00:00 crc kubenswrapper[4827]: I0131 04:00:00.160961 4827 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29497200-khhqw" Jan 31 04:00:00 crc kubenswrapper[4827]: I0131 04:00:00.163405 4827 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Jan 31 04:00:00 crc kubenswrapper[4827]: I0131 04:00:00.165589 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Jan 31 04:00:00 crc kubenswrapper[4827]: I0131 04:00:00.175732 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29497200-khhqw"] Jan 31 04:00:00 crc kubenswrapper[4827]: I0131 04:00:00.189630 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a5e6d27b-159b-4fb1-98d7-da2ae62fe95b-config-volume\") pod \"collect-profiles-29497200-khhqw\" (UID: \"a5e6d27b-159b-4fb1-98d7-da2ae62fe95b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29497200-khhqw" Jan 31 04:00:00 crc kubenswrapper[4827]: I0131 04:00:00.189964 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-blgf4\" (UniqueName: \"kubernetes.io/projected/a5e6d27b-159b-4fb1-98d7-da2ae62fe95b-kube-api-access-blgf4\") pod \"collect-profiles-29497200-khhqw\" (UID: \"a5e6d27b-159b-4fb1-98d7-da2ae62fe95b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29497200-khhqw" Jan 31 04:00:00 crc kubenswrapper[4827]: I0131 04:00:00.190059 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/a5e6d27b-159b-4fb1-98d7-da2ae62fe95b-secret-volume\") pod \"collect-profiles-29497200-khhqw\" (UID: \"a5e6d27b-159b-4fb1-98d7-da2ae62fe95b\") " 
pod="openshift-operator-lifecycle-manager/collect-profiles-29497200-khhqw" Jan 31 04:00:00 crc kubenswrapper[4827]: I0131 04:00:00.292017 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a5e6d27b-159b-4fb1-98d7-da2ae62fe95b-config-volume\") pod \"collect-profiles-29497200-khhqw\" (UID: \"a5e6d27b-159b-4fb1-98d7-da2ae62fe95b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29497200-khhqw" Jan 31 04:00:00 crc kubenswrapper[4827]: I0131 04:00:00.292215 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-blgf4\" (UniqueName: \"kubernetes.io/projected/a5e6d27b-159b-4fb1-98d7-da2ae62fe95b-kube-api-access-blgf4\") pod \"collect-profiles-29497200-khhqw\" (UID: \"a5e6d27b-159b-4fb1-98d7-da2ae62fe95b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29497200-khhqw" Jan 31 04:00:00 crc kubenswrapper[4827]: I0131 04:00:00.292250 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/a5e6d27b-159b-4fb1-98d7-da2ae62fe95b-secret-volume\") pod \"collect-profiles-29497200-khhqw\" (UID: \"a5e6d27b-159b-4fb1-98d7-da2ae62fe95b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29497200-khhqw" Jan 31 04:00:00 crc kubenswrapper[4827]: I0131 04:00:00.293455 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a5e6d27b-159b-4fb1-98d7-da2ae62fe95b-config-volume\") pod \"collect-profiles-29497200-khhqw\" (UID: \"a5e6d27b-159b-4fb1-98d7-da2ae62fe95b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29497200-khhqw" Jan 31 04:00:00 crc kubenswrapper[4827]: I0131 04:00:00.303524 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: 
\"kubernetes.io/secret/a5e6d27b-159b-4fb1-98d7-da2ae62fe95b-secret-volume\") pod \"collect-profiles-29497200-khhqw\" (UID: \"a5e6d27b-159b-4fb1-98d7-da2ae62fe95b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29497200-khhqw" Jan 31 04:00:00 crc kubenswrapper[4827]: I0131 04:00:00.313809 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-blgf4\" (UniqueName: \"kubernetes.io/projected/a5e6d27b-159b-4fb1-98d7-da2ae62fe95b-kube-api-access-blgf4\") pod \"collect-profiles-29497200-khhqw\" (UID: \"a5e6d27b-159b-4fb1-98d7-da2ae62fe95b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29497200-khhqw" Jan 31 04:00:00 crc kubenswrapper[4827]: I0131 04:00:00.478952 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29497200-khhqw" Jan 31 04:00:00 crc kubenswrapper[4827]: I0131 04:00:00.763020 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29497200-khhqw"] Jan 31 04:00:01 crc kubenswrapper[4827]: I0131 04:00:01.716968 4827 generic.go:334] "Generic (PLEG): container finished" podID="a5e6d27b-159b-4fb1-98d7-da2ae62fe95b" containerID="44c6701133eac7a009bcefa3b0f53fce3a6fe1aeb58d760fa68ca41dfa8f873b" exitCode=0 Jan 31 04:00:01 crc kubenswrapper[4827]: I0131 04:00:01.717452 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29497200-khhqw" event={"ID":"a5e6d27b-159b-4fb1-98d7-da2ae62fe95b","Type":"ContainerDied","Data":"44c6701133eac7a009bcefa3b0f53fce3a6fe1aeb58d760fa68ca41dfa8f873b"} Jan 31 04:00:01 crc kubenswrapper[4827]: I0131 04:00:01.717492 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29497200-khhqw" 
event={"ID":"a5e6d27b-159b-4fb1-98d7-da2ae62fe95b","Type":"ContainerStarted","Data":"a2481984127c3dbaa6616e6ff1421eda52f6177ae333d305966bd48cf9312ba7"} Jan 31 04:00:03 crc kubenswrapper[4827]: I0131 04:00:03.040429 4827 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29497200-khhqw" Jan 31 04:00:03 crc kubenswrapper[4827]: I0131 04:00:03.138229 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/a5e6d27b-159b-4fb1-98d7-da2ae62fe95b-secret-volume\") pod \"a5e6d27b-159b-4fb1-98d7-da2ae62fe95b\" (UID: \"a5e6d27b-159b-4fb1-98d7-da2ae62fe95b\") " Jan 31 04:00:03 crc kubenswrapper[4827]: I0131 04:00:03.138335 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a5e6d27b-159b-4fb1-98d7-da2ae62fe95b-config-volume\") pod \"a5e6d27b-159b-4fb1-98d7-da2ae62fe95b\" (UID: \"a5e6d27b-159b-4fb1-98d7-da2ae62fe95b\") " Jan 31 04:00:03 crc kubenswrapper[4827]: I0131 04:00:03.138379 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-blgf4\" (UniqueName: \"kubernetes.io/projected/a5e6d27b-159b-4fb1-98d7-da2ae62fe95b-kube-api-access-blgf4\") pod \"a5e6d27b-159b-4fb1-98d7-da2ae62fe95b\" (UID: \"a5e6d27b-159b-4fb1-98d7-da2ae62fe95b\") " Jan 31 04:00:03 crc kubenswrapper[4827]: I0131 04:00:03.139570 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a5e6d27b-159b-4fb1-98d7-da2ae62fe95b-config-volume" (OuterVolumeSpecName: "config-volume") pod "a5e6d27b-159b-4fb1-98d7-da2ae62fe95b" (UID: "a5e6d27b-159b-4fb1-98d7-da2ae62fe95b"). InnerVolumeSpecName "config-volume". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 04:00:03 crc kubenswrapper[4827]: I0131 04:00:03.140140 4827 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a5e6d27b-159b-4fb1-98d7-da2ae62fe95b-config-volume\") on node \"crc\" DevicePath \"\"" Jan 31 04:00:03 crc kubenswrapper[4827]: I0131 04:00:03.145176 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a5e6d27b-159b-4fb1-98d7-da2ae62fe95b-kube-api-access-blgf4" (OuterVolumeSpecName: "kube-api-access-blgf4") pod "a5e6d27b-159b-4fb1-98d7-da2ae62fe95b" (UID: "a5e6d27b-159b-4fb1-98d7-da2ae62fe95b"). InnerVolumeSpecName "kube-api-access-blgf4". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 04:00:03 crc kubenswrapper[4827]: I0131 04:00:03.146223 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a5e6d27b-159b-4fb1-98d7-da2ae62fe95b-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "a5e6d27b-159b-4fb1-98d7-da2ae62fe95b" (UID: "a5e6d27b-159b-4fb1-98d7-da2ae62fe95b"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 04:00:03 crc kubenswrapper[4827]: I0131 04:00:03.241159 4827 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/a5e6d27b-159b-4fb1-98d7-da2ae62fe95b-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 31 04:00:03 crc kubenswrapper[4827]: I0131 04:00:03.241428 4827 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-blgf4\" (UniqueName: \"kubernetes.io/projected/a5e6d27b-159b-4fb1-98d7-da2ae62fe95b-kube-api-access-blgf4\") on node \"crc\" DevicePath \"\"" Jan 31 04:00:03 crc kubenswrapper[4827]: I0131 04:00:03.742079 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29497200-khhqw" event={"ID":"a5e6d27b-159b-4fb1-98d7-da2ae62fe95b","Type":"ContainerDied","Data":"a2481984127c3dbaa6616e6ff1421eda52f6177ae333d305966bd48cf9312ba7"} Jan 31 04:00:03 crc kubenswrapper[4827]: I0131 04:00:03.742133 4827 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a2481984127c3dbaa6616e6ff1421eda52f6177ae333d305966bd48cf9312ba7" Jan 31 04:00:03 crc kubenswrapper[4827]: I0131 04:00:03.742203 4827 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29497200-khhqw" Jan 31 04:00:05 crc kubenswrapper[4827]: I0131 04:00:05.927007 4827 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/metallb-operator-webhook-server-6997fd6b6c-rxw9p" Jan 31 04:00:35 crc kubenswrapper[4827]: I0131 04:00:35.674354 4827 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/metallb-operator-controller-manager-5dfffc88b-rknwp" Jan 31 04:00:36 crc kubenswrapper[4827]: I0131 04:00:36.404919 4827 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/frr-k8s-nd7t7"] Jan 31 04:00:36 crc kubenswrapper[4827]: E0131 04:00:36.405262 4827 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a5e6d27b-159b-4fb1-98d7-da2ae62fe95b" containerName="collect-profiles" Jan 31 04:00:36 crc kubenswrapper[4827]: I0131 04:00:36.405286 4827 state_mem.go:107] "Deleted CPUSet assignment" podUID="a5e6d27b-159b-4fb1-98d7-da2ae62fe95b" containerName="collect-profiles" Jan 31 04:00:36 crc kubenswrapper[4827]: I0131 04:00:36.405467 4827 memory_manager.go:354] "RemoveStaleState removing state" podUID="a5e6d27b-159b-4fb1-98d7-da2ae62fe95b" containerName="collect-profiles" Jan 31 04:00:36 crc kubenswrapper[4827]: I0131 04:00:36.408063 4827 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/frr-k8s-nd7t7" Jan 31 04:00:36 crc kubenswrapper[4827]: I0131 04:00:36.410257 4827 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"frr-startup" Jan 31 04:00:36 crc kubenswrapper[4827]: I0131 04:00:36.410429 4827 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-daemon-dockercfg-jcx59" Jan 31 04:00:36 crc kubenswrapper[4827]: I0131 04:00:36.410514 4827 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-certs-secret" Jan 31 04:00:36 crc kubenswrapper[4827]: I0131 04:00:36.422507 4827 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/frr-k8s-webhook-server-7df86c4f6c-4q2p4"] Jan 31 04:00:36 crc kubenswrapper[4827]: I0131 04:00:36.423387 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-4q2p4" Jan 31 04:00:36 crc kubenswrapper[4827]: I0131 04:00:36.426583 4827 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-webhook-server-cert" Jan 31 04:00:36 crc kubenswrapper[4827]: I0131 04:00:36.435877 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/frr-k8s-webhook-server-7df86c4f6c-4q2p4"] Jan 31 04:00:36 crc kubenswrapper[4827]: I0131 04:00:36.472177 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/ac6685a1-0994-4fb9-afe1-3454c8525094-metrics\") pod \"frr-k8s-nd7t7\" (UID: \"ac6685a1-0994-4fb9-afe1-3454c8525094\") " pod="metallb-system/frr-k8s-nd7t7" Jan 31 04:00:36 crc kubenswrapper[4827]: I0131 04:00:36.472228 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2zgwb\" (UniqueName: \"kubernetes.io/projected/aa13a755-e11c-471f-9318-7f0b54e8889e-kube-api-access-2zgwb\") pod 
\"frr-k8s-webhook-server-7df86c4f6c-4q2p4\" (UID: \"aa13a755-e11c-471f-9318-7f0b54e8889e\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-4q2p4" Jan 31 04:00:36 crc kubenswrapper[4827]: I0131 04:00:36.472256 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/ac6685a1-0994-4fb9-afe1-3454c8525094-metrics-certs\") pod \"frr-k8s-nd7t7\" (UID: \"ac6685a1-0994-4fb9-afe1-3454c8525094\") " pod="metallb-system/frr-k8s-nd7t7" Jan 31 04:00:36 crc kubenswrapper[4827]: I0131 04:00:36.472273 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/ac6685a1-0994-4fb9-afe1-3454c8525094-reloader\") pod \"frr-k8s-nd7t7\" (UID: \"ac6685a1-0994-4fb9-afe1-3454c8525094\") " pod="metallb-system/frr-k8s-nd7t7" Jan 31 04:00:36 crc kubenswrapper[4827]: I0131 04:00:36.472294 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/ac6685a1-0994-4fb9-afe1-3454c8525094-frr-conf\") pod \"frr-k8s-nd7t7\" (UID: \"ac6685a1-0994-4fb9-afe1-3454c8525094\") " pod="metallb-system/frr-k8s-nd7t7" Jan 31 04:00:36 crc kubenswrapper[4827]: I0131 04:00:36.472318 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/aa13a755-e11c-471f-9318-7f0b54e8889e-cert\") pod \"frr-k8s-webhook-server-7df86c4f6c-4q2p4\" (UID: \"aa13a755-e11c-471f-9318-7f0b54e8889e\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-4q2p4" Jan 31 04:00:36 crc kubenswrapper[4827]: I0131 04:00:36.472338 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r7g8j\" (UniqueName: \"kubernetes.io/projected/ac6685a1-0994-4fb9-afe1-3454c8525094-kube-api-access-r7g8j\") pod \"frr-k8s-nd7t7\" 
(UID: \"ac6685a1-0994-4fb9-afe1-3454c8525094\") " pod="metallb-system/frr-k8s-nd7t7" Jan 31 04:00:36 crc kubenswrapper[4827]: I0131 04:00:36.472359 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/ac6685a1-0994-4fb9-afe1-3454c8525094-frr-startup\") pod \"frr-k8s-nd7t7\" (UID: \"ac6685a1-0994-4fb9-afe1-3454c8525094\") " pod="metallb-system/frr-k8s-nd7t7" Jan 31 04:00:36 crc kubenswrapper[4827]: I0131 04:00:36.472376 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/ac6685a1-0994-4fb9-afe1-3454c8525094-frr-sockets\") pod \"frr-k8s-nd7t7\" (UID: \"ac6685a1-0994-4fb9-afe1-3454c8525094\") " pod="metallb-system/frr-k8s-nd7t7" Jan 31 04:00:36 crc kubenswrapper[4827]: I0131 04:00:36.503050 4827 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/speaker-tp4jl"] Jan 31 04:00:36 crc kubenswrapper[4827]: I0131 04:00:36.504371 4827 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/speaker-tp4jl" Jan 31 04:00:36 crc kubenswrapper[4827]: I0131 04:00:36.506474 4827 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"speaker-dockercfg-qcvf7" Jan 31 04:00:36 crc kubenswrapper[4827]: I0131 04:00:36.506838 4827 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"speaker-certs-secret" Jan 31 04:00:36 crc kubenswrapper[4827]: I0131 04:00:36.507131 4827 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-memberlist" Jan 31 04:00:36 crc kubenswrapper[4827]: I0131 04:00:36.507233 4827 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"metallb-excludel2" Jan 31 04:00:36 crc kubenswrapper[4827]: I0131 04:00:36.529559 4827 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/controller-6968d8fdc4-lxksh"] Jan 31 04:00:36 crc kubenswrapper[4827]: I0131 04:00:36.530415 4827 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/controller-6968d8fdc4-lxksh" Jan 31 04:00:36 crc kubenswrapper[4827]: I0131 04:00:36.532072 4827 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"controller-certs-secret" Jan 31 04:00:36 crc kubenswrapper[4827]: I0131 04:00:36.551434 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/controller-6968d8fdc4-lxksh"] Jan 31 04:00:36 crc kubenswrapper[4827]: I0131 04:00:36.575911 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/44cea0f8-c757-4c9e-bd44-210bed605301-metrics-certs\") pod \"speaker-tp4jl\" (UID: \"44cea0f8-c757-4c9e-bd44-210bed605301\") " pod="metallb-system/speaker-tp4jl" Jan 31 04:00:36 crc kubenswrapper[4827]: I0131 04:00:36.576224 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2zgwb\" (UniqueName: \"kubernetes.io/projected/aa13a755-e11c-471f-9318-7f0b54e8889e-kube-api-access-2zgwb\") pod \"frr-k8s-webhook-server-7df86c4f6c-4q2p4\" (UID: \"aa13a755-e11c-471f-9318-7f0b54e8889e\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-4q2p4" Jan 31 04:00:36 crc kubenswrapper[4827]: I0131 04:00:36.576444 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/ac6685a1-0994-4fb9-afe1-3454c8525094-metrics-certs\") pod \"frr-k8s-nd7t7\" (UID: \"ac6685a1-0994-4fb9-afe1-3454c8525094\") " pod="metallb-system/frr-k8s-nd7t7" Jan 31 04:00:36 crc kubenswrapper[4827]: I0131 04:00:36.576532 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/ac6685a1-0994-4fb9-afe1-3454c8525094-reloader\") pod \"frr-k8s-nd7t7\" (UID: \"ac6685a1-0994-4fb9-afe1-3454c8525094\") " pod="metallb-system/frr-k8s-nd7t7" Jan 31 04:00:36 crc kubenswrapper[4827]: I0131 04:00:36.576634 
4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/44cea0f8-c757-4c9e-bd44-210bed605301-metallb-excludel2\") pod \"speaker-tp4jl\" (UID: \"44cea0f8-c757-4c9e-bd44-210bed605301\") " pod="metallb-system/speaker-tp4jl" Jan 31 04:00:36 crc kubenswrapper[4827]: I0131 04:00:36.576713 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/ac6685a1-0994-4fb9-afe1-3454c8525094-frr-conf\") pod \"frr-k8s-nd7t7\" (UID: \"ac6685a1-0994-4fb9-afe1-3454c8525094\") " pod="metallb-system/frr-k8s-nd7t7" Jan 31 04:00:36 crc kubenswrapper[4827]: I0131 04:00:36.576793 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/44cea0f8-c757-4c9e-bd44-210bed605301-memberlist\") pod \"speaker-tp4jl\" (UID: \"44cea0f8-c757-4c9e-bd44-210bed605301\") " pod="metallb-system/speaker-tp4jl" Jan 31 04:00:36 crc kubenswrapper[4827]: I0131 04:00:36.576873 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/aa13a755-e11c-471f-9318-7f0b54e8889e-cert\") pod \"frr-k8s-webhook-server-7df86c4f6c-4q2p4\" (UID: \"aa13a755-e11c-471f-9318-7f0b54e8889e\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-4q2p4" Jan 31 04:00:36 crc kubenswrapper[4827]: I0131 04:00:36.576972 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r7g8j\" (UniqueName: \"kubernetes.io/projected/ac6685a1-0994-4fb9-afe1-3454c8525094-kube-api-access-r7g8j\") pod \"frr-k8s-nd7t7\" (UID: \"ac6685a1-0994-4fb9-afe1-3454c8525094\") " pod="metallb-system/frr-k8s-nd7t7" Jan 31 04:00:36 crc kubenswrapper[4827]: I0131 04:00:36.577025 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"reloader\" (UniqueName: 
\"kubernetes.io/empty-dir/ac6685a1-0994-4fb9-afe1-3454c8525094-reloader\") pod \"frr-k8s-nd7t7\" (UID: \"ac6685a1-0994-4fb9-afe1-3454c8525094\") " pod="metallb-system/frr-k8s-nd7t7" Jan 31 04:00:36 crc kubenswrapper[4827]: I0131 04:00:36.577098 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rkrb8\" (UniqueName: \"kubernetes.io/projected/44cea0f8-c757-4c9e-bd44-210bed605301-kube-api-access-rkrb8\") pod \"speaker-tp4jl\" (UID: \"44cea0f8-c757-4c9e-bd44-210bed605301\") " pod="metallb-system/speaker-tp4jl" Jan 31 04:00:36 crc kubenswrapper[4827]: I0131 04:00:36.577177 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/ac6685a1-0994-4fb9-afe1-3454c8525094-frr-startup\") pod \"frr-k8s-nd7t7\" (UID: \"ac6685a1-0994-4fb9-afe1-3454c8525094\") " pod="metallb-system/frr-k8s-nd7t7" Jan 31 04:00:36 crc kubenswrapper[4827]: I0131 04:00:36.577251 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/ac6685a1-0994-4fb9-afe1-3454c8525094-frr-sockets\") pod \"frr-k8s-nd7t7\" (UID: \"ac6685a1-0994-4fb9-afe1-3454c8525094\") " pod="metallb-system/frr-k8s-nd7t7" Jan 31 04:00:36 crc kubenswrapper[4827]: I0131 04:00:36.577334 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/3aa38fee-8a56-42e4-9921-52dfdc3550c0-cert\") pod \"controller-6968d8fdc4-lxksh\" (UID: \"3aa38fee-8a56-42e4-9921-52dfdc3550c0\") " pod="metallb-system/controller-6968d8fdc4-lxksh" Jan 31 04:00:36 crc kubenswrapper[4827]: I0131 04:00:36.577412 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/3aa38fee-8a56-42e4-9921-52dfdc3550c0-metrics-certs\") pod \"controller-6968d8fdc4-lxksh\" (UID: 
\"3aa38fee-8a56-42e4-9921-52dfdc3550c0\") " pod="metallb-system/controller-6968d8fdc4-lxksh" Jan 31 04:00:36 crc kubenswrapper[4827]: I0131 04:00:36.577224 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/ac6685a1-0994-4fb9-afe1-3454c8525094-frr-conf\") pod \"frr-k8s-nd7t7\" (UID: \"ac6685a1-0994-4fb9-afe1-3454c8525094\") " pod="metallb-system/frr-k8s-nd7t7" Jan 31 04:00:36 crc kubenswrapper[4827]: I0131 04:00:36.577601 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/ac6685a1-0994-4fb9-afe1-3454c8525094-metrics\") pod \"frr-k8s-nd7t7\" (UID: \"ac6685a1-0994-4fb9-afe1-3454c8525094\") " pod="metallb-system/frr-k8s-nd7t7" Jan 31 04:00:36 crc kubenswrapper[4827]: I0131 04:00:36.577711 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w74k8\" (UniqueName: \"kubernetes.io/projected/3aa38fee-8a56-42e4-9921-52dfdc3550c0-kube-api-access-w74k8\") pod \"controller-6968d8fdc4-lxksh\" (UID: \"3aa38fee-8a56-42e4-9921-52dfdc3550c0\") " pod="metallb-system/controller-6968d8fdc4-lxksh" Jan 31 04:00:36 crc kubenswrapper[4827]: I0131 04:00:36.577777 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/ac6685a1-0994-4fb9-afe1-3454c8525094-frr-sockets\") pod \"frr-k8s-nd7t7\" (UID: \"ac6685a1-0994-4fb9-afe1-3454c8525094\") " pod="metallb-system/frr-k8s-nd7t7" Jan 31 04:00:36 crc kubenswrapper[4827]: I0131 04:00:36.577892 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/ac6685a1-0994-4fb9-afe1-3454c8525094-metrics\") pod \"frr-k8s-nd7t7\" (UID: \"ac6685a1-0994-4fb9-afe1-3454c8525094\") " pod="metallb-system/frr-k8s-nd7t7" Jan 31 04:00:36 crc kubenswrapper[4827]: I0131 04:00:36.578185 4827 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/ac6685a1-0994-4fb9-afe1-3454c8525094-frr-startup\") pod \"frr-k8s-nd7t7\" (UID: \"ac6685a1-0994-4fb9-afe1-3454c8525094\") " pod="metallb-system/frr-k8s-nd7t7" Jan 31 04:00:36 crc kubenswrapper[4827]: I0131 04:00:36.584624 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/ac6685a1-0994-4fb9-afe1-3454c8525094-metrics-certs\") pod \"frr-k8s-nd7t7\" (UID: \"ac6685a1-0994-4fb9-afe1-3454c8525094\") " pod="metallb-system/frr-k8s-nd7t7" Jan 31 04:00:36 crc kubenswrapper[4827]: I0131 04:00:36.584683 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/aa13a755-e11c-471f-9318-7f0b54e8889e-cert\") pod \"frr-k8s-webhook-server-7df86c4f6c-4q2p4\" (UID: \"aa13a755-e11c-471f-9318-7f0b54e8889e\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-4q2p4" Jan 31 04:00:36 crc kubenswrapper[4827]: I0131 04:00:36.593228 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r7g8j\" (UniqueName: \"kubernetes.io/projected/ac6685a1-0994-4fb9-afe1-3454c8525094-kube-api-access-r7g8j\") pod \"frr-k8s-nd7t7\" (UID: \"ac6685a1-0994-4fb9-afe1-3454c8525094\") " pod="metallb-system/frr-k8s-nd7t7" Jan 31 04:00:36 crc kubenswrapper[4827]: I0131 04:00:36.604482 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2zgwb\" (UniqueName: \"kubernetes.io/projected/aa13a755-e11c-471f-9318-7f0b54e8889e-kube-api-access-2zgwb\") pod \"frr-k8s-webhook-server-7df86c4f6c-4q2p4\" (UID: \"aa13a755-e11c-471f-9318-7f0b54e8889e\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-4q2p4" Jan 31 04:00:36 crc kubenswrapper[4827]: I0131 04:00:36.678995 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: 
\"kubernetes.io/secret/3aa38fee-8a56-42e4-9921-52dfdc3550c0-cert\") pod \"controller-6968d8fdc4-lxksh\" (UID: \"3aa38fee-8a56-42e4-9921-52dfdc3550c0\") " pod="metallb-system/controller-6968d8fdc4-lxksh" Jan 31 04:00:36 crc kubenswrapper[4827]: I0131 04:00:36.679139 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/3aa38fee-8a56-42e4-9921-52dfdc3550c0-metrics-certs\") pod \"controller-6968d8fdc4-lxksh\" (UID: \"3aa38fee-8a56-42e4-9921-52dfdc3550c0\") " pod="metallb-system/controller-6968d8fdc4-lxksh" Jan 31 04:00:36 crc kubenswrapper[4827]: I0131 04:00:36.679176 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w74k8\" (UniqueName: \"kubernetes.io/projected/3aa38fee-8a56-42e4-9921-52dfdc3550c0-kube-api-access-w74k8\") pod \"controller-6968d8fdc4-lxksh\" (UID: \"3aa38fee-8a56-42e4-9921-52dfdc3550c0\") " pod="metallb-system/controller-6968d8fdc4-lxksh" Jan 31 04:00:36 crc kubenswrapper[4827]: I0131 04:00:36.679193 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/44cea0f8-c757-4c9e-bd44-210bed605301-metrics-certs\") pod \"speaker-tp4jl\" (UID: \"44cea0f8-c757-4c9e-bd44-210bed605301\") " pod="metallb-system/speaker-tp4jl" Jan 31 04:00:36 crc kubenswrapper[4827]: I0131 04:00:36.679218 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/44cea0f8-c757-4c9e-bd44-210bed605301-metallb-excludel2\") pod \"speaker-tp4jl\" (UID: \"44cea0f8-c757-4c9e-bd44-210bed605301\") " pod="metallb-system/speaker-tp4jl" Jan 31 04:00:36 crc kubenswrapper[4827]: I0131 04:00:36.679999 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/44cea0f8-c757-4c9e-bd44-210bed605301-memberlist\") pod \"speaker-tp4jl\" (UID: 
\"44cea0f8-c757-4c9e-bd44-210bed605301\") " pod="metallb-system/speaker-tp4jl" Jan 31 04:00:36 crc kubenswrapper[4827]: I0131 04:00:36.680030 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rkrb8\" (UniqueName: \"kubernetes.io/projected/44cea0f8-c757-4c9e-bd44-210bed605301-kube-api-access-rkrb8\") pod \"speaker-tp4jl\" (UID: \"44cea0f8-c757-4c9e-bd44-210bed605301\") " pod="metallb-system/speaker-tp4jl" Jan 31 04:00:36 crc kubenswrapper[4827]: I0131 04:00:36.680337 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/44cea0f8-c757-4c9e-bd44-210bed605301-metallb-excludel2\") pod \"speaker-tp4jl\" (UID: \"44cea0f8-c757-4c9e-bd44-210bed605301\") " pod="metallb-system/speaker-tp4jl" Jan 31 04:00:36 crc kubenswrapper[4827]: E0131 04:00:36.680364 4827 secret.go:188] Couldn't get secret metallb-system/metallb-memberlist: secret "metallb-memberlist" not found Jan 31 04:00:36 crc kubenswrapper[4827]: E0131 04:00:36.680487 4827 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/44cea0f8-c757-4c9e-bd44-210bed605301-memberlist podName:44cea0f8-c757-4c9e-bd44-210bed605301 nodeName:}" failed. No retries permitted until 2026-01-31 04:00:37.180463266 +0000 UTC m=+829.867543715 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "memberlist" (UniqueName: "kubernetes.io/secret/44cea0f8-c757-4c9e-bd44-210bed605301-memberlist") pod "speaker-tp4jl" (UID: "44cea0f8-c757-4c9e-bd44-210bed605301") : secret "metallb-memberlist" not found Jan 31 04:00:36 crc kubenswrapper[4827]: I0131 04:00:36.680707 4827 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-webhook-cert" Jan 31 04:00:36 crc kubenswrapper[4827]: I0131 04:00:36.682144 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/44cea0f8-c757-4c9e-bd44-210bed605301-metrics-certs\") pod \"speaker-tp4jl\" (UID: \"44cea0f8-c757-4c9e-bd44-210bed605301\") " pod="metallb-system/speaker-tp4jl" Jan 31 04:00:36 crc kubenswrapper[4827]: I0131 04:00:36.682591 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/3aa38fee-8a56-42e4-9921-52dfdc3550c0-metrics-certs\") pod \"controller-6968d8fdc4-lxksh\" (UID: \"3aa38fee-8a56-42e4-9921-52dfdc3550c0\") " pod="metallb-system/controller-6968d8fdc4-lxksh" Jan 31 04:00:36 crc kubenswrapper[4827]: I0131 04:00:36.694503 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/3aa38fee-8a56-42e4-9921-52dfdc3550c0-cert\") pod \"controller-6968d8fdc4-lxksh\" (UID: \"3aa38fee-8a56-42e4-9921-52dfdc3550c0\") " pod="metallb-system/controller-6968d8fdc4-lxksh" Jan 31 04:00:36 crc kubenswrapper[4827]: I0131 04:00:36.694751 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w74k8\" (UniqueName: \"kubernetes.io/projected/3aa38fee-8a56-42e4-9921-52dfdc3550c0-kube-api-access-w74k8\") pod \"controller-6968d8fdc4-lxksh\" (UID: \"3aa38fee-8a56-42e4-9921-52dfdc3550c0\") " pod="metallb-system/controller-6968d8fdc4-lxksh" Jan 31 04:00:36 crc kubenswrapper[4827]: I0131 04:00:36.700411 4827 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rkrb8\" (UniqueName: \"kubernetes.io/projected/44cea0f8-c757-4c9e-bd44-210bed605301-kube-api-access-rkrb8\") pod \"speaker-tp4jl\" (UID: \"44cea0f8-c757-4c9e-bd44-210bed605301\") " pod="metallb-system/speaker-tp4jl" Jan 31 04:00:36 crc kubenswrapper[4827]: I0131 04:00:36.727552 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-nd7t7" Jan 31 04:00:36 crc kubenswrapper[4827]: I0131 04:00:36.742058 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-4q2p4" Jan 31 04:00:36 crc kubenswrapper[4827]: I0131 04:00:36.842865 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/controller-6968d8fdc4-lxksh" Jan 31 04:00:36 crc kubenswrapper[4827]: I0131 04:00:36.960466 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-nd7t7" event={"ID":"ac6685a1-0994-4fb9-afe1-3454c8525094","Type":"ContainerStarted","Data":"48ed61e165df0f3abd8927f5e45c94883aa4ab18dd7196a3268fd9cff9bd6253"} Jan 31 04:00:37 crc kubenswrapper[4827]: I0131 04:00:37.004716 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/frr-k8s-webhook-server-7df86c4f6c-4q2p4"] Jan 31 04:00:37 crc kubenswrapper[4827]: W0131 04:00:37.011471 4827 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podaa13a755_e11c_471f_9318_7f0b54e8889e.slice/crio-877441bfff939d929a94a6f7d20237e562b95f68d7b9c26cc074ccbc7f98d353 WatchSource:0}: Error finding container 877441bfff939d929a94a6f7d20237e562b95f68d7b9c26cc074ccbc7f98d353: Status 404 returned error can't find the container with id 877441bfff939d929a94a6f7d20237e562b95f68d7b9c26cc074ccbc7f98d353 Jan 31 04:00:37 crc kubenswrapper[4827]: I0131 04:00:37.059421 4827 kubelet.go:2428] "SyncLoop UPDATE" 
source="api" pods=["metallb-system/controller-6968d8fdc4-lxksh"] Jan 31 04:00:37 crc kubenswrapper[4827]: W0131 04:00:37.062066 4827 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3aa38fee_8a56_42e4_9921_52dfdc3550c0.slice/crio-61a2b2a33162082a97268da9a962e227444ba5446771c1aa3df8c10e6f6ef751 WatchSource:0}: Error finding container 61a2b2a33162082a97268da9a962e227444ba5446771c1aa3df8c10e6f6ef751: Status 404 returned error can't find the container with id 61a2b2a33162082a97268da9a962e227444ba5446771c1aa3df8c10e6f6ef751 Jan 31 04:00:37 crc kubenswrapper[4827]: I0131 04:00:37.189396 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/44cea0f8-c757-4c9e-bd44-210bed605301-memberlist\") pod \"speaker-tp4jl\" (UID: \"44cea0f8-c757-4c9e-bd44-210bed605301\") " pod="metallb-system/speaker-tp4jl" Jan 31 04:00:37 crc kubenswrapper[4827]: E0131 04:00:37.189528 4827 secret.go:188] Couldn't get secret metallb-system/metallb-memberlist: secret "metallb-memberlist" not found Jan 31 04:00:37 crc kubenswrapper[4827]: E0131 04:00:37.189566 4827 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/44cea0f8-c757-4c9e-bd44-210bed605301-memberlist podName:44cea0f8-c757-4c9e-bd44-210bed605301 nodeName:}" failed. No retries permitted until 2026-01-31 04:00:38.189554017 +0000 UTC m=+830.876634466 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "memberlist" (UniqueName: "kubernetes.io/secret/44cea0f8-c757-4c9e-bd44-210bed605301-memberlist") pod "speaker-tp4jl" (UID: "44cea0f8-c757-4c9e-bd44-210bed605301") : secret "metallb-memberlist" not found Jan 31 04:00:37 crc kubenswrapper[4827]: I0131 04:00:37.968111 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-4q2p4" event={"ID":"aa13a755-e11c-471f-9318-7f0b54e8889e","Type":"ContainerStarted","Data":"877441bfff939d929a94a6f7d20237e562b95f68d7b9c26cc074ccbc7f98d353"} Jan 31 04:00:37 crc kubenswrapper[4827]: I0131 04:00:37.970278 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-6968d8fdc4-lxksh" event={"ID":"3aa38fee-8a56-42e4-9921-52dfdc3550c0","Type":"ContainerStarted","Data":"abf791e9a2f41d2cd55f186d6a91d7f52c629510d1946258a18dc41ddb781754"} Jan 31 04:00:37 crc kubenswrapper[4827]: I0131 04:00:37.970332 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-6968d8fdc4-lxksh" event={"ID":"3aa38fee-8a56-42e4-9921-52dfdc3550c0","Type":"ContainerStarted","Data":"01ce5b18a95be17d3b05ad95cb5911c40036e999633a29a0f7d769f86e668986"} Jan 31 04:00:37 crc kubenswrapper[4827]: I0131 04:00:37.970351 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-6968d8fdc4-lxksh" event={"ID":"3aa38fee-8a56-42e4-9921-52dfdc3550c0","Type":"ContainerStarted","Data":"61a2b2a33162082a97268da9a962e227444ba5446771c1aa3df8c10e6f6ef751"} Jan 31 04:00:37 crc kubenswrapper[4827]: I0131 04:00:37.970471 4827 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/controller-6968d8fdc4-lxksh" Jan 31 04:00:37 crc kubenswrapper[4827]: I0131 04:00:37.988958 4827 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/controller-6968d8fdc4-lxksh" podStartSLOduration=1.988941186 podStartE2EDuration="1.988941186s" 
podCreationTimestamp="2026-01-31 04:00:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 04:00:37.98735803 +0000 UTC m=+830.674438489" watchObservedRunningTime="2026-01-31 04:00:37.988941186 +0000 UTC m=+830.676021635" Jan 31 04:00:38 crc kubenswrapper[4827]: I0131 04:00:38.206144 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/44cea0f8-c757-4c9e-bd44-210bed605301-memberlist\") pod \"speaker-tp4jl\" (UID: \"44cea0f8-c757-4c9e-bd44-210bed605301\") " pod="metallb-system/speaker-tp4jl" Jan 31 04:00:38 crc kubenswrapper[4827]: I0131 04:00:38.215781 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/44cea0f8-c757-4c9e-bd44-210bed605301-memberlist\") pod \"speaker-tp4jl\" (UID: \"44cea0f8-c757-4c9e-bd44-210bed605301\") " pod="metallb-system/speaker-tp4jl" Jan 31 04:00:38 crc kubenswrapper[4827]: I0131 04:00:38.319737 4827 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/speaker-tp4jl" Jan 31 04:00:38 crc kubenswrapper[4827]: W0131 04:00:38.338955 4827 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod44cea0f8_c757_4c9e_bd44_210bed605301.slice/crio-7e26e6c2837135fc57ed339f524e936338d0a98b3488758c4e55005eae57f602 WatchSource:0}: Error finding container 7e26e6c2837135fc57ed339f524e936338d0a98b3488758c4e55005eae57f602: Status 404 returned error can't find the container with id 7e26e6c2837135fc57ed339f524e936338d0a98b3488758c4e55005eae57f602 Jan 31 04:00:38 crc kubenswrapper[4827]: I0131 04:00:38.990697 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-tp4jl" event={"ID":"44cea0f8-c757-4c9e-bd44-210bed605301","Type":"ContainerStarted","Data":"56c8f7454ac664859b991efe9fbed26591955a097cc90a5c3d46235bc34d0b6b"} Jan 31 04:00:38 crc kubenswrapper[4827]: I0131 04:00:38.991041 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-tp4jl" event={"ID":"44cea0f8-c757-4c9e-bd44-210bed605301","Type":"ContainerStarted","Data":"786ee64b6374e8ab2040186684d56f9f7e8cb8c34141046c4f1ed1886aa2f6cb"} Jan 31 04:00:38 crc kubenswrapper[4827]: I0131 04:00:38.991053 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-tp4jl" event={"ID":"44cea0f8-c757-4c9e-bd44-210bed605301","Type":"ContainerStarted","Data":"7e26e6c2837135fc57ed339f524e936338d0a98b3488758c4e55005eae57f602"} Jan 31 04:00:38 crc kubenswrapper[4827]: I0131 04:00:38.991767 4827 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/speaker-tp4jl" Jan 31 04:00:45 crc kubenswrapper[4827]: I0131 04:00:45.036239 4827 generic.go:334] "Generic (PLEG): container finished" podID="ac6685a1-0994-4fb9-afe1-3454c8525094" containerID="99cf0d917cf15c338548bbb6c167e9471855e8f7263ad7ec3d8e2e3dbb9f8c27" exitCode=0 Jan 31 04:00:45 crc kubenswrapper[4827]: I0131 04:00:45.036805 
4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-nd7t7" event={"ID":"ac6685a1-0994-4fb9-afe1-3454c8525094","Type":"ContainerDied","Data":"99cf0d917cf15c338548bbb6c167e9471855e8f7263ad7ec3d8e2e3dbb9f8c27"} Jan 31 04:00:45 crc kubenswrapper[4827]: I0131 04:00:45.041030 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-4q2p4" event={"ID":"aa13a755-e11c-471f-9318-7f0b54e8889e","Type":"ContainerStarted","Data":"ef91b46613b5e0fd78e33e0b1420afe829e44c43a25470d06ae35c79d5fa9312"} Jan 31 04:00:45 crc kubenswrapper[4827]: I0131 04:00:45.041217 4827 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-4q2p4" Jan 31 04:00:45 crc kubenswrapper[4827]: I0131 04:00:45.073260 4827 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/speaker-tp4jl" podStartSLOduration=9.073244999 podStartE2EDuration="9.073244999s" podCreationTimestamp="2026-01-31 04:00:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 04:00:39.016106246 +0000 UTC m=+831.703186695" watchObservedRunningTime="2026-01-31 04:00:45.073244999 +0000 UTC m=+837.760325448" Jan 31 04:00:45 crc kubenswrapper[4827]: I0131 04:00:45.091355 4827 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-4q2p4" podStartSLOduration=1.721074104 podStartE2EDuration="9.091327496s" podCreationTimestamp="2026-01-31 04:00:36 +0000 UTC" firstStartedPulling="2026-01-31 04:00:37.014797792 +0000 UTC m=+829.701878241" lastFinishedPulling="2026-01-31 04:00:44.385051174 +0000 UTC m=+837.072131633" observedRunningTime="2026-01-31 04:00:45.089399979 +0000 UTC m=+837.776480458" watchObservedRunningTime="2026-01-31 04:00:45.091327496 +0000 UTC m=+837.778407975" Jan 31 04:00:46 crc kubenswrapper[4827]: 
I0131 04:00:46.050897 4827 generic.go:334] "Generic (PLEG): container finished" podID="ac6685a1-0994-4fb9-afe1-3454c8525094" containerID="30d45608d632a320f04fb2187235381a47f3d9a21a7cf316575f302ae04e94e5" exitCode=0 Jan 31 04:00:46 crc kubenswrapper[4827]: I0131 04:00:46.050981 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-nd7t7" event={"ID":"ac6685a1-0994-4fb9-afe1-3454c8525094","Type":"ContainerDied","Data":"30d45608d632a320f04fb2187235381a47f3d9a21a7cf316575f302ae04e94e5"} Jan 31 04:00:47 crc kubenswrapper[4827]: I0131 04:00:47.062538 4827 generic.go:334] "Generic (PLEG): container finished" podID="ac6685a1-0994-4fb9-afe1-3454c8525094" containerID="5c0df87b74cbab697c07b7598a1bc3dc0b59a8100aef920dc5868275d247bd0b" exitCode=0 Jan 31 04:00:47 crc kubenswrapper[4827]: I0131 04:00:47.062598 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-nd7t7" event={"ID":"ac6685a1-0994-4fb9-afe1-3454c8525094","Type":"ContainerDied","Data":"5c0df87b74cbab697c07b7598a1bc3dc0b59a8100aef920dc5868275d247bd0b"} Jan 31 04:00:48 crc kubenswrapper[4827]: I0131 04:00:48.075754 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-nd7t7" event={"ID":"ac6685a1-0994-4fb9-afe1-3454c8525094","Type":"ContainerStarted","Data":"1d938d7b601cc182595769de19047c9a20acab712c1a5827b9f6f2571dc3dc75"} Jan 31 04:00:48 crc kubenswrapper[4827]: I0131 04:00:48.076229 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-nd7t7" event={"ID":"ac6685a1-0994-4fb9-afe1-3454c8525094","Type":"ContainerStarted","Data":"8848c7ba4bd83a9b3216b7955c1b51ea548bccb7af8cf81695d63b941c3b2da8"} Jan 31 04:00:48 crc kubenswrapper[4827]: I0131 04:00:48.076245 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-nd7t7" event={"ID":"ac6685a1-0994-4fb9-afe1-3454c8525094","Type":"ContainerStarted","Data":"dbf79f8fcf45fc0199214dd92d88357f7f875e837d87db1ad2f5b0ac4ffaaed8"} Jan 31 
04:00:48 crc kubenswrapper[4827]: I0131 04:00:48.076257 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-nd7t7" event={"ID":"ac6685a1-0994-4fb9-afe1-3454c8525094","Type":"ContainerStarted","Data":"21cce4f6fef16bca930fc146fae0f0e8b552eea0eed0bee729f4164cf4a7ba8f"} Jan 31 04:00:48 crc kubenswrapper[4827]: I0131 04:00:48.076278 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-nd7t7" event={"ID":"ac6685a1-0994-4fb9-afe1-3454c8525094","Type":"ContainerStarted","Data":"8e5821fa066b29357c0850d68387764a6800dd8895840254bfcca337f9f988ef"} Jan 31 04:00:48 crc kubenswrapper[4827]: I0131 04:00:48.324543 4827 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/speaker-tp4jl" Jan 31 04:00:49 crc kubenswrapper[4827]: I0131 04:00:49.088022 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-nd7t7" event={"ID":"ac6685a1-0994-4fb9-afe1-3454c8525094","Type":"ContainerStarted","Data":"6f8710c119668122bc0428b9c78a334828910407f9a1477327caae24cc3c48ec"} Jan 31 04:00:49 crc kubenswrapper[4827]: I0131 04:00:49.088295 4827 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/frr-k8s-nd7t7" Jan 31 04:00:49 crc kubenswrapper[4827]: I0131 04:00:49.120652 4827 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/frr-k8s-nd7t7" podStartSLOduration=5.641080817 podStartE2EDuration="13.120629897s" podCreationTimestamp="2026-01-31 04:00:36 +0000 UTC" firstStartedPulling="2026-01-31 04:00:36.906516364 +0000 UTC m=+829.593596813" lastFinishedPulling="2026-01-31 04:00:44.386065404 +0000 UTC m=+837.073145893" observedRunningTime="2026-01-31 04:00:49.11903498 +0000 UTC m=+841.806115469" watchObservedRunningTime="2026-01-31 04:00:49.120629897 +0000 UTC m=+841.807710356" Jan 31 04:00:51 crc kubenswrapper[4827]: I0131 04:00:51.302196 4827 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openstack-operators/openstack-operator-index-k2dpj"] Jan 31 04:00:51 crc kubenswrapper[4827]: I0131 04:00:51.303982 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-k2dpj" Jan 31 04:00:51 crc kubenswrapper[4827]: I0131 04:00:51.306278 4827 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack-operators"/"openshift-service-ca.crt" Jan 31 04:00:51 crc kubenswrapper[4827]: I0131 04:00:51.307220 4827 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack-operators"/"kube-root-ca.crt" Jan 31 04:00:51 crc kubenswrapper[4827]: I0131 04:00:51.308005 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-index-dockercfg-86jmt" Jan 31 04:00:51 crc kubenswrapper[4827]: I0131 04:00:51.328635 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-k2dpj"] Jan 31 04:00:51 crc kubenswrapper[4827]: I0131 04:00:51.503179 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wwkfb\" (UniqueName: \"kubernetes.io/projected/02572d7b-fb27-4201-8c5d-f5fabbc94fee-kube-api-access-wwkfb\") pod \"openstack-operator-index-k2dpj\" (UID: \"02572d7b-fb27-4201-8c5d-f5fabbc94fee\") " pod="openstack-operators/openstack-operator-index-k2dpj" Jan 31 04:00:51 crc kubenswrapper[4827]: I0131 04:00:51.605550 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wwkfb\" (UniqueName: \"kubernetes.io/projected/02572d7b-fb27-4201-8c5d-f5fabbc94fee-kube-api-access-wwkfb\") pod \"openstack-operator-index-k2dpj\" (UID: \"02572d7b-fb27-4201-8c5d-f5fabbc94fee\") " pod="openstack-operators/openstack-operator-index-k2dpj" Jan 31 04:00:51 crc kubenswrapper[4827]: I0131 04:00:51.640311 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wwkfb\" 
(UniqueName: \"kubernetes.io/projected/02572d7b-fb27-4201-8c5d-f5fabbc94fee-kube-api-access-wwkfb\") pod \"openstack-operator-index-k2dpj\" (UID: \"02572d7b-fb27-4201-8c5d-f5fabbc94fee\") " pod="openstack-operators/openstack-operator-index-k2dpj" Jan 31 04:00:51 crc kubenswrapper[4827]: I0131 04:00:51.642787 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-k2dpj" Jan 31 04:00:51 crc kubenswrapper[4827]: I0131 04:00:51.728352 4827 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="metallb-system/frr-k8s-nd7t7" Jan 31 04:00:51 crc kubenswrapper[4827]: I0131 04:00:51.797044 4827 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="metallb-system/frr-k8s-nd7t7" Jan 31 04:00:51 crc kubenswrapper[4827]: I0131 04:00:51.912370 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-k2dpj"] Jan 31 04:00:51 crc kubenswrapper[4827]: W0131 04:00:51.918643 4827 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod02572d7b_fb27_4201_8c5d_f5fabbc94fee.slice/crio-c5aab43d9fca63f2ad0c3351da30066d0b72bf3dec3a9584dfac0fd50dba57a0 WatchSource:0}: Error finding container c5aab43d9fca63f2ad0c3351da30066d0b72bf3dec3a9584dfac0fd50dba57a0: Status 404 returned error can't find the container with id c5aab43d9fca63f2ad0c3351da30066d0b72bf3dec3a9584dfac0fd50dba57a0 Jan 31 04:00:52 crc kubenswrapper[4827]: I0131 04:00:52.118248 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-k2dpj" event={"ID":"02572d7b-fb27-4201-8c5d-f5fabbc94fee","Type":"ContainerStarted","Data":"c5aab43d9fca63f2ad0c3351da30066d0b72bf3dec3a9584dfac0fd50dba57a0"} Jan 31 04:00:54 crc kubenswrapper[4827]: I0131 04:00:54.663265 4827 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openstack-operators/openstack-operator-index-k2dpj"] Jan 31 04:00:55 crc kubenswrapper[4827]: I0131 04:00:55.131384 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-k2dpj" event={"ID":"02572d7b-fb27-4201-8c5d-f5fabbc94fee","Type":"ContainerStarted","Data":"7a8b7ccd3372328136dec2c37af3b61bc1b014b89d2818758a85a4de5b31c7eb"} Jan 31 04:00:55 crc kubenswrapper[4827]: I0131 04:00:55.131574 4827 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack-operators/openstack-operator-index-k2dpj" podUID="02572d7b-fb27-4201-8c5d-f5fabbc94fee" containerName="registry-server" containerID="cri-o://7a8b7ccd3372328136dec2c37af3b61bc1b014b89d2818758a85a4de5b31c7eb" gracePeriod=2 Jan 31 04:00:55 crc kubenswrapper[4827]: I0131 04:00:55.152667 4827 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-index-k2dpj" podStartSLOduration=1.167013051 podStartE2EDuration="4.152632856s" podCreationTimestamp="2026-01-31 04:00:51 +0000 UTC" firstStartedPulling="2026-01-31 04:00:51.921156786 +0000 UTC m=+844.608237225" lastFinishedPulling="2026-01-31 04:00:54.906776581 +0000 UTC m=+847.593857030" observedRunningTime="2026-01-31 04:00:55.147965842 +0000 UTC m=+847.835046301" watchObservedRunningTime="2026-01-31 04:00:55.152632856 +0000 UTC m=+847.839713305" Jan 31 04:00:55 crc kubenswrapper[4827]: I0131 04:00:55.272648 4827 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-index-9l7pz"] Jan 31 04:00:55 crc kubenswrapper[4827]: I0131 04:00:55.274740 4827 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-index-9l7pz" Jan 31 04:00:55 crc kubenswrapper[4827]: I0131 04:00:55.285053 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-9l7pz"] Jan 31 04:00:55 crc kubenswrapper[4827]: I0131 04:00:55.468538 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-945j5\" (UniqueName: \"kubernetes.io/projected/2ddccb17-c139-46fa-a62e-efdc15bbab1b-kube-api-access-945j5\") pod \"openstack-operator-index-9l7pz\" (UID: \"2ddccb17-c139-46fa-a62e-efdc15bbab1b\") " pod="openstack-operators/openstack-operator-index-9l7pz" Jan 31 04:00:55 crc kubenswrapper[4827]: I0131 04:00:55.570210 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-945j5\" (UniqueName: \"kubernetes.io/projected/2ddccb17-c139-46fa-a62e-efdc15bbab1b-kube-api-access-945j5\") pod \"openstack-operator-index-9l7pz\" (UID: \"2ddccb17-c139-46fa-a62e-efdc15bbab1b\") " pod="openstack-operators/openstack-operator-index-9l7pz" Jan 31 04:00:55 crc kubenswrapper[4827]: I0131 04:00:55.576264 4827 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-k2dpj" Jan 31 04:00:55 crc kubenswrapper[4827]: I0131 04:00:55.611157 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-945j5\" (UniqueName: \"kubernetes.io/projected/2ddccb17-c139-46fa-a62e-efdc15bbab1b-kube-api-access-945j5\") pod \"openstack-operator-index-9l7pz\" (UID: \"2ddccb17-c139-46fa-a62e-efdc15bbab1b\") " pod="openstack-operators/openstack-operator-index-9l7pz" Jan 31 04:00:55 crc kubenswrapper[4827]: I0131 04:00:55.617914 4827 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-index-9l7pz" Jan 31 04:00:55 crc kubenswrapper[4827]: I0131 04:00:55.671485 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wwkfb\" (UniqueName: \"kubernetes.io/projected/02572d7b-fb27-4201-8c5d-f5fabbc94fee-kube-api-access-wwkfb\") pod \"02572d7b-fb27-4201-8c5d-f5fabbc94fee\" (UID: \"02572d7b-fb27-4201-8c5d-f5fabbc94fee\") " Jan 31 04:00:55 crc kubenswrapper[4827]: I0131 04:00:55.674854 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/02572d7b-fb27-4201-8c5d-f5fabbc94fee-kube-api-access-wwkfb" (OuterVolumeSpecName: "kube-api-access-wwkfb") pod "02572d7b-fb27-4201-8c5d-f5fabbc94fee" (UID: "02572d7b-fb27-4201-8c5d-f5fabbc94fee"). InnerVolumeSpecName "kube-api-access-wwkfb". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 04:00:55 crc kubenswrapper[4827]: I0131 04:00:55.773175 4827 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wwkfb\" (UniqueName: \"kubernetes.io/projected/02572d7b-fb27-4201-8c5d-f5fabbc94fee-kube-api-access-wwkfb\") on node \"crc\" DevicePath \"\"" Jan 31 04:00:55 crc kubenswrapper[4827]: I0131 04:00:55.840786 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-9l7pz"] Jan 31 04:00:55 crc kubenswrapper[4827]: W0131 04:00:55.846605 4827 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2ddccb17_c139_46fa_a62e_efdc15bbab1b.slice/crio-fe1e571323ea0b7f1bd5e6ff1640485469ebe85737b8943085ce4496763d4334 WatchSource:0}: Error finding container fe1e571323ea0b7f1bd5e6ff1640485469ebe85737b8943085ce4496763d4334: Status 404 returned error can't find the container with id fe1e571323ea0b7f1bd5e6ff1640485469ebe85737b8943085ce4496763d4334 Jan 31 04:00:56 crc kubenswrapper[4827]: I0131 04:00:56.139037 4827 generic.go:334] 
"Generic (PLEG): container finished" podID="02572d7b-fb27-4201-8c5d-f5fabbc94fee" containerID="7a8b7ccd3372328136dec2c37af3b61bc1b014b89d2818758a85a4de5b31c7eb" exitCode=0 Jan 31 04:00:56 crc kubenswrapper[4827]: I0131 04:00:56.139154 4827 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-k2dpj" Jan 31 04:00:56 crc kubenswrapper[4827]: I0131 04:00:56.139145 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-k2dpj" event={"ID":"02572d7b-fb27-4201-8c5d-f5fabbc94fee","Type":"ContainerDied","Data":"7a8b7ccd3372328136dec2c37af3b61bc1b014b89d2818758a85a4de5b31c7eb"} Jan 31 04:00:56 crc kubenswrapper[4827]: I0131 04:00:56.139290 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-k2dpj" event={"ID":"02572d7b-fb27-4201-8c5d-f5fabbc94fee","Type":"ContainerDied","Data":"c5aab43d9fca63f2ad0c3351da30066d0b72bf3dec3a9584dfac0fd50dba57a0"} Jan 31 04:00:56 crc kubenswrapper[4827]: I0131 04:00:56.139311 4827 scope.go:117] "RemoveContainer" containerID="7a8b7ccd3372328136dec2c37af3b61bc1b014b89d2818758a85a4de5b31c7eb" Jan 31 04:00:56 crc kubenswrapper[4827]: I0131 04:00:56.143712 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-9l7pz" event={"ID":"2ddccb17-c139-46fa-a62e-efdc15bbab1b","Type":"ContainerStarted","Data":"9133de30a824c12889f6411297f133c40d25081a5e97bc8bd8efd10a663222ab"} Jan 31 04:00:56 crc kubenswrapper[4827]: I0131 04:00:56.143747 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-9l7pz" event={"ID":"2ddccb17-c139-46fa-a62e-efdc15bbab1b","Type":"ContainerStarted","Data":"fe1e571323ea0b7f1bd5e6ff1640485469ebe85737b8943085ce4496763d4334"} Jan 31 04:00:56 crc kubenswrapper[4827]: I0131 04:00:56.163396 4827 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openstack-operators/openstack-operator-index-k2dpj"] Jan 31 04:00:56 crc kubenswrapper[4827]: I0131 04:00:56.164713 4827 scope.go:117] "RemoveContainer" containerID="7a8b7ccd3372328136dec2c37af3b61bc1b014b89d2818758a85a4de5b31c7eb" Jan 31 04:00:56 crc kubenswrapper[4827]: E0131 04:00:56.166868 4827 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7a8b7ccd3372328136dec2c37af3b61bc1b014b89d2818758a85a4de5b31c7eb\": container with ID starting with 7a8b7ccd3372328136dec2c37af3b61bc1b014b89d2818758a85a4de5b31c7eb not found: ID does not exist" containerID="7a8b7ccd3372328136dec2c37af3b61bc1b014b89d2818758a85a4de5b31c7eb" Jan 31 04:00:56 crc kubenswrapper[4827]: I0131 04:00:56.166925 4827 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7a8b7ccd3372328136dec2c37af3b61bc1b014b89d2818758a85a4de5b31c7eb"} err="failed to get container status \"7a8b7ccd3372328136dec2c37af3b61bc1b014b89d2818758a85a4de5b31c7eb\": rpc error: code = NotFound desc = could not find container \"7a8b7ccd3372328136dec2c37af3b61bc1b014b89d2818758a85a4de5b31c7eb\": container with ID starting with 7a8b7ccd3372328136dec2c37af3b61bc1b014b89d2818758a85a4de5b31c7eb not found: ID does not exist" Jan 31 04:00:56 crc kubenswrapper[4827]: I0131 04:00:56.167520 4827 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack-operators/openstack-operator-index-k2dpj"] Jan 31 04:00:56 crc kubenswrapper[4827]: I0131 04:00:56.176681 4827 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-index-9l7pz" podStartSLOduration=1.132015735 podStartE2EDuration="1.176662493s" podCreationTimestamp="2026-01-31 04:00:55 +0000 UTC" firstStartedPulling="2026-01-31 04:00:55.86644572 +0000 UTC m=+848.553526169" lastFinishedPulling="2026-01-31 04:00:55.911092478 +0000 UTC m=+848.598172927" observedRunningTime="2026-01-31 04:00:56.174286228 +0000 
UTC m=+848.861366697" watchObservedRunningTime="2026-01-31 04:00:56.176662493 +0000 UTC m=+848.863742952" Jan 31 04:00:56 crc kubenswrapper[4827]: I0131 04:00:56.748873 4827 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-4q2p4" Jan 31 04:00:56 crc kubenswrapper[4827]: I0131 04:00:56.846865 4827 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/controller-6968d8fdc4-lxksh" Jan 31 04:00:58 crc kubenswrapper[4827]: I0131 04:00:58.120817 4827 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="02572d7b-fb27-4201-8c5d-f5fabbc94fee" path="/var/lib/kubelet/pods/02572d7b-fb27-4201-8c5d-f5fabbc94fee/volumes" Jan 31 04:01:05 crc kubenswrapper[4827]: I0131 04:01:05.618960 4827 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-index-9l7pz" Jan 31 04:01:05 crc kubenswrapper[4827]: I0131 04:01:05.619610 4827 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack-operators/openstack-operator-index-9l7pz" Jan 31 04:01:05 crc kubenswrapper[4827]: I0131 04:01:05.653819 4827 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack-operators/openstack-operator-index-9l7pz" Jan 31 04:01:06 crc kubenswrapper[4827]: I0131 04:01:06.246613 4827 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-index-9l7pz" Jan 31 04:01:06 crc kubenswrapper[4827]: I0131 04:01:06.730203 4827 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/frr-k8s-nd7t7" Jan 31 04:01:07 crc kubenswrapper[4827]: I0131 04:01:07.311303 4827 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/99d2a1aa85d4e185bb5329cab20b126e8449fde1ef95df30fb4ec402714cts9"] Jan 31 04:01:07 crc kubenswrapper[4827]: E0131 04:01:07.311733 4827 cpu_manager.go:410] "RemoveStaleState: 
removing container" podUID="02572d7b-fb27-4201-8c5d-f5fabbc94fee" containerName="registry-server" Jan 31 04:01:07 crc kubenswrapper[4827]: I0131 04:01:07.311762 4827 state_mem.go:107] "Deleted CPUSet assignment" podUID="02572d7b-fb27-4201-8c5d-f5fabbc94fee" containerName="registry-server" Jan 31 04:01:07 crc kubenswrapper[4827]: I0131 04:01:07.312029 4827 memory_manager.go:354] "RemoveStaleState removing state" podUID="02572d7b-fb27-4201-8c5d-f5fabbc94fee" containerName="registry-server" Jan 31 04:01:07 crc kubenswrapper[4827]: I0131 04:01:07.313538 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/99d2a1aa85d4e185bb5329cab20b126e8449fde1ef95df30fb4ec402714cts9" Jan 31 04:01:07 crc kubenswrapper[4827]: I0131 04:01:07.316552 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"default-dockercfg-nbs8m" Jan 31 04:01:07 crc kubenswrapper[4827]: I0131 04:01:07.321086 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/99d2a1aa85d4e185bb5329cab20b126e8449fde1ef95df30fb4ec402714cts9"] Jan 31 04:01:07 crc kubenswrapper[4827]: I0131 04:01:07.364616 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/00650213-91a7-4da4-956e-500845f8ec0d-bundle\") pod \"99d2a1aa85d4e185bb5329cab20b126e8449fde1ef95df30fb4ec402714cts9\" (UID: \"00650213-91a7-4da4-956e-500845f8ec0d\") " pod="openstack-operators/99d2a1aa85d4e185bb5329cab20b126e8449fde1ef95df30fb4ec402714cts9" Jan 31 04:01:07 crc kubenswrapper[4827]: I0131 04:01:07.364808 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r6wz9\" (UniqueName: \"kubernetes.io/projected/00650213-91a7-4da4-956e-500845f8ec0d-kube-api-access-r6wz9\") pod \"99d2a1aa85d4e185bb5329cab20b126e8449fde1ef95df30fb4ec402714cts9\" (UID: \"00650213-91a7-4da4-956e-500845f8ec0d\") " 
pod="openstack-operators/99d2a1aa85d4e185bb5329cab20b126e8449fde1ef95df30fb4ec402714cts9" Jan 31 04:01:07 crc kubenswrapper[4827]: I0131 04:01:07.364926 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/00650213-91a7-4da4-956e-500845f8ec0d-util\") pod \"99d2a1aa85d4e185bb5329cab20b126e8449fde1ef95df30fb4ec402714cts9\" (UID: \"00650213-91a7-4da4-956e-500845f8ec0d\") " pod="openstack-operators/99d2a1aa85d4e185bb5329cab20b126e8449fde1ef95df30fb4ec402714cts9" Jan 31 04:01:07 crc kubenswrapper[4827]: I0131 04:01:07.466659 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/00650213-91a7-4da4-956e-500845f8ec0d-bundle\") pod \"99d2a1aa85d4e185bb5329cab20b126e8449fde1ef95df30fb4ec402714cts9\" (UID: \"00650213-91a7-4da4-956e-500845f8ec0d\") " pod="openstack-operators/99d2a1aa85d4e185bb5329cab20b126e8449fde1ef95df30fb4ec402714cts9" Jan 31 04:01:07 crc kubenswrapper[4827]: I0131 04:01:07.466717 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r6wz9\" (UniqueName: \"kubernetes.io/projected/00650213-91a7-4da4-956e-500845f8ec0d-kube-api-access-r6wz9\") pod \"99d2a1aa85d4e185bb5329cab20b126e8449fde1ef95df30fb4ec402714cts9\" (UID: \"00650213-91a7-4da4-956e-500845f8ec0d\") " pod="openstack-operators/99d2a1aa85d4e185bb5329cab20b126e8449fde1ef95df30fb4ec402714cts9" Jan 31 04:01:07 crc kubenswrapper[4827]: I0131 04:01:07.466765 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/00650213-91a7-4da4-956e-500845f8ec0d-util\") pod \"99d2a1aa85d4e185bb5329cab20b126e8449fde1ef95df30fb4ec402714cts9\" (UID: \"00650213-91a7-4da4-956e-500845f8ec0d\") " pod="openstack-operators/99d2a1aa85d4e185bb5329cab20b126e8449fde1ef95df30fb4ec402714cts9" Jan 31 04:01:07 crc kubenswrapper[4827]: I0131 
04:01:07.467220 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/00650213-91a7-4da4-956e-500845f8ec0d-bundle\") pod \"99d2a1aa85d4e185bb5329cab20b126e8449fde1ef95df30fb4ec402714cts9\" (UID: \"00650213-91a7-4da4-956e-500845f8ec0d\") " pod="openstack-operators/99d2a1aa85d4e185bb5329cab20b126e8449fde1ef95df30fb4ec402714cts9" Jan 31 04:01:07 crc kubenswrapper[4827]: I0131 04:01:07.467254 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/00650213-91a7-4da4-956e-500845f8ec0d-util\") pod \"99d2a1aa85d4e185bb5329cab20b126e8449fde1ef95df30fb4ec402714cts9\" (UID: \"00650213-91a7-4da4-956e-500845f8ec0d\") " pod="openstack-operators/99d2a1aa85d4e185bb5329cab20b126e8449fde1ef95df30fb4ec402714cts9" Jan 31 04:01:07 crc kubenswrapper[4827]: I0131 04:01:07.492807 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r6wz9\" (UniqueName: \"kubernetes.io/projected/00650213-91a7-4da4-956e-500845f8ec0d-kube-api-access-r6wz9\") pod \"99d2a1aa85d4e185bb5329cab20b126e8449fde1ef95df30fb4ec402714cts9\" (UID: \"00650213-91a7-4da4-956e-500845f8ec0d\") " pod="openstack-operators/99d2a1aa85d4e185bb5329cab20b126e8449fde1ef95df30fb4ec402714cts9" Jan 31 04:01:07 crc kubenswrapper[4827]: I0131 04:01:07.631853 4827 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/99d2a1aa85d4e185bb5329cab20b126e8449fde1ef95df30fb4ec402714cts9" Jan 31 04:01:07 crc kubenswrapper[4827]: I0131 04:01:07.829168 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/99d2a1aa85d4e185bb5329cab20b126e8449fde1ef95df30fb4ec402714cts9"] Jan 31 04:01:07 crc kubenswrapper[4827]: W0131 04:01:07.839088 4827 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod00650213_91a7_4da4_956e_500845f8ec0d.slice/crio-4ccc70b5ab46108b9bdb41fe506a64a6b8960dc0aef81a8c060aeb4b2443edda WatchSource:0}: Error finding container 4ccc70b5ab46108b9bdb41fe506a64a6b8960dc0aef81a8c060aeb4b2443edda: Status 404 returned error can't find the container with id 4ccc70b5ab46108b9bdb41fe506a64a6b8960dc0aef81a8c060aeb4b2443edda Jan 31 04:01:08 crc kubenswrapper[4827]: I0131 04:01:08.227491 4827 generic.go:334] "Generic (PLEG): container finished" podID="00650213-91a7-4da4-956e-500845f8ec0d" containerID="95e63b891f85856ff66fbb536c8691ac15ed7d82220b14ed5415233e4b3c0da9" exitCode=0 Jan 31 04:01:08 crc kubenswrapper[4827]: I0131 04:01:08.227528 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/99d2a1aa85d4e185bb5329cab20b126e8449fde1ef95df30fb4ec402714cts9" event={"ID":"00650213-91a7-4da4-956e-500845f8ec0d","Type":"ContainerDied","Data":"95e63b891f85856ff66fbb536c8691ac15ed7d82220b14ed5415233e4b3c0da9"} Jan 31 04:01:08 crc kubenswrapper[4827]: I0131 04:01:08.227550 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/99d2a1aa85d4e185bb5329cab20b126e8449fde1ef95df30fb4ec402714cts9" event={"ID":"00650213-91a7-4da4-956e-500845f8ec0d","Type":"ContainerStarted","Data":"4ccc70b5ab46108b9bdb41fe506a64a6b8960dc0aef81a8c060aeb4b2443edda"} Jan 31 04:01:09 crc kubenswrapper[4827]: I0131 04:01:09.252722 4827 generic.go:334] "Generic (PLEG): container finished" 
podID="00650213-91a7-4da4-956e-500845f8ec0d" containerID="d050ca63debcefa4e76aa7b3cb7e33f281c479b1ab6a5c285d6bc9216fda0396" exitCode=0 Jan 31 04:01:09 crc kubenswrapper[4827]: I0131 04:01:09.252838 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/99d2a1aa85d4e185bb5329cab20b126e8449fde1ef95df30fb4ec402714cts9" event={"ID":"00650213-91a7-4da4-956e-500845f8ec0d","Type":"ContainerDied","Data":"d050ca63debcefa4e76aa7b3cb7e33f281c479b1ab6a5c285d6bc9216fda0396"} Jan 31 04:01:10 crc kubenswrapper[4827]: I0131 04:01:10.265678 4827 generic.go:334] "Generic (PLEG): container finished" podID="00650213-91a7-4da4-956e-500845f8ec0d" containerID="625b599c855af7ee9ac8f9bd1e9783081dff1d46da724efdb13dc58955e600ae" exitCode=0 Jan 31 04:01:10 crc kubenswrapper[4827]: I0131 04:01:10.265734 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/99d2a1aa85d4e185bb5329cab20b126e8449fde1ef95df30fb4ec402714cts9" event={"ID":"00650213-91a7-4da4-956e-500845f8ec0d","Type":"ContainerDied","Data":"625b599c855af7ee9ac8f9bd1e9783081dff1d46da724efdb13dc58955e600ae"} Jan 31 04:01:11 crc kubenswrapper[4827]: I0131 04:01:11.547064 4827 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/99d2a1aa85d4e185bb5329cab20b126e8449fde1ef95df30fb4ec402714cts9" Jan 31 04:01:11 crc kubenswrapper[4827]: I0131 04:01:11.620729 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-r6wz9\" (UniqueName: \"kubernetes.io/projected/00650213-91a7-4da4-956e-500845f8ec0d-kube-api-access-r6wz9\") pod \"00650213-91a7-4da4-956e-500845f8ec0d\" (UID: \"00650213-91a7-4da4-956e-500845f8ec0d\") " Jan 31 04:01:11 crc kubenswrapper[4827]: I0131 04:01:11.621021 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/00650213-91a7-4da4-956e-500845f8ec0d-bundle\") pod \"00650213-91a7-4da4-956e-500845f8ec0d\" (UID: \"00650213-91a7-4da4-956e-500845f8ec0d\") " Jan 31 04:01:11 crc kubenswrapper[4827]: I0131 04:01:11.621072 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/00650213-91a7-4da4-956e-500845f8ec0d-util\") pod \"00650213-91a7-4da4-956e-500845f8ec0d\" (UID: \"00650213-91a7-4da4-956e-500845f8ec0d\") " Jan 31 04:01:11 crc kubenswrapper[4827]: I0131 04:01:11.621817 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/00650213-91a7-4da4-956e-500845f8ec0d-bundle" (OuterVolumeSpecName: "bundle") pod "00650213-91a7-4da4-956e-500845f8ec0d" (UID: "00650213-91a7-4da4-956e-500845f8ec0d"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 04:01:11 crc kubenswrapper[4827]: I0131 04:01:11.627380 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/00650213-91a7-4da4-956e-500845f8ec0d-kube-api-access-r6wz9" (OuterVolumeSpecName: "kube-api-access-r6wz9") pod "00650213-91a7-4da4-956e-500845f8ec0d" (UID: "00650213-91a7-4da4-956e-500845f8ec0d"). InnerVolumeSpecName "kube-api-access-r6wz9". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 04:01:11 crc kubenswrapper[4827]: I0131 04:01:11.638307 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/00650213-91a7-4da4-956e-500845f8ec0d-util" (OuterVolumeSpecName: "util") pod "00650213-91a7-4da4-956e-500845f8ec0d" (UID: "00650213-91a7-4da4-956e-500845f8ec0d"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 04:01:11 crc kubenswrapper[4827]: I0131 04:01:11.722904 4827 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/00650213-91a7-4da4-956e-500845f8ec0d-bundle\") on node \"crc\" DevicePath \"\"" Jan 31 04:01:11 crc kubenswrapper[4827]: I0131 04:01:11.722951 4827 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/00650213-91a7-4da4-956e-500845f8ec0d-util\") on node \"crc\" DevicePath \"\"" Jan 31 04:01:11 crc kubenswrapper[4827]: I0131 04:01:11.722974 4827 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-r6wz9\" (UniqueName: \"kubernetes.io/projected/00650213-91a7-4da4-956e-500845f8ec0d-kube-api-access-r6wz9\") on node \"crc\" DevicePath \"\"" Jan 31 04:01:12 crc kubenswrapper[4827]: I0131 04:01:12.282863 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/99d2a1aa85d4e185bb5329cab20b126e8449fde1ef95df30fb4ec402714cts9" event={"ID":"00650213-91a7-4da4-956e-500845f8ec0d","Type":"ContainerDied","Data":"4ccc70b5ab46108b9bdb41fe506a64a6b8960dc0aef81a8c060aeb4b2443edda"} Jan 31 04:01:12 crc kubenswrapper[4827]: I0131 04:01:12.282939 4827 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4ccc70b5ab46108b9bdb41fe506a64a6b8960dc0aef81a8c060aeb4b2443edda" Jan 31 04:01:12 crc kubenswrapper[4827]: I0131 04:01:12.282984 4827 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/99d2a1aa85d4e185bb5329cab20b126e8449fde1ef95df30fb4ec402714cts9" Jan 31 04:01:19 crc kubenswrapper[4827]: I0131 04:01:19.925195 4827 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-controller-init-68ffdbb6cf-tmt7z"] Jan 31 04:01:19 crc kubenswrapper[4827]: E0131 04:01:19.925813 4827 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="00650213-91a7-4da4-956e-500845f8ec0d" containerName="extract" Jan 31 04:01:19 crc kubenswrapper[4827]: I0131 04:01:19.925828 4827 state_mem.go:107] "Deleted CPUSet assignment" podUID="00650213-91a7-4da4-956e-500845f8ec0d" containerName="extract" Jan 31 04:01:19 crc kubenswrapper[4827]: E0131 04:01:19.925841 4827 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="00650213-91a7-4da4-956e-500845f8ec0d" containerName="util" Jan 31 04:01:19 crc kubenswrapper[4827]: I0131 04:01:19.925848 4827 state_mem.go:107] "Deleted CPUSet assignment" podUID="00650213-91a7-4da4-956e-500845f8ec0d" containerName="util" Jan 31 04:01:19 crc kubenswrapper[4827]: E0131 04:01:19.925867 4827 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="00650213-91a7-4da4-956e-500845f8ec0d" containerName="pull" Jan 31 04:01:19 crc kubenswrapper[4827]: I0131 04:01:19.925874 4827 state_mem.go:107] "Deleted CPUSet assignment" podUID="00650213-91a7-4da4-956e-500845f8ec0d" containerName="pull" Jan 31 04:01:19 crc kubenswrapper[4827]: I0131 04:01:19.926023 4827 memory_manager.go:354] "RemoveStaleState removing state" podUID="00650213-91a7-4da4-956e-500845f8ec0d" containerName="extract" Jan 31 04:01:19 crc kubenswrapper[4827]: I0131 04:01:19.926512 4827 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-controller-init-68ffdbb6cf-tmt7z" Jan 31 04:01:19 crc kubenswrapper[4827]: I0131 04:01:19.929991 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-controller-init-dockercfg-db2s7" Jan 31 04:01:19 crc kubenswrapper[4827]: I0131 04:01:19.957339 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-init-68ffdbb6cf-tmt7z"] Jan 31 04:01:20 crc kubenswrapper[4827]: I0131 04:01:20.041640 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lqgns\" (UniqueName: \"kubernetes.io/projected/062d81b0-3054-4387-9b68-716c6b57c850-kube-api-access-lqgns\") pod \"openstack-operator-controller-init-68ffdbb6cf-tmt7z\" (UID: \"062d81b0-3054-4387-9b68-716c6b57c850\") " pod="openstack-operators/openstack-operator-controller-init-68ffdbb6cf-tmt7z" Jan 31 04:01:20 crc kubenswrapper[4827]: I0131 04:01:20.143232 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lqgns\" (UniqueName: \"kubernetes.io/projected/062d81b0-3054-4387-9b68-716c6b57c850-kube-api-access-lqgns\") pod \"openstack-operator-controller-init-68ffdbb6cf-tmt7z\" (UID: \"062d81b0-3054-4387-9b68-716c6b57c850\") " pod="openstack-operators/openstack-operator-controller-init-68ffdbb6cf-tmt7z" Jan 31 04:01:20 crc kubenswrapper[4827]: I0131 04:01:20.167294 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lqgns\" (UniqueName: \"kubernetes.io/projected/062d81b0-3054-4387-9b68-716c6b57c850-kube-api-access-lqgns\") pod \"openstack-operator-controller-init-68ffdbb6cf-tmt7z\" (UID: \"062d81b0-3054-4387-9b68-716c6b57c850\") " pod="openstack-operators/openstack-operator-controller-init-68ffdbb6cf-tmt7z" Jan 31 04:01:20 crc kubenswrapper[4827]: I0131 04:01:20.247202 4827 util.go:30] "No sandbox for pod can be 
found. Need to start a new one" pod="openstack-operators/openstack-operator-controller-init-68ffdbb6cf-tmt7z" Jan 31 04:01:20 crc kubenswrapper[4827]: I0131 04:01:20.652982 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-init-68ffdbb6cf-tmt7z"] Jan 31 04:01:21 crc kubenswrapper[4827]: I0131 04:01:21.346301 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-init-68ffdbb6cf-tmt7z" event={"ID":"062d81b0-3054-4387-9b68-716c6b57c850","Type":"ContainerStarted","Data":"3c346f5bb9ecb3f30646706811f8e48bcb87898ee8707cd169c1eb4db4b73925"} Jan 31 04:01:24 crc kubenswrapper[4827]: I0131 04:01:24.366432 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-init-68ffdbb6cf-tmt7z" event={"ID":"062d81b0-3054-4387-9b68-716c6b57c850","Type":"ContainerStarted","Data":"b8fdc0c777694ca076da4b0e386985c94c05309b864d9698a9b79d9a704790c4"} Jan 31 04:01:24 crc kubenswrapper[4827]: I0131 04:01:24.366810 4827 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-controller-init-68ffdbb6cf-tmt7z" Jan 31 04:01:24 crc kubenswrapper[4827]: I0131 04:01:24.397071 4827 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-controller-init-68ffdbb6cf-tmt7z" podStartSLOduration=1.996538908 podStartE2EDuration="5.397053619s" podCreationTimestamp="2026-01-31 04:01:19 +0000 UTC" firstStartedPulling="2026-01-31 04:01:20.662492862 +0000 UTC m=+873.349573311" lastFinishedPulling="2026-01-31 04:01:24.063007573 +0000 UTC m=+876.750088022" observedRunningTime="2026-01-31 04:01:24.396748749 +0000 UTC m=+877.083829248" watchObservedRunningTime="2026-01-31 04:01:24.397053619 +0000 UTC m=+877.084134068" Jan 31 04:01:30 crc kubenswrapper[4827]: I0131 04:01:30.250158 4827 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" 
status="ready" pod="openstack-operators/openstack-operator-controller-init-68ffdbb6cf-tmt7z" Jan 31 04:01:47 crc kubenswrapper[4827]: I0131 04:01:47.371164 4827 patch_prober.go:28] interesting pod/machine-config-daemon-jxh94 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 31 04:01:47 crc kubenswrapper[4827]: I0131 04:01:47.371737 4827 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-jxh94" podUID="e63dbb73-e1a2-4796-83c5-2a88e55566b5" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 31 04:01:49 crc kubenswrapper[4827]: I0131 04:01:49.838025 4827 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/barbican-operator-controller-manager-7b6c4d8c5f-k469j"] Jan 31 04:01:49 crc kubenswrapper[4827]: I0131 04:01:49.839151 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/barbican-operator-controller-manager-7b6c4d8c5f-k469j" Jan 31 04:01:49 crc kubenswrapper[4827]: I0131 04:01:49.846622 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"barbican-operator-controller-manager-dockercfg-t7blf" Jan 31 04:01:49 crc kubenswrapper[4827]: I0131 04:01:49.865476 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/barbican-operator-controller-manager-7b6c4d8c5f-k469j"] Jan 31 04:01:49 crc kubenswrapper[4827]: I0131 04:01:49.869389 4827 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/cinder-operator-controller-manager-7489d7c99b-75s7f"] Jan 31 04:01:49 crc kubenswrapper[4827]: I0131 04:01:49.870157 4827 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/cinder-operator-controller-manager-7489d7c99b-75s7f" Jan 31 04:01:49 crc kubenswrapper[4827]: I0131 04:01:49.871982 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"cinder-operator-controller-manager-dockercfg-67hj2" Jan 31 04:01:49 crc kubenswrapper[4827]: I0131 04:01:49.886661 4827 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/designate-operator-controller-manager-6d9697b7f4-9k4dq"] Jan 31 04:01:49 crc kubenswrapper[4827]: I0131 04:01:49.887646 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/designate-operator-controller-manager-6d9697b7f4-9k4dq" Jan 31 04:01:49 crc kubenswrapper[4827]: I0131 04:01:49.890716 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"designate-operator-controller-manager-dockercfg-w7fpt" Jan 31 04:01:49 crc kubenswrapper[4827]: I0131 04:01:49.895613 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/cinder-operator-controller-manager-7489d7c99b-75s7f"] Jan 31 04:01:49 crc kubenswrapper[4827]: I0131 04:01:49.901462 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-567vg\" (UniqueName: \"kubernetes.io/projected/1ee58492-27e7-446f-84c8-c3b0b74884fa-kube-api-access-567vg\") pod \"barbican-operator-controller-manager-7b6c4d8c5f-k469j\" (UID: \"1ee58492-27e7-446f-84c8-c3b0b74884fa\") " pod="openstack-operators/barbican-operator-controller-manager-7b6c4d8c5f-k469j" Jan 31 04:01:49 crc kubenswrapper[4827]: I0131 04:01:49.927842 4827 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/glance-operator-controller-manager-8886f4c47-zdtlh"] Jan 31 04:01:49 crc kubenswrapper[4827]: I0131 04:01:49.929377 4827 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/glance-operator-controller-manager-8886f4c47-zdtlh" Jan 31 04:01:49 crc kubenswrapper[4827]: I0131 04:01:49.932469 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/designate-operator-controller-manager-6d9697b7f4-9k4dq"] Jan 31 04:01:49 crc kubenswrapper[4827]: I0131 04:01:49.937703 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/glance-operator-controller-manager-8886f4c47-zdtlh"] Jan 31 04:01:49 crc kubenswrapper[4827]: I0131 04:01:49.941158 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"glance-operator-controller-manager-dockercfg-c8sm5" Jan 31 04:01:49 crc kubenswrapper[4827]: I0131 04:01:49.951417 4827 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/heat-operator-controller-manager-69d6db494d-hprpc"] Jan 31 04:01:49 crc kubenswrapper[4827]: I0131 04:01:49.952260 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/heat-operator-controller-manager-69d6db494d-hprpc" Jan 31 04:01:49 crc kubenswrapper[4827]: I0131 04:01:49.955990 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"heat-operator-controller-manager-dockercfg-jpzwl" Jan 31 04:01:49 crc kubenswrapper[4827]: I0131 04:01:49.982851 4827 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/horizon-operator-controller-manager-5fb775575f-wwvbx"] Jan 31 04:01:49 crc kubenswrapper[4827]: I0131 04:01:49.984395 4827 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/horizon-operator-controller-manager-5fb775575f-wwvbx" Jan 31 04:01:49 crc kubenswrapper[4827]: I0131 04:01:49.988447 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"horizon-operator-controller-manager-dockercfg-jcgdb" Jan 31 04:01:50 crc kubenswrapper[4827]: I0131 04:01:50.007189 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x2b2r\" (UniqueName: \"kubernetes.io/projected/c0c17a5a-5f0d-421e-b29c-56c4f2626a7b-kube-api-access-x2b2r\") pod \"glance-operator-controller-manager-8886f4c47-zdtlh\" (UID: \"c0c17a5a-5f0d-421e-b29c-56c4f2626a7b\") " pod="openstack-operators/glance-operator-controller-manager-8886f4c47-zdtlh" Jan 31 04:01:50 crc kubenswrapper[4827]: I0131 04:01:50.007263 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-btxhn\" (UniqueName: \"kubernetes.io/projected/60792734-916b-4bb7-a17f-45a03be036c8-kube-api-access-btxhn\") pod \"designate-operator-controller-manager-6d9697b7f4-9k4dq\" (UID: \"60792734-916b-4bb7-a17f-45a03be036c8\") " pod="openstack-operators/designate-operator-controller-manager-6d9697b7f4-9k4dq" Jan 31 04:01:50 crc kubenswrapper[4827]: I0131 04:01:50.007290 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-567vg\" (UniqueName: \"kubernetes.io/projected/1ee58492-27e7-446f-84c8-c3b0b74884fa-kube-api-access-567vg\") pod \"barbican-operator-controller-manager-7b6c4d8c5f-k469j\" (UID: \"1ee58492-27e7-446f-84c8-c3b0b74884fa\") " pod="openstack-operators/barbican-operator-controller-manager-7b6c4d8c5f-k469j" Jan 31 04:01:50 crc kubenswrapper[4827]: I0131 04:01:50.015229 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6jcwb\" (UniqueName: 
\"kubernetes.io/projected/74e68a52-8f24-4ff0-a160-8a1ad61238c9-kube-api-access-6jcwb\") pod \"cinder-operator-controller-manager-7489d7c99b-75s7f\" (UID: \"74e68a52-8f24-4ff0-a160-8a1ad61238c9\") " pod="openstack-operators/cinder-operator-controller-manager-7489d7c99b-75s7f" Jan 31 04:01:50 crc kubenswrapper[4827]: I0131 04:01:50.051938 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/heat-operator-controller-manager-69d6db494d-hprpc"] Jan 31 04:01:50 crc kubenswrapper[4827]: I0131 04:01:50.059462 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-567vg\" (UniqueName: \"kubernetes.io/projected/1ee58492-27e7-446f-84c8-c3b0b74884fa-kube-api-access-567vg\") pod \"barbican-operator-controller-manager-7b6c4d8c5f-k469j\" (UID: \"1ee58492-27e7-446f-84c8-c3b0b74884fa\") " pod="openstack-operators/barbican-operator-controller-manager-7b6c4d8c5f-k469j" Jan 31 04:01:50 crc kubenswrapper[4827]: I0131 04:01:50.063671 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/horizon-operator-controller-manager-5fb775575f-wwvbx"] Jan 31 04:01:50 crc kubenswrapper[4827]: I0131 04:01:50.072952 4827 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/infra-operator-controller-manager-79955696d6-gcs7k"] Jan 31 04:01:50 crc kubenswrapper[4827]: I0131 04:01:50.073941 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/infra-operator-controller-manager-79955696d6-gcs7k" Jan 31 04:01:50 crc kubenswrapper[4827]: I0131 04:01:50.077188 4827 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/ironic-operator-controller-manager-5f4b8bd54d-r2ljw"] Jan 31 04:01:50 crc kubenswrapper[4827]: I0131 04:01:50.078439 4827 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/ironic-operator-controller-manager-5f4b8bd54d-r2ljw" Jan 31 04:01:50 crc kubenswrapper[4827]: I0131 04:01:50.080616 4827 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/keystone-operator-controller-manager-84f48565d4-8hvrl"] Jan 31 04:01:50 crc kubenswrapper[4827]: I0131 04:01:50.081771 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/keystone-operator-controller-manager-84f48565d4-8hvrl" Jan 31 04:01:50 crc kubenswrapper[4827]: I0131 04:01:50.082777 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"ironic-operator-controller-manager-dockercfg-j47pw" Jan 31 04:01:50 crc kubenswrapper[4827]: I0131 04:01:50.094172 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"keystone-operator-controller-manager-dockercfg-tqhjk" Jan 31 04:01:50 crc kubenswrapper[4827]: I0131 04:01:50.094443 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-webhook-server-cert" Jan 31 04:01:50 crc kubenswrapper[4827]: I0131 04:01:50.097958 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-controller-manager-dockercfg-9glm4" Jan 31 04:01:50 crc kubenswrapper[4827]: I0131 04:01:50.118938 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x2b2r\" (UniqueName: \"kubernetes.io/projected/c0c17a5a-5f0d-421e-b29c-56c4f2626a7b-kube-api-access-x2b2r\") pod \"glance-operator-controller-manager-8886f4c47-zdtlh\" (UID: \"c0c17a5a-5f0d-421e-b29c-56c4f2626a7b\") " pod="openstack-operators/glance-operator-controller-manager-8886f4c47-zdtlh" Jan 31 04:01:50 crc kubenswrapper[4827]: I0131 04:01:50.118998 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-btxhn\" (UniqueName: 
\"kubernetes.io/projected/60792734-916b-4bb7-a17f-45a03be036c8-kube-api-access-btxhn\") pod \"designate-operator-controller-manager-6d9697b7f4-9k4dq\" (UID: \"60792734-916b-4bb7-a17f-45a03be036c8\") " pod="openstack-operators/designate-operator-controller-manager-6d9697b7f4-9k4dq" Jan 31 04:01:50 crc kubenswrapper[4827]: I0131 04:01:50.119037 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d68zr\" (UniqueName: \"kubernetes.io/projected/bbf882c7-842b-46eb-a459-bb628db2598f-kube-api-access-d68zr\") pod \"horizon-operator-controller-manager-5fb775575f-wwvbx\" (UID: \"bbf882c7-842b-46eb-a459-bb628db2598f\") " pod="openstack-operators/horizon-operator-controller-manager-5fb775575f-wwvbx" Jan 31 04:01:50 crc kubenswrapper[4827]: I0131 04:01:50.119075 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-67s5p\" (UniqueName: \"kubernetes.io/projected/fe50fb01-1097-4ac9-81ae-fdfc96842f68-kube-api-access-67s5p\") pod \"heat-operator-controller-manager-69d6db494d-hprpc\" (UID: \"fe50fb01-1097-4ac9-81ae-fdfc96842f68\") " pod="openstack-operators/heat-operator-controller-manager-69d6db494d-hprpc" Jan 31 04:01:50 crc kubenswrapper[4827]: I0131 04:01:50.119139 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6jcwb\" (UniqueName: \"kubernetes.io/projected/74e68a52-8f24-4ff0-a160-8a1ad61238c9-kube-api-access-6jcwb\") pod \"cinder-operator-controller-manager-7489d7c99b-75s7f\" (UID: \"74e68a52-8f24-4ff0-a160-8a1ad61238c9\") " pod="openstack-operators/cinder-operator-controller-manager-7489d7c99b-75s7f" Jan 31 04:01:50 crc kubenswrapper[4827]: I0131 04:01:50.132260 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/infra-operator-controller-manager-79955696d6-gcs7k"] Jan 31 04:01:50 crc kubenswrapper[4827]: I0131 04:01:50.154699 4827 util.go:30] "No sandbox for pod can be 
found. Need to start a new one" pod="openstack-operators/barbican-operator-controller-manager-7b6c4d8c5f-k469j" Jan 31 04:01:50 crc kubenswrapper[4827]: I0131 04:01:50.159934 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/keystone-operator-controller-manager-84f48565d4-8hvrl"] Jan 31 04:01:50 crc kubenswrapper[4827]: I0131 04:01:50.163610 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x2b2r\" (UniqueName: \"kubernetes.io/projected/c0c17a5a-5f0d-421e-b29c-56c4f2626a7b-kube-api-access-x2b2r\") pod \"glance-operator-controller-manager-8886f4c47-zdtlh\" (UID: \"c0c17a5a-5f0d-421e-b29c-56c4f2626a7b\") " pod="openstack-operators/glance-operator-controller-manager-8886f4c47-zdtlh" Jan 31 04:01:50 crc kubenswrapper[4827]: I0131 04:01:50.172006 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-btxhn\" (UniqueName: \"kubernetes.io/projected/60792734-916b-4bb7-a17f-45a03be036c8-kube-api-access-btxhn\") pod \"designate-operator-controller-manager-6d9697b7f4-9k4dq\" (UID: \"60792734-916b-4bb7-a17f-45a03be036c8\") " pod="openstack-operators/designate-operator-controller-manager-6d9697b7f4-9k4dq" Jan 31 04:01:50 crc kubenswrapper[4827]: I0131 04:01:50.186723 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6jcwb\" (UniqueName: \"kubernetes.io/projected/74e68a52-8f24-4ff0-a160-8a1ad61238c9-kube-api-access-6jcwb\") pod \"cinder-operator-controller-manager-7489d7c99b-75s7f\" (UID: \"74e68a52-8f24-4ff0-a160-8a1ad61238c9\") " pod="openstack-operators/cinder-operator-controller-manager-7489d7c99b-75s7f" Jan 31 04:01:50 crc kubenswrapper[4827]: I0131 04:01:50.190243 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/cinder-operator-controller-manager-7489d7c99b-75s7f" Jan 31 04:01:50 crc kubenswrapper[4827]: I0131 04:01:50.208348 4827 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/designate-operator-controller-manager-6d9697b7f4-9k4dq" Jan 31 04:01:50 crc kubenswrapper[4827]: I0131 04:01:50.220360 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d68zr\" (UniqueName: \"kubernetes.io/projected/bbf882c7-842b-46eb-a459-bb628db2598f-kube-api-access-d68zr\") pod \"horizon-operator-controller-manager-5fb775575f-wwvbx\" (UID: \"bbf882c7-842b-46eb-a459-bb628db2598f\") " pod="openstack-operators/horizon-operator-controller-manager-5fb775575f-wwvbx" Jan 31 04:01:50 crc kubenswrapper[4827]: I0131 04:01:50.220415 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/00f00c32-1e04-42e4-95b4-923c6b57386e-cert\") pod \"infra-operator-controller-manager-79955696d6-gcs7k\" (UID: \"00f00c32-1e04-42e4-95b4-923c6b57386e\") " pod="openstack-operators/infra-operator-controller-manager-79955696d6-gcs7k" Jan 31 04:01:50 crc kubenswrapper[4827]: I0131 04:01:50.220449 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-67s5p\" (UniqueName: \"kubernetes.io/projected/fe50fb01-1097-4ac9-81ae-fdfc96842f68-kube-api-access-67s5p\") pod \"heat-operator-controller-manager-69d6db494d-hprpc\" (UID: \"fe50fb01-1097-4ac9-81ae-fdfc96842f68\") " pod="openstack-operators/heat-operator-controller-manager-69d6db494d-hprpc" Jan 31 04:01:50 crc kubenswrapper[4827]: I0131 04:01:50.220523 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mh5xs\" (UniqueName: \"kubernetes.io/projected/efcd65a1-b55c-4cf6-bfe7-5e888e2bc7f0-kube-api-access-mh5xs\") pod \"keystone-operator-controller-manager-84f48565d4-8hvrl\" (UID: \"efcd65a1-b55c-4cf6-bfe7-5e888e2bc7f0\") " pod="openstack-operators/keystone-operator-controller-manager-84f48565d4-8hvrl" Jan 31 04:01:50 crc kubenswrapper[4827]: I0131 
04:01:50.220546 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8l2cc\" (UniqueName: \"kubernetes.io/projected/00f00c32-1e04-42e4-95b4-923c6b57386e-kube-api-access-8l2cc\") pod \"infra-operator-controller-manager-79955696d6-gcs7k\" (UID: \"00f00c32-1e04-42e4-95b4-923c6b57386e\") " pod="openstack-operators/infra-operator-controller-manager-79955696d6-gcs7k" Jan 31 04:01:50 crc kubenswrapper[4827]: I0131 04:01:50.220588 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qblnc\" (UniqueName: \"kubernetes.io/projected/adfd32af-9db4-468a-bac1-d33f11930922-kube-api-access-qblnc\") pod \"ironic-operator-controller-manager-5f4b8bd54d-r2ljw\" (UID: \"adfd32af-9db4-468a-bac1-d33f11930922\") " pod="openstack-operators/ironic-operator-controller-manager-5f4b8bd54d-r2ljw" Jan 31 04:01:50 crc kubenswrapper[4827]: I0131 04:01:50.221439 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ironic-operator-controller-manager-5f4b8bd54d-r2ljw"] Jan 31 04:01:50 crc kubenswrapper[4827]: I0131 04:01:50.236138 4827 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/manila-operator-controller-manager-7dd968899f-2z575"] Jan 31 04:01:50 crc kubenswrapper[4827]: I0131 04:01:50.237318 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/manila-operator-controller-manager-7dd968899f-2z575" Jan 31 04:01:50 crc kubenswrapper[4827]: I0131 04:01:50.250230 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"manila-operator-controller-manager-dockercfg-pkgw8" Jan 31 04:01:50 crc kubenswrapper[4827]: I0131 04:01:50.285992 4827 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/glance-operator-controller-manager-8886f4c47-zdtlh" Jan 31 04:01:50 crc kubenswrapper[4827]: I0131 04:01:50.298964 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/manila-operator-controller-manager-7dd968899f-2z575"] Jan 31 04:01:50 crc kubenswrapper[4827]: I0131 04:01:50.302745 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-67s5p\" (UniqueName: \"kubernetes.io/projected/fe50fb01-1097-4ac9-81ae-fdfc96842f68-kube-api-access-67s5p\") pod \"heat-operator-controller-manager-69d6db494d-hprpc\" (UID: \"fe50fb01-1097-4ac9-81ae-fdfc96842f68\") " pod="openstack-operators/heat-operator-controller-manager-69d6db494d-hprpc" Jan 31 04:01:50 crc kubenswrapper[4827]: I0131 04:01:50.316221 4827 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-67bf948998-dvj6j"] Jan 31 04:01:50 crc kubenswrapper[4827]: I0131 04:01:50.320269 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/heat-operator-controller-manager-69d6db494d-hprpc" Jan 31 04:01:50 crc kubenswrapper[4827]: I0131 04:01:50.321060 4827 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/mariadb-operator-controller-manager-67bf948998-dvj6j" Jan 31 04:01:50 crc kubenswrapper[4827]: I0131 04:01:50.321350 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/00f00c32-1e04-42e4-95b4-923c6b57386e-cert\") pod \"infra-operator-controller-manager-79955696d6-gcs7k\" (UID: \"00f00c32-1e04-42e4-95b4-923c6b57386e\") " pod="openstack-operators/infra-operator-controller-manager-79955696d6-gcs7k" Jan 31 04:01:50 crc kubenswrapper[4827]: I0131 04:01:50.321417 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mh5xs\" (UniqueName: \"kubernetes.io/projected/efcd65a1-b55c-4cf6-bfe7-5e888e2bc7f0-kube-api-access-mh5xs\") pod \"keystone-operator-controller-manager-84f48565d4-8hvrl\" (UID: \"efcd65a1-b55c-4cf6-bfe7-5e888e2bc7f0\") " pod="openstack-operators/keystone-operator-controller-manager-84f48565d4-8hvrl" Jan 31 04:01:50 crc kubenswrapper[4827]: I0131 04:01:50.321452 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8l2cc\" (UniqueName: \"kubernetes.io/projected/00f00c32-1e04-42e4-95b4-923c6b57386e-kube-api-access-8l2cc\") pod \"infra-operator-controller-manager-79955696d6-gcs7k\" (UID: \"00f00c32-1e04-42e4-95b4-923c6b57386e\") " pod="openstack-operators/infra-operator-controller-manager-79955696d6-gcs7k" Jan 31 04:01:50 crc kubenswrapper[4827]: I0131 04:01:50.321495 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qblnc\" (UniqueName: \"kubernetes.io/projected/adfd32af-9db4-468a-bac1-d33f11930922-kube-api-access-qblnc\") pod \"ironic-operator-controller-manager-5f4b8bd54d-r2ljw\" (UID: \"adfd32af-9db4-468a-bac1-d33f11930922\") " pod="openstack-operators/ironic-operator-controller-manager-5f4b8bd54d-r2ljw" Jan 31 04:01:50 crc kubenswrapper[4827]: I0131 04:01:50.321517 4827 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x6dxf\" (UniqueName: \"kubernetes.io/projected/fe5adffe-e198-4d4f-815d-02333b3a1853-kube-api-access-x6dxf\") pod \"manila-operator-controller-manager-7dd968899f-2z575\" (UID: \"fe5adffe-e198-4d4f-815d-02333b3a1853\") " pod="openstack-operators/manila-operator-controller-manager-7dd968899f-2z575" Jan 31 04:01:50 crc kubenswrapper[4827]: E0131 04:01:50.321786 4827 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Jan 31 04:01:50 crc kubenswrapper[4827]: E0131 04:01:50.321830 4827 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/00f00c32-1e04-42e4-95b4-923c6b57386e-cert podName:00f00c32-1e04-42e4-95b4-923c6b57386e nodeName:}" failed. No retries permitted until 2026-01-31 04:01:50.821815847 +0000 UTC m=+903.508896296 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/00f00c32-1e04-42e4-95b4-923c6b57386e-cert") pod "infra-operator-controller-manager-79955696d6-gcs7k" (UID: "00f00c32-1e04-42e4-95b4-923c6b57386e") : secret "infra-operator-webhook-server-cert" not found Jan 31 04:01:50 crc kubenswrapper[4827]: I0131 04:01:50.322716 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d68zr\" (UniqueName: \"kubernetes.io/projected/bbf882c7-842b-46eb-a459-bb628db2598f-kube-api-access-d68zr\") pod \"horizon-operator-controller-manager-5fb775575f-wwvbx\" (UID: \"bbf882c7-842b-46eb-a459-bb628db2598f\") " pod="openstack-operators/horizon-operator-controller-manager-5fb775575f-wwvbx" Jan 31 04:01:50 crc kubenswrapper[4827]: I0131 04:01:50.332957 4827 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/neutron-operator-controller-manager-585dbc889-wdrl7"] Jan 31 04:01:50 crc kubenswrapper[4827]: I0131 04:01:50.333772 4827 util.go:30] "No sandbox for pod 
can be found. Need to start a new one" pod="openstack-operators/neutron-operator-controller-manager-585dbc889-wdrl7" Jan 31 04:01:50 crc kubenswrapper[4827]: I0131 04:01:50.342017 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/horizon-operator-controller-manager-5fb775575f-wwvbx" Jan 31 04:01:50 crc kubenswrapper[4827]: I0131 04:01:50.347188 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"mariadb-operator-controller-manager-dockercfg-5hw52" Jan 31 04:01:50 crc kubenswrapper[4827]: I0131 04:01:50.347455 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"neutron-operator-controller-manager-dockercfg-mmntr" Jan 31 04:01:50 crc kubenswrapper[4827]: I0131 04:01:50.357228 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-67bf948998-dvj6j"] Jan 31 04:01:50 crc kubenswrapper[4827]: I0131 04:01:50.363774 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qblnc\" (UniqueName: \"kubernetes.io/projected/adfd32af-9db4-468a-bac1-d33f11930922-kube-api-access-qblnc\") pod \"ironic-operator-controller-manager-5f4b8bd54d-r2ljw\" (UID: \"adfd32af-9db4-468a-bac1-d33f11930922\") " pod="openstack-operators/ironic-operator-controller-manager-5f4b8bd54d-r2ljw" Jan 31 04:01:50 crc kubenswrapper[4827]: I0131 04:01:50.392542 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8l2cc\" (UniqueName: \"kubernetes.io/projected/00f00c32-1e04-42e4-95b4-923c6b57386e-kube-api-access-8l2cc\") pod \"infra-operator-controller-manager-79955696d6-gcs7k\" (UID: \"00f00c32-1e04-42e4-95b4-923c6b57386e\") " pod="openstack-operators/infra-operator-controller-manager-79955696d6-gcs7k" Jan 31 04:01:50 crc kubenswrapper[4827]: I0131 04:01:50.392622 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openstack-operators/neutron-operator-controller-manager-585dbc889-wdrl7"] Jan 31 04:01:50 crc kubenswrapper[4827]: I0131 04:01:50.416205 4827 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/nova-operator-controller-manager-55bff696bd-4snkb"] Jan 31 04:01:50 crc kubenswrapper[4827]: I0131 04:01:50.417112 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/nova-operator-controller-manager-55bff696bd-4snkb" Jan 31 04:01:50 crc kubenswrapper[4827]: I0131 04:01:50.419115 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"nova-operator-controller-manager-dockercfg-rjtpm" Jan 31 04:01:50 crc kubenswrapper[4827]: I0131 04:01:50.422379 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mh5xs\" (UniqueName: \"kubernetes.io/projected/efcd65a1-b55c-4cf6-bfe7-5e888e2bc7f0-kube-api-access-mh5xs\") pod \"keystone-operator-controller-manager-84f48565d4-8hvrl\" (UID: \"efcd65a1-b55c-4cf6-bfe7-5e888e2bc7f0\") " pod="openstack-operators/keystone-operator-controller-manager-84f48565d4-8hvrl" Jan 31 04:01:50 crc kubenswrapper[4827]: I0131 04:01:50.423313 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x6dxf\" (UniqueName: \"kubernetes.io/projected/fe5adffe-e198-4d4f-815d-02333b3a1853-kube-api-access-x6dxf\") pod \"manila-operator-controller-manager-7dd968899f-2z575\" (UID: \"fe5adffe-e198-4d4f-815d-02333b3a1853\") " pod="openstack-operators/manila-operator-controller-manager-7dd968899f-2z575" Jan 31 04:01:50 crc kubenswrapper[4827]: I0131 04:01:50.423353 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-25rcz\" (UniqueName: \"kubernetes.io/projected/ea6ee14b-2acc-4894-8d63-57ad4a6a170a-kube-api-access-25rcz\") pod \"neutron-operator-controller-manager-585dbc889-wdrl7\" (UID: 
\"ea6ee14b-2acc-4894-8d63-57ad4a6a170a\") " pod="openstack-operators/neutron-operator-controller-manager-585dbc889-wdrl7" Jan 31 04:01:50 crc kubenswrapper[4827]: I0131 04:01:50.424457 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s576v\" (UniqueName: \"kubernetes.io/projected/7f0021a0-f8df-42fa-8ef0-34653130a6e9-kube-api-access-s576v\") pod \"mariadb-operator-controller-manager-67bf948998-dvj6j\" (UID: \"7f0021a0-f8df-42fa-8ef0-34653130a6e9\") " pod="openstack-operators/mariadb-operator-controller-manager-67bf948998-dvj6j" Jan 31 04:01:50 crc kubenswrapper[4827]: I0131 04:01:50.463935 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ironic-operator-controller-manager-5f4b8bd54d-r2ljw" Jan 31 04:01:50 crc kubenswrapper[4827]: I0131 04:01:50.478898 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x6dxf\" (UniqueName: \"kubernetes.io/projected/fe5adffe-e198-4d4f-815d-02333b3a1853-kube-api-access-x6dxf\") pod \"manila-operator-controller-manager-7dd968899f-2z575\" (UID: \"fe5adffe-e198-4d4f-815d-02333b3a1853\") " pod="openstack-operators/manila-operator-controller-manager-7dd968899f-2z575" Jan 31 04:01:50 crc kubenswrapper[4827]: I0131 04:01:50.497064 4827 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/octavia-operator-controller-manager-6687f8d877-k7f4f"] Jan 31 04:01:50 crc kubenswrapper[4827]: I0131 04:01:50.497930 4827 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/octavia-operator-controller-manager-6687f8d877-k7f4f" Jan 31 04:01:50 crc kubenswrapper[4827]: I0131 04:01:50.500308 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"octavia-operator-controller-manager-dockercfg-hxbwk" Jan 31 04:01:50 crc kubenswrapper[4827]: I0131 04:01:50.521925 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/nova-operator-controller-manager-55bff696bd-4snkb"] Jan 31 04:01:50 crc kubenswrapper[4827]: I0131 04:01:50.526167 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jsbj5\" (UniqueName: \"kubernetes.io/projected/8d904b59-3b07-422e-a83b-a02ac443d6eb-kube-api-access-jsbj5\") pod \"nova-operator-controller-manager-55bff696bd-4snkb\" (UID: \"8d904b59-3b07-422e-a83b-a02ac443d6eb\") " pod="openstack-operators/nova-operator-controller-manager-55bff696bd-4snkb" Jan 31 04:01:50 crc kubenswrapper[4827]: I0131 04:01:50.526233 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-25rcz\" (UniqueName: \"kubernetes.io/projected/ea6ee14b-2acc-4894-8d63-57ad4a6a170a-kube-api-access-25rcz\") pod \"neutron-operator-controller-manager-585dbc889-wdrl7\" (UID: \"ea6ee14b-2acc-4894-8d63-57ad4a6a170a\") " pod="openstack-operators/neutron-operator-controller-manager-585dbc889-wdrl7" Jan 31 04:01:50 crc kubenswrapper[4827]: I0131 04:01:50.526315 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s576v\" (UniqueName: \"kubernetes.io/projected/7f0021a0-f8df-42fa-8ef0-34653130a6e9-kube-api-access-s576v\") pod \"mariadb-operator-controller-manager-67bf948998-dvj6j\" (UID: \"7f0021a0-f8df-42fa-8ef0-34653130a6e9\") " pod="openstack-operators/mariadb-operator-controller-manager-67bf948998-dvj6j" Jan 31 04:01:50 crc kubenswrapper[4827]: I0131 04:01:50.531743 4827 util.go:30] "No sandbox for pod 
can be found. Need to start a new one" pod="openstack-operators/keystone-operator-controller-manager-84f48565d4-8hvrl" Jan 31 04:01:50 crc kubenswrapper[4827]: I0131 04:01:50.538100 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/octavia-operator-controller-manager-6687f8d877-k7f4f"] Jan 31 04:01:50 crc kubenswrapper[4827]: I0131 04:01:50.554579 4827 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/ovn-operator-controller-manager-788c46999f-782zz"] Jan 31 04:01:50 crc kubenswrapper[4827]: I0131 04:01:50.557190 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/manila-operator-controller-manager-7dd968899f-2z575" Jan 31 04:01:50 crc kubenswrapper[4827]: I0131 04:01:50.558480 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ovn-operator-controller-manager-788c46999f-782zz" Jan 31 04:01:50 crc kubenswrapper[4827]: I0131 04:01:50.565387 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-25rcz\" (UniqueName: \"kubernetes.io/projected/ea6ee14b-2acc-4894-8d63-57ad4a6a170a-kube-api-access-25rcz\") pod \"neutron-operator-controller-manager-585dbc889-wdrl7\" (UID: \"ea6ee14b-2acc-4894-8d63-57ad4a6a170a\") " pod="openstack-operators/neutron-operator-controller-manager-585dbc889-wdrl7" Jan 31 04:01:50 crc kubenswrapper[4827]: I0131 04:01:50.594563 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s576v\" (UniqueName: \"kubernetes.io/projected/7f0021a0-f8df-42fa-8ef0-34653130a6e9-kube-api-access-s576v\") pod \"mariadb-operator-controller-manager-67bf948998-dvj6j\" (UID: \"7f0021a0-f8df-42fa-8ef0-34653130a6e9\") " pod="openstack-operators/mariadb-operator-controller-manager-67bf948998-dvj6j" Jan 31 04:01:50 crc kubenswrapper[4827]: I0131 04:01:50.601481 4827 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/mariadb-operator-controller-manager-67bf948998-dvj6j" Jan 31 04:01:50 crc kubenswrapper[4827]: I0131 04:01:50.602334 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"ovn-operator-controller-manager-dockercfg-nswqf" Jan 31 04:01:50 crc kubenswrapper[4827]: I0131 04:01:50.625410 4827 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4dg9krf"] Jan 31 04:01:50 crc kubenswrapper[4827]: I0131 04:01:50.626823 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4dg9krf" Jan 31 04:01:50 crc kubenswrapper[4827]: I0131 04:01:50.628398 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p6dhf\" (UniqueName: \"kubernetes.io/projected/b3c58b9c-4561-49ae-a23c-a77a34b8cfb5-kube-api-access-p6dhf\") pod \"octavia-operator-controller-manager-6687f8d877-k7f4f\" (UID: \"b3c58b9c-4561-49ae-a23c-a77a34b8cfb5\") " pod="openstack-operators/octavia-operator-controller-manager-6687f8d877-k7f4f" Jan 31 04:01:50 crc kubenswrapper[4827]: I0131 04:01:50.628479 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-glsst\" (UniqueName: \"kubernetes.io/projected/fb454f09-c6b8-41f4-b69f-3125e8d4d79f-kube-api-access-glsst\") pod \"ovn-operator-controller-manager-788c46999f-782zz\" (UID: \"fb454f09-c6b8-41f4-b69f-3125e8d4d79f\") " pod="openstack-operators/ovn-operator-controller-manager-788c46999f-782zz" Jan 31 04:01:50 crc kubenswrapper[4827]: I0131 04:01:50.628522 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jsbj5\" (UniqueName: \"kubernetes.io/projected/8d904b59-3b07-422e-a83b-a02ac443d6eb-kube-api-access-jsbj5\") pod 
\"nova-operator-controller-manager-55bff696bd-4snkb\" (UID: \"8d904b59-3b07-422e-a83b-a02ac443d6eb\") " pod="openstack-operators/nova-operator-controller-manager-55bff696bd-4snkb" Jan 31 04:01:50 crc kubenswrapper[4827]: I0131 04:01:50.631169 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-baremetal-operator-webhook-server-cert" Jan 31 04:01:50 crc kubenswrapper[4827]: I0131 04:01:50.631854 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-baremetal-operator-controller-manager-dockercfg-84lw6" Jan 31 04:01:50 crc kubenswrapper[4827]: I0131 04:01:50.641765 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/neutron-operator-controller-manager-585dbc889-wdrl7" Jan 31 04:01:50 crc kubenswrapper[4827]: I0131 04:01:50.705063 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jsbj5\" (UniqueName: \"kubernetes.io/projected/8d904b59-3b07-422e-a83b-a02ac443d6eb-kube-api-access-jsbj5\") pod \"nova-operator-controller-manager-55bff696bd-4snkb\" (UID: \"8d904b59-3b07-422e-a83b-a02ac443d6eb\") " pod="openstack-operators/nova-operator-controller-manager-55bff696bd-4snkb" Jan 31 04:01:50 crc kubenswrapper[4827]: I0131 04:01:50.707620 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ovn-operator-controller-manager-788c46999f-782zz"] Jan 31 04:01:50 crc kubenswrapper[4827]: I0131 04:01:50.725602 4827 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/placement-operator-controller-manager-5b964cf4cd-9gs2r"] Jan 31 04:01:50 crc kubenswrapper[4827]: I0131 04:01:50.729507 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4dg9krf"] Jan 31 04:01:50 crc kubenswrapper[4827]: I0131 04:01:50.729559 4827 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/ff81629a-d048-4c5d-b3a4-b892310ceff7-cert\") pod \"openstack-baremetal-operator-controller-manager-59c4b45c4dg9krf\" (UID: \"ff81629a-d048-4c5d-b3a4-b892310ceff7\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4dg9krf" Jan 31 04:01:50 crc kubenswrapper[4827]: I0131 04:01:50.729667 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/placement-operator-controller-manager-5b964cf4cd-9gs2r" Jan 31 04:01:50 crc kubenswrapper[4827]: I0131 04:01:50.730204 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-glsst\" (UniqueName: \"kubernetes.io/projected/fb454f09-c6b8-41f4-b69f-3125e8d4d79f-kube-api-access-glsst\") pod \"ovn-operator-controller-manager-788c46999f-782zz\" (UID: \"fb454f09-c6b8-41f4-b69f-3125e8d4d79f\") " pod="openstack-operators/ovn-operator-controller-manager-788c46999f-782zz" Jan 31 04:01:50 crc kubenswrapper[4827]: I0131 04:01:50.730254 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-blvfz\" (UniqueName: \"kubernetes.io/projected/ff81629a-d048-4c5d-b3a4-b892310ceff7-kube-api-access-blvfz\") pod \"openstack-baremetal-operator-controller-manager-59c4b45c4dg9krf\" (UID: \"ff81629a-d048-4c5d-b3a4-b892310ceff7\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4dg9krf" Jan 31 04:01:50 crc kubenswrapper[4827]: I0131 04:01:50.730460 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p6dhf\" (UniqueName: \"kubernetes.io/projected/b3c58b9c-4561-49ae-a23c-a77a34b8cfb5-kube-api-access-p6dhf\") pod \"octavia-operator-controller-manager-6687f8d877-k7f4f\" (UID: \"b3c58b9c-4561-49ae-a23c-a77a34b8cfb5\") " 
pod="openstack-operators/octavia-operator-controller-manager-6687f8d877-k7f4f" Jan 31 04:01:50 crc kubenswrapper[4827]: I0131 04:01:50.732321 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"placement-operator-controller-manager-dockercfg-mbv9q" Jan 31 04:01:50 crc kubenswrapper[4827]: I0131 04:01:50.755448 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/placement-operator-controller-manager-5b964cf4cd-9gs2r"] Jan 31 04:01:50 crc kubenswrapper[4827]: I0131 04:01:50.772758 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-glsst\" (UniqueName: \"kubernetes.io/projected/fb454f09-c6b8-41f4-b69f-3125e8d4d79f-kube-api-access-glsst\") pod \"ovn-operator-controller-manager-788c46999f-782zz\" (UID: \"fb454f09-c6b8-41f4-b69f-3125e8d4d79f\") " pod="openstack-operators/ovn-operator-controller-manager-788c46999f-782zz" Jan 31 04:01:50 crc kubenswrapper[4827]: I0131 04:01:50.773612 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p6dhf\" (UniqueName: \"kubernetes.io/projected/b3c58b9c-4561-49ae-a23c-a77a34b8cfb5-kube-api-access-p6dhf\") pod \"octavia-operator-controller-manager-6687f8d877-k7f4f\" (UID: \"b3c58b9c-4561-49ae-a23c-a77a34b8cfb5\") " pod="openstack-operators/octavia-operator-controller-manager-6687f8d877-k7f4f" Jan 31 04:01:50 crc kubenswrapper[4827]: I0131 04:01:50.788422 4827 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/swift-operator-controller-manager-68fc8c869-6jhd8"] Jan 31 04:01:50 crc kubenswrapper[4827]: I0131 04:01:50.789493 4827 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/swift-operator-controller-manager-68fc8c869-6jhd8" Jan 31 04:01:50 crc kubenswrapper[4827]: I0131 04:01:50.800265 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"swift-operator-controller-manager-dockercfg-z4nzx" Jan 31 04:01:50 crc kubenswrapper[4827]: I0131 04:01:50.816422 4827 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-64b5b76f97-plj6q"] Jan 31 04:01:50 crc kubenswrapper[4827]: I0131 04:01:50.818075 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/telemetry-operator-controller-manager-64b5b76f97-plj6q" Jan 31 04:01:50 crc kubenswrapper[4827]: I0131 04:01:50.821505 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"telemetry-operator-controller-manager-dockercfg-ssx4z" Jan 31 04:01:50 crc kubenswrapper[4827]: I0131 04:01:50.832399 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/swift-operator-controller-manager-68fc8c869-6jhd8"] Jan 31 04:01:50 crc kubenswrapper[4827]: I0131 04:01:50.833152 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cqprj\" (UniqueName: \"kubernetes.io/projected/ddb4ccbd-d7ed-4c26-97c4-22ce6c38b431-kube-api-access-cqprj\") pod \"swift-operator-controller-manager-68fc8c869-6jhd8\" (UID: \"ddb4ccbd-d7ed-4c26-97c4-22ce6c38b431\") " pod="openstack-operators/swift-operator-controller-manager-68fc8c869-6jhd8" Jan 31 04:01:50 crc kubenswrapper[4827]: I0131 04:01:50.833198 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-blvfz\" (UniqueName: \"kubernetes.io/projected/ff81629a-d048-4c5d-b3a4-b892310ceff7-kube-api-access-blvfz\") pod \"openstack-baremetal-operator-controller-manager-59c4b45c4dg9krf\" (UID: \"ff81629a-d048-4c5d-b3a4-b892310ceff7\") " 
pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4dg9krf" Jan 31 04:01:50 crc kubenswrapper[4827]: I0131 04:01:50.833265 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9nmgb\" (UniqueName: \"kubernetes.io/projected/0af88c77-1c9c-4072-b0da-707bca0f4f12-kube-api-access-9nmgb\") pod \"placement-operator-controller-manager-5b964cf4cd-9gs2r\" (UID: \"0af88c77-1c9c-4072-b0da-707bca0f4f12\") " pod="openstack-operators/placement-operator-controller-manager-5b964cf4cd-9gs2r" Jan 31 04:01:50 crc kubenswrapper[4827]: I0131 04:01:50.833312 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/00f00c32-1e04-42e4-95b4-923c6b57386e-cert\") pod \"infra-operator-controller-manager-79955696d6-gcs7k\" (UID: \"00f00c32-1e04-42e4-95b4-923c6b57386e\") " pod="openstack-operators/infra-operator-controller-manager-79955696d6-gcs7k" Jan 31 04:01:50 crc kubenswrapper[4827]: I0131 04:01:50.833369 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/ff81629a-d048-4c5d-b3a4-b892310ceff7-cert\") pod \"openstack-baremetal-operator-controller-manager-59c4b45c4dg9krf\" (UID: \"ff81629a-d048-4c5d-b3a4-b892310ceff7\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4dg9krf" Jan 31 04:01:50 crc kubenswrapper[4827]: E0131 04:01:50.833510 4827 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 31 04:01:50 crc kubenswrapper[4827]: E0131 04:01:50.833554 4827 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ff81629a-d048-4c5d-b3a4-b892310ceff7-cert podName:ff81629a-d048-4c5d-b3a4-b892310ceff7 nodeName:}" failed. 
No retries permitted until 2026-01-31 04:01:51.333539676 +0000 UTC m=+904.020620125 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/ff81629a-d048-4c5d-b3a4-b892310ceff7-cert") pod "openstack-baremetal-operator-controller-manager-59c4b45c4dg9krf" (UID: "ff81629a-d048-4c5d-b3a4-b892310ceff7") : secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 31 04:01:50 crc kubenswrapper[4827]: E0131 04:01:50.834291 4827 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Jan 31 04:01:50 crc kubenswrapper[4827]: E0131 04:01:50.834322 4827 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/00f00c32-1e04-42e4-95b4-923c6b57386e-cert podName:00f00c32-1e04-42e4-95b4-923c6b57386e nodeName:}" failed. No retries permitted until 2026-01-31 04:01:51.83431243 +0000 UTC m=+904.521392879 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/00f00c32-1e04-42e4-95b4-923c6b57386e-cert") pod "infra-operator-controller-manager-79955696d6-gcs7k" (UID: "00f00c32-1e04-42e4-95b4-923c6b57386e") : secret "infra-operator-webhook-server-cert" not found Jan 31 04:01:50 crc kubenswrapper[4827]: I0131 04:01:50.865114 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-blvfz\" (UniqueName: \"kubernetes.io/projected/ff81629a-d048-4c5d-b3a4-b892310ceff7-kube-api-access-blvfz\") pod \"openstack-baremetal-operator-controller-manager-59c4b45c4dg9krf\" (UID: \"ff81629a-d048-4c5d-b3a4-b892310ceff7\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4dg9krf" Jan 31 04:01:50 crc kubenswrapper[4827]: I0131 04:01:50.866914 4827 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/test-operator-controller-manager-56f8bfcd9f-fr6qf"] Jan 31 04:01:50 crc kubenswrapper[4827]: I0131 04:01:50.867841 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/test-operator-controller-manager-56f8bfcd9f-fr6qf" Jan 31 04:01:50 crc kubenswrapper[4827]: I0131 04:01:50.870137 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"test-operator-controller-manager-dockercfg-kzrxw" Jan 31 04:01:50 crc kubenswrapper[4827]: I0131 04:01:50.904065 4827 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/watcher-operator-controller-manager-564965969-m97nw"] Jan 31 04:01:50 crc kubenswrapper[4827]: I0131 04:01:50.904968 4827 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/watcher-operator-controller-manager-564965969-m97nw" Jan 31 04:01:50 crc kubenswrapper[4827]: I0131 04:01:50.910545 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"watcher-operator-controller-manager-dockercfg-5hwg9" Jan 31 04:01:50 crc kubenswrapper[4827]: I0131 04:01:50.933331 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-64b5b76f97-plj6q"] Jan 31 04:01:50 crc kubenswrapper[4827]: I0131 04:01:50.936864 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8m5vt\" (UniqueName: \"kubernetes.io/projected/0d53929a-c249-47fa-9d02-98021a8bcf2a-kube-api-access-8m5vt\") pod \"test-operator-controller-manager-56f8bfcd9f-fr6qf\" (UID: \"0d53929a-c249-47fa-9d02-98021a8bcf2a\") " pod="openstack-operators/test-operator-controller-manager-56f8bfcd9f-fr6qf" Jan 31 04:01:50 crc kubenswrapper[4827]: I0131 04:01:50.936950 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9nmgb\" (UniqueName: \"kubernetes.io/projected/0af88c77-1c9c-4072-b0da-707bca0f4f12-kube-api-access-9nmgb\") pod \"placement-operator-controller-manager-5b964cf4cd-9gs2r\" (UID: \"0af88c77-1c9c-4072-b0da-707bca0f4f12\") " pod="openstack-operators/placement-operator-controller-manager-5b964cf4cd-9gs2r" Jan 31 04:01:50 crc kubenswrapper[4827]: I0131 04:01:50.936976 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cvwl7\" (UniqueName: \"kubernetes.io/projected/4d581cf6-c77f-4757-9091-cb1e23bfbcda-kube-api-access-cvwl7\") pod \"telemetry-operator-controller-manager-64b5b76f97-plj6q\" (UID: \"4d581cf6-c77f-4757-9091-cb1e23bfbcda\") " pod="openstack-operators/telemetry-operator-controller-manager-64b5b76f97-plj6q" Jan 31 04:01:50 crc kubenswrapper[4827]: I0131 04:01:50.937074 4827 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqprj\" (UniqueName: \"kubernetes.io/projected/ddb4ccbd-d7ed-4c26-97c4-22ce6c38b431-kube-api-access-cqprj\") pod \"swift-operator-controller-manager-68fc8c869-6jhd8\" (UID: \"ddb4ccbd-d7ed-4c26-97c4-22ce6c38b431\") " pod="openstack-operators/swift-operator-controller-manager-68fc8c869-6jhd8" Jan 31 04:01:50 crc kubenswrapper[4827]: I0131 04:01:50.937097 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-prslc\" (UniqueName: \"kubernetes.io/projected/5666901d-66a6-4282-b44c-c39a0721faa2-kube-api-access-prslc\") pod \"watcher-operator-controller-manager-564965969-m97nw\" (UID: \"5666901d-66a6-4282-b44c-c39a0721faa2\") " pod="openstack-operators/watcher-operator-controller-manager-564965969-m97nw" Jan 31 04:01:50 crc kubenswrapper[4827]: I0131 04:01:50.946260 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/test-operator-controller-manager-56f8bfcd9f-fr6qf"] Jan 31 04:01:50 crc kubenswrapper[4827]: I0131 04:01:50.957539 4827 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/nova-operator-controller-manager-55bff696bd-4snkb" Jan 31 04:01:50 crc kubenswrapper[4827]: I0131 04:01:50.967208 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/watcher-operator-controller-manager-564965969-m97nw"] Jan 31 04:01:50 crc kubenswrapper[4827]: I0131 04:01:50.985279 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cqprj\" (UniqueName: \"kubernetes.io/projected/ddb4ccbd-d7ed-4c26-97c4-22ce6c38b431-kube-api-access-cqprj\") pod \"swift-operator-controller-manager-68fc8c869-6jhd8\" (UID: \"ddb4ccbd-d7ed-4c26-97c4-22ce6c38b431\") " pod="openstack-operators/swift-operator-controller-manager-68fc8c869-6jhd8" Jan 31 04:01:50 crc kubenswrapper[4827]: I0131 04:01:50.990463 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9nmgb\" (UniqueName: \"kubernetes.io/projected/0af88c77-1c9c-4072-b0da-707bca0f4f12-kube-api-access-9nmgb\") pod \"placement-operator-controller-manager-5b964cf4cd-9gs2r\" (UID: \"0af88c77-1c9c-4072-b0da-707bca0f4f12\") " pod="openstack-operators/placement-operator-controller-manager-5b964cf4cd-9gs2r" Jan 31 04:01:50 crc kubenswrapper[4827]: I0131 04:01:50.996038 4827 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-controller-manager-794bbdbc56-fvlbd"] Jan 31 04:01:50 crc kubenswrapper[4827]: I0131 04:01:50.997288 4827 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-controller-manager-794bbdbc56-fvlbd" Jan 31 04:01:51 crc kubenswrapper[4827]: I0131 04:01:51.001358 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"metrics-server-cert" Jan 31 04:01:51 crc kubenswrapper[4827]: I0131 04:01:51.001517 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"webhook-server-cert" Jan 31 04:01:51 crc kubenswrapper[4827]: I0131 04:01:51.001648 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-controller-manager-dockercfg-xtvjh" Jan 31 04:01:51 crc kubenswrapper[4827]: I0131 04:01:51.003663 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-manager-794bbdbc56-fvlbd"] Jan 31 04:01:51 crc kubenswrapper[4827]: I0131 04:01:51.008468 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/octavia-operator-controller-manager-6687f8d877-k7f4f" Jan 31 04:01:51 crc kubenswrapper[4827]: I0131 04:01:51.028665 4827 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-zdjjp"] Jan 31 04:01:51 crc kubenswrapper[4827]: I0131 04:01:51.030825 4827 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-zdjjp" Jan 31 04:01:51 crc kubenswrapper[4827]: I0131 04:01:51.037679 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/a7d7d7a5-296a-43d3-8c15-906a257549c2-webhook-certs\") pod \"openstack-operator-controller-manager-794bbdbc56-fvlbd\" (UID: \"a7d7d7a5-296a-43d3-8c15-906a257549c2\") " pod="openstack-operators/openstack-operator-controller-manager-794bbdbc56-fvlbd" Jan 31 04:01:51 crc kubenswrapper[4827]: I0131 04:01:51.037726 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-prslc\" (UniqueName: \"kubernetes.io/projected/5666901d-66a6-4282-b44c-c39a0721faa2-kube-api-access-prslc\") pod \"watcher-operator-controller-manager-564965969-m97nw\" (UID: \"5666901d-66a6-4282-b44c-c39a0721faa2\") " pod="openstack-operators/watcher-operator-controller-manager-564965969-m97nw" Jan 31 04:01:51 crc kubenswrapper[4827]: I0131 04:01:51.037753 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8m5vt\" (UniqueName: \"kubernetes.io/projected/0d53929a-c249-47fa-9d02-98021a8bcf2a-kube-api-access-8m5vt\") pod \"test-operator-controller-manager-56f8bfcd9f-fr6qf\" (UID: \"0d53929a-c249-47fa-9d02-98021a8bcf2a\") " pod="openstack-operators/test-operator-controller-manager-56f8bfcd9f-fr6qf" Jan 31 04:01:51 crc kubenswrapper[4827]: I0131 04:01:51.037776 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/a7d7d7a5-296a-43d3-8c15-906a257549c2-metrics-certs\") pod \"openstack-operator-controller-manager-794bbdbc56-fvlbd\" (UID: \"a7d7d7a5-296a-43d3-8c15-906a257549c2\") " pod="openstack-operators/openstack-operator-controller-manager-794bbdbc56-fvlbd" Jan 31 04:01:51 crc kubenswrapper[4827]: I0131 
04:01:51.037796 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cvwl7\" (UniqueName: \"kubernetes.io/projected/4d581cf6-c77f-4757-9091-cb1e23bfbcda-kube-api-access-cvwl7\") pod \"telemetry-operator-controller-manager-64b5b76f97-plj6q\" (UID: \"4d581cf6-c77f-4757-9091-cb1e23bfbcda\") " pod="openstack-operators/telemetry-operator-controller-manager-64b5b76f97-plj6q" Jan 31 04:01:51 crc kubenswrapper[4827]: I0131 04:01:51.037856 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-79sxg\" (UniqueName: \"kubernetes.io/projected/a7d7d7a5-296a-43d3-8c15-906a257549c2-kube-api-access-79sxg\") pod \"openstack-operator-controller-manager-794bbdbc56-fvlbd\" (UID: \"a7d7d7a5-296a-43d3-8c15-906a257549c2\") " pod="openstack-operators/openstack-operator-controller-manager-794bbdbc56-fvlbd" Jan 31 04:01:51 crc kubenswrapper[4827]: I0131 04:01:51.038424 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"rabbitmq-cluster-operator-controller-manager-dockercfg-99xdq" Jan 31 04:01:51 crc kubenswrapper[4827]: I0131 04:01:51.042002 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-zdjjp"] Jan 31 04:01:51 crc kubenswrapper[4827]: I0131 04:01:51.055300 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cvwl7\" (UniqueName: \"kubernetes.io/projected/4d581cf6-c77f-4757-9091-cb1e23bfbcda-kube-api-access-cvwl7\") pod \"telemetry-operator-controller-manager-64b5b76f97-plj6q\" (UID: \"4d581cf6-c77f-4757-9091-cb1e23bfbcda\") " pod="openstack-operators/telemetry-operator-controller-manager-64b5b76f97-plj6q" Jan 31 04:01:51 crc kubenswrapper[4827]: I0131 04:01:51.067282 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-prslc\" (UniqueName: 
\"kubernetes.io/projected/5666901d-66a6-4282-b44c-c39a0721faa2-kube-api-access-prslc\") pod \"watcher-operator-controller-manager-564965969-m97nw\" (UID: \"5666901d-66a6-4282-b44c-c39a0721faa2\") " pod="openstack-operators/watcher-operator-controller-manager-564965969-m97nw" Jan 31 04:01:51 crc kubenswrapper[4827]: I0131 04:01:51.067874 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8m5vt\" (UniqueName: \"kubernetes.io/projected/0d53929a-c249-47fa-9d02-98021a8bcf2a-kube-api-access-8m5vt\") pod \"test-operator-controller-manager-56f8bfcd9f-fr6qf\" (UID: \"0d53929a-c249-47fa-9d02-98021a8bcf2a\") " pod="openstack-operators/test-operator-controller-manager-56f8bfcd9f-fr6qf" Jan 31 04:01:51 crc kubenswrapper[4827]: I0131 04:01:51.069869 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ovn-operator-controller-manager-788c46999f-782zz" Jan 31 04:01:51 crc kubenswrapper[4827]: I0131 04:01:51.136744 4827 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/placement-operator-controller-manager-5b964cf4cd-9gs2r" Jan 31 04:01:51 crc kubenswrapper[4827]: I0131 04:01:51.139107 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/a7d7d7a5-296a-43d3-8c15-906a257549c2-webhook-certs\") pod \"openstack-operator-controller-manager-794bbdbc56-fvlbd\" (UID: \"a7d7d7a5-296a-43d3-8c15-906a257549c2\") " pod="openstack-operators/openstack-operator-controller-manager-794bbdbc56-fvlbd" Jan 31 04:01:51 crc kubenswrapper[4827]: I0131 04:01:51.139232 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/a7d7d7a5-296a-43d3-8c15-906a257549c2-metrics-certs\") pod \"openstack-operator-controller-manager-794bbdbc56-fvlbd\" (UID: \"a7d7d7a5-296a-43d3-8c15-906a257549c2\") " pod="openstack-operators/openstack-operator-controller-manager-794bbdbc56-fvlbd" Jan 31 04:01:51 crc kubenswrapper[4827]: E0131 04:01:51.139344 4827 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Jan 31 04:01:51 crc kubenswrapper[4827]: E0131 04:01:51.139407 4827 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a7d7d7a5-296a-43d3-8c15-906a257549c2-webhook-certs podName:a7d7d7a5-296a-43d3-8c15-906a257549c2 nodeName:}" failed. No retries permitted until 2026-01-31 04:01:51.639386823 +0000 UTC m=+904.326467272 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/a7d7d7a5-296a-43d3-8c15-906a257549c2-webhook-certs") pod "openstack-operator-controller-manager-794bbdbc56-fvlbd" (UID: "a7d7d7a5-296a-43d3-8c15-906a257549c2") : secret "webhook-server-cert" not found Jan 31 04:01:51 crc kubenswrapper[4827]: E0131 04:01:51.139687 4827 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Jan 31 04:01:51 crc kubenswrapper[4827]: E0131 04:01:51.139716 4827 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a7d7d7a5-296a-43d3-8c15-906a257549c2-metrics-certs podName:a7d7d7a5-296a-43d3-8c15-906a257549c2 nodeName:}" failed. No retries permitted until 2026-01-31 04:01:51.639708173 +0000 UTC m=+904.326788622 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/a7d7d7a5-296a-43d3-8c15-906a257549c2-metrics-certs") pod "openstack-operator-controller-manager-794bbdbc56-fvlbd" (UID: "a7d7d7a5-296a-43d3-8c15-906a257549c2") : secret "metrics-server-cert" not found Jan 31 04:01:51 crc kubenswrapper[4827]: I0131 04:01:51.140554 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g4x6p\" (UniqueName: \"kubernetes.io/projected/0d85c53f-5192-4621-86cc-d9403773713b-kube-api-access-g4x6p\") pod \"rabbitmq-cluster-operator-manager-668c99d594-zdjjp\" (UID: \"0d85c53f-5192-4621-86cc-d9403773713b\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-zdjjp" Jan 31 04:01:51 crc kubenswrapper[4827]: I0131 04:01:51.140656 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-79sxg\" (UniqueName: \"kubernetes.io/projected/a7d7d7a5-296a-43d3-8c15-906a257549c2-kube-api-access-79sxg\") pod \"openstack-operator-controller-manager-794bbdbc56-fvlbd\" (UID: 
\"a7d7d7a5-296a-43d3-8c15-906a257549c2\") " pod="openstack-operators/openstack-operator-controller-manager-794bbdbc56-fvlbd" Jan 31 04:01:51 crc kubenswrapper[4827]: I0131 04:01:51.158993 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/swift-operator-controller-manager-68fc8c869-6jhd8" Jan 31 04:01:51 crc kubenswrapper[4827]: I0131 04:01:51.170047 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-79sxg\" (UniqueName: \"kubernetes.io/projected/a7d7d7a5-296a-43d3-8c15-906a257549c2-kube-api-access-79sxg\") pod \"openstack-operator-controller-manager-794bbdbc56-fvlbd\" (UID: \"a7d7d7a5-296a-43d3-8c15-906a257549c2\") " pod="openstack-operators/openstack-operator-controller-manager-794bbdbc56-fvlbd" Jan 31 04:01:51 crc kubenswrapper[4827]: I0131 04:01:51.183394 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/telemetry-operator-controller-manager-64b5b76f97-plj6q" Jan 31 04:01:51 crc kubenswrapper[4827]: I0131 04:01:51.230744 4827 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/test-operator-controller-manager-56f8bfcd9f-fr6qf" Jan 31 04:01:51 crc kubenswrapper[4827]: I0131 04:01:51.242854 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g4x6p\" (UniqueName: \"kubernetes.io/projected/0d85c53f-5192-4621-86cc-d9403773713b-kube-api-access-g4x6p\") pod \"rabbitmq-cluster-operator-manager-668c99d594-zdjjp\" (UID: \"0d85c53f-5192-4621-86cc-d9403773713b\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-zdjjp" Jan 31 04:01:51 crc kubenswrapper[4827]: I0131 04:01:51.265858 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g4x6p\" (UniqueName: \"kubernetes.io/projected/0d85c53f-5192-4621-86cc-d9403773713b-kube-api-access-g4x6p\") pod \"rabbitmq-cluster-operator-manager-668c99d594-zdjjp\" (UID: \"0d85c53f-5192-4621-86cc-d9403773713b\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-zdjjp" Jan 31 04:01:51 crc kubenswrapper[4827]: I0131 04:01:51.313148 4827 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-zdjjp" Jan 31 04:01:51 crc kubenswrapper[4827]: I0131 04:01:51.343955 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/ff81629a-d048-4c5d-b3a4-b892310ceff7-cert\") pod \"openstack-baremetal-operator-controller-manager-59c4b45c4dg9krf\" (UID: \"ff81629a-d048-4c5d-b3a4-b892310ceff7\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4dg9krf" Jan 31 04:01:51 crc kubenswrapper[4827]: E0131 04:01:51.344204 4827 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 31 04:01:51 crc kubenswrapper[4827]: E0131 04:01:51.344271 4827 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ff81629a-d048-4c5d-b3a4-b892310ceff7-cert podName:ff81629a-d048-4c5d-b3a4-b892310ceff7 nodeName:}" failed. No retries permitted until 2026-01-31 04:01:52.344252604 +0000 UTC m=+905.031333053 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/ff81629a-d048-4c5d-b3a4-b892310ceff7-cert") pod "openstack-baremetal-operator-controller-manager-59c4b45c4dg9krf" (UID: "ff81629a-d048-4c5d-b3a4-b892310ceff7") : secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 31 04:01:51 crc kubenswrapper[4827]: I0131 04:01:51.356472 4827 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/watcher-operator-controller-manager-564965969-m97nw" Jan 31 04:01:51 crc kubenswrapper[4827]: I0131 04:01:51.466706 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/heat-operator-controller-manager-69d6db494d-hprpc"] Jan 31 04:01:51 crc kubenswrapper[4827]: I0131 04:01:51.476781 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ironic-operator-controller-manager-5f4b8bd54d-r2ljw"] Jan 31 04:01:51 crc kubenswrapper[4827]: W0131 04:01:51.478552 4827 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podfe50fb01_1097_4ac9_81ae_fdfc96842f68.slice/crio-7b55320df3af3009f71798b619e793843e3659af4a2b81a1ae890c3c25460779 WatchSource:0}: Error finding container 7b55320df3af3009f71798b619e793843e3659af4a2b81a1ae890c3c25460779: Status 404 returned error can't find the container with id 7b55320df3af3009f71798b619e793843e3659af4a2b81a1ae890c3c25460779 Jan 31 04:01:51 crc kubenswrapper[4827]: W0131 04:01:51.497331 4827 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podadfd32af_9db4_468a_bac1_d33f11930922.slice/crio-4233843ac9c1d8f196447bf7ebee2bd38825e0fd0458214880c0599371694ea7 WatchSource:0}: Error finding container 4233843ac9c1d8f196447bf7ebee2bd38825e0fd0458214880c0599371694ea7: Status 404 returned error can't find the container with id 4233843ac9c1d8f196447bf7ebee2bd38825e0fd0458214880c0599371694ea7 Jan 31 04:01:51 crc kubenswrapper[4827]: I0131 04:01:51.591154 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-69d6db494d-hprpc" event={"ID":"fe50fb01-1097-4ac9-81ae-fdfc96842f68","Type":"ContainerStarted","Data":"7b55320df3af3009f71798b619e793843e3659af4a2b81a1ae890c3c25460779"} Jan 31 04:01:51 crc kubenswrapper[4827]: I0131 04:01:51.592139 4827 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-5f4b8bd54d-r2ljw" event={"ID":"adfd32af-9db4-468a-bac1-d33f11930922","Type":"ContainerStarted","Data":"4233843ac9c1d8f196447bf7ebee2bd38825e0fd0458214880c0599371694ea7"} Jan 31 04:01:51 crc kubenswrapper[4827]: I0131 04:01:51.664820 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/a7d7d7a5-296a-43d3-8c15-906a257549c2-webhook-certs\") pod \"openstack-operator-controller-manager-794bbdbc56-fvlbd\" (UID: \"a7d7d7a5-296a-43d3-8c15-906a257549c2\") " pod="openstack-operators/openstack-operator-controller-manager-794bbdbc56-fvlbd" Jan 31 04:01:51 crc kubenswrapper[4827]: I0131 04:01:51.664873 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/a7d7d7a5-296a-43d3-8c15-906a257549c2-metrics-certs\") pod \"openstack-operator-controller-manager-794bbdbc56-fvlbd\" (UID: \"a7d7d7a5-296a-43d3-8c15-906a257549c2\") " pod="openstack-operators/openstack-operator-controller-manager-794bbdbc56-fvlbd" Jan 31 04:01:51 crc kubenswrapper[4827]: E0131 04:01:51.665071 4827 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Jan 31 04:01:51 crc kubenswrapper[4827]: E0131 04:01:51.665116 4827 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a7d7d7a5-296a-43d3-8c15-906a257549c2-metrics-certs podName:a7d7d7a5-296a-43d3-8c15-906a257549c2 nodeName:}" failed. No retries permitted until 2026-01-31 04:01:52.665101803 +0000 UTC m=+905.352182252 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/a7d7d7a5-296a-43d3-8c15-906a257549c2-metrics-certs") pod "openstack-operator-controller-manager-794bbdbc56-fvlbd" (UID: "a7d7d7a5-296a-43d3-8c15-906a257549c2") : secret "metrics-server-cert" not found Jan 31 04:01:51 crc kubenswrapper[4827]: E0131 04:01:51.665428 4827 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Jan 31 04:01:51 crc kubenswrapper[4827]: E0131 04:01:51.665457 4827 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a7d7d7a5-296a-43d3-8c15-906a257549c2-webhook-certs podName:a7d7d7a5-296a-43d3-8c15-906a257549c2 nodeName:}" failed. No retries permitted until 2026-01-31 04:01:52.665448914 +0000 UTC m=+905.352529363 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/a7d7d7a5-296a-43d3-8c15-906a257549c2-webhook-certs") pod "openstack-operator-controller-manager-794bbdbc56-fvlbd" (UID: "a7d7d7a5-296a-43d3-8c15-906a257549c2") : secret "webhook-server-cert" not found Jan 31 04:01:51 crc kubenswrapper[4827]: I0131 04:01:51.850968 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/neutron-operator-controller-manager-585dbc889-wdrl7"] Jan 31 04:01:51 crc kubenswrapper[4827]: I0131 04:01:51.867134 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/00f00c32-1e04-42e4-95b4-923c6b57386e-cert\") pod \"infra-operator-controller-manager-79955696d6-gcs7k\" (UID: \"00f00c32-1e04-42e4-95b4-923c6b57386e\") " pod="openstack-operators/infra-operator-controller-manager-79955696d6-gcs7k" Jan 31 04:01:51 crc kubenswrapper[4827]: E0131 04:01:51.867397 4827 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Jan 31 04:01:51 crc 
kubenswrapper[4827]: E0131 04:01:51.867461 4827 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/00f00c32-1e04-42e4-95b4-923c6b57386e-cert podName:00f00c32-1e04-42e4-95b4-923c6b57386e nodeName:}" failed. No retries permitted until 2026-01-31 04:01:53.867442737 +0000 UTC m=+906.554523186 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/00f00c32-1e04-42e4-95b4-923c6b57386e-cert") pod "infra-operator-controller-manager-79955696d6-gcs7k" (UID: "00f00c32-1e04-42e4-95b4-923c6b57386e") : secret "infra-operator-webhook-server-cert" not found Jan 31 04:01:51 crc kubenswrapper[4827]: I0131 04:01:51.878068 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/keystone-operator-controller-manager-84f48565d4-8hvrl"] Jan 31 04:01:51 crc kubenswrapper[4827]: I0131 04:01:51.892309 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/barbican-operator-controller-manager-7b6c4d8c5f-k469j"] Jan 31 04:01:51 crc kubenswrapper[4827]: I0131 04:01:51.902103 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/horizon-operator-controller-manager-5fb775575f-wwvbx"] Jan 31 04:01:51 crc kubenswrapper[4827]: I0131 04:01:51.908443 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/designate-operator-controller-manager-6d9697b7f4-9k4dq"] Jan 31 04:01:51 crc kubenswrapper[4827]: I0131 04:01:51.928245 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/manila-operator-controller-manager-7dd968899f-2z575"] Jan 31 04:01:51 crc kubenswrapper[4827]: I0131 04:01:51.945436 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/placement-operator-controller-manager-5b964cf4cd-9gs2r"] Jan 31 04:01:51 crc kubenswrapper[4827]: I0131 04:01:51.951387 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openstack-operators/ovn-operator-controller-manager-788c46999f-782zz"] Jan 31 04:01:51 crc kubenswrapper[4827]: W0131 04:01:51.953380 4827 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podfb454f09_c6b8_41f4_b69f_3125e8d4d79f.slice/crio-c08a8b3d9c1f21c0044fc876da673563d7ae368091deee60c0d7de3d245ffb0d WatchSource:0}: Error finding container c08a8b3d9c1f21c0044fc876da673563d7ae368091deee60c0d7de3d245ffb0d: Status 404 returned error can't find the container with id c08a8b3d9c1f21c0044fc876da673563d7ae368091deee60c0d7de3d245ffb0d Jan 31 04:01:51 crc kubenswrapper[4827]: I0131 04:01:51.971687 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/nova-operator-controller-manager-55bff696bd-4snkb"] Jan 31 04:01:51 crc kubenswrapper[4827]: I0131 04:01:51.977453 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/cinder-operator-controller-manager-7489d7c99b-75s7f"] Jan 31 04:01:52 crc kubenswrapper[4827]: I0131 04:01:52.134959 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-67bf948998-dvj6j"] Jan 31 04:01:52 crc kubenswrapper[4827]: I0131 04:01:52.142038 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/octavia-operator-controller-manager-6687f8d877-k7f4f"] Jan 31 04:01:52 crc kubenswrapper[4827]: W0131 04:01:52.144322 4827 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1ee58492_27e7_446f_84c8_c3b0b74884fa.slice/crio-4ffb4892df4d2135ffbd67734a30fa8e64f07d50f69d568d0b947e7a03c1797b WatchSource:0}: Error finding container 4ffb4892df4d2135ffbd67734a30fa8e64f07d50f69d568d0b947e7a03c1797b: Status 404 returned error can't find the container with id 4ffb4892df4d2135ffbd67734a30fa8e64f07d50f69d568d0b947e7a03c1797b Jan 31 04:01:52 crc kubenswrapper[4827]: I0131 
04:01:52.161409 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/glance-operator-controller-manager-8886f4c47-zdtlh"] Jan 31 04:01:52 crc kubenswrapper[4827]: I0131 04:01:52.181927 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-64b5b76f97-plj6q"] Jan 31 04:01:52 crc kubenswrapper[4827]: I0131 04:01:52.197048 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/test-operator-controller-manager-56f8bfcd9f-fr6qf"] Jan 31 04:01:52 crc kubenswrapper[4827]: E0131 04:01:52.203582 4827 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/glance-operator@sha256:1f593e8d49d02b6484c89632192ae54771675c54fbd8426e3675b8e20ecfd7c4,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-x2b2r,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod glance-operator-controller-manager-8886f4c47-zdtlh_openstack-operators(c0c17a5a-5f0d-421e-b29c-56c4f2626a7b): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Jan 31 04:01:52 crc kubenswrapper[4827]: E0131 04:01:52.205535 4827 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/glance-operator-controller-manager-8886f4c47-zdtlh" podUID="c0c17a5a-5f0d-421e-b29c-56c4f2626a7b" Jan 31 04:01:52 crc kubenswrapper[4827]: I0131 04:01:52.206547 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/swift-operator-controller-manager-68fc8c869-6jhd8"] Jan 31 04:01:52 crc kubenswrapper[4827]: I0131 04:01:52.213653 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openstack-operators/watcher-operator-controller-manager-564965969-m97nw"] Jan 31 04:01:52 crc kubenswrapper[4827]: I0131 04:01:52.219260 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-zdjjp"] Jan 31 04:01:52 crc kubenswrapper[4827]: E0131 04:01:52.219700 4827 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/octavia-operator@sha256:e6f2f361f1dcbb321407a5884951e16ff96e7b88942b10b548f27ad4de14a0be,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-p6dhf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod octavia-operator-controller-manager-6687f8d877-k7f4f_openstack-operators(b3c58b9c-4561-49ae-a23c-a77a34b8cfb5): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Jan 31 04:01:52 crc kubenswrapper[4827]: E0131 04:01:52.219764 4827 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/telemetry-operator@sha256:f9bf288cd0c13912404027a58ea3b90d4092b641e8265adc5c88644ea7fe901a,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-cvwl7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod telemetry-operator-controller-manager-64b5b76f97-plj6q_openstack-operators(4d581cf6-c77f-4757-9091-cb1e23bfbcda): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Jan 31 04:01:52 crc kubenswrapper[4827]: W0131 04:01:52.220834 4827 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0d53929a_c249_47fa_9d02_98021a8bcf2a.slice/crio-27d1a8502ac48cf4d071cdd38805e03b9f58f2c74dd58ef7bb2986402c89cf29 WatchSource:0}: Error finding container 
27d1a8502ac48cf4d071cdd38805e03b9f58f2c74dd58ef7bb2986402c89cf29: Status 404 returned error can't find the container with id 27d1a8502ac48cf4d071cdd38805e03b9f58f2c74dd58ef7bb2986402c89cf29 Jan 31 04:01:52 crc kubenswrapper[4827]: E0131 04:01:52.220904 4827 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/telemetry-operator-controller-manager-64b5b76f97-plj6q" podUID="4d581cf6-c77f-4757-9091-cb1e23bfbcda" Jan 31 04:01:52 crc kubenswrapper[4827]: E0131 04:01:52.220956 4827 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/octavia-operator-controller-manager-6687f8d877-k7f4f" podUID="b3c58b9c-4561-49ae-a23c-a77a34b8cfb5" Jan 31 04:01:52 crc kubenswrapper[4827]: W0131 04:01:52.221507 4827 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0d85c53f_5192_4621_86cc_d9403773713b.slice/crio-17f947b46ec23947284d6dd71c0a7b9a28fa6de572fa624285dac695dd33ffaf WatchSource:0}: Error finding container 17f947b46ec23947284d6dd71c0a7b9a28fa6de572fa624285dac695dd33ffaf: Status 404 returned error can't find the container with id 17f947b46ec23947284d6dd71c0a7b9a28fa6de572fa624285dac695dd33ffaf Jan 31 04:01:52 crc kubenswrapper[4827]: W0131 04:01:52.229690 4827 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podddb4ccbd_d7ed_4c26_97c4_22ce6c38b431.slice/crio-db1f206ba34a8946d2d5048d87a6caeda9a05b129a51557afd50989ecfd88d7e WatchSource:0}: Error finding container db1f206ba34a8946d2d5048d87a6caeda9a05b129a51557afd50989ecfd88d7e: Status 404 returned error can't find the container with id db1f206ba34a8946d2d5048d87a6caeda9a05b129a51557afd50989ecfd88d7e Jan 31 04:01:52 crc kubenswrapper[4827]: E0131 
04:01:52.244049 4827 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:operator,Image:quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2,Command:[/manager],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:metrics,HostPort:0,ContainerPort:9782,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:OPERATOR_NAMESPACE,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{200 -3} {} 200m DecimalSI},memory: {{524288000 0} {} 500Mi BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-g4x6p,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000660000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 
rabbitmq-cluster-operator-manager-668c99d594-zdjjp_openstack-operators(0d85c53f-5192-4621-86cc-d9403773713b): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Jan 31 04:01:52 crc kubenswrapper[4827]: E0131 04:01:52.245287 4827 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-zdjjp" podUID="0d85c53f-5192-4621-86cc-d9403773713b" Jan 31 04:01:52 crc kubenswrapper[4827]: E0131 04:01:52.248871 4827 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/watcher-operator@sha256:7869203f6f97de780368d507636031090fed3b658d2f7771acbd4481bdfc870b,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-prslc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod watcher-operator-controller-manager-564965969-m97nw_openstack-operators(5666901d-66a6-4282-b44c-c39a0721faa2): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Jan 31 04:01:52 crc kubenswrapper[4827]: E0131 04:01:52.249442 4827 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/swift-operator@sha256:42ad717de1b82267d244b016e5491a5b66a5c3deb6b8c2906a379e1296a2c382,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 
--metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-cqprj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000660000,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod swift-operator-controller-manager-68fc8c869-6jhd8_openstack-operators(ddb4ccbd-d7ed-4c26-97c4-22ce6c38b431): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Jan 31 04:01:52 crc kubenswrapper[4827]: E0131 04:01:52.250419 4827 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/watcher-operator-controller-manager-564965969-m97nw" podUID="5666901d-66a6-4282-b44c-c39a0721faa2" Jan 31 04:01:52 crc kubenswrapper[4827]: E0131 04:01:52.251080 4827 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/swift-operator-controller-manager-68fc8c869-6jhd8" podUID="ddb4ccbd-d7ed-4c26-97c4-22ce6c38b431" Jan 31 04:01:52 crc kubenswrapper[4827]: E0131 04:01:52.274261 4827 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/test-operator@sha256:3e01e99d3ca1b6c20b1bb015b00cfcbffc584f22a93dc6fe4019d63b813c0241,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 
--metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-8m5vt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod test-operator-controller-manager-56f8bfcd9f-fr6qf_openstack-operators(0d53929a-c249-47fa-9d02-98021a8bcf2a): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Jan 31 04:01:52 crc kubenswrapper[4827]: E0131 04:01:52.275528 4827 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/test-operator-controller-manager-56f8bfcd9f-fr6qf" podUID="0d53929a-c249-47fa-9d02-98021a8bcf2a" Jan 31 04:01:52 crc kubenswrapper[4827]: I0131 04:01:52.384682 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/ff81629a-d048-4c5d-b3a4-b892310ceff7-cert\") pod \"openstack-baremetal-operator-controller-manager-59c4b45c4dg9krf\" (UID: \"ff81629a-d048-4c5d-b3a4-b892310ceff7\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4dg9krf" Jan 31 04:01:52 crc kubenswrapper[4827]: E0131 04:01:52.384868 4827 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 31 
04:01:52 crc kubenswrapper[4827]: E0131 04:01:52.385001 4827 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ff81629a-d048-4c5d-b3a4-b892310ceff7-cert podName:ff81629a-d048-4c5d-b3a4-b892310ceff7 nodeName:}" failed. No retries permitted until 2026-01-31 04:01:54.384961174 +0000 UTC m=+907.072041623 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/ff81629a-d048-4c5d-b3a4-b892310ceff7-cert") pod "openstack-baremetal-operator-controller-manager-59c4b45c4dg9krf" (UID: "ff81629a-d048-4c5d-b3a4-b892310ceff7") : secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 31 04:01:52 crc kubenswrapper[4827]: I0131 04:01:52.605748 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-788c46999f-782zz" event={"ID":"fb454f09-c6b8-41f4-b69f-3125e8d4d79f","Type":"ContainerStarted","Data":"c08a8b3d9c1f21c0044fc876da673563d7ae368091deee60c0d7de3d245ffb0d"} Jan 31 04:01:52 crc kubenswrapper[4827]: I0131 04:01:52.626143 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-67bf948998-dvj6j" event={"ID":"7f0021a0-f8df-42fa-8ef0-34653130a6e9","Type":"ContainerStarted","Data":"15b96a1fc7d3b68582d60710eb559374b8c40ec48f1e416f4342efac3b62308d"} Jan 31 04:01:52 crc kubenswrapper[4827]: I0131 04:01:52.627563 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-6687f8d877-k7f4f" event={"ID":"b3c58b9c-4561-49ae-a23c-a77a34b8cfb5","Type":"ContainerStarted","Data":"62c114a854cc9e07f3dd7086330a7de54119780b5c610c70e5ccae20d72814ec"} Jan 31 04:01:52 crc kubenswrapper[4827]: E0131 04:01:52.628795 4827 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.io/openstack-k8s-operators/octavia-operator@sha256:e6f2f361f1dcbb321407a5884951e16ff96e7b88942b10b548f27ad4de14a0be\\\"\"" pod="openstack-operators/octavia-operator-controller-manager-6687f8d877-k7f4f" podUID="b3c58b9c-4561-49ae-a23c-a77a34b8cfb5" Jan 31 04:01:52 crc kubenswrapper[4827]: I0131 04:01:52.629831 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-56f8bfcd9f-fr6qf" event={"ID":"0d53929a-c249-47fa-9d02-98021a8bcf2a","Type":"ContainerStarted","Data":"27d1a8502ac48cf4d071cdd38805e03b9f58f2c74dd58ef7bb2986402c89cf29"} Jan 31 04:01:52 crc kubenswrapper[4827]: I0131 04:01:52.634084 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-7b6c4d8c5f-k469j" event={"ID":"1ee58492-27e7-446f-84c8-c3b0b74884fa","Type":"ContainerStarted","Data":"4ffb4892df4d2135ffbd67734a30fa8e64f07d50f69d568d0b947e7a03c1797b"} Jan 31 04:01:52 crc kubenswrapper[4827]: E0131 04:01:52.634105 4827 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/test-operator@sha256:3e01e99d3ca1b6c20b1bb015b00cfcbffc584f22a93dc6fe4019d63b813c0241\\\"\"" pod="openstack-operators/test-operator-controller-manager-56f8bfcd9f-fr6qf" podUID="0d53929a-c249-47fa-9d02-98021a8bcf2a" Jan 31 04:01:52 crc kubenswrapper[4827]: I0131 04:01:52.635217 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-7dd968899f-2z575" event={"ID":"fe5adffe-e198-4d4f-815d-02333b3a1853","Type":"ContainerStarted","Data":"938feceaccdc1a66b8cfcbc70f7c512bef6ec83cf9936603fe25192aadb1444a"} Jan 31 04:01:52 crc kubenswrapper[4827]: I0131 04:01:52.636085 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-68fc8c869-6jhd8" 
event={"ID":"ddb4ccbd-d7ed-4c26-97c4-22ce6c38b431","Type":"ContainerStarted","Data":"db1f206ba34a8946d2d5048d87a6caeda9a05b129a51557afd50989ecfd88d7e"} Jan 31 04:01:52 crc kubenswrapper[4827]: E0131 04:01:52.637343 4827 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/swift-operator@sha256:42ad717de1b82267d244b016e5491a5b66a5c3deb6b8c2906a379e1296a2c382\\\"\"" pod="openstack-operators/swift-operator-controller-manager-68fc8c869-6jhd8" podUID="ddb4ccbd-d7ed-4c26-97c4-22ce6c38b431" Jan 31 04:01:52 crc kubenswrapper[4827]: I0131 04:01:52.638002 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-7489d7c99b-75s7f" event={"ID":"74e68a52-8f24-4ff0-a160-8a1ad61238c9","Type":"ContainerStarted","Data":"a8009a0ac85fcd812542da6a41554001aed151999bc5722fb482ed50a16bc28e"} Jan 31 04:01:52 crc kubenswrapper[4827]: I0131 04:01:52.641187 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-5fb775575f-wwvbx" event={"ID":"bbf882c7-842b-46eb-a459-bb628db2598f","Type":"ContainerStarted","Data":"0595f2dd1a2248ac15b5f0db142fd8d676b8998baa71e4375d784e9dd69e7137"} Jan 31 04:01:52 crc kubenswrapper[4827]: I0131 04:01:52.642497 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-564965969-m97nw" event={"ID":"5666901d-66a6-4282-b44c-c39a0721faa2","Type":"ContainerStarted","Data":"930abaecb5a63dbff3506f8a1a1ac21b8ffa7b3893d1c6f4c7036f461968fd96"} Jan 31 04:01:52 crc kubenswrapper[4827]: E0131 04:01:52.646995 4827 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.io/openstack-k8s-operators/watcher-operator@sha256:7869203f6f97de780368d507636031090fed3b658d2f7771acbd4481bdfc870b\\\"\"" pod="openstack-operators/watcher-operator-controller-manager-564965969-m97nw" podUID="5666901d-66a6-4282-b44c-c39a0721faa2" Jan 31 04:01:52 crc kubenswrapper[4827]: I0131 04:01:52.649641 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-5b964cf4cd-9gs2r" event={"ID":"0af88c77-1c9c-4072-b0da-707bca0f4f12","Type":"ContainerStarted","Data":"c424eaf6499e4591abcc1526767c41ee743e33f5ff2327a98c3ec439e4f62b77"} Jan 31 04:01:52 crc kubenswrapper[4827]: I0131 04:01:52.654791 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-55bff696bd-4snkb" event={"ID":"8d904b59-3b07-422e-a83b-a02ac443d6eb","Type":"ContainerStarted","Data":"2a3999f084807646b298b902f6dc84298af725dcde4944f1f5800dc6eb1aa6f0"} Jan 31 04:01:52 crc kubenswrapper[4827]: I0131 04:01:52.656368 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-8886f4c47-zdtlh" event={"ID":"c0c17a5a-5f0d-421e-b29c-56c4f2626a7b","Type":"ContainerStarted","Data":"a458d214e40dd00051b22f17ddc2322138752b0d51f8560a392563443e882e44"} Jan 31 04:01:52 crc kubenswrapper[4827]: I0131 04:01:52.658040 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-585dbc889-wdrl7" event={"ID":"ea6ee14b-2acc-4894-8d63-57ad4a6a170a","Type":"ContainerStarted","Data":"43aef8c97672d350d86b69cba27a83f8cb23dc70c9d0c872d04da5e614e9a102"} Jan 31 04:01:52 crc kubenswrapper[4827]: E0131 04:01:52.658047 4827 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/glance-operator@sha256:1f593e8d49d02b6484c89632192ae54771675c54fbd8426e3675b8e20ecfd7c4\\\"\"" 
pod="openstack-operators/glance-operator-controller-manager-8886f4c47-zdtlh" podUID="c0c17a5a-5f0d-421e-b29c-56c4f2626a7b" Jan 31 04:01:52 crc kubenswrapper[4827]: I0131 04:01:52.667129 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/designate-operator-controller-manager-6d9697b7f4-9k4dq" event={"ID":"60792734-916b-4bb7-a17f-45a03be036c8","Type":"ContainerStarted","Data":"e311643e2f13e1f2f331acd16a1bddaf2704c4d24a74d588a130c8aa95266c68"} Jan 31 04:01:52 crc kubenswrapper[4827]: I0131 04:01:52.676041 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-zdjjp" event={"ID":"0d85c53f-5192-4621-86cc-d9403773713b","Type":"ContainerStarted","Data":"17f947b46ec23947284d6dd71c0a7b9a28fa6de572fa624285dac695dd33ffaf"} Jan 31 04:01:52 crc kubenswrapper[4827]: I0131 04:01:52.695113 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/a7d7d7a5-296a-43d3-8c15-906a257549c2-webhook-certs\") pod \"openstack-operator-controller-manager-794bbdbc56-fvlbd\" (UID: \"a7d7d7a5-296a-43d3-8c15-906a257549c2\") " pod="openstack-operators/openstack-operator-controller-manager-794bbdbc56-fvlbd" Jan 31 04:01:52 crc kubenswrapper[4827]: I0131 04:01:52.695197 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/a7d7d7a5-296a-43d3-8c15-906a257549c2-metrics-certs\") pod \"openstack-operator-controller-manager-794bbdbc56-fvlbd\" (UID: \"a7d7d7a5-296a-43d3-8c15-906a257549c2\") " pod="openstack-operators/openstack-operator-controller-manager-794bbdbc56-fvlbd" Jan 31 04:01:52 crc kubenswrapper[4827]: E0131 04:01:52.695314 4827 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Jan 31 04:01:52 crc kubenswrapper[4827]: E0131 04:01:52.695354 4827 nestedpendingoperations.go:348] 
Operation for "{volumeName:kubernetes.io/secret/a7d7d7a5-296a-43d3-8c15-906a257549c2-metrics-certs podName:a7d7d7a5-296a-43d3-8c15-906a257549c2 nodeName:}" failed. No retries permitted until 2026-01-31 04:01:54.695341661 +0000 UTC m=+907.382422110 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/a7d7d7a5-296a-43d3-8c15-906a257549c2-metrics-certs") pod "openstack-operator-controller-manager-794bbdbc56-fvlbd" (UID: "a7d7d7a5-296a-43d3-8c15-906a257549c2") : secret "metrics-server-cert" not found Jan 31 04:01:52 crc kubenswrapper[4827]: E0131 04:01:52.695393 4827 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Jan 31 04:01:52 crc kubenswrapper[4827]: E0131 04:01:52.695413 4827 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a7d7d7a5-296a-43d3-8c15-906a257549c2-webhook-certs podName:a7d7d7a5-296a-43d3-8c15-906a257549c2 nodeName:}" failed. No retries permitted until 2026-01-31 04:01:54.695407373 +0000 UTC m=+907.382487822 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/a7d7d7a5-296a-43d3-8c15-906a257549c2-webhook-certs") pod "openstack-operator-controller-manager-794bbdbc56-fvlbd" (UID: "a7d7d7a5-296a-43d3-8c15-906a257549c2") : secret "webhook-server-cert" not found Jan 31 04:01:52 crc kubenswrapper[4827]: I0131 04:01:52.695417 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-84f48565d4-8hvrl" event={"ID":"efcd65a1-b55c-4cf6-bfe7-5e888e2bc7f0","Type":"ContainerStarted","Data":"e49019f8657e7541e4b13776312b866496031cd44da28afce9a496d13b436e8c"} Jan 31 04:01:52 crc kubenswrapper[4827]: E0131 04:01:52.696695 4827 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2\\\"\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-zdjjp" podUID="0d85c53f-5192-4621-86cc-d9403773713b" Jan 31 04:01:52 crc kubenswrapper[4827]: I0131 04:01:52.698327 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/telemetry-operator-controller-manager-64b5b76f97-plj6q" event={"ID":"4d581cf6-c77f-4757-9091-cb1e23bfbcda","Type":"ContainerStarted","Data":"f0fcfc623323143369f8fa06aaeb4f49226d99c28d9b9ccbe02052e2e4a25101"} Jan 31 04:01:52 crc kubenswrapper[4827]: E0131 04:01:52.701871 4827 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/telemetry-operator@sha256:f9bf288cd0c13912404027a58ea3b90d4092b641e8265adc5c88644ea7fe901a\\\"\"" pod="openstack-operators/telemetry-operator-controller-manager-64b5b76f97-plj6q" podUID="4d581cf6-c77f-4757-9091-cb1e23bfbcda" Jan 31 04:01:53 crc kubenswrapper[4827]: 
E0131 04:01:53.744190 4827 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/watcher-operator@sha256:7869203f6f97de780368d507636031090fed3b658d2f7771acbd4481bdfc870b\\\"\"" pod="openstack-operators/watcher-operator-controller-manager-564965969-m97nw" podUID="5666901d-66a6-4282-b44c-c39a0721faa2" Jan 31 04:01:53 crc kubenswrapper[4827]: E0131 04:01:53.744817 4827 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/test-operator@sha256:3e01e99d3ca1b6c20b1bb015b00cfcbffc584f22a93dc6fe4019d63b813c0241\\\"\"" pod="openstack-operators/test-operator-controller-manager-56f8bfcd9f-fr6qf" podUID="0d53929a-c249-47fa-9d02-98021a8bcf2a" Jan 31 04:01:53 crc kubenswrapper[4827]: E0131 04:01:53.744868 4827 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/glance-operator@sha256:1f593e8d49d02b6484c89632192ae54771675c54fbd8426e3675b8e20ecfd7c4\\\"\"" pod="openstack-operators/glance-operator-controller-manager-8886f4c47-zdtlh" podUID="c0c17a5a-5f0d-421e-b29c-56c4f2626a7b" Jan 31 04:01:53 crc kubenswrapper[4827]: E0131 04:01:53.744920 4827 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/swift-operator@sha256:42ad717de1b82267d244b016e5491a5b66a5c3deb6b8c2906a379e1296a2c382\\\"\"" pod="openstack-operators/swift-operator-controller-manager-68fc8c869-6jhd8" podUID="ddb4ccbd-d7ed-4c26-97c4-22ce6c38b431" Jan 31 04:01:53 crc kubenswrapper[4827]: E0131 04:01:53.744963 4827 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/telemetry-operator@sha256:f9bf288cd0c13912404027a58ea3b90d4092b641e8265adc5c88644ea7fe901a\\\"\"" pod="openstack-operators/telemetry-operator-controller-manager-64b5b76f97-plj6q" podUID="4d581cf6-c77f-4757-9091-cb1e23bfbcda" Jan 31 04:01:53 crc kubenswrapper[4827]: E0131 04:01:53.751109 4827 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/octavia-operator@sha256:e6f2f361f1dcbb321407a5884951e16ff96e7b88942b10b548f27ad4de14a0be\\\"\"" pod="openstack-operators/octavia-operator-controller-manager-6687f8d877-k7f4f" podUID="b3c58b9c-4561-49ae-a23c-a77a34b8cfb5" Jan 31 04:01:53 crc kubenswrapper[4827]: E0131 04:01:53.753842 4827 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2\\\"\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-zdjjp" podUID="0d85c53f-5192-4621-86cc-d9403773713b" Jan 31 04:01:53 crc kubenswrapper[4827]: I0131 04:01:53.924650 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/00f00c32-1e04-42e4-95b4-923c6b57386e-cert\") pod \"infra-operator-controller-manager-79955696d6-gcs7k\" (UID: \"00f00c32-1e04-42e4-95b4-923c6b57386e\") " pod="openstack-operators/infra-operator-controller-manager-79955696d6-gcs7k" Jan 31 04:01:53 crc kubenswrapper[4827]: E0131 04:01:53.924862 4827 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Jan 31 04:01:53 crc kubenswrapper[4827]: E0131 04:01:53.924945 4827 
nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/00f00c32-1e04-42e4-95b4-923c6b57386e-cert podName:00f00c32-1e04-42e4-95b4-923c6b57386e nodeName:}" failed. No retries permitted until 2026-01-31 04:01:57.924933419 +0000 UTC m=+910.612013868 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/00f00c32-1e04-42e4-95b4-923c6b57386e-cert") pod "infra-operator-controller-manager-79955696d6-gcs7k" (UID: "00f00c32-1e04-42e4-95b4-923c6b57386e") : secret "infra-operator-webhook-server-cert" not found Jan 31 04:01:54 crc kubenswrapper[4827]: I0131 04:01:54.437103 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/ff81629a-d048-4c5d-b3a4-b892310ceff7-cert\") pod \"openstack-baremetal-operator-controller-manager-59c4b45c4dg9krf\" (UID: \"ff81629a-d048-4c5d-b3a4-b892310ceff7\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4dg9krf" Jan 31 04:01:54 crc kubenswrapper[4827]: E0131 04:01:54.437379 4827 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 31 04:01:54 crc kubenswrapper[4827]: E0131 04:01:54.437515 4827 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ff81629a-d048-4c5d-b3a4-b892310ceff7-cert podName:ff81629a-d048-4c5d-b3a4-b892310ceff7 nodeName:}" failed. No retries permitted until 2026-01-31 04:01:58.437494694 +0000 UTC m=+911.124575143 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/ff81629a-d048-4c5d-b3a4-b892310ceff7-cert") pod "openstack-baremetal-operator-controller-manager-59c4b45c4dg9krf" (UID: "ff81629a-d048-4c5d-b3a4-b892310ceff7") : secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 31 04:01:54 crc kubenswrapper[4827]: I0131 04:01:54.744542 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/a7d7d7a5-296a-43d3-8c15-906a257549c2-webhook-certs\") pod \"openstack-operator-controller-manager-794bbdbc56-fvlbd\" (UID: \"a7d7d7a5-296a-43d3-8c15-906a257549c2\") " pod="openstack-operators/openstack-operator-controller-manager-794bbdbc56-fvlbd" Jan 31 04:01:54 crc kubenswrapper[4827]: I0131 04:01:54.744616 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/a7d7d7a5-296a-43d3-8c15-906a257549c2-metrics-certs\") pod \"openstack-operator-controller-manager-794bbdbc56-fvlbd\" (UID: \"a7d7d7a5-296a-43d3-8c15-906a257549c2\") " pod="openstack-operators/openstack-operator-controller-manager-794bbdbc56-fvlbd" Jan 31 04:01:54 crc kubenswrapper[4827]: E0131 04:01:54.744810 4827 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Jan 31 04:01:54 crc kubenswrapper[4827]: E0131 04:01:54.744842 4827 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Jan 31 04:01:54 crc kubenswrapper[4827]: E0131 04:01:54.744894 4827 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a7d7d7a5-296a-43d3-8c15-906a257549c2-webhook-certs podName:a7d7d7a5-296a-43d3-8c15-906a257549c2 nodeName:}" failed. No retries permitted until 2026-01-31 04:01:58.744863297 +0000 UTC m=+911.431943746 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/a7d7d7a5-296a-43d3-8c15-906a257549c2-webhook-certs") pod "openstack-operator-controller-manager-794bbdbc56-fvlbd" (UID: "a7d7d7a5-296a-43d3-8c15-906a257549c2") : secret "webhook-server-cert" not found Jan 31 04:01:54 crc kubenswrapper[4827]: E0131 04:01:54.745137 4827 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a7d7d7a5-296a-43d3-8c15-906a257549c2-metrics-certs podName:a7d7d7a5-296a-43d3-8c15-906a257549c2 nodeName:}" failed. No retries permitted until 2026-01-31 04:01:58.745104335 +0000 UTC m=+911.432184824 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/a7d7d7a5-296a-43d3-8c15-906a257549c2-metrics-certs") pod "openstack-operator-controller-manager-794bbdbc56-fvlbd" (UID: "a7d7d7a5-296a-43d3-8c15-906a257549c2") : secret "metrics-server-cert" not found Jan 31 04:01:58 crc kubenswrapper[4827]: I0131 04:01:58.008375 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/00f00c32-1e04-42e4-95b4-923c6b57386e-cert\") pod \"infra-operator-controller-manager-79955696d6-gcs7k\" (UID: \"00f00c32-1e04-42e4-95b4-923c6b57386e\") " pod="openstack-operators/infra-operator-controller-manager-79955696d6-gcs7k" Jan 31 04:01:58 crc kubenswrapper[4827]: E0131 04:01:58.008538 4827 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Jan 31 04:01:58 crc kubenswrapper[4827]: E0131 04:01:58.008597 4827 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/00f00c32-1e04-42e4-95b4-923c6b57386e-cert podName:00f00c32-1e04-42e4-95b4-923c6b57386e nodeName:}" failed. No retries permitted until 2026-01-31 04:02:06.008582576 +0000 UTC m=+918.695663025 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/00f00c32-1e04-42e4-95b4-923c6b57386e-cert") pod "infra-operator-controller-manager-79955696d6-gcs7k" (UID: "00f00c32-1e04-42e4-95b4-923c6b57386e") : secret "infra-operator-webhook-server-cert" not found Jan 31 04:01:58 crc kubenswrapper[4827]: I0131 04:01:58.514650 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/ff81629a-d048-4c5d-b3a4-b892310ceff7-cert\") pod \"openstack-baremetal-operator-controller-manager-59c4b45c4dg9krf\" (UID: \"ff81629a-d048-4c5d-b3a4-b892310ceff7\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4dg9krf" Jan 31 04:01:58 crc kubenswrapper[4827]: E0131 04:01:58.514833 4827 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 31 04:01:58 crc kubenswrapper[4827]: E0131 04:01:58.515157 4827 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ff81629a-d048-4c5d-b3a4-b892310ceff7-cert podName:ff81629a-d048-4c5d-b3a4-b892310ceff7 nodeName:}" failed. No retries permitted until 2026-01-31 04:02:06.515135543 +0000 UTC m=+919.202215992 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/ff81629a-d048-4c5d-b3a4-b892310ceff7-cert") pod "openstack-baremetal-operator-controller-manager-59c4b45c4dg9krf" (UID: "ff81629a-d048-4c5d-b3a4-b892310ceff7") : secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 31 04:01:58 crc kubenswrapper[4827]: I0131 04:01:58.820540 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/a7d7d7a5-296a-43d3-8c15-906a257549c2-webhook-certs\") pod \"openstack-operator-controller-manager-794bbdbc56-fvlbd\" (UID: \"a7d7d7a5-296a-43d3-8c15-906a257549c2\") " pod="openstack-operators/openstack-operator-controller-manager-794bbdbc56-fvlbd" Jan 31 04:01:58 crc kubenswrapper[4827]: I0131 04:01:58.820621 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/a7d7d7a5-296a-43d3-8c15-906a257549c2-metrics-certs\") pod \"openstack-operator-controller-manager-794bbdbc56-fvlbd\" (UID: \"a7d7d7a5-296a-43d3-8c15-906a257549c2\") " pod="openstack-operators/openstack-operator-controller-manager-794bbdbc56-fvlbd" Jan 31 04:01:58 crc kubenswrapper[4827]: E0131 04:01:58.820659 4827 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Jan 31 04:01:58 crc kubenswrapper[4827]: E0131 04:01:58.820714 4827 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a7d7d7a5-296a-43d3-8c15-906a257549c2-webhook-certs podName:a7d7d7a5-296a-43d3-8c15-906a257549c2 nodeName:}" failed. No retries permitted until 2026-01-31 04:02:06.820697667 +0000 UTC m=+919.507778116 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/a7d7d7a5-296a-43d3-8c15-906a257549c2-webhook-certs") pod "openstack-operator-controller-manager-794bbdbc56-fvlbd" (UID: "a7d7d7a5-296a-43d3-8c15-906a257549c2") : secret "webhook-server-cert" not found Jan 31 04:01:58 crc kubenswrapper[4827]: E0131 04:01:58.820789 4827 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Jan 31 04:01:58 crc kubenswrapper[4827]: E0131 04:01:58.820822 4827 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a7d7d7a5-296a-43d3-8c15-906a257549c2-metrics-certs podName:a7d7d7a5-296a-43d3-8c15-906a257549c2 nodeName:}" failed. No retries permitted until 2026-01-31 04:02:06.82081209 +0000 UTC m=+919.507892539 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/a7d7d7a5-296a-43d3-8c15-906a257549c2-metrics-certs") pod "openstack-operator-controller-manager-794bbdbc56-fvlbd" (UID: "a7d7d7a5-296a-43d3-8c15-906a257549c2") : secret "metrics-server-cert" not found Jan 31 04:02:05 crc kubenswrapper[4827]: I0131 04:02:05.493931 4827 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-cs46v"] Jan 31 04:02:05 crc kubenswrapper[4827]: I0131 04:02:05.496287 4827 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-cs46v" Jan 31 04:02:05 crc kubenswrapper[4827]: I0131 04:02:05.522726 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-cs46v"] Jan 31 04:02:05 crc kubenswrapper[4827]: I0131 04:02:05.656746 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0c61dfb4-4c69-4257-b957-09fde25ca424-utilities\") pod \"redhat-marketplace-cs46v\" (UID: \"0c61dfb4-4c69-4257-b957-09fde25ca424\") " pod="openshift-marketplace/redhat-marketplace-cs46v" Jan 31 04:02:05 crc kubenswrapper[4827]: I0131 04:02:05.656841 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kdvnd\" (UniqueName: \"kubernetes.io/projected/0c61dfb4-4c69-4257-b957-09fde25ca424-kube-api-access-kdvnd\") pod \"redhat-marketplace-cs46v\" (UID: \"0c61dfb4-4c69-4257-b957-09fde25ca424\") " pod="openshift-marketplace/redhat-marketplace-cs46v" Jan 31 04:02:05 crc kubenswrapper[4827]: I0131 04:02:05.656925 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0c61dfb4-4c69-4257-b957-09fde25ca424-catalog-content\") pod \"redhat-marketplace-cs46v\" (UID: \"0c61dfb4-4c69-4257-b957-09fde25ca424\") " pod="openshift-marketplace/redhat-marketplace-cs46v" Jan 31 04:02:05 crc kubenswrapper[4827]: I0131 04:02:05.759048 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0c61dfb4-4c69-4257-b957-09fde25ca424-utilities\") pod \"redhat-marketplace-cs46v\" (UID: \"0c61dfb4-4c69-4257-b957-09fde25ca424\") " pod="openshift-marketplace/redhat-marketplace-cs46v" Jan 31 04:02:05 crc kubenswrapper[4827]: I0131 04:02:05.759195 4827 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"kube-api-access-kdvnd\" (UniqueName: \"kubernetes.io/projected/0c61dfb4-4c69-4257-b957-09fde25ca424-kube-api-access-kdvnd\") pod \"redhat-marketplace-cs46v\" (UID: \"0c61dfb4-4c69-4257-b957-09fde25ca424\") " pod="openshift-marketplace/redhat-marketplace-cs46v" Jan 31 04:02:05 crc kubenswrapper[4827]: I0131 04:02:05.759307 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0c61dfb4-4c69-4257-b957-09fde25ca424-catalog-content\") pod \"redhat-marketplace-cs46v\" (UID: \"0c61dfb4-4c69-4257-b957-09fde25ca424\") " pod="openshift-marketplace/redhat-marketplace-cs46v" Jan 31 04:02:05 crc kubenswrapper[4827]: I0131 04:02:05.760323 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0c61dfb4-4c69-4257-b957-09fde25ca424-catalog-content\") pod \"redhat-marketplace-cs46v\" (UID: \"0c61dfb4-4c69-4257-b957-09fde25ca424\") " pod="openshift-marketplace/redhat-marketplace-cs46v" Jan 31 04:02:05 crc kubenswrapper[4827]: I0131 04:02:05.760370 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0c61dfb4-4c69-4257-b957-09fde25ca424-utilities\") pod \"redhat-marketplace-cs46v\" (UID: \"0c61dfb4-4c69-4257-b957-09fde25ca424\") " pod="openshift-marketplace/redhat-marketplace-cs46v" Jan 31 04:02:05 crc kubenswrapper[4827]: I0131 04:02:05.784245 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kdvnd\" (UniqueName: \"kubernetes.io/projected/0c61dfb4-4c69-4257-b957-09fde25ca424-kube-api-access-kdvnd\") pod \"redhat-marketplace-cs46v\" (UID: \"0c61dfb4-4c69-4257-b957-09fde25ca424\") " pod="openshift-marketplace/redhat-marketplace-cs46v" Jan 31 04:02:05 crc kubenswrapper[4827]: I0131 04:02:05.799913 4827 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-cs46v" Jan 31 04:02:06 crc kubenswrapper[4827]: I0131 04:02:06.063948 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/00f00c32-1e04-42e4-95b4-923c6b57386e-cert\") pod \"infra-operator-controller-manager-79955696d6-gcs7k\" (UID: \"00f00c32-1e04-42e4-95b4-923c6b57386e\") " pod="openstack-operators/infra-operator-controller-manager-79955696d6-gcs7k" Jan 31 04:02:06 crc kubenswrapper[4827]: I0131 04:02:06.070290 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/00f00c32-1e04-42e4-95b4-923c6b57386e-cert\") pod \"infra-operator-controller-manager-79955696d6-gcs7k\" (UID: \"00f00c32-1e04-42e4-95b4-923c6b57386e\") " pod="openstack-operators/infra-operator-controller-manager-79955696d6-gcs7k" Jan 31 04:02:06 crc kubenswrapper[4827]: I0131 04:02:06.311098 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-cs46v"] Jan 31 04:02:06 crc kubenswrapper[4827]: I0131 04:02:06.334612 4827 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/infra-operator-controller-manager-79955696d6-gcs7k" Jan 31 04:02:06 crc kubenswrapper[4827]: W0131 04:02:06.344194 4827 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0c61dfb4_4c69_4257_b957_09fde25ca424.slice/crio-29ae825bc8c1b1008848e9cef7af1c3bba2ab17d33f0d2c2e73dde9614e792ad WatchSource:0}: Error finding container 29ae825bc8c1b1008848e9cef7af1c3bba2ab17d33f0d2c2e73dde9614e792ad: Status 404 returned error can't find the container with id 29ae825bc8c1b1008848e9cef7af1c3bba2ab17d33f0d2c2e73dde9614e792ad Jan 31 04:02:06 crc kubenswrapper[4827]: I0131 04:02:06.572464 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/ff81629a-d048-4c5d-b3a4-b892310ceff7-cert\") pod \"openstack-baremetal-operator-controller-manager-59c4b45c4dg9krf\" (UID: \"ff81629a-d048-4c5d-b3a4-b892310ceff7\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4dg9krf" Jan 31 04:02:06 crc kubenswrapper[4827]: I0131 04:02:06.598085 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/ff81629a-d048-4c5d-b3a4-b892310ceff7-cert\") pod \"openstack-baremetal-operator-controller-manager-59c4b45c4dg9krf\" (UID: \"ff81629a-d048-4c5d-b3a4-b892310ceff7\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4dg9krf" Jan 31 04:02:06 crc kubenswrapper[4827]: I0131 04:02:06.700286 4827 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4dg9krf" Jan 31 04:02:06 crc kubenswrapper[4827]: I0131 04:02:06.899387 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/a7d7d7a5-296a-43d3-8c15-906a257549c2-webhook-certs\") pod \"openstack-operator-controller-manager-794bbdbc56-fvlbd\" (UID: \"a7d7d7a5-296a-43d3-8c15-906a257549c2\") " pod="openstack-operators/openstack-operator-controller-manager-794bbdbc56-fvlbd" Jan 31 04:02:06 crc kubenswrapper[4827]: I0131 04:02:06.899830 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/a7d7d7a5-296a-43d3-8c15-906a257549c2-metrics-certs\") pod \"openstack-operator-controller-manager-794bbdbc56-fvlbd\" (UID: \"a7d7d7a5-296a-43d3-8c15-906a257549c2\") " pod="openstack-operators/openstack-operator-controller-manager-794bbdbc56-fvlbd" Jan 31 04:02:06 crc kubenswrapper[4827]: I0131 04:02:06.908768 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/a7d7d7a5-296a-43d3-8c15-906a257549c2-webhook-certs\") pod \"openstack-operator-controller-manager-794bbdbc56-fvlbd\" (UID: \"a7d7d7a5-296a-43d3-8c15-906a257549c2\") " pod="openstack-operators/openstack-operator-controller-manager-794bbdbc56-fvlbd" Jan 31 04:02:06 crc kubenswrapper[4827]: I0131 04:02:06.912052 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/a7d7d7a5-296a-43d3-8c15-906a257549c2-metrics-certs\") pod \"openstack-operator-controller-manager-794bbdbc56-fvlbd\" (UID: \"a7d7d7a5-296a-43d3-8c15-906a257549c2\") " pod="openstack-operators/openstack-operator-controller-manager-794bbdbc56-fvlbd" Jan 31 04:02:06 crc kubenswrapper[4827]: I0131 04:02:06.941275 4827 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openshift-marketplace/redhat-marketplace-cs46v" event={"ID":"0c61dfb4-4c69-4257-b957-09fde25ca424","Type":"ContainerStarted","Data":"bbf179a2224fa51aa2361ca2bbed1397d9305e9f20659ef9a9c897490127cd90"} Jan 31 04:02:06 crc kubenswrapper[4827]: I0131 04:02:06.941324 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-cs46v" event={"ID":"0c61dfb4-4c69-4257-b957-09fde25ca424","Type":"ContainerStarted","Data":"29ae825bc8c1b1008848e9cef7af1c3bba2ab17d33f0d2c2e73dde9614e792ad"} Jan 31 04:02:06 crc kubenswrapper[4827]: I0131 04:02:06.943174 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/infra-operator-controller-manager-79955696d6-gcs7k"] Jan 31 04:02:06 crc kubenswrapper[4827]: I0131 04:02:06.967102 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-7dd968899f-2z575" event={"ID":"fe5adffe-e198-4d4f-815d-02333b3a1853","Type":"ContainerStarted","Data":"2429c7e112f9c8e6d328c7d44930aa298f5dc3b97466111bd9f352ae375a8e7b"} Jan 31 04:02:06 crc kubenswrapper[4827]: I0131 04:02:06.968082 4827 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/manila-operator-controller-manager-7dd968899f-2z575" Jan 31 04:02:07 crc kubenswrapper[4827]: I0131 04:02:07.003818 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/designate-operator-controller-manager-6d9697b7f4-9k4dq" event={"ID":"60792734-916b-4bb7-a17f-45a03be036c8","Type":"ContainerStarted","Data":"0b6f36ad0d8822abc535b07dcdf44c2117ffc83a04685e882e40f09ba82dfc77"} Jan 31 04:02:07 crc kubenswrapper[4827]: I0131 04:02:07.004111 4827 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/designate-operator-controller-manager-6d9697b7f4-9k4dq" Jan 31 04:02:07 crc kubenswrapper[4827]: I0131 04:02:07.033940 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack-operators/ironic-operator-controller-manager-5f4b8bd54d-r2ljw" event={"ID":"adfd32af-9db4-468a-bac1-d33f11930922","Type":"ContainerStarted","Data":"c5392d6c7309c63e3354cd6057223c454975c0268fcba764debb36f18ebd2206"} Jan 31 04:02:07 crc kubenswrapper[4827]: I0131 04:02:07.034204 4827 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/ironic-operator-controller-manager-5f4b8bd54d-r2ljw" Jan 31 04:02:07 crc kubenswrapper[4827]: I0131 04:02:07.044666 4827 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/designate-operator-controller-manager-6d9697b7f4-9k4dq" podStartSLOduration=4.852474591 podStartE2EDuration="18.044650102s" podCreationTimestamp="2026-01-31 04:01:49 +0000 UTC" firstStartedPulling="2026-01-31 04:01:52.170235449 +0000 UTC m=+904.857315898" lastFinishedPulling="2026-01-31 04:02:05.36241096 +0000 UTC m=+918.049491409" observedRunningTime="2026-01-31 04:02:07.040147921 +0000 UTC m=+919.727228370" watchObservedRunningTime="2026-01-31 04:02:07.044650102 +0000 UTC m=+919.731730551" Jan 31 04:02:07 crc kubenswrapper[4827]: I0131 04:02:07.046190 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-788c46999f-782zz" event={"ID":"fb454f09-c6b8-41f4-b69f-3125e8d4d79f","Type":"ContainerStarted","Data":"c4a612d74844b2e8f30a0ee8094336ced39656151bca8789fcefa3706932084d"} Jan 31 04:02:07 crc kubenswrapper[4827]: I0131 04:02:07.046808 4827 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/ovn-operator-controller-manager-788c46999f-782zz" Jan 31 04:02:07 crc kubenswrapper[4827]: I0131 04:02:07.047591 4827 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/manila-operator-controller-manager-7dd968899f-2z575" podStartSLOduration=3.817126449 podStartE2EDuration="17.047586207s" podCreationTimestamp="2026-01-31 04:01:50 +0000 UTC" 
firstStartedPulling="2026-01-31 04:01:52.1284442 +0000 UTC m=+904.815524649" lastFinishedPulling="2026-01-31 04:02:05.358903958 +0000 UTC m=+918.045984407" observedRunningTime="2026-01-31 04:02:07.012192811 +0000 UTC m=+919.699273260" watchObservedRunningTime="2026-01-31 04:02:07.047586207 +0000 UTC m=+919.734666656" Jan 31 04:02:07 crc kubenswrapper[4827]: I0131 04:02:07.060779 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-69d6db494d-hprpc" event={"ID":"fe50fb01-1097-4ac9-81ae-fdfc96842f68","Type":"ContainerStarted","Data":"16c37ee9b56ffdbca368abfba1a798c445ffce0459e19928e5c19ced710dd8c8"} Jan 31 04:02:07 crc kubenswrapper[4827]: I0131 04:02:07.061452 4827 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/heat-operator-controller-manager-69d6db494d-hprpc" Jan 31 04:02:07 crc kubenswrapper[4827]: I0131 04:02:07.068219 4827 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/ironic-operator-controller-manager-5f4b8bd54d-r2ljw" podStartSLOduration=4.208537241 podStartE2EDuration="18.068208044s" podCreationTimestamp="2026-01-31 04:01:49 +0000 UTC" firstStartedPulling="2026-01-31 04:01:51.499298987 +0000 UTC m=+904.186379436" lastFinishedPulling="2026-01-31 04:02:05.35896979 +0000 UTC m=+918.046050239" observedRunningTime="2026-01-31 04:02:07.061727537 +0000 UTC m=+919.748807986" watchObservedRunningTime="2026-01-31 04:02:07.068208044 +0000 UTC m=+919.755288493" Jan 31 04:02:07 crc kubenswrapper[4827]: I0131 04:02:07.077811 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-84f48565d4-8hvrl" event={"ID":"efcd65a1-b55c-4cf6-bfe7-5e888e2bc7f0","Type":"ContainerStarted","Data":"a45b0ae9b297fb9cae71f05327f51925d4cbffaec36a1fe3f6fea1699bc51cc3"} Jan 31 04:02:07 crc kubenswrapper[4827]: I0131 04:02:07.078056 4827 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" 
status="" pod="openstack-operators/keystone-operator-controller-manager-84f48565d4-8hvrl" Jan 31 04:02:07 crc kubenswrapper[4827]: I0131 04:02:07.080468 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-7b6c4d8c5f-k469j" event={"ID":"1ee58492-27e7-446f-84c8-c3b0b74884fa","Type":"ContainerStarted","Data":"c9bd0e0299d006d1380abd81b40e93cb89ee008da5dafdc803465c19ec4f5f25"} Jan 31 04:02:07 crc kubenswrapper[4827]: I0131 04:02:07.080821 4827 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/barbican-operator-controller-manager-7b6c4d8c5f-k469j" Jan 31 04:02:07 crc kubenswrapper[4827]: I0131 04:02:07.090635 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-67bf948998-dvj6j" event={"ID":"7f0021a0-f8df-42fa-8ef0-34653130a6e9","Type":"ContainerStarted","Data":"535211ef208baf1e30692050809ef37aaed98e8567d97c3364dd21e1ddfd129d"} Jan 31 04:02:07 crc kubenswrapper[4827]: I0131 04:02:07.091248 4827 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/mariadb-operator-controller-manager-67bf948998-dvj6j" Jan 31 04:02:07 crc kubenswrapper[4827]: I0131 04:02:07.106777 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-5fb775575f-wwvbx" event={"ID":"bbf882c7-842b-46eb-a459-bb628db2598f","Type":"ContainerStarted","Data":"b99b358fb618c4563c78a729bdf1908cf35e566f1f59996b903420f7ed67d396"} Jan 31 04:02:07 crc kubenswrapper[4827]: I0131 04:02:07.107376 4827 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/horizon-operator-controller-manager-5fb775575f-wwvbx" Jan 31 04:02:07 crc kubenswrapper[4827]: I0131 04:02:07.128707 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-5b964cf4cd-9gs2r" 
event={"ID":"0af88c77-1c9c-4072-b0da-707bca0f4f12","Type":"ContainerStarted","Data":"927f9ee6ff3f5bc8dd863c43259162c28e3f79bee2af66912d26b9e87dff4997"} Jan 31 04:02:07 crc kubenswrapper[4827]: I0131 04:02:07.129367 4827 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/placement-operator-controller-manager-5b964cf4cd-9gs2r" Jan 31 04:02:07 crc kubenswrapper[4827]: I0131 04:02:07.139727 4827 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/ovn-operator-controller-manager-788c46999f-782zz" podStartSLOduration=3.742907504 podStartE2EDuration="17.139712116s" podCreationTimestamp="2026-01-31 04:01:50 +0000 UTC" firstStartedPulling="2026-01-31 04:01:51.956436462 +0000 UTC m=+904.643516911" lastFinishedPulling="2026-01-31 04:02:05.353241084 +0000 UTC m=+918.040321523" observedRunningTime="2026-01-31 04:02:07.099199722 +0000 UTC m=+919.786280171" watchObservedRunningTime="2026-01-31 04:02:07.139712116 +0000 UTC m=+919.826792565" Jan 31 04:02:07 crc kubenswrapper[4827]: I0131 04:02:07.169170 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-7489d7c99b-75s7f" event={"ID":"74e68a52-8f24-4ff0-a160-8a1ad61238c9","Type":"ContainerStarted","Data":"0fa7f75abef5c2b184e4287d981a4dd0e3c7c89dfb5132aad8ff3f79d00ff795"} Jan 31 04:02:07 crc kubenswrapper[4827]: I0131 04:02:07.169805 4827 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/cinder-operator-controller-manager-7489d7c99b-75s7f" Jan 31 04:02:07 crc kubenswrapper[4827]: I0131 04:02:07.176312 4827 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/heat-operator-controller-manager-69d6db494d-hprpc" podStartSLOduration=4.303580841 podStartE2EDuration="18.176300896s" podCreationTimestamp="2026-01-31 04:01:49 +0000 UTC" firstStartedPulling="2026-01-31 04:01:51.480962572 +0000 UTC m=+904.168043021" 
lastFinishedPulling="2026-01-31 04:02:05.353682627 +0000 UTC m=+918.040763076" observedRunningTime="2026-01-31 04:02:07.143154275 +0000 UTC m=+919.830234724" watchObservedRunningTime="2026-01-31 04:02:07.176300896 +0000 UTC m=+919.863381345" Jan 31 04:02:07 crc kubenswrapper[4827]: I0131 04:02:07.181192 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-55bff696bd-4snkb" event={"ID":"8d904b59-3b07-422e-a83b-a02ac443d6eb","Type":"ContainerStarted","Data":"7130ab933a6864e6f56490415f967d4d05b28f91fc809d3c6466b254a5b0219c"} Jan 31 04:02:07 crc kubenswrapper[4827]: I0131 04:02:07.181778 4827 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/nova-operator-controller-manager-55bff696bd-4snkb" Jan 31 04:02:07 crc kubenswrapper[4827]: I0131 04:02:07.198175 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-controller-manager-794bbdbc56-fvlbd" Jan 31 04:02:07 crc kubenswrapper[4827]: I0131 04:02:07.215939 4827 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/horizon-operator-controller-manager-5fb775575f-wwvbx" podStartSLOduration=5.023836436 podStartE2EDuration="18.215923323s" podCreationTimestamp="2026-01-31 04:01:49 +0000 UTC" firstStartedPulling="2026-01-31 04:01:52.186967965 +0000 UTC m=+904.874048414" lastFinishedPulling="2026-01-31 04:02:05.379054852 +0000 UTC m=+918.066135301" observedRunningTime="2026-01-31 04:02:07.213311198 +0000 UTC m=+919.900391647" watchObservedRunningTime="2026-01-31 04:02:07.215923323 +0000 UTC m=+919.903003772" Jan 31 04:02:07 crc kubenswrapper[4827]: I0131 04:02:07.230924 4827 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/placement-operator-controller-manager-5b964cf4cd-9gs2r" podStartSLOduration=3.977929254 podStartE2EDuration="17.230907948s" podCreationTimestamp="2026-01-31 04:01:50 
+0000 UTC" firstStartedPulling="2026-01-31 04:01:52.131808544 +0000 UTC m=+904.818888983" lastFinishedPulling="2026-01-31 04:02:05.384787228 +0000 UTC m=+918.071867677" observedRunningTime="2026-01-31 04:02:07.178324145 +0000 UTC m=+919.865404594" watchObservedRunningTime="2026-01-31 04:02:07.230907948 +0000 UTC m=+919.917988397" Jan 31 04:02:07 crc kubenswrapper[4827]: I0131 04:02:07.233763 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-585dbc889-wdrl7" event={"ID":"ea6ee14b-2acc-4894-8d63-57ad4a6a170a","Type":"ContainerStarted","Data":"b4e3da53baf0b76b5e5543b683b7dcf9db31de2170c01826075e3d95d4539881"} Jan 31 04:02:07 crc kubenswrapper[4827]: I0131 04:02:07.234544 4827 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/neutron-operator-controller-manager-585dbc889-wdrl7" Jan 31 04:02:07 crc kubenswrapper[4827]: I0131 04:02:07.281694 4827 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/barbican-operator-controller-manager-7b6c4d8c5f-k469j" podStartSLOduration=5.049808554 podStartE2EDuration="18.281678589s" podCreationTimestamp="2026-01-31 04:01:49 +0000 UTC" firstStartedPulling="2026-01-31 04:01:52.152393028 +0000 UTC m=+904.839473477" lastFinishedPulling="2026-01-31 04:02:05.384263063 +0000 UTC m=+918.071343512" observedRunningTime="2026-01-31 04:02:07.264632815 +0000 UTC m=+919.951713264" watchObservedRunningTime="2026-01-31 04:02:07.281678589 +0000 UTC m=+919.968759038" Jan 31 04:02:07 crc kubenswrapper[4827]: I0131 04:02:07.282595 4827 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/mariadb-operator-controller-manager-67bf948998-dvj6j" podStartSLOduration=4.036057526 podStartE2EDuration="17.282589286s" podCreationTimestamp="2026-01-31 04:01:50 +0000 UTC" firstStartedPulling="2026-01-31 04:01:52.152428829 +0000 UTC m=+904.839509278" lastFinishedPulling="2026-01-31 
04:02:05.398960589 +0000 UTC m=+918.086041038" observedRunningTime="2026-01-31 04:02:07.279465615 +0000 UTC m=+919.966546064" watchObservedRunningTime="2026-01-31 04:02:07.282589286 +0000 UTC m=+919.969669735" Jan 31 04:02:07 crc kubenswrapper[4827]: I0131 04:02:07.325562 4827 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/keystone-operator-controller-manager-84f48565d4-8hvrl" podStartSLOduration=5.074650121 podStartE2EDuration="18.32553411s" podCreationTimestamp="2026-01-31 04:01:49 +0000 UTC" firstStartedPulling="2026-01-31 04:01:52.161159329 +0000 UTC m=+904.848239778" lastFinishedPulling="2026-01-31 04:02:05.412043318 +0000 UTC m=+918.099123767" observedRunningTime="2026-01-31 04:02:07.323281724 +0000 UTC m=+920.010362173" watchObservedRunningTime="2026-01-31 04:02:07.32553411 +0000 UTC m=+920.012614559" Jan 31 04:02:07 crc kubenswrapper[4827]: I0131 04:02:07.365634 4827 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/cinder-operator-controller-manager-7489d7c99b-75s7f" podStartSLOduration=5.159539073 podStartE2EDuration="18.365603841s" podCreationTimestamp="2026-01-31 04:01:49 +0000 UTC" firstStartedPulling="2026-01-31 04:01:52.167433753 +0000 UTC m=+904.854514202" lastFinishedPulling="2026-01-31 04:02:05.373498521 +0000 UTC m=+918.060578970" observedRunningTime="2026-01-31 04:02:07.348906197 +0000 UTC m=+920.035986636" watchObservedRunningTime="2026-01-31 04:02:07.365603841 +0000 UTC m=+920.052684290" Jan 31 04:02:07 crc kubenswrapper[4827]: I0131 04:02:07.385300 4827 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/nova-operator-controller-manager-55bff696bd-4snkb" podStartSLOduration=3.979824347 podStartE2EDuration="17.385269731s" podCreationTimestamp="2026-01-31 04:01:50 +0000 UTC" firstStartedPulling="2026-01-31 04:01:52.147042123 +0000 UTC m=+904.834122572" lastFinishedPulling="2026-01-31 04:02:05.552487497 +0000 UTC 
m=+918.239567956" observedRunningTime="2026-01-31 04:02:07.363911552 +0000 UTC m=+920.050992001" watchObservedRunningTime="2026-01-31 04:02:07.385269731 +0000 UTC m=+920.072350190" Jan 31 04:02:07 crc kubenswrapper[4827]: I0131 04:02:07.398059 4827 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/neutron-operator-controller-manager-585dbc889-wdrl7" podStartSLOduration=3.923770898 podStartE2EDuration="17.39803767s" podCreationTimestamp="2026-01-31 04:01:50 +0000 UTC" firstStartedPulling="2026-01-31 04:01:51.878864119 +0000 UTC m=+904.565944568" lastFinishedPulling="2026-01-31 04:02:05.353130891 +0000 UTC m=+918.040211340" observedRunningTime="2026-01-31 04:02:07.38284463 +0000 UTC m=+920.069925079" watchObservedRunningTime="2026-01-31 04:02:07.39803767 +0000 UTC m=+920.085118119" Jan 31 04:02:07 crc kubenswrapper[4827]: I0131 04:02:07.458785 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4dg9krf"] Jan 31 04:02:07 crc kubenswrapper[4827]: W0131 04:02:07.477140 4827 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podff81629a_d048_4c5d_b3a4_b892310ceff7.slice/crio-f1151b49c26031ebc67696a5c79798851d3fb0fc6af515c0631c25fda5d30e54 WatchSource:0}: Error finding container f1151b49c26031ebc67696a5c79798851d3fb0fc6af515c0631c25fda5d30e54: Status 404 returned error can't find the container with id f1151b49c26031ebc67696a5c79798851d3fb0fc6af515c0631c25fda5d30e54 Jan 31 04:02:07 crc kubenswrapper[4827]: I0131 04:02:07.756715 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-manager-794bbdbc56-fvlbd"] Jan 31 04:02:08 crc kubenswrapper[4827]: I0131 04:02:08.256687 4827 generic.go:334] "Generic (PLEG): container finished" podID="0c61dfb4-4c69-4257-b957-09fde25ca424" 
containerID="bbf179a2224fa51aa2361ca2bbed1397d9305e9f20659ef9a9c897490127cd90" exitCode=0 Jan 31 04:02:08 crc kubenswrapper[4827]: I0131 04:02:08.257916 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-cs46v" event={"ID":"0c61dfb4-4c69-4257-b957-09fde25ca424","Type":"ContainerDied","Data":"bbf179a2224fa51aa2361ca2bbed1397d9305e9f20659ef9a9c897490127cd90"} Jan 31 04:02:08 crc kubenswrapper[4827]: I0131 04:02:08.267655 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-manager-794bbdbc56-fvlbd" event={"ID":"a7d7d7a5-296a-43d3-8c15-906a257549c2","Type":"ContainerStarted","Data":"3e1bf7e9fb3547bd3825b19c4900b0f84f1adefc2ccbb57f0c1cafe9e4a880a0"} Jan 31 04:02:08 crc kubenswrapper[4827]: I0131 04:02:08.278691 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-79955696d6-gcs7k" event={"ID":"00f00c32-1e04-42e4-95b4-923c6b57386e","Type":"ContainerStarted","Data":"edd175804096876a2ac06465ff8806d2afe33e5e1f4b07dbcafbdded645aabbc"} Jan 31 04:02:08 crc kubenswrapper[4827]: I0131 04:02:08.283406 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4dg9krf" event={"ID":"ff81629a-d048-4c5d-b3a4-b892310ceff7","Type":"ContainerStarted","Data":"f1151b49c26031ebc67696a5c79798851d3fb0fc6af515c0631c25fda5d30e54"} Jan 31 04:02:09 crc kubenswrapper[4827]: I0131 04:02:09.292704 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-manager-794bbdbc56-fvlbd" event={"ID":"a7d7d7a5-296a-43d3-8c15-906a257549c2","Type":"ContainerStarted","Data":"a375475c4416ad04524784de86317a49feb4fc10913297cefce976327f126f2f"} Jan 31 04:02:09 crc kubenswrapper[4827]: I0131 04:02:09.294269 4827 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openstack-operators/openstack-operator-controller-manager-794bbdbc56-fvlbd" Jan 31 04:02:09 crc kubenswrapper[4827]: I0131 04:02:09.330463 4827 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-controller-manager-794bbdbc56-fvlbd" podStartSLOduration=19.330446211 podStartE2EDuration="19.330446211s" podCreationTimestamp="2026-01-31 04:01:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 04:02:09.324623312 +0000 UTC m=+922.011703761" watchObservedRunningTime="2026-01-31 04:02:09.330446211 +0000 UTC m=+922.017526660" Jan 31 04:02:11 crc kubenswrapper[4827]: I0131 04:02:11.079070 4827 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/ovn-operator-controller-manager-788c46999f-782zz" Jan 31 04:02:11 crc kubenswrapper[4827]: I0131 04:02:11.139439 4827 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/placement-operator-controller-manager-5b964cf4cd-9gs2r" Jan 31 04:02:17 crc kubenswrapper[4827]: I0131 04:02:17.206245 4827 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-controller-manager-794bbdbc56-fvlbd" Jan 31 04:02:17 crc kubenswrapper[4827]: I0131 04:02:17.371770 4827 patch_prober.go:28] interesting pod/machine-config-daemon-jxh94 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 31 04:02:17 crc kubenswrapper[4827]: I0131 04:02:17.371832 4827 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-jxh94" podUID="e63dbb73-e1a2-4796-83c5-2a88e55566b5" containerName="machine-config-daemon" probeResult="failure" output="Get 
\"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 31 04:02:20 crc kubenswrapper[4827]: I0131 04:02:20.157408 4827 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/barbican-operator-controller-manager-7b6c4d8c5f-k469j" Jan 31 04:02:20 crc kubenswrapper[4827]: I0131 04:02:20.194545 4827 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/cinder-operator-controller-manager-7489d7c99b-75s7f" Jan 31 04:02:20 crc kubenswrapper[4827]: I0131 04:02:20.215098 4827 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/designate-operator-controller-manager-6d9697b7f4-9k4dq" Jan 31 04:02:20 crc kubenswrapper[4827]: I0131 04:02:20.323219 4827 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/heat-operator-controller-manager-69d6db494d-hprpc" Jan 31 04:02:20 crc kubenswrapper[4827]: I0131 04:02:20.346252 4827 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/horizon-operator-controller-manager-5fb775575f-wwvbx" Jan 31 04:02:20 crc kubenswrapper[4827]: I0131 04:02:20.468507 4827 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/ironic-operator-controller-manager-5f4b8bd54d-r2ljw" Jan 31 04:02:20 crc kubenswrapper[4827]: I0131 04:02:20.537575 4827 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/keystone-operator-controller-manager-84f48565d4-8hvrl" Jan 31 04:02:20 crc kubenswrapper[4827]: I0131 04:02:20.559973 4827 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/manila-operator-controller-manager-7dd968899f-2z575" Jan 31 04:02:20 crc kubenswrapper[4827]: I0131 04:02:20.606658 4827 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openstack-operators/mariadb-operator-controller-manager-67bf948998-dvj6j" Jan 31 04:02:20 crc kubenswrapper[4827]: I0131 04:02:20.647155 4827 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/neutron-operator-controller-manager-585dbc889-wdrl7" Jan 31 04:02:20 crc kubenswrapper[4827]: I0131 04:02:20.960987 4827 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/nova-operator-controller-manager-55bff696bd-4snkb" Jan 31 04:02:21 crc kubenswrapper[4827]: E0131 04:02:21.031933 4827 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/test-operator@sha256:3e01e99d3ca1b6c20b1bb015b00cfcbffc584f22a93dc6fe4019d63b813c0241" Jan 31 04:02:21 crc kubenswrapper[4827]: E0131 04:02:21.032309 4827 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/test-operator@sha256:3e01e99d3ca1b6c20b1bb015b00cfcbffc584f22a93dc6fe4019d63b813c0241,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-8m5vt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod test-operator-controller-manager-56f8bfcd9f-fr6qf_openstack-operators(0d53929a-c249-47fa-9d02-98021a8bcf2a): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 31 04:02:21 crc kubenswrapper[4827]: E0131 04:02:21.033470 4827 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" 
pod="openstack-operators/test-operator-controller-manager-56f8bfcd9f-fr6qf" podUID="0d53929a-c249-47fa-9d02-98021a8bcf2a" Jan 31 04:02:30 crc kubenswrapper[4827]: E0131 04:02:30.743277 4827 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/watcher-operator@sha256:7869203f6f97de780368d507636031090fed3b658d2f7771acbd4481bdfc870b" Jan 31 04:02:30 crc kubenswrapper[4827]: E0131 04:02:30.743976 4827 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/watcher-operator@sha256:7869203f6f97de780368d507636031090fed3b658d2f7771acbd4481bdfc870b,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-prslc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod watcher-operator-controller-manager-564965969-m97nw_openstack-operators(5666901d-66a6-4282-b44c-c39a0721faa2): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 31 04:02:30 crc kubenswrapper[4827]: E0131 04:02:30.745183 4827 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/watcher-operator-controller-manager-564965969-m97nw" podUID="5666901d-66a6-4282-b44c-c39a0721faa2" Jan 31 04:02:31 crc kubenswrapper[4827]: E0131 04:02:31.636399 4827 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" 
image="quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2" Jan 31 04:02:31 crc kubenswrapper[4827]: E0131 04:02:31.636802 4827 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:operator,Image:quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2,Command:[/manager],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:metrics,HostPort:0,ContainerPort:9782,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:OPERATOR_NAMESPACE,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{200 -3} {} 200m DecimalSI},memory: {{524288000 0} {} 500Mi BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-g4x6p,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000660000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod rabbitmq-cluster-operator-manager-668c99d594-zdjjp_openstack-operators(0d85c53f-5192-4621-86cc-d9403773713b): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 31 04:02:31 crc kubenswrapper[4827]: E0131 04:02:31.638276 4827 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-zdjjp" podUID="0d85c53f-5192-4621-86cc-d9403773713b" Jan 31 04:02:32 crc kubenswrapper[4827]: I0131 04:02:32.098234 4827 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-l66j9"] Jan 31 04:02:32 crc kubenswrapper[4827]: I0131 04:02:32.100228 4827 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-l66j9" Jan 31 04:02:32 crc kubenswrapper[4827]: I0131 04:02:32.122854 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-l66j9"] Jan 31 04:02:32 crc kubenswrapper[4827]: I0131 04:02:32.130158 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-485z6\" (UniqueName: \"kubernetes.io/projected/4fca145a-27ee-4b4e-9cb0-ca1c6d2c2f3a-kube-api-access-485z6\") pod \"certified-operators-l66j9\" (UID: \"4fca145a-27ee-4b4e-9cb0-ca1c6d2c2f3a\") " pod="openshift-marketplace/certified-operators-l66j9" Jan 31 04:02:32 crc kubenswrapper[4827]: I0131 04:02:32.130431 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4fca145a-27ee-4b4e-9cb0-ca1c6d2c2f3a-catalog-content\") pod \"certified-operators-l66j9\" (UID: \"4fca145a-27ee-4b4e-9cb0-ca1c6d2c2f3a\") " pod="openshift-marketplace/certified-operators-l66j9" Jan 31 04:02:32 crc kubenswrapper[4827]: I0131 04:02:32.130521 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4fca145a-27ee-4b4e-9cb0-ca1c6d2c2f3a-utilities\") pod \"certified-operators-l66j9\" (UID: \"4fca145a-27ee-4b4e-9cb0-ca1c6d2c2f3a\") " pod="openshift-marketplace/certified-operators-l66j9" Jan 31 04:02:32 crc kubenswrapper[4827]: I0131 04:02:32.231645 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-485z6\" (UniqueName: \"kubernetes.io/projected/4fca145a-27ee-4b4e-9cb0-ca1c6d2c2f3a-kube-api-access-485z6\") pod \"certified-operators-l66j9\" (UID: \"4fca145a-27ee-4b4e-9cb0-ca1c6d2c2f3a\") " pod="openshift-marketplace/certified-operators-l66j9" Jan 31 04:02:32 crc kubenswrapper[4827]: I0131 04:02:32.231711 4827 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4fca145a-27ee-4b4e-9cb0-ca1c6d2c2f3a-catalog-content\") pod \"certified-operators-l66j9\" (UID: \"4fca145a-27ee-4b4e-9cb0-ca1c6d2c2f3a\") " pod="openshift-marketplace/certified-operators-l66j9" Jan 31 04:02:32 crc kubenswrapper[4827]: I0131 04:02:32.231735 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4fca145a-27ee-4b4e-9cb0-ca1c6d2c2f3a-utilities\") pod \"certified-operators-l66j9\" (UID: \"4fca145a-27ee-4b4e-9cb0-ca1c6d2c2f3a\") " pod="openshift-marketplace/certified-operators-l66j9" Jan 31 04:02:32 crc kubenswrapper[4827]: I0131 04:02:32.232281 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4fca145a-27ee-4b4e-9cb0-ca1c6d2c2f3a-utilities\") pod \"certified-operators-l66j9\" (UID: \"4fca145a-27ee-4b4e-9cb0-ca1c6d2c2f3a\") " pod="openshift-marketplace/certified-operators-l66j9" Jan 31 04:02:32 crc kubenswrapper[4827]: I0131 04:02:32.232493 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4fca145a-27ee-4b4e-9cb0-ca1c6d2c2f3a-catalog-content\") pod \"certified-operators-l66j9\" (UID: \"4fca145a-27ee-4b4e-9cb0-ca1c6d2c2f3a\") " pod="openshift-marketplace/certified-operators-l66j9" Jan 31 04:02:32 crc kubenswrapper[4827]: I0131 04:02:32.259002 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-485z6\" (UniqueName: \"kubernetes.io/projected/4fca145a-27ee-4b4e-9cb0-ca1c6d2c2f3a-kube-api-access-485z6\") pod \"certified-operators-l66j9\" (UID: \"4fca145a-27ee-4b4e-9cb0-ca1c6d2c2f3a\") " pod="openshift-marketplace/certified-operators-l66j9" Jan 31 04:02:32 crc kubenswrapper[4827]: I0131 04:02:32.419566 4827 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-l66j9" Jan 31 04:02:32 crc kubenswrapper[4827]: I0131 04:02:32.476671 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4dg9krf" event={"ID":"ff81629a-d048-4c5d-b3a4-b892310ceff7","Type":"ContainerStarted","Data":"c17c34136586d8d4a72363635a490000a329ec53971ea4850553a8026b0d2293"} Jan 31 04:02:32 crc kubenswrapper[4827]: I0131 04:02:32.478377 4827 generic.go:334] "Generic (PLEG): container finished" podID="0c61dfb4-4c69-4257-b957-09fde25ca424" containerID="b83296d6e68fb918eb82d063f11aa1a1114139cd65028d430e257744a97afa6c" exitCode=0 Jan 31 04:02:32 crc kubenswrapper[4827]: I0131 04:02:32.478457 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-cs46v" event={"ID":"0c61dfb4-4c69-4257-b957-09fde25ca424","Type":"ContainerDied","Data":"b83296d6e68fb918eb82d063f11aa1a1114139cd65028d430e257744a97afa6c"} Jan 31 04:02:32 crc kubenswrapper[4827]: I0131 04:02:32.480431 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-6687f8d877-k7f4f" event={"ID":"b3c58b9c-4561-49ae-a23c-a77a34b8cfb5","Type":"ContainerStarted","Data":"cde7e13badfe1f25bc32a8600a0ef052e5ebb4284b8501c4a16420bd61d6686d"} Jan 31 04:02:32 crc kubenswrapper[4827]: I0131 04:02:32.489053 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/telemetry-operator-controller-manager-64b5b76f97-plj6q" event={"ID":"4d581cf6-c77f-4757-9091-cb1e23bfbcda","Type":"ContainerStarted","Data":"fbf4c065c493104303ffe1e0c39fd9ea21f925050e6cc45141f121ed52042e4d"} Jan 31 04:02:32 crc kubenswrapper[4827]: I0131 04:02:32.489265 4827 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/telemetry-operator-controller-manager-64b5b76f97-plj6q" Jan 31 04:02:32 crc kubenswrapper[4827]: I0131 04:02:32.501712 4827 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-79955696d6-gcs7k" event={"ID":"00f00c32-1e04-42e4-95b4-923c6b57386e","Type":"ContainerStarted","Data":"082053a0a21ff6a47a4c99d715175561a3f702725db423ec3e119386b88b1c5d"} Jan 31 04:02:32 crc kubenswrapper[4827]: I0131 04:02:32.501853 4827 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/infra-operator-controller-manager-79955696d6-gcs7k" Jan 31 04:02:32 crc kubenswrapper[4827]: I0131 04:02:32.535092 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-8886f4c47-zdtlh" event={"ID":"c0c17a5a-5f0d-421e-b29c-56c4f2626a7b","Type":"ContainerStarted","Data":"8effa378dddc672096f3c9c3313a7d70504ff31f7a7fec91ecdf58cde16ab3e9"} Jan 31 04:02:32 crc kubenswrapper[4827]: I0131 04:02:32.544363 4827 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/telemetry-operator-controller-manager-64b5b76f97-plj6q" podStartSLOduration=3.527035474 podStartE2EDuration="42.544342739s" podCreationTimestamp="2026-01-31 04:01:50 +0000 UTC" firstStartedPulling="2026-01-31 04:01:52.219639263 +0000 UTC m=+904.906719712" lastFinishedPulling="2026-01-31 04:02:31.236946488 +0000 UTC m=+943.924026977" observedRunningTime="2026-01-31 04:02:32.540734575 +0000 UTC m=+945.227815024" watchObservedRunningTime="2026-01-31 04:02:32.544342739 +0000 UTC m=+945.231423188" Jan 31 04:02:32 crc kubenswrapper[4827]: I0131 04:02:32.568277 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-68fc8c869-6jhd8" event={"ID":"ddb4ccbd-d7ed-4c26-97c4-22ce6c38b431","Type":"ContainerStarted","Data":"7ed4e4d378941533221aefc700a5c00fbbb1710d0f481afe9a6718f34dc98f8d"} Jan 31 04:02:32 crc kubenswrapper[4827]: I0131 04:02:32.568900 4827 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openstack-operators/swift-operator-controller-manager-68fc8c869-6jhd8" Jan 31 04:02:32 crc kubenswrapper[4827]: I0131 04:02:32.569726 4827 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/infra-operator-controller-manager-79955696d6-gcs7k" podStartSLOduration=19.342370752 podStartE2EDuration="43.569707304s" podCreationTimestamp="2026-01-31 04:01:49 +0000 UTC" firstStartedPulling="2026-01-31 04:02:07.004798747 +0000 UTC m=+919.691879196" lastFinishedPulling="2026-01-31 04:02:31.232135289 +0000 UTC m=+943.919215748" observedRunningTime="2026-01-31 04:02:32.569120707 +0000 UTC m=+945.256201156" watchObservedRunningTime="2026-01-31 04:02:32.569707304 +0000 UTC m=+945.256787753" Jan 31 04:02:32 crc kubenswrapper[4827]: I0131 04:02:32.589823 4827 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/swift-operator-controller-manager-68fc8c869-6jhd8" podStartSLOduration=3.606280866 podStartE2EDuration="42.589808397s" podCreationTimestamp="2026-01-31 04:01:50 +0000 UTC" firstStartedPulling="2026-01-31 04:01:52.249245967 +0000 UTC m=+904.936326426" lastFinishedPulling="2026-01-31 04:02:31.232773468 +0000 UTC m=+943.919853957" observedRunningTime="2026-01-31 04:02:32.589716054 +0000 UTC m=+945.276796503" watchObservedRunningTime="2026-01-31 04:02:32.589808397 +0000 UTC m=+945.276888846" Jan 31 04:02:32 crc kubenswrapper[4827]: I0131 04:02:32.763043 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-l66j9"] Jan 31 04:02:32 crc kubenswrapper[4827]: W0131 04:02:32.771902 4827 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4fca145a_27ee_4b4e_9cb0_ca1c6d2c2f3a.slice/crio-bf825f734e82ccda7e4d347d05b0aa08916a373b1508565be959eff9f2f8087b WatchSource:0}: Error finding container bf825f734e82ccda7e4d347d05b0aa08916a373b1508565be959eff9f2f8087b: Status 404 returned error 
can't find the container with id bf825f734e82ccda7e4d347d05b0aa08916a373b1508565be959eff9f2f8087b Jan 31 04:02:33 crc kubenswrapper[4827]: I0131 04:02:33.577543 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-l66j9" event={"ID":"4fca145a-27ee-4b4e-9cb0-ca1c6d2c2f3a","Type":"ContainerStarted","Data":"36ff0a6b869f89e935a89f9ff02bd16db84822b90b806e66da8c56c64eb84d7d"} Jan 31 04:02:33 crc kubenswrapper[4827]: I0131 04:02:33.577810 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-l66j9" event={"ID":"4fca145a-27ee-4b4e-9cb0-ca1c6d2c2f3a","Type":"ContainerStarted","Data":"bf825f734e82ccda7e4d347d05b0aa08916a373b1508565be959eff9f2f8087b"} Jan 31 04:02:33 crc kubenswrapper[4827]: I0131 04:02:33.578084 4827 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/octavia-operator-controller-manager-6687f8d877-k7f4f" Jan 31 04:02:33 crc kubenswrapper[4827]: I0131 04:02:33.601390 4827 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/octavia-operator-controller-manager-6687f8d877-k7f4f" podStartSLOduration=4.583904187 podStartE2EDuration="43.601375897s" podCreationTimestamp="2026-01-31 04:01:50 +0000 UTC" firstStartedPulling="2026-01-31 04:01:52.219576831 +0000 UTC m=+904.906657280" lastFinishedPulling="2026-01-31 04:02:31.237048491 +0000 UTC m=+943.924128990" observedRunningTime="2026-01-31 04:02:33.597640118 +0000 UTC m=+946.284720567" watchObservedRunningTime="2026-01-31 04:02:33.601375897 +0000 UTC m=+946.288456346" Jan 31 04:02:33 crc kubenswrapper[4827]: I0131 04:02:33.629843 4827 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4dg9krf" podStartSLOduration=19.880813236 podStartE2EDuration="43.62982539s" podCreationTimestamp="2026-01-31 04:01:50 +0000 UTC" firstStartedPulling="2026-01-31 
04:02:07.489042277 +0000 UTC m=+920.176122726" lastFinishedPulling="2026-01-31 04:02:31.238054421 +0000 UTC m=+943.925134880" observedRunningTime="2026-01-31 04:02:33.621510879 +0000 UTC m=+946.308591328" watchObservedRunningTime="2026-01-31 04:02:33.62982539 +0000 UTC m=+946.316905839" Jan 31 04:02:33 crc kubenswrapper[4827]: I0131 04:02:33.642449 4827 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/glance-operator-controller-manager-8886f4c47-zdtlh" podStartSLOduration=5.613731151 podStartE2EDuration="44.642432236s" podCreationTimestamp="2026-01-31 04:01:49 +0000 UTC" firstStartedPulling="2026-01-31 04:01:52.203441294 +0000 UTC m=+904.890521743" lastFinishedPulling="2026-01-31 04:02:31.232142369 +0000 UTC m=+943.919222828" observedRunningTime="2026-01-31 04:02:33.636149104 +0000 UTC m=+946.323229553" watchObservedRunningTime="2026-01-31 04:02:33.642432236 +0000 UTC m=+946.329512685" Jan 31 04:02:34 crc kubenswrapper[4827]: I0131 04:02:34.584843 4827 generic.go:334] "Generic (PLEG): container finished" podID="4fca145a-27ee-4b4e-9cb0-ca1c6d2c2f3a" containerID="36ff0a6b869f89e935a89f9ff02bd16db84822b90b806e66da8c56c64eb84d7d" exitCode=0 Jan 31 04:02:34 crc kubenswrapper[4827]: I0131 04:02:34.584947 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-l66j9" event={"ID":"4fca145a-27ee-4b4e-9cb0-ca1c6d2c2f3a","Type":"ContainerDied","Data":"36ff0a6b869f89e935a89f9ff02bd16db84822b90b806e66da8c56c64eb84d7d"} Jan 31 04:02:34 crc kubenswrapper[4827]: I0131 04:02:34.585311 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-l66j9" event={"ID":"4fca145a-27ee-4b4e-9cb0-ca1c6d2c2f3a","Type":"ContainerStarted","Data":"3d4ffa7a015005ab1983c30599cf85e9d40dacc17006f6502d7c2977a0a5f9d9"} Jan 31 04:02:34 crc kubenswrapper[4827]: I0131 04:02:34.587746 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/redhat-marketplace-cs46v" event={"ID":"0c61dfb4-4c69-4257-b957-09fde25ca424","Type":"ContainerStarted","Data":"9cac494b5d794562bffe51989c4d93eb5c3be942bb72e3fb5d9b9f1efbd6ab67"} Jan 31 04:02:34 crc kubenswrapper[4827]: I0131 04:02:34.630383 4827 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-cs46v" podStartSLOduration=4.33248591 podStartE2EDuration="29.630366121s" podCreationTimestamp="2026-01-31 04:02:05 +0000 UTC" firstStartedPulling="2026-01-31 04:02:08.259065088 +0000 UTC m=+920.946145537" lastFinishedPulling="2026-01-31 04:02:33.556945299 +0000 UTC m=+946.244025748" observedRunningTime="2026-01-31 04:02:34.625720376 +0000 UTC m=+947.312800845" watchObservedRunningTime="2026-01-31 04:02:34.630366121 +0000 UTC m=+947.317446570" Jan 31 04:02:35 crc kubenswrapper[4827]: I0131 04:02:35.597401 4827 generic.go:334] "Generic (PLEG): container finished" podID="4fca145a-27ee-4b4e-9cb0-ca1c6d2c2f3a" containerID="3d4ffa7a015005ab1983c30599cf85e9d40dacc17006f6502d7c2977a0a5f9d9" exitCode=0 Jan 31 04:02:35 crc kubenswrapper[4827]: I0131 04:02:35.597505 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-l66j9" event={"ID":"4fca145a-27ee-4b4e-9cb0-ca1c6d2c2f3a","Type":"ContainerDied","Data":"3d4ffa7a015005ab1983c30599cf85e9d40dacc17006f6502d7c2977a0a5f9d9"} Jan 31 04:02:35 crc kubenswrapper[4827]: I0131 04:02:35.801029 4827 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-cs46v" Jan 31 04:02:35 crc kubenswrapper[4827]: I0131 04:02:35.801418 4827 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-cs46v" Jan 31 04:02:35 crc kubenswrapper[4827]: I0131 04:02:35.863421 4827 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-cs46v" Jan 31 04:02:36 crc 
kubenswrapper[4827]: E0131 04:02:36.110409 4827 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/test-operator@sha256:3e01e99d3ca1b6c20b1bb015b00cfcbffc584f22a93dc6fe4019d63b813c0241\\\"\"" pod="openstack-operators/test-operator-controller-manager-56f8bfcd9f-fr6qf" podUID="0d53929a-c249-47fa-9d02-98021a8bcf2a" Jan 31 04:02:36 crc kubenswrapper[4827]: I0131 04:02:36.605483 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-l66j9" event={"ID":"4fca145a-27ee-4b4e-9cb0-ca1c6d2c2f3a","Type":"ContainerStarted","Data":"4f0fbb99d0ea3e6735c6a2589840f36d8c5336698d49b05aa5326a7889795f94"} Jan 31 04:02:36 crc kubenswrapper[4827]: I0131 04:02:36.625618 4827 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-l66j9" podStartSLOduration=2.215209331 podStartE2EDuration="4.625600681s" podCreationTimestamp="2026-01-31 04:02:32 +0000 UTC" firstStartedPulling="2026-01-31 04:02:33.58010652 +0000 UTC m=+946.267186969" lastFinishedPulling="2026-01-31 04:02:35.99049787 +0000 UTC m=+948.677578319" observedRunningTime="2026-01-31 04:02:36.622143082 +0000 UTC m=+949.309223571" watchObservedRunningTime="2026-01-31 04:02:36.625600681 +0000 UTC m=+949.312681130" Jan 31 04:02:36 crc kubenswrapper[4827]: I0131 04:02:36.700852 4827 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4dg9krf" Jan 31 04:02:40 crc kubenswrapper[4827]: I0131 04:02:40.287475 4827 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/glance-operator-controller-manager-8886f4c47-zdtlh" Jan 31 04:02:40 crc kubenswrapper[4827]: I0131 04:02:40.290460 4827 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openstack-operators/glance-operator-controller-manager-8886f4c47-zdtlh" Jan 31 04:02:41 crc kubenswrapper[4827]: I0131 04:02:41.013147 4827 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/octavia-operator-controller-manager-6687f8d877-k7f4f" Jan 31 04:02:41 crc kubenswrapper[4827]: I0131 04:02:41.164347 4827 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/swift-operator-controller-manager-68fc8c869-6jhd8" Jan 31 04:02:41 crc kubenswrapper[4827]: I0131 04:02:41.187499 4827 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/telemetry-operator-controller-manager-64b5b76f97-plj6q" Jan 31 04:02:42 crc kubenswrapper[4827]: E0131 04:02:42.111433 4827 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2\\\"\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-zdjjp" podUID="0d85c53f-5192-4621-86cc-d9403773713b" Jan 31 04:02:42 crc kubenswrapper[4827]: I0131 04:02:42.420273 4827 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-l66j9" Jan 31 04:02:42 crc kubenswrapper[4827]: I0131 04:02:42.420346 4827 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-l66j9" Jan 31 04:02:42 crc kubenswrapper[4827]: I0131 04:02:42.486757 4827 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-l66j9" Jan 31 04:02:42 crc kubenswrapper[4827]: I0131 04:02:42.685449 4827 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-l66j9" Jan 31 04:02:42 crc 
kubenswrapper[4827]: I0131 04:02:42.731088 4827 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-l66j9"] Jan 31 04:02:44 crc kubenswrapper[4827]: I0131 04:02:44.654789 4827 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-l66j9" podUID="4fca145a-27ee-4b4e-9cb0-ca1c6d2c2f3a" containerName="registry-server" containerID="cri-o://4f0fbb99d0ea3e6735c6a2589840f36d8c5336698d49b05aa5326a7889795f94" gracePeriod=2 Jan 31 04:02:45 crc kubenswrapper[4827]: I0131 04:02:45.070313 4827 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-l66j9" Jan 31 04:02:45 crc kubenswrapper[4827]: I0131 04:02:45.240038 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4fca145a-27ee-4b4e-9cb0-ca1c6d2c2f3a-catalog-content\") pod \"4fca145a-27ee-4b4e-9cb0-ca1c6d2c2f3a\" (UID: \"4fca145a-27ee-4b4e-9cb0-ca1c6d2c2f3a\") " Jan 31 04:02:45 crc kubenswrapper[4827]: I0131 04:02:45.240416 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4fca145a-27ee-4b4e-9cb0-ca1c6d2c2f3a-utilities\") pod \"4fca145a-27ee-4b4e-9cb0-ca1c6d2c2f3a\" (UID: \"4fca145a-27ee-4b4e-9cb0-ca1c6d2c2f3a\") " Jan 31 04:02:45 crc kubenswrapper[4827]: I0131 04:02:45.240581 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-485z6\" (UniqueName: \"kubernetes.io/projected/4fca145a-27ee-4b4e-9cb0-ca1c6d2c2f3a-kube-api-access-485z6\") pod \"4fca145a-27ee-4b4e-9cb0-ca1c6d2c2f3a\" (UID: \"4fca145a-27ee-4b4e-9cb0-ca1c6d2c2f3a\") " Jan 31 04:02:45 crc kubenswrapper[4827]: I0131 04:02:45.241951 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/empty-dir/4fca145a-27ee-4b4e-9cb0-ca1c6d2c2f3a-utilities" (OuterVolumeSpecName: "utilities") pod "4fca145a-27ee-4b4e-9cb0-ca1c6d2c2f3a" (UID: "4fca145a-27ee-4b4e-9cb0-ca1c6d2c2f3a"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 04:02:45 crc kubenswrapper[4827]: I0131 04:02:45.254068 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4fca145a-27ee-4b4e-9cb0-ca1c6d2c2f3a-kube-api-access-485z6" (OuterVolumeSpecName: "kube-api-access-485z6") pod "4fca145a-27ee-4b4e-9cb0-ca1c6d2c2f3a" (UID: "4fca145a-27ee-4b4e-9cb0-ca1c6d2c2f3a"). InnerVolumeSpecName "kube-api-access-485z6". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 04:02:45 crc kubenswrapper[4827]: I0131 04:02:45.293576 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4fca145a-27ee-4b4e-9cb0-ca1c6d2c2f3a-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "4fca145a-27ee-4b4e-9cb0-ca1c6d2c2f3a" (UID: "4fca145a-27ee-4b4e-9cb0-ca1c6d2c2f3a"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 04:02:45 crc kubenswrapper[4827]: I0131 04:02:45.342816 4827 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4fca145a-27ee-4b4e-9cb0-ca1c6d2c2f3a-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 31 04:02:45 crc kubenswrapper[4827]: I0131 04:02:45.342852 4827 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4fca145a-27ee-4b4e-9cb0-ca1c6d2c2f3a-utilities\") on node \"crc\" DevicePath \"\"" Jan 31 04:02:45 crc kubenswrapper[4827]: I0131 04:02:45.342863 4827 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-485z6\" (UniqueName: \"kubernetes.io/projected/4fca145a-27ee-4b4e-9cb0-ca1c6d2c2f3a-kube-api-access-485z6\") on node \"crc\" DevicePath \"\"" Jan 31 04:02:45 crc kubenswrapper[4827]: I0131 04:02:45.662718 4827 generic.go:334] "Generic (PLEG): container finished" podID="4fca145a-27ee-4b4e-9cb0-ca1c6d2c2f3a" containerID="4f0fbb99d0ea3e6735c6a2589840f36d8c5336698d49b05aa5326a7889795f94" exitCode=0 Jan 31 04:02:45 crc kubenswrapper[4827]: I0131 04:02:45.662766 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-l66j9" event={"ID":"4fca145a-27ee-4b4e-9cb0-ca1c6d2c2f3a","Type":"ContainerDied","Data":"4f0fbb99d0ea3e6735c6a2589840f36d8c5336698d49b05aa5326a7889795f94"} Jan 31 04:02:45 crc kubenswrapper[4827]: I0131 04:02:45.662773 4827 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-l66j9" Jan 31 04:02:45 crc kubenswrapper[4827]: I0131 04:02:45.662801 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-l66j9" event={"ID":"4fca145a-27ee-4b4e-9cb0-ca1c6d2c2f3a","Type":"ContainerDied","Data":"bf825f734e82ccda7e4d347d05b0aa08916a373b1508565be959eff9f2f8087b"} Jan 31 04:02:45 crc kubenswrapper[4827]: I0131 04:02:45.662825 4827 scope.go:117] "RemoveContainer" containerID="4f0fbb99d0ea3e6735c6a2589840f36d8c5336698d49b05aa5326a7889795f94" Jan 31 04:02:45 crc kubenswrapper[4827]: I0131 04:02:45.697165 4827 scope.go:117] "RemoveContainer" containerID="3d4ffa7a015005ab1983c30599cf85e9d40dacc17006f6502d7c2977a0a5f9d9" Jan 31 04:02:45 crc kubenswrapper[4827]: I0131 04:02:45.717112 4827 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-l66j9"] Jan 31 04:02:45 crc kubenswrapper[4827]: I0131 04:02:45.726362 4827 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-l66j9"] Jan 31 04:02:45 crc kubenswrapper[4827]: I0131 04:02:45.727973 4827 scope.go:117] "RemoveContainer" containerID="36ff0a6b869f89e935a89f9ff02bd16db84822b90b806e66da8c56c64eb84d7d" Jan 31 04:02:45 crc kubenswrapper[4827]: I0131 04:02:45.749496 4827 scope.go:117] "RemoveContainer" containerID="4f0fbb99d0ea3e6735c6a2589840f36d8c5336698d49b05aa5326a7889795f94" Jan 31 04:02:45 crc kubenswrapper[4827]: E0131 04:02:45.749933 4827 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4f0fbb99d0ea3e6735c6a2589840f36d8c5336698d49b05aa5326a7889795f94\": container with ID starting with 4f0fbb99d0ea3e6735c6a2589840f36d8c5336698d49b05aa5326a7889795f94 not found: ID does not exist" containerID="4f0fbb99d0ea3e6735c6a2589840f36d8c5336698d49b05aa5326a7889795f94" Jan 31 04:02:45 crc kubenswrapper[4827]: I0131 04:02:45.749972 4827 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4f0fbb99d0ea3e6735c6a2589840f36d8c5336698d49b05aa5326a7889795f94"} err="failed to get container status \"4f0fbb99d0ea3e6735c6a2589840f36d8c5336698d49b05aa5326a7889795f94\": rpc error: code = NotFound desc = could not find container \"4f0fbb99d0ea3e6735c6a2589840f36d8c5336698d49b05aa5326a7889795f94\": container with ID starting with 4f0fbb99d0ea3e6735c6a2589840f36d8c5336698d49b05aa5326a7889795f94 not found: ID does not exist" Jan 31 04:02:45 crc kubenswrapper[4827]: I0131 04:02:45.750001 4827 scope.go:117] "RemoveContainer" containerID="3d4ffa7a015005ab1983c30599cf85e9d40dacc17006f6502d7c2977a0a5f9d9" Jan 31 04:02:45 crc kubenswrapper[4827]: E0131 04:02:45.750460 4827 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3d4ffa7a015005ab1983c30599cf85e9d40dacc17006f6502d7c2977a0a5f9d9\": container with ID starting with 3d4ffa7a015005ab1983c30599cf85e9d40dacc17006f6502d7c2977a0a5f9d9 not found: ID does not exist" containerID="3d4ffa7a015005ab1983c30599cf85e9d40dacc17006f6502d7c2977a0a5f9d9" Jan 31 04:02:45 crc kubenswrapper[4827]: I0131 04:02:45.750497 4827 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3d4ffa7a015005ab1983c30599cf85e9d40dacc17006f6502d7c2977a0a5f9d9"} err="failed to get container status \"3d4ffa7a015005ab1983c30599cf85e9d40dacc17006f6502d7c2977a0a5f9d9\": rpc error: code = NotFound desc = could not find container \"3d4ffa7a015005ab1983c30599cf85e9d40dacc17006f6502d7c2977a0a5f9d9\": container with ID starting with 3d4ffa7a015005ab1983c30599cf85e9d40dacc17006f6502d7c2977a0a5f9d9 not found: ID does not exist" Jan 31 04:02:45 crc kubenswrapper[4827]: I0131 04:02:45.750524 4827 scope.go:117] "RemoveContainer" containerID="36ff0a6b869f89e935a89f9ff02bd16db84822b90b806e66da8c56c64eb84d7d" Jan 31 04:02:45 crc kubenswrapper[4827]: E0131 
04:02:45.750891 4827 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"36ff0a6b869f89e935a89f9ff02bd16db84822b90b806e66da8c56c64eb84d7d\": container with ID starting with 36ff0a6b869f89e935a89f9ff02bd16db84822b90b806e66da8c56c64eb84d7d not found: ID does not exist" containerID="36ff0a6b869f89e935a89f9ff02bd16db84822b90b806e66da8c56c64eb84d7d" Jan 31 04:02:45 crc kubenswrapper[4827]: I0131 04:02:45.750921 4827 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"36ff0a6b869f89e935a89f9ff02bd16db84822b90b806e66da8c56c64eb84d7d"} err="failed to get container status \"36ff0a6b869f89e935a89f9ff02bd16db84822b90b806e66da8c56c64eb84d7d\": rpc error: code = NotFound desc = could not find container \"36ff0a6b869f89e935a89f9ff02bd16db84822b90b806e66da8c56c64eb84d7d\": container with ID starting with 36ff0a6b869f89e935a89f9ff02bd16db84822b90b806e66da8c56c64eb84d7d not found: ID does not exist" Jan 31 04:02:45 crc kubenswrapper[4827]: I0131 04:02:45.839871 4827 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-cs46v" Jan 31 04:02:46 crc kubenswrapper[4827]: E0131 04:02:46.111327 4827 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/watcher-operator@sha256:7869203f6f97de780368d507636031090fed3b658d2f7771acbd4481bdfc870b\\\"\"" pod="openstack-operators/watcher-operator-controller-manager-564965969-m97nw" podUID="5666901d-66a6-4282-b44c-c39a0721faa2" Jan 31 04:02:46 crc kubenswrapper[4827]: I0131 04:02:46.117476 4827 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4fca145a-27ee-4b4e-9cb0-ca1c6d2c2f3a" path="/var/lib/kubelet/pods/4fca145a-27ee-4b4e-9cb0-ca1c6d2c2f3a/volumes" Jan 31 04:02:46 crc kubenswrapper[4827]: I0131 04:02:46.340898 4827 
kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/infra-operator-controller-manager-79955696d6-gcs7k" Jan 31 04:02:46 crc kubenswrapper[4827]: I0131 04:02:46.711603 4827 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4dg9krf" Jan 31 04:02:47 crc kubenswrapper[4827]: I0131 04:02:47.370746 4827 patch_prober.go:28] interesting pod/machine-config-daemon-jxh94 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 31 04:02:47 crc kubenswrapper[4827]: I0131 04:02:47.370827 4827 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-jxh94" podUID="e63dbb73-e1a2-4796-83c5-2a88e55566b5" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 31 04:02:47 crc kubenswrapper[4827]: I0131 04:02:47.370913 4827 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-jxh94" Jan 31 04:02:47 crc kubenswrapper[4827]: I0131 04:02:47.371563 4827 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"b3f2ce1bddb590379802c11a41342b77994eb27a657cdaa9086c8e7edd46b860"} pod="openshift-machine-config-operator/machine-config-daemon-jxh94" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 31 04:02:47 crc kubenswrapper[4827]: I0131 04:02:47.371649 4827 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-jxh94" 
podUID="e63dbb73-e1a2-4796-83c5-2a88e55566b5" containerName="machine-config-daemon" containerID="cri-o://b3f2ce1bddb590379802c11a41342b77994eb27a657cdaa9086c8e7edd46b860" gracePeriod=600 Jan 31 04:02:47 crc kubenswrapper[4827]: I0131 04:02:47.685585 4827 generic.go:334] "Generic (PLEG): container finished" podID="e63dbb73-e1a2-4796-83c5-2a88e55566b5" containerID="b3f2ce1bddb590379802c11a41342b77994eb27a657cdaa9086c8e7edd46b860" exitCode=0 Jan 31 04:02:47 crc kubenswrapper[4827]: I0131 04:02:47.685642 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-jxh94" event={"ID":"e63dbb73-e1a2-4796-83c5-2a88e55566b5","Type":"ContainerDied","Data":"b3f2ce1bddb590379802c11a41342b77994eb27a657cdaa9086c8e7edd46b860"} Jan 31 04:02:47 crc kubenswrapper[4827]: I0131 04:02:47.685696 4827 scope.go:117] "RemoveContainer" containerID="bfaefcdaba61a9df67ef38340b2b8e90d41a85b4a9bee50aad5651159c3ae7f7" Jan 31 04:02:48 crc kubenswrapper[4827]: I0131 04:02:48.127490 4827 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-cs46v"] Jan 31 04:02:48 crc kubenswrapper[4827]: I0131 04:02:48.128001 4827 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-cs46v" podUID="0c61dfb4-4c69-4257-b957-09fde25ca424" containerName="registry-server" containerID="cri-o://9cac494b5d794562bffe51989c4d93eb5c3be942bb72e3fb5d9b9f1efbd6ab67" gracePeriod=2 Jan 31 04:02:48 crc kubenswrapper[4827]: I0131 04:02:48.517474 4827 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-cs46v" Jan 31 04:02:48 crc kubenswrapper[4827]: I0131 04:02:48.690523 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0c61dfb4-4c69-4257-b957-09fde25ca424-catalog-content\") pod \"0c61dfb4-4c69-4257-b957-09fde25ca424\" (UID: \"0c61dfb4-4c69-4257-b957-09fde25ca424\") " Jan 31 04:02:48 crc kubenswrapper[4827]: I0131 04:02:48.690734 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0c61dfb4-4c69-4257-b957-09fde25ca424-utilities\") pod \"0c61dfb4-4c69-4257-b957-09fde25ca424\" (UID: \"0c61dfb4-4c69-4257-b957-09fde25ca424\") " Jan 31 04:02:48 crc kubenswrapper[4827]: I0131 04:02:48.690860 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kdvnd\" (UniqueName: \"kubernetes.io/projected/0c61dfb4-4c69-4257-b957-09fde25ca424-kube-api-access-kdvnd\") pod \"0c61dfb4-4c69-4257-b957-09fde25ca424\" (UID: \"0c61dfb4-4c69-4257-b957-09fde25ca424\") " Jan 31 04:02:48 crc kubenswrapper[4827]: I0131 04:02:48.692449 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0c61dfb4-4c69-4257-b957-09fde25ca424-utilities" (OuterVolumeSpecName: "utilities") pod "0c61dfb4-4c69-4257-b957-09fde25ca424" (UID: "0c61dfb4-4c69-4257-b957-09fde25ca424"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 04:02:48 crc kubenswrapper[4827]: I0131 04:02:48.700275 4827 generic.go:334] "Generic (PLEG): container finished" podID="0c61dfb4-4c69-4257-b957-09fde25ca424" containerID="9cac494b5d794562bffe51989c4d93eb5c3be942bb72e3fb5d9b9f1efbd6ab67" exitCode=0 Jan 31 04:02:48 crc kubenswrapper[4827]: I0131 04:02:48.700366 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-cs46v" event={"ID":"0c61dfb4-4c69-4257-b957-09fde25ca424","Type":"ContainerDied","Data":"9cac494b5d794562bffe51989c4d93eb5c3be942bb72e3fb5d9b9f1efbd6ab67"} Jan 31 04:02:48 crc kubenswrapper[4827]: I0131 04:02:48.700382 4827 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-cs46v" Jan 31 04:02:48 crc kubenswrapper[4827]: I0131 04:02:48.700419 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-cs46v" event={"ID":"0c61dfb4-4c69-4257-b957-09fde25ca424","Type":"ContainerDied","Data":"29ae825bc8c1b1008848e9cef7af1c3bba2ab17d33f0d2c2e73dde9614e792ad"} Jan 31 04:02:48 crc kubenswrapper[4827]: I0131 04:02:48.700442 4827 scope.go:117] "RemoveContainer" containerID="9cac494b5d794562bffe51989c4d93eb5c3be942bb72e3fb5d9b9f1efbd6ab67" Jan 31 04:02:48 crc kubenswrapper[4827]: I0131 04:02:48.704086 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0c61dfb4-4c69-4257-b957-09fde25ca424-kube-api-access-kdvnd" (OuterVolumeSpecName: "kube-api-access-kdvnd") pod "0c61dfb4-4c69-4257-b957-09fde25ca424" (UID: "0c61dfb4-4c69-4257-b957-09fde25ca424"). InnerVolumeSpecName "kube-api-access-kdvnd". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 04:02:48 crc kubenswrapper[4827]: I0131 04:02:48.717148 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-jxh94" event={"ID":"e63dbb73-e1a2-4796-83c5-2a88e55566b5","Type":"ContainerStarted","Data":"1261fe4f40f38bda861655c66e0801cf569b3be5862d7375b02489d6f6686b06"} Jan 31 04:02:48 crc kubenswrapper[4827]: I0131 04:02:48.737191 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0c61dfb4-4c69-4257-b957-09fde25ca424-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "0c61dfb4-4c69-4257-b957-09fde25ca424" (UID: "0c61dfb4-4c69-4257-b957-09fde25ca424"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 04:02:48 crc kubenswrapper[4827]: I0131 04:02:48.752377 4827 scope.go:117] "RemoveContainer" containerID="b83296d6e68fb918eb82d063f11aa1a1114139cd65028d430e257744a97afa6c" Jan 31 04:02:48 crc kubenswrapper[4827]: I0131 04:02:48.773527 4827 scope.go:117] "RemoveContainer" containerID="bbf179a2224fa51aa2361ca2bbed1397d9305e9f20659ef9a9c897490127cd90" Jan 31 04:02:48 crc kubenswrapper[4827]: I0131 04:02:48.790707 4827 scope.go:117] "RemoveContainer" containerID="9cac494b5d794562bffe51989c4d93eb5c3be942bb72e3fb5d9b9f1efbd6ab67" Jan 31 04:02:48 crc kubenswrapper[4827]: E0131 04:02:48.791283 4827 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9cac494b5d794562bffe51989c4d93eb5c3be942bb72e3fb5d9b9f1efbd6ab67\": container with ID starting with 9cac494b5d794562bffe51989c4d93eb5c3be942bb72e3fb5d9b9f1efbd6ab67 not found: ID does not exist" containerID="9cac494b5d794562bffe51989c4d93eb5c3be942bb72e3fb5d9b9f1efbd6ab67" Jan 31 04:02:48 crc kubenswrapper[4827]: I0131 04:02:48.791348 4827 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"9cac494b5d794562bffe51989c4d93eb5c3be942bb72e3fb5d9b9f1efbd6ab67"} err="failed to get container status \"9cac494b5d794562bffe51989c4d93eb5c3be942bb72e3fb5d9b9f1efbd6ab67\": rpc error: code = NotFound desc = could not find container \"9cac494b5d794562bffe51989c4d93eb5c3be942bb72e3fb5d9b9f1efbd6ab67\": container with ID starting with 9cac494b5d794562bffe51989c4d93eb5c3be942bb72e3fb5d9b9f1efbd6ab67 not found: ID does not exist" Jan 31 04:02:48 crc kubenswrapper[4827]: I0131 04:02:48.791382 4827 scope.go:117] "RemoveContainer" containerID="b83296d6e68fb918eb82d063f11aa1a1114139cd65028d430e257744a97afa6c" Jan 31 04:02:48 crc kubenswrapper[4827]: E0131 04:02:48.791778 4827 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b83296d6e68fb918eb82d063f11aa1a1114139cd65028d430e257744a97afa6c\": container with ID starting with b83296d6e68fb918eb82d063f11aa1a1114139cd65028d430e257744a97afa6c not found: ID does not exist" containerID="b83296d6e68fb918eb82d063f11aa1a1114139cd65028d430e257744a97afa6c" Jan 31 04:02:48 crc kubenswrapper[4827]: I0131 04:02:48.791811 4827 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b83296d6e68fb918eb82d063f11aa1a1114139cd65028d430e257744a97afa6c"} err="failed to get container status \"b83296d6e68fb918eb82d063f11aa1a1114139cd65028d430e257744a97afa6c\": rpc error: code = NotFound desc = could not find container \"b83296d6e68fb918eb82d063f11aa1a1114139cd65028d430e257744a97afa6c\": container with ID starting with b83296d6e68fb918eb82d063f11aa1a1114139cd65028d430e257744a97afa6c not found: ID does not exist" Jan 31 04:02:48 crc kubenswrapper[4827]: I0131 04:02:48.791834 4827 scope.go:117] "RemoveContainer" containerID="bbf179a2224fa51aa2361ca2bbed1397d9305e9f20659ef9a9c897490127cd90" Jan 31 04:02:48 crc kubenswrapper[4827]: E0131 04:02:48.792079 4827 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"bbf179a2224fa51aa2361ca2bbed1397d9305e9f20659ef9a9c897490127cd90\": container with ID starting with bbf179a2224fa51aa2361ca2bbed1397d9305e9f20659ef9a9c897490127cd90 not found: ID does not exist" containerID="bbf179a2224fa51aa2361ca2bbed1397d9305e9f20659ef9a9c897490127cd90" Jan 31 04:02:48 crc kubenswrapper[4827]: I0131 04:02:48.792103 4827 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bbf179a2224fa51aa2361ca2bbed1397d9305e9f20659ef9a9c897490127cd90"} err="failed to get container status \"bbf179a2224fa51aa2361ca2bbed1397d9305e9f20659ef9a9c897490127cd90\": rpc error: code = NotFound desc = could not find container \"bbf179a2224fa51aa2361ca2bbed1397d9305e9f20659ef9a9c897490127cd90\": container with ID starting with bbf179a2224fa51aa2361ca2bbed1397d9305e9f20659ef9a9c897490127cd90 not found: ID does not exist" Jan 31 04:02:48 crc kubenswrapper[4827]: I0131 04:02:48.792807 4827 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kdvnd\" (UniqueName: \"kubernetes.io/projected/0c61dfb4-4c69-4257-b957-09fde25ca424-kube-api-access-kdvnd\") on node \"crc\" DevicePath \"\"" Jan 31 04:02:48 crc kubenswrapper[4827]: I0131 04:02:48.792834 4827 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0c61dfb4-4c69-4257-b957-09fde25ca424-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 31 04:02:48 crc kubenswrapper[4827]: I0131 04:02:48.792849 4827 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0c61dfb4-4c69-4257-b957-09fde25ca424-utilities\") on node \"crc\" DevicePath \"\"" Jan 31 04:02:49 crc kubenswrapper[4827]: I0131 04:02:49.031146 4827 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-cs46v"] Jan 31 04:02:49 crc kubenswrapper[4827]: I0131 04:02:49.042464 4827 
kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-cs46v"] Jan 31 04:02:50 crc kubenswrapper[4827]: I0131 04:02:50.111481 4827 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 31 04:02:50 crc kubenswrapper[4827]: I0131 04:02:50.121446 4827 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0c61dfb4-4c69-4257-b957-09fde25ca424" path="/var/lib/kubelet/pods/0c61dfb4-4c69-4257-b957-09fde25ca424/volumes" Jan 31 04:02:50 crc kubenswrapper[4827]: I0131 04:02:50.548064 4827 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-wcmjf"] Jan 31 04:02:50 crc kubenswrapper[4827]: E0131 04:02:50.548554 4827 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0c61dfb4-4c69-4257-b957-09fde25ca424" containerName="extract-content" Jan 31 04:02:50 crc kubenswrapper[4827]: I0131 04:02:50.548596 4827 state_mem.go:107] "Deleted CPUSet assignment" podUID="0c61dfb4-4c69-4257-b957-09fde25ca424" containerName="extract-content" Jan 31 04:02:50 crc kubenswrapper[4827]: E0131 04:02:50.548635 4827 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4fca145a-27ee-4b4e-9cb0-ca1c6d2c2f3a" containerName="extract-content" Jan 31 04:02:50 crc kubenswrapper[4827]: I0131 04:02:50.548651 4827 state_mem.go:107] "Deleted CPUSet assignment" podUID="4fca145a-27ee-4b4e-9cb0-ca1c6d2c2f3a" containerName="extract-content" Jan 31 04:02:50 crc kubenswrapper[4827]: E0131 04:02:50.548684 4827 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4fca145a-27ee-4b4e-9cb0-ca1c6d2c2f3a" containerName="registry-server" Jan 31 04:02:50 crc kubenswrapper[4827]: I0131 04:02:50.548704 4827 state_mem.go:107] "Deleted CPUSet assignment" podUID="4fca145a-27ee-4b4e-9cb0-ca1c6d2c2f3a" containerName="registry-server" Jan 31 04:02:50 crc kubenswrapper[4827]: E0131 04:02:50.548737 4827 cpu_manager.go:410] "RemoveStaleState: removing 
container" podUID="0c61dfb4-4c69-4257-b957-09fde25ca424" containerName="registry-server" Jan 31 04:02:50 crc kubenswrapper[4827]: I0131 04:02:50.548750 4827 state_mem.go:107] "Deleted CPUSet assignment" podUID="0c61dfb4-4c69-4257-b957-09fde25ca424" containerName="registry-server" Jan 31 04:02:50 crc kubenswrapper[4827]: E0131 04:02:50.548765 4827 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0c61dfb4-4c69-4257-b957-09fde25ca424" containerName="extract-utilities" Jan 31 04:02:50 crc kubenswrapper[4827]: I0131 04:02:50.548777 4827 state_mem.go:107] "Deleted CPUSet assignment" podUID="0c61dfb4-4c69-4257-b957-09fde25ca424" containerName="extract-utilities" Jan 31 04:02:50 crc kubenswrapper[4827]: E0131 04:02:50.548799 4827 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4fca145a-27ee-4b4e-9cb0-ca1c6d2c2f3a" containerName="extract-utilities" Jan 31 04:02:50 crc kubenswrapper[4827]: I0131 04:02:50.548812 4827 state_mem.go:107] "Deleted CPUSet assignment" podUID="4fca145a-27ee-4b4e-9cb0-ca1c6d2c2f3a" containerName="extract-utilities" Jan 31 04:02:50 crc kubenswrapper[4827]: I0131 04:02:50.549138 4827 memory_manager.go:354] "RemoveStaleState removing state" podUID="4fca145a-27ee-4b4e-9cb0-ca1c6d2c2f3a" containerName="registry-server" Jan 31 04:02:50 crc kubenswrapper[4827]: I0131 04:02:50.549176 4827 memory_manager.go:354] "RemoveStaleState removing state" podUID="0c61dfb4-4c69-4257-b957-09fde25ca424" containerName="registry-server" Jan 31 04:02:50 crc kubenswrapper[4827]: I0131 04:02:50.552048 4827 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-wcmjf" Jan 31 04:02:50 crc kubenswrapper[4827]: I0131 04:02:50.571960 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-wcmjf"] Jan 31 04:02:50 crc kubenswrapper[4827]: I0131 04:02:50.621763 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b9d67368-11d1-4b34-9d15-f634f594e67a-catalog-content\") pod \"community-operators-wcmjf\" (UID: \"b9d67368-11d1-4b34-9d15-f634f594e67a\") " pod="openshift-marketplace/community-operators-wcmjf" Jan 31 04:02:50 crc kubenswrapper[4827]: I0131 04:02:50.621939 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b9d67368-11d1-4b34-9d15-f634f594e67a-utilities\") pod \"community-operators-wcmjf\" (UID: \"b9d67368-11d1-4b34-9d15-f634f594e67a\") " pod="openshift-marketplace/community-operators-wcmjf" Jan 31 04:02:50 crc kubenswrapper[4827]: I0131 04:02:50.622041 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2d5zl\" (UniqueName: \"kubernetes.io/projected/b9d67368-11d1-4b34-9d15-f634f594e67a-kube-api-access-2d5zl\") pod \"community-operators-wcmjf\" (UID: \"b9d67368-11d1-4b34-9d15-f634f594e67a\") " pod="openshift-marketplace/community-operators-wcmjf" Jan 31 04:02:50 crc kubenswrapper[4827]: I0131 04:02:50.723106 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b9d67368-11d1-4b34-9d15-f634f594e67a-catalog-content\") pod \"community-operators-wcmjf\" (UID: \"b9d67368-11d1-4b34-9d15-f634f594e67a\") " pod="openshift-marketplace/community-operators-wcmjf" Jan 31 04:02:50 crc kubenswrapper[4827]: I0131 04:02:50.723194 4827 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b9d67368-11d1-4b34-9d15-f634f594e67a-utilities\") pod \"community-operators-wcmjf\" (UID: \"b9d67368-11d1-4b34-9d15-f634f594e67a\") " pod="openshift-marketplace/community-operators-wcmjf" Jan 31 04:02:50 crc kubenswrapper[4827]: I0131 04:02:50.723256 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2d5zl\" (UniqueName: \"kubernetes.io/projected/b9d67368-11d1-4b34-9d15-f634f594e67a-kube-api-access-2d5zl\") pod \"community-operators-wcmjf\" (UID: \"b9d67368-11d1-4b34-9d15-f634f594e67a\") " pod="openshift-marketplace/community-operators-wcmjf" Jan 31 04:02:50 crc kubenswrapper[4827]: I0131 04:02:50.723730 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b9d67368-11d1-4b34-9d15-f634f594e67a-catalog-content\") pod \"community-operators-wcmjf\" (UID: \"b9d67368-11d1-4b34-9d15-f634f594e67a\") " pod="openshift-marketplace/community-operators-wcmjf" Jan 31 04:02:50 crc kubenswrapper[4827]: I0131 04:02:50.723798 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b9d67368-11d1-4b34-9d15-f634f594e67a-utilities\") pod \"community-operators-wcmjf\" (UID: \"b9d67368-11d1-4b34-9d15-f634f594e67a\") " pod="openshift-marketplace/community-operators-wcmjf" Jan 31 04:02:50 crc kubenswrapper[4827]: I0131 04:02:50.756915 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2d5zl\" (UniqueName: \"kubernetes.io/projected/b9d67368-11d1-4b34-9d15-f634f594e67a-kube-api-access-2d5zl\") pod \"community-operators-wcmjf\" (UID: \"b9d67368-11d1-4b34-9d15-f634f594e67a\") " pod="openshift-marketplace/community-operators-wcmjf" Jan 31 04:02:50 crc kubenswrapper[4827]: I0131 04:02:50.875710 4827 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-wcmjf" Jan 31 04:02:51 crc kubenswrapper[4827]: I0131 04:02:51.366684 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-wcmjf"] Jan 31 04:02:51 crc kubenswrapper[4827]: W0131 04:02:51.379419 4827 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb9d67368_11d1_4b34_9d15_f634f594e67a.slice/crio-f4ba0509d341d8b9227a03a97a77ae840628ba4ccafd232e87f2012e99b70349 WatchSource:0}: Error finding container f4ba0509d341d8b9227a03a97a77ae840628ba4ccafd232e87f2012e99b70349: Status 404 returned error can't find the container with id f4ba0509d341d8b9227a03a97a77ae840628ba4ccafd232e87f2012e99b70349 Jan 31 04:02:51 crc kubenswrapper[4827]: I0131 04:02:51.745731 4827 generic.go:334] "Generic (PLEG): container finished" podID="b9d67368-11d1-4b34-9d15-f634f594e67a" containerID="5d6f6867b9e8b1a8562a044429e0dcbcef53968f36301d42dd8dc6fd35cb6fd5" exitCode=0 Jan 31 04:02:51 crc kubenswrapper[4827]: I0131 04:02:51.746157 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-wcmjf" event={"ID":"b9d67368-11d1-4b34-9d15-f634f594e67a","Type":"ContainerDied","Data":"5d6f6867b9e8b1a8562a044429e0dcbcef53968f36301d42dd8dc6fd35cb6fd5"} Jan 31 04:02:51 crc kubenswrapper[4827]: I0131 04:02:51.746192 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-wcmjf" event={"ID":"b9d67368-11d1-4b34-9d15-f634f594e67a","Type":"ContainerStarted","Data":"f4ba0509d341d8b9227a03a97a77ae840628ba4ccafd232e87f2012e99b70349"} Jan 31 04:02:52 crc kubenswrapper[4827]: I0131 04:02:52.817617 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-56f8bfcd9f-fr6qf" 
event={"ID":"0d53929a-c249-47fa-9d02-98021a8bcf2a","Type":"ContainerStarted","Data":"5932acf2ca88236e857575b7a33d76c46fc59ebacf2b1468787ab7453ef434c0"} Jan 31 04:02:52 crc kubenswrapper[4827]: I0131 04:02:52.818652 4827 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/test-operator-controller-manager-56f8bfcd9f-fr6qf" Jan 31 04:02:52 crc kubenswrapper[4827]: I0131 04:02:52.846112 4827 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/test-operator-controller-manager-56f8bfcd9f-fr6qf" podStartSLOduration=3.282491531 podStartE2EDuration="1m2.84609234s" podCreationTimestamp="2026-01-31 04:01:50 +0000 UTC" firstStartedPulling="2026-01-31 04:01:52.274092194 +0000 UTC m=+904.961172643" lastFinishedPulling="2026-01-31 04:02:51.837692993 +0000 UTC m=+964.524773452" observedRunningTime="2026-01-31 04:02:52.842556209 +0000 UTC m=+965.529636698" watchObservedRunningTime="2026-01-31 04:02:52.84609234 +0000 UTC m=+965.533172789" Jan 31 04:02:55 crc kubenswrapper[4827]: I0131 04:02:55.848589 4827 generic.go:334] "Generic (PLEG): container finished" podID="b9d67368-11d1-4b34-9d15-f634f594e67a" containerID="75df6c45bba2a65704fce0343e314ac95ac480d3efa16bc152379e1c21518f39" exitCode=0 Jan 31 04:02:55 crc kubenswrapper[4827]: I0131 04:02:55.848665 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-wcmjf" event={"ID":"b9d67368-11d1-4b34-9d15-f634f594e67a","Type":"ContainerDied","Data":"75df6c45bba2a65704fce0343e314ac95ac480d3efa16bc152379e1c21518f39"} Jan 31 04:02:55 crc kubenswrapper[4827]: I0131 04:02:55.850828 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-zdjjp" event={"ID":"0d85c53f-5192-4621-86cc-d9403773713b","Type":"ContainerStarted","Data":"197bb7f4589e3f2b6d023e2cae3f25bb2b00ff529613f9cf235164cd161a317d"} Jan 31 04:02:55 crc kubenswrapper[4827]: I0131 04:02:55.900729 
4827 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-zdjjp" podStartSLOduration=3.193164446 podStartE2EDuration="1m5.900699886s" podCreationTimestamp="2026-01-31 04:01:50 +0000 UTC" firstStartedPulling="2026-01-31 04:01:52.243765498 +0000 UTC m=+904.930845947" lastFinishedPulling="2026-01-31 04:02:54.951300928 +0000 UTC m=+967.638381387" observedRunningTime="2026-01-31 04:02:55.886481794 +0000 UTC m=+968.573562233" watchObservedRunningTime="2026-01-31 04:02:55.900699886 +0000 UTC m=+968.587780375" Jan 31 04:02:56 crc kubenswrapper[4827]: I0131 04:02:56.857955 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-wcmjf" event={"ID":"b9d67368-11d1-4b34-9d15-f634f594e67a","Type":"ContainerStarted","Data":"deff19caaca636b92a8f41904b2b54c8f3b054540415ce0b2ac37cf0534b1ae7"} Jan 31 04:02:56 crc kubenswrapper[4827]: I0131 04:02:56.880226 4827 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-wcmjf" podStartSLOduration=2.380397437 podStartE2EDuration="6.880212407s" podCreationTimestamp="2026-01-31 04:02:50 +0000 UTC" firstStartedPulling="2026-01-31 04:02:51.748647022 +0000 UTC m=+964.435727461" lastFinishedPulling="2026-01-31 04:02:56.248461982 +0000 UTC m=+968.935542431" observedRunningTime="2026-01-31 04:02:56.877802907 +0000 UTC m=+969.564883356" watchObservedRunningTime="2026-01-31 04:02:56.880212407 +0000 UTC m=+969.567292856" Jan 31 04:03:00 crc kubenswrapper[4827]: I0131 04:03:00.876568 4827 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-wcmjf" Jan 31 04:03:00 crc kubenswrapper[4827]: I0131 04:03:00.878054 4827 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-wcmjf" Jan 31 04:03:00 crc kubenswrapper[4827]: I0131 04:03:00.950500 4827 
kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-wcmjf" Jan 31 04:03:01 crc kubenswrapper[4827]: I0131 04:03:01.234941 4827 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/test-operator-controller-manager-56f8bfcd9f-fr6qf" Jan 31 04:03:01 crc kubenswrapper[4827]: I0131 04:03:01.897297 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-564965969-m97nw" event={"ID":"5666901d-66a6-4282-b44c-c39a0721faa2","Type":"ContainerStarted","Data":"b34597496a43719a0e071589ddf378029921fd0b0152957ec5c40f9c21ecf827"} Jan 31 04:03:01 crc kubenswrapper[4827]: I0131 04:03:01.897843 4827 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/watcher-operator-controller-manager-564965969-m97nw" Jan 31 04:03:01 crc kubenswrapper[4827]: I0131 04:03:01.920009 4827 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/watcher-operator-controller-manager-564965969-m97nw" podStartSLOduration=2.624802174 podStartE2EDuration="1m11.919989827s" podCreationTimestamp="2026-01-31 04:01:50 +0000 UTC" firstStartedPulling="2026-01-31 04:01:52.248644369 +0000 UTC m=+904.935724818" lastFinishedPulling="2026-01-31 04:03:01.543832012 +0000 UTC m=+974.230912471" observedRunningTime="2026-01-31 04:03:01.915631683 +0000 UTC m=+974.602712172" watchObservedRunningTime="2026-01-31 04:03:01.919989827 +0000 UTC m=+974.607070286" Jan 31 04:03:10 crc kubenswrapper[4827]: I0131 04:03:10.913284 4827 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-wcmjf" Jan 31 04:03:10 crc kubenswrapper[4827]: I0131 04:03:10.957773 4827 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-wcmjf"] Jan 31 04:03:10 crc kubenswrapper[4827]: I0131 04:03:10.958038 4827 
kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-wcmjf" podUID="b9d67368-11d1-4b34-9d15-f634f594e67a" containerName="registry-server" containerID="cri-o://deff19caaca636b92a8f41904b2b54c8f3b054540415ce0b2ac37cf0534b1ae7" gracePeriod=2 Jan 31 04:03:11 crc kubenswrapper[4827]: I0131 04:03:11.359905 4827 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/watcher-operator-controller-manager-564965969-m97nw" Jan 31 04:03:11 crc kubenswrapper[4827]: I0131 04:03:11.406355 4827 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-wcmjf" Jan 31 04:03:11 crc kubenswrapper[4827]: I0131 04:03:11.520975 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2d5zl\" (UniqueName: \"kubernetes.io/projected/b9d67368-11d1-4b34-9d15-f634f594e67a-kube-api-access-2d5zl\") pod \"b9d67368-11d1-4b34-9d15-f634f594e67a\" (UID: \"b9d67368-11d1-4b34-9d15-f634f594e67a\") " Jan 31 04:03:11 crc kubenswrapper[4827]: I0131 04:03:11.521096 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b9d67368-11d1-4b34-9d15-f634f594e67a-catalog-content\") pod \"b9d67368-11d1-4b34-9d15-f634f594e67a\" (UID: \"b9d67368-11d1-4b34-9d15-f634f594e67a\") " Jan 31 04:03:11 crc kubenswrapper[4827]: I0131 04:03:11.521156 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b9d67368-11d1-4b34-9d15-f634f594e67a-utilities\") pod \"b9d67368-11d1-4b34-9d15-f634f594e67a\" (UID: \"b9d67368-11d1-4b34-9d15-f634f594e67a\") " Jan 31 04:03:11 crc kubenswrapper[4827]: I0131 04:03:11.522039 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b9d67368-11d1-4b34-9d15-f634f594e67a-utilities" 
(OuterVolumeSpecName: "utilities") pod "b9d67368-11d1-4b34-9d15-f634f594e67a" (UID: "b9d67368-11d1-4b34-9d15-f634f594e67a"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 04:03:11 crc kubenswrapper[4827]: I0131 04:03:11.531171 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b9d67368-11d1-4b34-9d15-f634f594e67a-kube-api-access-2d5zl" (OuterVolumeSpecName: "kube-api-access-2d5zl") pod "b9d67368-11d1-4b34-9d15-f634f594e67a" (UID: "b9d67368-11d1-4b34-9d15-f634f594e67a"). InnerVolumeSpecName "kube-api-access-2d5zl". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 04:03:11 crc kubenswrapper[4827]: I0131 04:03:11.571685 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b9d67368-11d1-4b34-9d15-f634f594e67a-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b9d67368-11d1-4b34-9d15-f634f594e67a" (UID: "b9d67368-11d1-4b34-9d15-f634f594e67a"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 04:03:11 crc kubenswrapper[4827]: I0131 04:03:11.623280 4827 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b9d67368-11d1-4b34-9d15-f634f594e67a-utilities\") on node \"crc\" DevicePath \"\"" Jan 31 04:03:11 crc kubenswrapper[4827]: I0131 04:03:11.623325 4827 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2d5zl\" (UniqueName: \"kubernetes.io/projected/b9d67368-11d1-4b34-9d15-f634f594e67a-kube-api-access-2d5zl\") on node \"crc\" DevicePath \"\"" Jan 31 04:03:11 crc kubenswrapper[4827]: I0131 04:03:11.623339 4827 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b9d67368-11d1-4b34-9d15-f634f594e67a-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 31 04:03:11 crc kubenswrapper[4827]: I0131 04:03:11.965008 4827 generic.go:334] "Generic (PLEG): container finished" podID="b9d67368-11d1-4b34-9d15-f634f594e67a" containerID="deff19caaca636b92a8f41904b2b54c8f3b054540415ce0b2ac37cf0534b1ae7" exitCode=0 Jan 31 04:03:11 crc kubenswrapper[4827]: I0131 04:03:11.965101 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-wcmjf" event={"ID":"b9d67368-11d1-4b34-9d15-f634f594e67a","Type":"ContainerDied","Data":"deff19caaca636b92a8f41904b2b54c8f3b054540415ce0b2ac37cf0534b1ae7"} Jan 31 04:03:11 crc kubenswrapper[4827]: I0131 04:03:11.965285 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-wcmjf" event={"ID":"b9d67368-11d1-4b34-9d15-f634f594e67a","Type":"ContainerDied","Data":"f4ba0509d341d8b9227a03a97a77ae840628ba4ccafd232e87f2012e99b70349"} Jan 31 04:03:11 crc kubenswrapper[4827]: I0131 04:03:11.965305 4827 scope.go:117] "RemoveContainer" containerID="deff19caaca636b92a8f41904b2b54c8f3b054540415ce0b2ac37cf0534b1ae7" Jan 31 04:03:11 crc kubenswrapper[4827]: I0131 
04:03:11.965121 4827 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-wcmjf" Jan 31 04:03:11 crc kubenswrapper[4827]: I0131 04:03:11.982676 4827 scope.go:117] "RemoveContainer" containerID="75df6c45bba2a65704fce0343e314ac95ac480d3efa16bc152379e1c21518f39" Jan 31 04:03:11 crc kubenswrapper[4827]: I0131 04:03:11.997961 4827 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-wcmjf"] Jan 31 04:03:12 crc kubenswrapper[4827]: I0131 04:03:12.004514 4827 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-wcmjf"] Jan 31 04:03:12 crc kubenswrapper[4827]: I0131 04:03:12.012412 4827 scope.go:117] "RemoveContainer" containerID="5d6f6867b9e8b1a8562a044429e0dcbcef53968f36301d42dd8dc6fd35cb6fd5" Jan 31 04:03:12 crc kubenswrapper[4827]: I0131 04:03:12.041084 4827 scope.go:117] "RemoveContainer" containerID="deff19caaca636b92a8f41904b2b54c8f3b054540415ce0b2ac37cf0534b1ae7" Jan 31 04:03:12 crc kubenswrapper[4827]: E0131 04:03:12.041442 4827 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"deff19caaca636b92a8f41904b2b54c8f3b054540415ce0b2ac37cf0534b1ae7\": container with ID starting with deff19caaca636b92a8f41904b2b54c8f3b054540415ce0b2ac37cf0534b1ae7 not found: ID does not exist" containerID="deff19caaca636b92a8f41904b2b54c8f3b054540415ce0b2ac37cf0534b1ae7" Jan 31 04:03:12 crc kubenswrapper[4827]: I0131 04:03:12.041559 4827 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"deff19caaca636b92a8f41904b2b54c8f3b054540415ce0b2ac37cf0534b1ae7"} err="failed to get container status \"deff19caaca636b92a8f41904b2b54c8f3b054540415ce0b2ac37cf0534b1ae7\": rpc error: code = NotFound desc = could not find container \"deff19caaca636b92a8f41904b2b54c8f3b054540415ce0b2ac37cf0534b1ae7\": container with ID starting with 
deff19caaca636b92a8f41904b2b54c8f3b054540415ce0b2ac37cf0534b1ae7 not found: ID does not exist" Jan 31 04:03:12 crc kubenswrapper[4827]: I0131 04:03:12.041651 4827 scope.go:117] "RemoveContainer" containerID="75df6c45bba2a65704fce0343e314ac95ac480d3efa16bc152379e1c21518f39" Jan 31 04:03:12 crc kubenswrapper[4827]: E0131 04:03:12.041997 4827 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"75df6c45bba2a65704fce0343e314ac95ac480d3efa16bc152379e1c21518f39\": container with ID starting with 75df6c45bba2a65704fce0343e314ac95ac480d3efa16bc152379e1c21518f39 not found: ID does not exist" containerID="75df6c45bba2a65704fce0343e314ac95ac480d3efa16bc152379e1c21518f39" Jan 31 04:03:12 crc kubenswrapper[4827]: I0131 04:03:12.042031 4827 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"75df6c45bba2a65704fce0343e314ac95ac480d3efa16bc152379e1c21518f39"} err="failed to get container status \"75df6c45bba2a65704fce0343e314ac95ac480d3efa16bc152379e1c21518f39\": rpc error: code = NotFound desc = could not find container \"75df6c45bba2a65704fce0343e314ac95ac480d3efa16bc152379e1c21518f39\": container with ID starting with 75df6c45bba2a65704fce0343e314ac95ac480d3efa16bc152379e1c21518f39 not found: ID does not exist" Jan 31 04:03:12 crc kubenswrapper[4827]: I0131 04:03:12.042049 4827 scope.go:117] "RemoveContainer" containerID="5d6f6867b9e8b1a8562a044429e0dcbcef53968f36301d42dd8dc6fd35cb6fd5" Jan 31 04:03:12 crc kubenswrapper[4827]: E0131 04:03:12.042294 4827 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5d6f6867b9e8b1a8562a044429e0dcbcef53968f36301d42dd8dc6fd35cb6fd5\": container with ID starting with 5d6f6867b9e8b1a8562a044429e0dcbcef53968f36301d42dd8dc6fd35cb6fd5 not found: ID does not exist" containerID="5d6f6867b9e8b1a8562a044429e0dcbcef53968f36301d42dd8dc6fd35cb6fd5" Jan 31 04:03:12 crc 
kubenswrapper[4827]: I0131 04:03:12.042370 4827 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5d6f6867b9e8b1a8562a044429e0dcbcef53968f36301d42dd8dc6fd35cb6fd5"} err="failed to get container status \"5d6f6867b9e8b1a8562a044429e0dcbcef53968f36301d42dd8dc6fd35cb6fd5\": rpc error: code = NotFound desc = could not find container \"5d6f6867b9e8b1a8562a044429e0dcbcef53968f36301d42dd8dc6fd35cb6fd5\": container with ID starting with 5d6f6867b9e8b1a8562a044429e0dcbcef53968f36301d42dd8dc6fd35cb6fd5 not found: ID does not exist" Jan 31 04:03:12 crc kubenswrapper[4827]: I0131 04:03:12.120025 4827 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b9d67368-11d1-4b34-9d15-f634f594e67a" path="/var/lib/kubelet/pods/b9d67368-11d1-4b34-9d15-f634f594e67a/volumes" Jan 31 04:03:26 crc kubenswrapper[4827]: I0131 04:03:26.792774 4827 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-9cqp6"] Jan 31 04:03:26 crc kubenswrapper[4827]: E0131 04:03:26.793734 4827 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b9d67368-11d1-4b34-9d15-f634f594e67a" containerName="extract-content" Jan 31 04:03:26 crc kubenswrapper[4827]: I0131 04:03:26.793757 4827 state_mem.go:107] "Deleted CPUSet assignment" podUID="b9d67368-11d1-4b34-9d15-f634f594e67a" containerName="extract-content" Jan 31 04:03:26 crc kubenswrapper[4827]: E0131 04:03:26.793788 4827 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b9d67368-11d1-4b34-9d15-f634f594e67a" containerName="extract-utilities" Jan 31 04:03:26 crc kubenswrapper[4827]: I0131 04:03:26.793799 4827 state_mem.go:107] "Deleted CPUSet assignment" podUID="b9d67368-11d1-4b34-9d15-f634f594e67a" containerName="extract-utilities" Jan 31 04:03:26 crc kubenswrapper[4827]: E0131 04:03:26.793820 4827 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b9d67368-11d1-4b34-9d15-f634f594e67a" containerName="registry-server" Jan 31 
04:03:26 crc kubenswrapper[4827]: I0131 04:03:26.793831 4827 state_mem.go:107] "Deleted CPUSet assignment" podUID="b9d67368-11d1-4b34-9d15-f634f594e67a" containerName="registry-server" Jan 31 04:03:26 crc kubenswrapper[4827]: I0131 04:03:26.795249 4827 memory_manager.go:354] "RemoveStaleState removing state" podUID="b9d67368-11d1-4b34-9d15-f634f594e67a" containerName="registry-server" Jan 31 04:03:26 crc kubenswrapper[4827]: I0131 04:03:26.796284 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-9cqp6" Jan 31 04:03:26 crc kubenswrapper[4827]: I0131 04:03:26.800813 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dnsmasq-dns-dockercfg-2ktz2" Jan 31 04:03:26 crc kubenswrapper[4827]: I0131 04:03:26.801117 4827 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns" Jan 31 04:03:26 crc kubenswrapper[4827]: I0131 04:03:26.801253 4827 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openshift-service-ca.crt" Jan 31 04:03:26 crc kubenswrapper[4827]: I0131 04:03:26.801404 4827 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"kube-root-ca.crt" Jan 31 04:03:26 crc kubenswrapper[4827]: I0131 04:03:26.814942 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-9cqp6"] Jan 31 04:03:26 crc kubenswrapper[4827]: I0131 04:03:26.856525 4827 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-vzmg4"] Jan 31 04:03:26 crc kubenswrapper[4827]: I0131 04:03:26.858819 4827 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-vzmg4" Jan 31 04:03:26 crc kubenswrapper[4827]: I0131 04:03:26.861123 4827 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns-svc" Jan 31 04:03:26 crc kubenswrapper[4827]: I0131 04:03:26.877198 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-vzmg4"] Jan 31 04:03:26 crc kubenswrapper[4827]: I0131 04:03:26.949470 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1779baa3-6433-4593-97d7-01545244f27e-config\") pod \"dnsmasq-dns-675f4bcbfc-9cqp6\" (UID: \"1779baa3-6433-4593-97d7-01545244f27e\") " pod="openstack/dnsmasq-dns-675f4bcbfc-9cqp6" Jan 31 04:03:26 crc kubenswrapper[4827]: I0131 04:03:26.949535 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3bbbda85-9e7a-49e1-910d-448fb9b798ac-config\") pod \"dnsmasq-dns-78dd6ddcc-vzmg4\" (UID: \"3bbbda85-9e7a-49e1-910d-448fb9b798ac\") " pod="openstack/dnsmasq-dns-78dd6ddcc-vzmg4" Jan 31 04:03:26 crc kubenswrapper[4827]: I0131 04:03:26.949709 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fbg57\" (UniqueName: \"kubernetes.io/projected/3bbbda85-9e7a-49e1-910d-448fb9b798ac-kube-api-access-fbg57\") pod \"dnsmasq-dns-78dd6ddcc-vzmg4\" (UID: \"3bbbda85-9e7a-49e1-910d-448fb9b798ac\") " pod="openstack/dnsmasq-dns-78dd6ddcc-vzmg4" Jan 31 04:03:26 crc kubenswrapper[4827]: I0131 04:03:26.949899 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3bbbda85-9e7a-49e1-910d-448fb9b798ac-dns-svc\") pod \"dnsmasq-dns-78dd6ddcc-vzmg4\" (UID: \"3bbbda85-9e7a-49e1-910d-448fb9b798ac\") " pod="openstack/dnsmasq-dns-78dd6ddcc-vzmg4" Jan 31 04:03:26 
crc kubenswrapper[4827]: I0131 04:03:26.949930 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-74cnz\" (UniqueName: \"kubernetes.io/projected/1779baa3-6433-4593-97d7-01545244f27e-kube-api-access-74cnz\") pod \"dnsmasq-dns-675f4bcbfc-9cqp6\" (UID: \"1779baa3-6433-4593-97d7-01545244f27e\") " pod="openstack/dnsmasq-dns-675f4bcbfc-9cqp6" Jan 31 04:03:27 crc kubenswrapper[4827]: I0131 04:03:27.051249 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fbg57\" (UniqueName: \"kubernetes.io/projected/3bbbda85-9e7a-49e1-910d-448fb9b798ac-kube-api-access-fbg57\") pod \"dnsmasq-dns-78dd6ddcc-vzmg4\" (UID: \"3bbbda85-9e7a-49e1-910d-448fb9b798ac\") " pod="openstack/dnsmasq-dns-78dd6ddcc-vzmg4" Jan 31 04:03:27 crc kubenswrapper[4827]: I0131 04:03:27.051317 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3bbbda85-9e7a-49e1-910d-448fb9b798ac-dns-svc\") pod \"dnsmasq-dns-78dd6ddcc-vzmg4\" (UID: \"3bbbda85-9e7a-49e1-910d-448fb9b798ac\") " pod="openstack/dnsmasq-dns-78dd6ddcc-vzmg4" Jan 31 04:03:27 crc kubenswrapper[4827]: I0131 04:03:27.051341 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-74cnz\" (UniqueName: \"kubernetes.io/projected/1779baa3-6433-4593-97d7-01545244f27e-kube-api-access-74cnz\") pod \"dnsmasq-dns-675f4bcbfc-9cqp6\" (UID: \"1779baa3-6433-4593-97d7-01545244f27e\") " pod="openstack/dnsmasq-dns-675f4bcbfc-9cqp6" Jan 31 04:03:27 crc kubenswrapper[4827]: I0131 04:03:27.051365 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1779baa3-6433-4593-97d7-01545244f27e-config\") pod \"dnsmasq-dns-675f4bcbfc-9cqp6\" (UID: \"1779baa3-6433-4593-97d7-01545244f27e\") " pod="openstack/dnsmasq-dns-675f4bcbfc-9cqp6" Jan 31 04:03:27 crc kubenswrapper[4827]: 
I0131 04:03:27.051401 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3bbbda85-9e7a-49e1-910d-448fb9b798ac-config\") pod \"dnsmasq-dns-78dd6ddcc-vzmg4\" (UID: \"3bbbda85-9e7a-49e1-910d-448fb9b798ac\") " pod="openstack/dnsmasq-dns-78dd6ddcc-vzmg4" Jan 31 04:03:27 crc kubenswrapper[4827]: I0131 04:03:27.052279 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3bbbda85-9e7a-49e1-910d-448fb9b798ac-config\") pod \"dnsmasq-dns-78dd6ddcc-vzmg4\" (UID: \"3bbbda85-9e7a-49e1-910d-448fb9b798ac\") " pod="openstack/dnsmasq-dns-78dd6ddcc-vzmg4" Jan 31 04:03:27 crc kubenswrapper[4827]: I0131 04:03:27.053063 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3bbbda85-9e7a-49e1-910d-448fb9b798ac-dns-svc\") pod \"dnsmasq-dns-78dd6ddcc-vzmg4\" (UID: \"3bbbda85-9e7a-49e1-910d-448fb9b798ac\") " pod="openstack/dnsmasq-dns-78dd6ddcc-vzmg4" Jan 31 04:03:27 crc kubenswrapper[4827]: I0131 04:03:27.053794 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1779baa3-6433-4593-97d7-01545244f27e-config\") pod \"dnsmasq-dns-675f4bcbfc-9cqp6\" (UID: \"1779baa3-6433-4593-97d7-01545244f27e\") " pod="openstack/dnsmasq-dns-675f4bcbfc-9cqp6" Jan 31 04:03:27 crc kubenswrapper[4827]: I0131 04:03:27.073871 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-74cnz\" (UniqueName: \"kubernetes.io/projected/1779baa3-6433-4593-97d7-01545244f27e-kube-api-access-74cnz\") pod \"dnsmasq-dns-675f4bcbfc-9cqp6\" (UID: \"1779baa3-6433-4593-97d7-01545244f27e\") " pod="openstack/dnsmasq-dns-675f4bcbfc-9cqp6" Jan 31 04:03:27 crc kubenswrapper[4827]: I0131 04:03:27.073926 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fbg57\" (UniqueName: 
\"kubernetes.io/projected/3bbbda85-9e7a-49e1-910d-448fb9b798ac-kube-api-access-fbg57\") pod \"dnsmasq-dns-78dd6ddcc-vzmg4\" (UID: \"3bbbda85-9e7a-49e1-910d-448fb9b798ac\") " pod="openstack/dnsmasq-dns-78dd6ddcc-vzmg4" Jan 31 04:03:27 crc kubenswrapper[4827]: I0131 04:03:27.116094 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-9cqp6" Jan 31 04:03:27 crc kubenswrapper[4827]: I0131 04:03:27.173755 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-vzmg4" Jan 31 04:03:27 crc kubenswrapper[4827]: I0131 04:03:27.516750 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-9cqp6"] Jan 31 04:03:27 crc kubenswrapper[4827]: I0131 04:03:27.608926 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-vzmg4"] Jan 31 04:03:27 crc kubenswrapper[4827]: W0131 04:03:27.613106 4827 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3bbbda85_9e7a_49e1_910d_448fb9b798ac.slice/crio-ecaeab5f3756f988285b0973d195a90720c01c76ae92d062cc15480a0b018a35 WatchSource:0}: Error finding container ecaeab5f3756f988285b0973d195a90720c01c76ae92d062cc15480a0b018a35: Status 404 returned error can't find the container with id ecaeab5f3756f988285b0973d195a90720c01c76ae92d062cc15480a0b018a35 Jan 31 04:03:28 crc kubenswrapper[4827]: I0131 04:03:28.100533 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-675f4bcbfc-9cqp6" event={"ID":"1779baa3-6433-4593-97d7-01545244f27e","Type":"ContainerStarted","Data":"c7aa59253a640fdcc9e5db4ef13ae89a4b41e16777a24f1075812595a27abba4"} Jan 31 04:03:28 crc kubenswrapper[4827]: I0131 04:03:28.101987 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-78dd6ddcc-vzmg4" 
event={"ID":"3bbbda85-9e7a-49e1-910d-448fb9b798ac","Type":"ContainerStarted","Data":"ecaeab5f3756f988285b0973d195a90720c01c76ae92d062cc15480a0b018a35"} Jan 31 04:03:29 crc kubenswrapper[4827]: I0131 04:03:29.484109 4827 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-9cqp6"] Jan 31 04:03:29 crc kubenswrapper[4827]: I0131 04:03:29.512001 4827 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-7g76r"] Jan 31 04:03:29 crc kubenswrapper[4827]: I0131 04:03:29.513107 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-666b6646f7-7g76r" Jan 31 04:03:29 crc kubenswrapper[4827]: I0131 04:03:29.521299 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-7g76r"] Jan 31 04:03:29 crc kubenswrapper[4827]: I0131 04:03:29.584712 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/67ad1b54-a5b6-4464-823e-4bb474694618-config\") pod \"dnsmasq-dns-666b6646f7-7g76r\" (UID: \"67ad1b54-a5b6-4464-823e-4bb474694618\") " pod="openstack/dnsmasq-dns-666b6646f7-7g76r" Jan 31 04:03:29 crc kubenswrapper[4827]: I0131 04:03:29.584755 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/67ad1b54-a5b6-4464-823e-4bb474694618-dns-svc\") pod \"dnsmasq-dns-666b6646f7-7g76r\" (UID: \"67ad1b54-a5b6-4464-823e-4bb474694618\") " pod="openstack/dnsmasq-dns-666b6646f7-7g76r" Jan 31 04:03:29 crc kubenswrapper[4827]: I0131 04:03:29.584791 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qmwpl\" (UniqueName: \"kubernetes.io/projected/67ad1b54-a5b6-4464-823e-4bb474694618-kube-api-access-qmwpl\") pod \"dnsmasq-dns-666b6646f7-7g76r\" (UID: \"67ad1b54-a5b6-4464-823e-4bb474694618\") " 
pod="openstack/dnsmasq-dns-666b6646f7-7g76r" Jan 31 04:03:29 crc kubenswrapper[4827]: I0131 04:03:29.688053 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/67ad1b54-a5b6-4464-823e-4bb474694618-config\") pod \"dnsmasq-dns-666b6646f7-7g76r\" (UID: \"67ad1b54-a5b6-4464-823e-4bb474694618\") " pod="openstack/dnsmasq-dns-666b6646f7-7g76r" Jan 31 04:03:29 crc kubenswrapper[4827]: I0131 04:03:29.688108 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/67ad1b54-a5b6-4464-823e-4bb474694618-dns-svc\") pod \"dnsmasq-dns-666b6646f7-7g76r\" (UID: \"67ad1b54-a5b6-4464-823e-4bb474694618\") " pod="openstack/dnsmasq-dns-666b6646f7-7g76r" Jan 31 04:03:29 crc kubenswrapper[4827]: I0131 04:03:29.688214 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qmwpl\" (UniqueName: \"kubernetes.io/projected/67ad1b54-a5b6-4464-823e-4bb474694618-kube-api-access-qmwpl\") pod \"dnsmasq-dns-666b6646f7-7g76r\" (UID: \"67ad1b54-a5b6-4464-823e-4bb474694618\") " pod="openstack/dnsmasq-dns-666b6646f7-7g76r" Jan 31 04:03:29 crc kubenswrapper[4827]: I0131 04:03:29.689301 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/67ad1b54-a5b6-4464-823e-4bb474694618-config\") pod \"dnsmasq-dns-666b6646f7-7g76r\" (UID: \"67ad1b54-a5b6-4464-823e-4bb474694618\") " pod="openstack/dnsmasq-dns-666b6646f7-7g76r" Jan 31 04:03:29 crc kubenswrapper[4827]: I0131 04:03:29.689443 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/67ad1b54-a5b6-4464-823e-4bb474694618-dns-svc\") pod \"dnsmasq-dns-666b6646f7-7g76r\" (UID: \"67ad1b54-a5b6-4464-823e-4bb474694618\") " pod="openstack/dnsmasq-dns-666b6646f7-7g76r" Jan 31 04:03:29 crc kubenswrapper[4827]: I0131 04:03:29.716207 4827 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qmwpl\" (UniqueName: \"kubernetes.io/projected/67ad1b54-a5b6-4464-823e-4bb474694618-kube-api-access-qmwpl\") pod \"dnsmasq-dns-666b6646f7-7g76r\" (UID: \"67ad1b54-a5b6-4464-823e-4bb474694618\") " pod="openstack/dnsmasq-dns-666b6646f7-7g76r" Jan 31 04:03:29 crc kubenswrapper[4827]: I0131 04:03:29.782101 4827 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-vzmg4"] Jan 31 04:03:29 crc kubenswrapper[4827]: I0131 04:03:29.803127 4827 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-bl8wf"] Jan 31 04:03:29 crc kubenswrapper[4827]: I0131 04:03:29.804438 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-bl8wf" Jan 31 04:03:29 crc kubenswrapper[4827]: I0131 04:03:29.816354 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-bl8wf"] Jan 31 04:03:29 crc kubenswrapper[4827]: I0131 04:03:29.831922 4827 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-666b6646f7-7g76r" Jan 31 04:03:29 crc kubenswrapper[4827]: I0131 04:03:29.890656 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h2rzd\" (UniqueName: \"kubernetes.io/projected/ac8573e8-0ab7-4f9b-a060-fcaf95d97e32-kube-api-access-h2rzd\") pod \"dnsmasq-dns-57d769cc4f-bl8wf\" (UID: \"ac8573e8-0ab7-4f9b-a060-fcaf95d97e32\") " pod="openstack/dnsmasq-dns-57d769cc4f-bl8wf" Jan 31 04:03:29 crc kubenswrapper[4827]: I0131 04:03:29.890729 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ac8573e8-0ab7-4f9b-a060-fcaf95d97e32-config\") pod \"dnsmasq-dns-57d769cc4f-bl8wf\" (UID: \"ac8573e8-0ab7-4f9b-a060-fcaf95d97e32\") " pod="openstack/dnsmasq-dns-57d769cc4f-bl8wf" Jan 31 04:03:29 crc kubenswrapper[4827]: I0131 04:03:29.890796 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ac8573e8-0ab7-4f9b-a060-fcaf95d97e32-dns-svc\") pod \"dnsmasq-dns-57d769cc4f-bl8wf\" (UID: \"ac8573e8-0ab7-4f9b-a060-fcaf95d97e32\") " pod="openstack/dnsmasq-dns-57d769cc4f-bl8wf" Jan 31 04:03:29 crc kubenswrapper[4827]: I0131 04:03:29.992113 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ac8573e8-0ab7-4f9b-a060-fcaf95d97e32-config\") pod \"dnsmasq-dns-57d769cc4f-bl8wf\" (UID: \"ac8573e8-0ab7-4f9b-a060-fcaf95d97e32\") " pod="openstack/dnsmasq-dns-57d769cc4f-bl8wf" Jan 31 04:03:29 crc kubenswrapper[4827]: I0131 04:03:29.992196 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ac8573e8-0ab7-4f9b-a060-fcaf95d97e32-dns-svc\") pod \"dnsmasq-dns-57d769cc4f-bl8wf\" (UID: \"ac8573e8-0ab7-4f9b-a060-fcaf95d97e32\") " 
pod="openstack/dnsmasq-dns-57d769cc4f-bl8wf" Jan 31 04:03:29 crc kubenswrapper[4827]: I0131 04:03:29.992226 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h2rzd\" (UniqueName: \"kubernetes.io/projected/ac8573e8-0ab7-4f9b-a060-fcaf95d97e32-kube-api-access-h2rzd\") pod \"dnsmasq-dns-57d769cc4f-bl8wf\" (UID: \"ac8573e8-0ab7-4f9b-a060-fcaf95d97e32\") " pod="openstack/dnsmasq-dns-57d769cc4f-bl8wf" Jan 31 04:03:29 crc kubenswrapper[4827]: I0131 04:03:29.993289 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ac8573e8-0ab7-4f9b-a060-fcaf95d97e32-config\") pod \"dnsmasq-dns-57d769cc4f-bl8wf\" (UID: \"ac8573e8-0ab7-4f9b-a060-fcaf95d97e32\") " pod="openstack/dnsmasq-dns-57d769cc4f-bl8wf" Jan 31 04:03:29 crc kubenswrapper[4827]: I0131 04:03:29.994905 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ac8573e8-0ab7-4f9b-a060-fcaf95d97e32-dns-svc\") pod \"dnsmasq-dns-57d769cc4f-bl8wf\" (UID: \"ac8573e8-0ab7-4f9b-a060-fcaf95d97e32\") " pod="openstack/dnsmasq-dns-57d769cc4f-bl8wf" Jan 31 04:03:30 crc kubenswrapper[4827]: I0131 04:03:30.010954 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h2rzd\" (UniqueName: \"kubernetes.io/projected/ac8573e8-0ab7-4f9b-a060-fcaf95d97e32-kube-api-access-h2rzd\") pod \"dnsmasq-dns-57d769cc4f-bl8wf\" (UID: \"ac8573e8-0ab7-4f9b-a060-fcaf95d97e32\") " pod="openstack/dnsmasq-dns-57d769cc4f-bl8wf" Jan 31 04:03:30 crc kubenswrapper[4827]: I0131 04:03:30.125561 4827 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-bl8wf" Jan 31 04:03:30 crc kubenswrapper[4827]: I0131 04:03:30.320345 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-7g76r"] Jan 31 04:03:30 crc kubenswrapper[4827]: I0131 04:03:30.422913 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-bl8wf"] Jan 31 04:03:30 crc kubenswrapper[4827]: W0131 04:03:30.450010 4827 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podac8573e8_0ab7_4f9b_a060_fcaf95d97e32.slice/crio-a0c3580a2758bbc2e80d7b402ef4e1f80a293dbc02c7f0a85d5517e265de1e81 WatchSource:0}: Error finding container a0c3580a2758bbc2e80d7b402ef4e1f80a293dbc02c7f0a85d5517e265de1e81: Status 404 returned error can't find the container with id a0c3580a2758bbc2e80d7b402ef4e1f80a293dbc02c7f0a85d5517e265de1e81 Jan 31 04:03:30 crc kubenswrapper[4827]: I0131 04:03:30.654859 4827 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-server-0"] Jan 31 04:03:30 crc kubenswrapper[4827]: I0131 04:03:30.659337 4827 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-0" Jan 31 04:03:30 crc kubenswrapper[4827]: I0131 04:03:30.661593 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-erlang-cookie" Jan 31 04:03:30 crc kubenswrapper[4827]: I0131 04:03:30.662051 4827 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-config-data" Jan 31 04:03:30 crc kubenswrapper[4827]: I0131 04:03:30.662066 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-server-dockercfg-l95g2" Jan 31 04:03:30 crc kubenswrapper[4827]: I0131 04:03:30.662379 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-default-user" Jan 31 04:03:30 crc kubenswrapper[4827]: I0131 04:03:30.662419 4827 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-plugins-conf" Jan 31 04:03:30 crc kubenswrapper[4827]: I0131 04:03:30.662711 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Jan 31 04:03:30 crc kubenswrapper[4827]: I0131 04:03:30.663525 4827 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-server-conf" Jan 31 04:03:30 crc kubenswrapper[4827]: I0131 04:03:30.663759 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-svc" Jan 31 04:03:30 crc kubenswrapper[4827]: I0131 04:03:30.818504 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/237362ad-03ab-48a0-916d-1b140b4727d5-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"237362ad-03ab-48a0-916d-1b140b4727d5\") " pod="openstack/rabbitmq-server-0" Jan 31 04:03:30 crc kubenswrapper[4827]: I0131 04:03:30.818943 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: 
\"kubernetes.io/secret/237362ad-03ab-48a0-916d-1b140b4727d5-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"237362ad-03ab-48a0-916d-1b140b4727d5\") " pod="openstack/rabbitmq-server-0" Jan 31 04:03:30 crc kubenswrapper[4827]: I0131 04:03:30.818974 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/237362ad-03ab-48a0-916d-1b140b4727d5-server-conf\") pod \"rabbitmq-server-0\" (UID: \"237362ad-03ab-48a0-916d-1b140b4727d5\") " pod="openstack/rabbitmq-server-0" Jan 31 04:03:30 crc kubenswrapper[4827]: I0131 04:03:30.819007 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/237362ad-03ab-48a0-916d-1b140b4727d5-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"237362ad-03ab-48a0-916d-1b140b4727d5\") " pod="openstack/rabbitmq-server-0" Jan 31 04:03:30 crc kubenswrapper[4827]: I0131 04:03:30.819055 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/237362ad-03ab-48a0-916d-1b140b4727d5-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"237362ad-03ab-48a0-916d-1b140b4727d5\") " pod="openstack/rabbitmq-server-0" Jan 31 04:03:30 crc kubenswrapper[4827]: I0131 04:03:30.819123 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/237362ad-03ab-48a0-916d-1b140b4727d5-pod-info\") pod \"rabbitmq-server-0\" (UID: \"237362ad-03ab-48a0-916d-1b140b4727d5\") " pod="openstack/rabbitmq-server-0" Jan 31 04:03:30 crc kubenswrapper[4827]: I0131 04:03:30.819151 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: 
\"kubernetes.io/configmap/237362ad-03ab-48a0-916d-1b140b4727d5-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"237362ad-03ab-48a0-916d-1b140b4727d5\") " pod="openstack/rabbitmq-server-0" Jan 31 04:03:30 crc kubenswrapper[4827]: I0131 04:03:30.819186 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"rabbitmq-server-0\" (UID: \"237362ad-03ab-48a0-916d-1b140b4727d5\") " pod="openstack/rabbitmq-server-0" Jan 31 04:03:30 crc kubenswrapper[4827]: I0131 04:03:30.819203 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/237362ad-03ab-48a0-916d-1b140b4727d5-config-data\") pod \"rabbitmq-server-0\" (UID: \"237362ad-03ab-48a0-916d-1b140b4727d5\") " pod="openstack/rabbitmq-server-0" Jan 31 04:03:30 crc kubenswrapper[4827]: I0131 04:03:30.819240 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/237362ad-03ab-48a0-916d-1b140b4727d5-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"237362ad-03ab-48a0-916d-1b140b4727d5\") " pod="openstack/rabbitmq-server-0" Jan 31 04:03:30 crc kubenswrapper[4827]: I0131 04:03:30.819256 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h9fd8\" (UniqueName: \"kubernetes.io/projected/237362ad-03ab-48a0-916d-1b140b4727d5-kube-api-access-h9fd8\") pod \"rabbitmq-server-0\" (UID: \"237362ad-03ab-48a0-916d-1b140b4727d5\") " pod="openstack/rabbitmq-server-0" Jan 31 04:03:30 crc kubenswrapper[4827]: I0131 04:03:30.913548 4827 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 31 04:03:30 crc kubenswrapper[4827]: I0131 04:03:30.914676 4827 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Jan 31 04:03:30 crc kubenswrapper[4827]: I0131 04:03:30.917600 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-default-user" Jan 31 04:03:30 crc kubenswrapper[4827]: I0131 04:03:30.917706 4827 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-config-data" Jan 31 04:03:30 crc kubenswrapper[4827]: I0131 04:03:30.917955 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-cell1-svc" Jan 31 04:03:30 crc kubenswrapper[4827]: I0131 04:03:30.917804 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-erlang-cookie" Jan 31 04:03:30 crc kubenswrapper[4827]: I0131 04:03:30.918214 4827 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-server-conf" Jan 31 04:03:30 crc kubenswrapper[4827]: I0131 04:03:30.918314 4827 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-plugins-conf" Jan 31 04:03:30 crc kubenswrapper[4827]: I0131 04:03:30.918417 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-server-dockercfg-thvnx" Jan 31 04:03:30 crc kubenswrapper[4827]: I0131 04:03:30.920037 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/237362ad-03ab-48a0-916d-1b140b4727d5-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"237362ad-03ab-48a0-916d-1b140b4727d5\") " pod="openstack/rabbitmq-server-0" Jan 31 04:03:30 crc kubenswrapper[4827]: I0131 04:03:30.920065 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h9fd8\" (UniqueName: \"kubernetes.io/projected/237362ad-03ab-48a0-916d-1b140b4727d5-kube-api-access-h9fd8\") pod \"rabbitmq-server-0\" (UID: \"237362ad-03ab-48a0-916d-1b140b4727d5\") 
" pod="openstack/rabbitmq-server-0" Jan 31 04:03:30 crc kubenswrapper[4827]: I0131 04:03:30.920102 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/237362ad-03ab-48a0-916d-1b140b4727d5-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"237362ad-03ab-48a0-916d-1b140b4727d5\") " pod="openstack/rabbitmq-server-0" Jan 31 04:03:30 crc kubenswrapper[4827]: I0131 04:03:30.920129 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/237362ad-03ab-48a0-916d-1b140b4727d5-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"237362ad-03ab-48a0-916d-1b140b4727d5\") " pod="openstack/rabbitmq-server-0" Jan 31 04:03:30 crc kubenswrapper[4827]: I0131 04:03:30.920146 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/237362ad-03ab-48a0-916d-1b140b4727d5-server-conf\") pod \"rabbitmq-server-0\" (UID: \"237362ad-03ab-48a0-916d-1b140b4727d5\") " pod="openstack/rabbitmq-server-0" Jan 31 04:03:30 crc kubenswrapper[4827]: I0131 04:03:30.920168 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/237362ad-03ab-48a0-916d-1b140b4727d5-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"237362ad-03ab-48a0-916d-1b140b4727d5\") " pod="openstack/rabbitmq-server-0" Jan 31 04:03:30 crc kubenswrapper[4827]: I0131 04:03:30.920199 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/237362ad-03ab-48a0-916d-1b140b4727d5-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"237362ad-03ab-48a0-916d-1b140b4727d5\") " pod="openstack/rabbitmq-server-0" Jan 31 04:03:30 crc kubenswrapper[4827]: I0131 04:03:30.920221 4827 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/237362ad-03ab-48a0-916d-1b140b4727d5-pod-info\") pod \"rabbitmq-server-0\" (UID: \"237362ad-03ab-48a0-916d-1b140b4727d5\") " pod="openstack/rabbitmq-server-0" Jan 31 04:03:30 crc kubenswrapper[4827]: I0131 04:03:30.920238 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/237362ad-03ab-48a0-916d-1b140b4727d5-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"237362ad-03ab-48a0-916d-1b140b4727d5\") " pod="openstack/rabbitmq-server-0" Jan 31 04:03:30 crc kubenswrapper[4827]: I0131 04:03:30.920268 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"rabbitmq-server-0\" (UID: \"237362ad-03ab-48a0-916d-1b140b4727d5\") " pod="openstack/rabbitmq-server-0" Jan 31 04:03:30 crc kubenswrapper[4827]: I0131 04:03:30.920283 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/237362ad-03ab-48a0-916d-1b140b4727d5-config-data\") pod \"rabbitmq-server-0\" (UID: \"237362ad-03ab-48a0-916d-1b140b4727d5\") " pod="openstack/rabbitmq-server-0" Jan 31 04:03:30 crc kubenswrapper[4827]: I0131 04:03:30.921239 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/237362ad-03ab-48a0-916d-1b140b4727d5-config-data\") pod \"rabbitmq-server-0\" (UID: \"237362ad-03ab-48a0-916d-1b140b4727d5\") " pod="openstack/rabbitmq-server-0" Jan 31 04:03:30 crc kubenswrapper[4827]: I0131 04:03:30.921475 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/237362ad-03ab-48a0-916d-1b140b4727d5-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: 
\"237362ad-03ab-48a0-916d-1b140b4727d5\") " pod="openstack/rabbitmq-server-0" Jan 31 04:03:30 crc kubenswrapper[4827]: I0131 04:03:30.922637 4827 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"rabbitmq-server-0\" (UID: \"237362ad-03ab-48a0-916d-1b140b4727d5\") device mount path \"/mnt/openstack/pv06\"" pod="openstack/rabbitmq-server-0" Jan 31 04:03:30 crc kubenswrapper[4827]: I0131 04:03:30.924002 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/237362ad-03ab-48a0-916d-1b140b4727d5-server-conf\") pod \"rabbitmq-server-0\" (UID: \"237362ad-03ab-48a0-916d-1b140b4727d5\") " pod="openstack/rabbitmq-server-0" Jan 31 04:03:30 crc kubenswrapper[4827]: I0131 04:03:30.924190 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/237362ad-03ab-48a0-916d-1b140b4727d5-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"237362ad-03ab-48a0-916d-1b140b4727d5\") " pod="openstack/rabbitmq-server-0" Jan 31 04:03:30 crc kubenswrapper[4827]: I0131 04:03:30.927724 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/237362ad-03ab-48a0-916d-1b140b4727d5-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"237362ad-03ab-48a0-916d-1b140b4727d5\") " pod="openstack/rabbitmq-server-0" Jan 31 04:03:30 crc kubenswrapper[4827]: I0131 04:03:30.935950 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/237362ad-03ab-48a0-916d-1b140b4727d5-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"237362ad-03ab-48a0-916d-1b140b4727d5\") " pod="openstack/rabbitmq-server-0" Jan 31 04:03:30 crc kubenswrapper[4827]: I0131 04:03:30.935995 4827 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/237362ad-03ab-48a0-916d-1b140b4727d5-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"237362ad-03ab-48a0-916d-1b140b4727d5\") " pod="openstack/rabbitmq-server-0" Jan 31 04:03:30 crc kubenswrapper[4827]: I0131 04:03:30.938062 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/237362ad-03ab-48a0-916d-1b140b4727d5-pod-info\") pod \"rabbitmq-server-0\" (UID: \"237362ad-03ab-48a0-916d-1b140b4727d5\") " pod="openstack/rabbitmq-server-0" Jan 31 04:03:30 crc kubenswrapper[4827]: I0131 04:03:30.941195 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/237362ad-03ab-48a0-916d-1b140b4727d5-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"237362ad-03ab-48a0-916d-1b140b4727d5\") " pod="openstack/rabbitmq-server-0" Jan 31 04:03:30 crc kubenswrapper[4827]: I0131 04:03:30.947735 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h9fd8\" (UniqueName: \"kubernetes.io/projected/237362ad-03ab-48a0-916d-1b140b4727d5-kube-api-access-h9fd8\") pod \"rabbitmq-server-0\" (UID: \"237362ad-03ab-48a0-916d-1b140b4727d5\") " pod="openstack/rabbitmq-server-0" Jan 31 04:03:30 crc kubenswrapper[4827]: I0131 04:03:30.953502 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 31 04:03:30 crc kubenswrapper[4827]: I0131 04:03:30.956433 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"rabbitmq-server-0\" (UID: \"237362ad-03ab-48a0-916d-1b140b4727d5\") " pod="openstack/rabbitmq-server-0" Jan 31 04:03:30 crc kubenswrapper[4827]: I0131 04:03:30.977457 4827 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-0" Jan 31 04:03:31 crc kubenswrapper[4827]: I0131 04:03:31.021029 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/02f954c7-6442-4974-827a-aef4a5690e8c-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"02f954c7-6442-4974-827a-aef4a5690e8c\") " pod="openstack/rabbitmq-cell1-server-0" Jan 31 04:03:31 crc kubenswrapper[4827]: I0131 04:03:31.021122 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/02f954c7-6442-4974-827a-aef4a5690e8c-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"02f954c7-6442-4974-827a-aef4a5690e8c\") " pod="openstack/rabbitmq-cell1-server-0" Jan 31 04:03:31 crc kubenswrapper[4827]: I0131 04:03:31.021149 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/02f954c7-6442-4974-827a-aef4a5690e8c-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"02f954c7-6442-4974-827a-aef4a5690e8c\") " pod="openstack/rabbitmq-cell1-server-0" Jan 31 04:03:31 crc kubenswrapper[4827]: I0131 04:03:31.021172 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/02f954c7-6442-4974-827a-aef4a5690e8c-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"02f954c7-6442-4974-827a-aef4a5690e8c\") " pod="openstack/rabbitmq-cell1-server-0" Jan 31 04:03:31 crc kubenswrapper[4827]: I0131 04:03:31.021191 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/02f954c7-6442-4974-827a-aef4a5690e8c-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"02f954c7-6442-4974-827a-aef4a5690e8c\") " 
pod="openstack/rabbitmq-cell1-server-0" Jan 31 04:03:31 crc kubenswrapper[4827]: I0131 04:03:31.021274 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vpjh7\" (UniqueName: \"kubernetes.io/projected/02f954c7-6442-4974-827a-aef4a5690e8c-kube-api-access-vpjh7\") pod \"rabbitmq-cell1-server-0\" (UID: \"02f954c7-6442-4974-827a-aef4a5690e8c\") " pod="openstack/rabbitmq-cell1-server-0" Jan 31 04:03:31 crc kubenswrapper[4827]: I0131 04:03:31.021364 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"02f954c7-6442-4974-827a-aef4a5690e8c\") " pod="openstack/rabbitmq-cell1-server-0" Jan 31 04:03:31 crc kubenswrapper[4827]: I0131 04:03:31.021464 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/02f954c7-6442-4974-827a-aef4a5690e8c-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"02f954c7-6442-4974-827a-aef4a5690e8c\") " pod="openstack/rabbitmq-cell1-server-0" Jan 31 04:03:31 crc kubenswrapper[4827]: I0131 04:03:31.021525 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/02f954c7-6442-4974-827a-aef4a5690e8c-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"02f954c7-6442-4974-827a-aef4a5690e8c\") " pod="openstack/rabbitmq-cell1-server-0" Jan 31 04:03:31 crc kubenswrapper[4827]: I0131 04:03:31.021542 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/02f954c7-6442-4974-827a-aef4a5690e8c-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"02f954c7-6442-4974-827a-aef4a5690e8c\") " 
pod="openstack/rabbitmq-cell1-server-0" Jan 31 04:03:31 crc kubenswrapper[4827]: I0131 04:03:31.021582 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/02f954c7-6442-4974-827a-aef4a5690e8c-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"02f954c7-6442-4974-827a-aef4a5690e8c\") " pod="openstack/rabbitmq-cell1-server-0" Jan 31 04:03:31 crc kubenswrapper[4827]: I0131 04:03:31.123291 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/02f954c7-6442-4974-827a-aef4a5690e8c-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"02f954c7-6442-4974-827a-aef4a5690e8c\") " pod="openstack/rabbitmq-cell1-server-0" Jan 31 04:03:31 crc kubenswrapper[4827]: I0131 04:03:31.123358 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/02f954c7-6442-4974-827a-aef4a5690e8c-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"02f954c7-6442-4974-827a-aef4a5690e8c\") " pod="openstack/rabbitmq-cell1-server-0" Jan 31 04:03:31 crc kubenswrapper[4827]: I0131 04:03:31.123391 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/02f954c7-6442-4974-827a-aef4a5690e8c-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"02f954c7-6442-4974-827a-aef4a5690e8c\") " pod="openstack/rabbitmq-cell1-server-0" Jan 31 04:03:31 crc kubenswrapper[4827]: I0131 04:03:31.123410 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/02f954c7-6442-4974-827a-aef4a5690e8c-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"02f954c7-6442-4974-827a-aef4a5690e8c\") " pod="openstack/rabbitmq-cell1-server-0" Jan 31 04:03:31 crc kubenswrapper[4827]: I0131 04:03:31.123429 4827 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/02f954c7-6442-4974-827a-aef4a5690e8c-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"02f954c7-6442-4974-827a-aef4a5690e8c\") " pod="openstack/rabbitmq-cell1-server-0" Jan 31 04:03:31 crc kubenswrapper[4827]: I0131 04:03:31.123446 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/02f954c7-6442-4974-827a-aef4a5690e8c-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"02f954c7-6442-4974-827a-aef4a5690e8c\") " pod="openstack/rabbitmq-cell1-server-0" Jan 31 04:03:31 crc kubenswrapper[4827]: I0131 04:03:31.123470 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vpjh7\" (UniqueName: \"kubernetes.io/projected/02f954c7-6442-4974-827a-aef4a5690e8c-kube-api-access-vpjh7\") pod \"rabbitmq-cell1-server-0\" (UID: \"02f954c7-6442-4974-827a-aef4a5690e8c\") " pod="openstack/rabbitmq-cell1-server-0" Jan 31 04:03:31 crc kubenswrapper[4827]: I0131 04:03:31.123491 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"02f954c7-6442-4974-827a-aef4a5690e8c\") " pod="openstack/rabbitmq-cell1-server-0" Jan 31 04:03:31 crc kubenswrapper[4827]: I0131 04:03:31.123523 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/02f954c7-6442-4974-827a-aef4a5690e8c-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"02f954c7-6442-4974-827a-aef4a5690e8c\") " pod="openstack/rabbitmq-cell1-server-0" Jan 31 04:03:31 crc kubenswrapper[4827]: I0131 04:03:31.123547 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/02f954c7-6442-4974-827a-aef4a5690e8c-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"02f954c7-6442-4974-827a-aef4a5690e8c\") " pod="openstack/rabbitmq-cell1-server-0" Jan 31 04:03:31 crc kubenswrapper[4827]: I0131 04:03:31.123560 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/02f954c7-6442-4974-827a-aef4a5690e8c-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"02f954c7-6442-4974-827a-aef4a5690e8c\") " pod="openstack/rabbitmq-cell1-server-0" Jan 31 04:03:31 crc kubenswrapper[4827]: I0131 04:03:31.123825 4827 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"02f954c7-6442-4974-827a-aef4a5690e8c\") device mount path \"/mnt/openstack/pv07\"" pod="openstack/rabbitmq-cell1-server-0" Jan 31 04:03:31 crc kubenswrapper[4827]: I0131 04:03:31.124099 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/02f954c7-6442-4974-827a-aef4a5690e8c-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"02f954c7-6442-4974-827a-aef4a5690e8c\") " pod="openstack/rabbitmq-cell1-server-0" Jan 31 04:03:31 crc kubenswrapper[4827]: I0131 04:03:31.124402 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/02f954c7-6442-4974-827a-aef4a5690e8c-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"02f954c7-6442-4974-827a-aef4a5690e8c\") " pod="openstack/rabbitmq-cell1-server-0" Jan 31 04:03:31 crc kubenswrapper[4827]: I0131 04:03:31.124691 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/02f954c7-6442-4974-827a-aef4a5690e8c-server-conf\") 
pod \"rabbitmq-cell1-server-0\" (UID: \"02f954c7-6442-4974-827a-aef4a5690e8c\") " pod="openstack/rabbitmq-cell1-server-0" Jan 31 04:03:31 crc kubenswrapper[4827]: I0131 04:03:31.124763 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/02f954c7-6442-4974-827a-aef4a5690e8c-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"02f954c7-6442-4974-827a-aef4a5690e8c\") " pod="openstack/rabbitmq-cell1-server-0" Jan 31 04:03:31 crc kubenswrapper[4827]: I0131 04:03:31.125904 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/02f954c7-6442-4974-827a-aef4a5690e8c-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"02f954c7-6442-4974-827a-aef4a5690e8c\") " pod="openstack/rabbitmq-cell1-server-0" Jan 31 04:03:31 crc kubenswrapper[4827]: I0131 04:03:31.128214 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/02f954c7-6442-4974-827a-aef4a5690e8c-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"02f954c7-6442-4974-827a-aef4a5690e8c\") " pod="openstack/rabbitmq-cell1-server-0" Jan 31 04:03:31 crc kubenswrapper[4827]: I0131 04:03:31.128504 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/02f954c7-6442-4974-827a-aef4a5690e8c-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"02f954c7-6442-4974-827a-aef4a5690e8c\") " pod="openstack/rabbitmq-cell1-server-0" Jan 31 04:03:31 crc kubenswrapper[4827]: I0131 04:03:31.129855 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/02f954c7-6442-4974-827a-aef4a5690e8c-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"02f954c7-6442-4974-827a-aef4a5690e8c\") " pod="openstack/rabbitmq-cell1-server-0" Jan 31 04:03:31 crc 
kubenswrapper[4827]: I0131 04:03:31.134235 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-bl8wf" event={"ID":"ac8573e8-0ab7-4f9b-a060-fcaf95d97e32","Type":"ContainerStarted","Data":"a0c3580a2758bbc2e80d7b402ef4e1f80a293dbc02c7f0a85d5517e265de1e81"} Jan 31 04:03:31 crc kubenswrapper[4827]: I0131 04:03:31.137054 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/02f954c7-6442-4974-827a-aef4a5690e8c-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"02f954c7-6442-4974-827a-aef4a5690e8c\") " pod="openstack/rabbitmq-cell1-server-0" Jan 31 04:03:31 crc kubenswrapper[4827]: I0131 04:03:31.138134 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-666b6646f7-7g76r" event={"ID":"67ad1b54-a5b6-4464-823e-4bb474694618","Type":"ContainerStarted","Data":"9f2a79e75159412eb5ba48439ae44886aa1e7289f58f2f35ae0181bb8eb8b910"} Jan 31 04:03:31 crc kubenswrapper[4827]: I0131 04:03:31.140748 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"02f954c7-6442-4974-827a-aef4a5690e8c\") " pod="openstack/rabbitmq-cell1-server-0" Jan 31 04:03:31 crc kubenswrapper[4827]: I0131 04:03:31.142929 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vpjh7\" (UniqueName: \"kubernetes.io/projected/02f954c7-6442-4974-827a-aef4a5690e8c-kube-api-access-vpjh7\") pod \"rabbitmq-cell1-server-0\" (UID: \"02f954c7-6442-4974-827a-aef4a5690e8c\") " pod="openstack/rabbitmq-cell1-server-0" Jan 31 04:03:31 crc kubenswrapper[4827]: I0131 04:03:31.303512 4827 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Jan 31 04:03:32 crc kubenswrapper[4827]: I0131 04:03:32.188111 4827 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstack-galera-0"] Jan 31 04:03:32 crc kubenswrapper[4827]: I0131 04:03:32.189654 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-galera-0" Jan 31 04:03:32 crc kubenswrapper[4827]: I0131 04:03:32.195539 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"galera-openstack-dockercfg-sbbmn" Jan 31 04:03:32 crc kubenswrapper[4827]: I0131 04:03:32.195796 4827 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-config-data" Jan 31 04:03:32 crc kubenswrapper[4827]: I0131 04:03:32.195997 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-galera-openstack-svc" Jan 31 04:03:32 crc kubenswrapper[4827]: I0131 04:03:32.196206 4827 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-scripts" Jan 31 04:03:32 crc kubenswrapper[4827]: I0131 04:03:32.200773 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"combined-ca-bundle" Jan 31 04:03:32 crc kubenswrapper[4827]: I0131 04:03:32.219477 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-galera-0"] Jan 31 04:03:32 crc kubenswrapper[4827]: I0131 04:03:32.241826 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/f66333b7-3406-4a69-85f5-0806b992a625-kolla-config\") pod \"openstack-galera-0\" (UID: \"f66333b7-3406-4a69-85f5-0806b992a625\") " pod="openstack/openstack-galera-0" Jan 31 04:03:32 crc kubenswrapper[4827]: I0131 04:03:32.242042 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v85sz\" (UniqueName: 
\"kubernetes.io/projected/f66333b7-3406-4a69-85f5-0806b992a625-kube-api-access-v85sz\") pod \"openstack-galera-0\" (UID: \"f66333b7-3406-4a69-85f5-0806b992a625\") " pod="openstack/openstack-galera-0" Jan 31 04:03:32 crc kubenswrapper[4827]: I0131 04:03:32.242118 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/f66333b7-3406-4a69-85f5-0806b992a625-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"f66333b7-3406-4a69-85f5-0806b992a625\") " pod="openstack/openstack-galera-0" Jan 31 04:03:32 crc kubenswrapper[4827]: I0131 04:03:32.242188 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f66333b7-3406-4a69-85f5-0806b992a625-operator-scripts\") pod \"openstack-galera-0\" (UID: \"f66333b7-3406-4a69-85f5-0806b992a625\") " pod="openstack/openstack-galera-0" Jan 31 04:03:32 crc kubenswrapper[4827]: I0131 04:03:32.242284 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/f66333b7-3406-4a69-85f5-0806b992a625-config-data-generated\") pod \"openstack-galera-0\" (UID: \"f66333b7-3406-4a69-85f5-0806b992a625\") " pod="openstack/openstack-galera-0" Jan 31 04:03:32 crc kubenswrapper[4827]: I0131 04:03:32.242354 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f66333b7-3406-4a69-85f5-0806b992a625-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"f66333b7-3406-4a69-85f5-0806b992a625\") " pod="openstack/openstack-galera-0" Jan 31 04:03:32 crc kubenswrapper[4827]: I0131 04:03:32.242581 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-default\" (UniqueName: 
\"kubernetes.io/configmap/f66333b7-3406-4a69-85f5-0806b992a625-config-data-default\") pod \"openstack-galera-0\" (UID: \"f66333b7-3406-4a69-85f5-0806b992a625\") " pod="openstack/openstack-galera-0" Jan 31 04:03:32 crc kubenswrapper[4827]: I0131 04:03:32.242625 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"openstack-galera-0\" (UID: \"f66333b7-3406-4a69-85f5-0806b992a625\") " pod="openstack/openstack-galera-0" Jan 31 04:03:32 crc kubenswrapper[4827]: I0131 04:03:32.344001 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/f66333b7-3406-4a69-85f5-0806b992a625-config-data-generated\") pod \"openstack-galera-0\" (UID: \"f66333b7-3406-4a69-85f5-0806b992a625\") " pod="openstack/openstack-galera-0" Jan 31 04:03:32 crc kubenswrapper[4827]: I0131 04:03:32.344073 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f66333b7-3406-4a69-85f5-0806b992a625-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"f66333b7-3406-4a69-85f5-0806b992a625\") " pod="openstack/openstack-galera-0" Jan 31 04:03:32 crc kubenswrapper[4827]: I0131 04:03:32.344105 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/f66333b7-3406-4a69-85f5-0806b992a625-config-data-default\") pod \"openstack-galera-0\" (UID: \"f66333b7-3406-4a69-85f5-0806b992a625\") " pod="openstack/openstack-galera-0" Jan 31 04:03:32 crc kubenswrapper[4827]: I0131 04:03:32.344136 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"openstack-galera-0\" (UID: 
\"f66333b7-3406-4a69-85f5-0806b992a625\") " pod="openstack/openstack-galera-0" Jan 31 04:03:32 crc kubenswrapper[4827]: I0131 04:03:32.344169 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/f66333b7-3406-4a69-85f5-0806b992a625-kolla-config\") pod \"openstack-galera-0\" (UID: \"f66333b7-3406-4a69-85f5-0806b992a625\") " pod="openstack/openstack-galera-0" Jan 31 04:03:32 crc kubenswrapper[4827]: I0131 04:03:32.344229 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v85sz\" (UniqueName: \"kubernetes.io/projected/f66333b7-3406-4a69-85f5-0806b992a625-kube-api-access-v85sz\") pod \"openstack-galera-0\" (UID: \"f66333b7-3406-4a69-85f5-0806b992a625\") " pod="openstack/openstack-galera-0" Jan 31 04:03:32 crc kubenswrapper[4827]: I0131 04:03:32.344257 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/f66333b7-3406-4a69-85f5-0806b992a625-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"f66333b7-3406-4a69-85f5-0806b992a625\") " pod="openstack/openstack-galera-0" Jan 31 04:03:32 crc kubenswrapper[4827]: I0131 04:03:32.344289 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f66333b7-3406-4a69-85f5-0806b992a625-operator-scripts\") pod \"openstack-galera-0\" (UID: \"f66333b7-3406-4a69-85f5-0806b992a625\") " pod="openstack/openstack-galera-0" Jan 31 04:03:32 crc kubenswrapper[4827]: I0131 04:03:32.345342 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/f66333b7-3406-4a69-85f5-0806b992a625-config-data-generated\") pod \"openstack-galera-0\" (UID: \"f66333b7-3406-4a69-85f5-0806b992a625\") " pod="openstack/openstack-galera-0" Jan 31 04:03:32 crc kubenswrapper[4827]: I0131 
04:03:32.345551 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/f66333b7-3406-4a69-85f5-0806b992a625-kolla-config\") pod \"openstack-galera-0\" (UID: \"f66333b7-3406-4a69-85f5-0806b992a625\") " pod="openstack/openstack-galera-0" Jan 31 04:03:32 crc kubenswrapper[4827]: I0131 04:03:32.345834 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/f66333b7-3406-4a69-85f5-0806b992a625-config-data-default\") pod \"openstack-galera-0\" (UID: \"f66333b7-3406-4a69-85f5-0806b992a625\") " pod="openstack/openstack-galera-0" Jan 31 04:03:32 crc kubenswrapper[4827]: I0131 04:03:32.345743 4827 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"openstack-galera-0\" (UID: \"f66333b7-3406-4a69-85f5-0806b992a625\") device mount path \"/mnt/openstack/pv10\"" pod="openstack/openstack-galera-0" Jan 31 04:03:32 crc kubenswrapper[4827]: I0131 04:03:32.346735 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f66333b7-3406-4a69-85f5-0806b992a625-operator-scripts\") pod \"openstack-galera-0\" (UID: \"f66333b7-3406-4a69-85f5-0806b992a625\") " pod="openstack/openstack-galera-0" Jan 31 04:03:32 crc kubenswrapper[4827]: I0131 04:03:32.352283 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/f66333b7-3406-4a69-85f5-0806b992a625-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"f66333b7-3406-4a69-85f5-0806b992a625\") " pod="openstack/openstack-galera-0" Jan 31 04:03:32 crc kubenswrapper[4827]: I0131 04:03:32.353785 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/f66333b7-3406-4a69-85f5-0806b992a625-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"f66333b7-3406-4a69-85f5-0806b992a625\") " pod="openstack/openstack-galera-0" Jan 31 04:03:32 crc kubenswrapper[4827]: I0131 04:03:32.365675 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v85sz\" (UniqueName: \"kubernetes.io/projected/f66333b7-3406-4a69-85f5-0806b992a625-kube-api-access-v85sz\") pod \"openstack-galera-0\" (UID: \"f66333b7-3406-4a69-85f5-0806b992a625\") " pod="openstack/openstack-galera-0" Jan 31 04:03:32 crc kubenswrapper[4827]: I0131 04:03:32.377859 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"openstack-galera-0\" (UID: \"f66333b7-3406-4a69-85f5-0806b992a625\") " pod="openstack/openstack-galera-0" Jan 31 04:03:32 crc kubenswrapper[4827]: I0131 04:03:32.520366 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-galera-0" Jan 31 04:03:33 crc kubenswrapper[4827]: I0131 04:03:33.558997 4827 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstack-cell1-galera-0"] Jan 31 04:03:33 crc kubenswrapper[4827]: I0131 04:03:33.561305 4827 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstack-cell1-galera-0" Jan 31 04:03:33 crc kubenswrapper[4827]: I0131 04:03:33.566446 4827 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-cell1-config-data" Jan 31 04:03:33 crc kubenswrapper[4827]: I0131 04:03:33.567365 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"galera-openstack-cell1-dockercfg-nn2bc" Jan 31 04:03:33 crc kubenswrapper[4827]: I0131 04:03:33.567566 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-galera-openstack-cell1-svc" Jan 31 04:03:33 crc kubenswrapper[4827]: I0131 04:03:33.567921 4827 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-cell1-scripts" Jan 31 04:03:33 crc kubenswrapper[4827]: I0131 04:03:33.579484 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-cell1-galera-0"] Jan 31 04:03:33 crc kubenswrapper[4827]: I0131 04:03:33.692151 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/a0d3d60f-16f2-469d-8314-9055bb91a9ce-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"a0d3d60f-16f2-469d-8314-9055bb91a9ce\") " pod="openstack/openstack-cell1-galera-0" Jan 31 04:03:33 crc kubenswrapper[4827]: I0131 04:03:33.692434 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/a0d3d60f-16f2-469d-8314-9055bb91a9ce-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"a0d3d60f-16f2-469d-8314-9055bb91a9ce\") " pod="openstack/openstack-cell1-galera-0" Jan 31 04:03:33 crc kubenswrapper[4827]: I0131 04:03:33.692467 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: 
\"kubernetes.io/configmap/a0d3d60f-16f2-469d-8314-9055bb91a9ce-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"a0d3d60f-16f2-469d-8314-9055bb91a9ce\") " pod="openstack/openstack-cell1-galera-0" Jan 31 04:03:33 crc kubenswrapper[4827]: I0131 04:03:33.692487 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a0d3d60f-16f2-469d-8314-9055bb91a9ce-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"a0d3d60f-16f2-469d-8314-9055bb91a9ce\") " pod="openstack/openstack-cell1-galera-0" Jan 31 04:03:33 crc kubenswrapper[4827]: I0131 04:03:33.692504 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/a0d3d60f-16f2-469d-8314-9055bb91a9ce-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"a0d3d60f-16f2-469d-8314-9055bb91a9ce\") " pod="openstack/openstack-cell1-galera-0" Jan 31 04:03:33 crc kubenswrapper[4827]: I0131 04:03:33.692672 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"openstack-cell1-galera-0\" (UID: \"a0d3d60f-16f2-469d-8314-9055bb91a9ce\") " pod="openstack/openstack-cell1-galera-0" Jan 31 04:03:33 crc kubenswrapper[4827]: I0131 04:03:33.692709 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rc5d8\" (UniqueName: \"kubernetes.io/projected/a0d3d60f-16f2-469d-8314-9055bb91a9ce-kube-api-access-rc5d8\") pod \"openstack-cell1-galera-0\" (UID: \"a0d3d60f-16f2-469d-8314-9055bb91a9ce\") " pod="openstack/openstack-cell1-galera-0" Jan 31 04:03:33 crc kubenswrapper[4827]: I0131 04:03:33.692793 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" 
(UniqueName: \"kubernetes.io/secret/a0d3d60f-16f2-469d-8314-9055bb91a9ce-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"a0d3d60f-16f2-469d-8314-9055bb91a9ce\") " pod="openstack/openstack-cell1-galera-0" Jan 31 04:03:33 crc kubenswrapper[4827]: I0131 04:03:33.793699 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/a0d3d60f-16f2-469d-8314-9055bb91a9ce-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"a0d3d60f-16f2-469d-8314-9055bb91a9ce\") " pod="openstack/openstack-cell1-galera-0" Jan 31 04:03:33 crc kubenswrapper[4827]: I0131 04:03:33.793744 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a0d3d60f-16f2-469d-8314-9055bb91a9ce-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"a0d3d60f-16f2-469d-8314-9055bb91a9ce\") " pod="openstack/openstack-cell1-galera-0" Jan 31 04:03:33 crc kubenswrapper[4827]: I0131 04:03:33.793764 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/a0d3d60f-16f2-469d-8314-9055bb91a9ce-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"a0d3d60f-16f2-469d-8314-9055bb91a9ce\") " pod="openstack/openstack-cell1-galera-0" Jan 31 04:03:33 crc kubenswrapper[4827]: I0131 04:03:33.793812 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"openstack-cell1-galera-0\" (UID: \"a0d3d60f-16f2-469d-8314-9055bb91a9ce\") " pod="openstack/openstack-cell1-galera-0" Jan 31 04:03:33 crc kubenswrapper[4827]: I0131 04:03:33.793828 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rc5d8\" (UniqueName: 
\"kubernetes.io/projected/a0d3d60f-16f2-469d-8314-9055bb91a9ce-kube-api-access-rc5d8\") pod \"openstack-cell1-galera-0\" (UID: \"a0d3d60f-16f2-469d-8314-9055bb91a9ce\") " pod="openstack/openstack-cell1-galera-0" Jan 31 04:03:33 crc kubenswrapper[4827]: I0131 04:03:33.793866 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a0d3d60f-16f2-469d-8314-9055bb91a9ce-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"a0d3d60f-16f2-469d-8314-9055bb91a9ce\") " pod="openstack/openstack-cell1-galera-0" Jan 31 04:03:33 crc kubenswrapper[4827]: I0131 04:03:33.793914 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/a0d3d60f-16f2-469d-8314-9055bb91a9ce-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"a0d3d60f-16f2-469d-8314-9055bb91a9ce\") " pod="openstack/openstack-cell1-galera-0" Jan 31 04:03:33 crc kubenswrapper[4827]: I0131 04:03:33.793959 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/a0d3d60f-16f2-469d-8314-9055bb91a9ce-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"a0d3d60f-16f2-469d-8314-9055bb91a9ce\") " pod="openstack/openstack-cell1-galera-0" Jan 31 04:03:33 crc kubenswrapper[4827]: I0131 04:03:33.794001 4827 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"openstack-cell1-galera-0\" (UID: \"a0d3d60f-16f2-469d-8314-9055bb91a9ce\") device mount path \"/mnt/openstack/pv08\"" pod="openstack/openstack-cell1-galera-0" Jan 31 04:03:33 crc kubenswrapper[4827]: I0131 04:03:33.794207 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-generated\" (UniqueName: 
\"kubernetes.io/empty-dir/a0d3d60f-16f2-469d-8314-9055bb91a9ce-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"a0d3d60f-16f2-469d-8314-9055bb91a9ce\") " pod="openstack/openstack-cell1-galera-0" Jan 31 04:03:33 crc kubenswrapper[4827]: I0131 04:03:33.794493 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/a0d3d60f-16f2-469d-8314-9055bb91a9ce-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"a0d3d60f-16f2-469d-8314-9055bb91a9ce\") " pod="openstack/openstack-cell1-galera-0" Jan 31 04:03:33 crc kubenswrapper[4827]: I0131 04:03:33.795339 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/a0d3d60f-16f2-469d-8314-9055bb91a9ce-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"a0d3d60f-16f2-469d-8314-9055bb91a9ce\") " pod="openstack/openstack-cell1-galera-0" Jan 31 04:03:33 crc kubenswrapper[4827]: I0131 04:03:33.795987 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a0d3d60f-16f2-469d-8314-9055bb91a9ce-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"a0d3d60f-16f2-469d-8314-9055bb91a9ce\") " pod="openstack/openstack-cell1-galera-0" Jan 31 04:03:33 crc kubenswrapper[4827]: I0131 04:03:33.806699 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a0d3d60f-16f2-469d-8314-9055bb91a9ce-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"a0d3d60f-16f2-469d-8314-9055bb91a9ce\") " pod="openstack/openstack-cell1-galera-0" Jan 31 04:03:33 crc kubenswrapper[4827]: I0131 04:03:33.806742 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/a0d3d60f-16f2-469d-8314-9055bb91a9ce-galera-tls-certs\") pod 
\"openstack-cell1-galera-0\" (UID: \"a0d3d60f-16f2-469d-8314-9055bb91a9ce\") " pod="openstack/openstack-cell1-galera-0" Jan 31 04:03:33 crc kubenswrapper[4827]: I0131 04:03:33.813610 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rc5d8\" (UniqueName: \"kubernetes.io/projected/a0d3d60f-16f2-469d-8314-9055bb91a9ce-kube-api-access-rc5d8\") pod \"openstack-cell1-galera-0\" (UID: \"a0d3d60f-16f2-469d-8314-9055bb91a9ce\") " pod="openstack/openstack-cell1-galera-0" Jan 31 04:03:33 crc kubenswrapper[4827]: I0131 04:03:33.816601 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"openstack-cell1-galera-0\" (UID: \"a0d3d60f-16f2-469d-8314-9055bb91a9ce\") " pod="openstack/openstack-cell1-galera-0" Jan 31 04:03:33 crc kubenswrapper[4827]: I0131 04:03:33.857003 4827 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/memcached-0"] Jan 31 04:03:33 crc kubenswrapper[4827]: I0131 04:03:33.857851 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/memcached-0" Jan 31 04:03:33 crc kubenswrapper[4827]: I0131 04:03:33.859830 4827 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"memcached-config-data" Jan 31 04:03:33 crc kubenswrapper[4827]: I0131 04:03:33.860226 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"memcached-memcached-dockercfg-444pn" Jan 31 04:03:33 crc kubenswrapper[4827]: I0131 04:03:33.860498 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-memcached-svc" Jan 31 04:03:33 crc kubenswrapper[4827]: I0131 04:03:33.873317 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/memcached-0"] Jan 31 04:03:33 crc kubenswrapper[4827]: I0131 04:03:33.881544 4827 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstack-cell1-galera-0" Jan 31 04:03:33 crc kubenswrapper[4827]: I0131 04:03:33.997959 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/220c4c53-ac13-4f85-88da-38fef6ce70b1-kolla-config\") pod \"memcached-0\" (UID: \"220c4c53-ac13-4f85-88da-38fef6ce70b1\") " pod="openstack/memcached-0" Jan 31 04:03:33 crc kubenswrapper[4827]: I0131 04:03:33.998015 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/220c4c53-ac13-4f85-88da-38fef6ce70b1-memcached-tls-certs\") pod \"memcached-0\" (UID: \"220c4c53-ac13-4f85-88da-38fef6ce70b1\") " pod="openstack/memcached-0" Jan 31 04:03:33 crc kubenswrapper[4827]: I0131 04:03:33.998036 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/220c4c53-ac13-4f85-88da-38fef6ce70b1-combined-ca-bundle\") pod \"memcached-0\" (UID: \"220c4c53-ac13-4f85-88da-38fef6ce70b1\") " pod="openstack/memcached-0" Jan 31 04:03:33 crc kubenswrapper[4827]: I0131 04:03:33.998061 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/220c4c53-ac13-4f85-88da-38fef6ce70b1-config-data\") pod \"memcached-0\" (UID: \"220c4c53-ac13-4f85-88da-38fef6ce70b1\") " pod="openstack/memcached-0" Jan 31 04:03:33 crc kubenswrapper[4827]: I0131 04:03:33.998115 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8m8zq\" (UniqueName: \"kubernetes.io/projected/220c4c53-ac13-4f85-88da-38fef6ce70b1-kube-api-access-8m8zq\") pod \"memcached-0\" (UID: \"220c4c53-ac13-4f85-88da-38fef6ce70b1\") " pod="openstack/memcached-0" Jan 31 04:03:34 crc kubenswrapper[4827]: I0131 
04:03:34.099693 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/220c4c53-ac13-4f85-88da-38fef6ce70b1-kolla-config\") pod \"memcached-0\" (UID: \"220c4c53-ac13-4f85-88da-38fef6ce70b1\") " pod="openstack/memcached-0" Jan 31 04:03:34 crc kubenswrapper[4827]: I0131 04:03:34.099756 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/220c4c53-ac13-4f85-88da-38fef6ce70b1-memcached-tls-certs\") pod \"memcached-0\" (UID: \"220c4c53-ac13-4f85-88da-38fef6ce70b1\") " pod="openstack/memcached-0" Jan 31 04:03:34 crc kubenswrapper[4827]: I0131 04:03:34.099784 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/220c4c53-ac13-4f85-88da-38fef6ce70b1-combined-ca-bundle\") pod \"memcached-0\" (UID: \"220c4c53-ac13-4f85-88da-38fef6ce70b1\") " pod="openstack/memcached-0" Jan 31 04:03:34 crc kubenswrapper[4827]: I0131 04:03:34.099815 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/220c4c53-ac13-4f85-88da-38fef6ce70b1-config-data\") pod \"memcached-0\" (UID: \"220c4c53-ac13-4f85-88da-38fef6ce70b1\") " pod="openstack/memcached-0" Jan 31 04:03:34 crc kubenswrapper[4827]: I0131 04:03:34.099905 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8m8zq\" (UniqueName: \"kubernetes.io/projected/220c4c53-ac13-4f85-88da-38fef6ce70b1-kube-api-access-8m8zq\") pod \"memcached-0\" (UID: \"220c4c53-ac13-4f85-88da-38fef6ce70b1\") " pod="openstack/memcached-0" Jan 31 04:03:34 crc kubenswrapper[4827]: I0131 04:03:34.100663 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/220c4c53-ac13-4f85-88da-38fef6ce70b1-kolla-config\") pod 
\"memcached-0\" (UID: \"220c4c53-ac13-4f85-88da-38fef6ce70b1\") " pod="openstack/memcached-0" Jan 31 04:03:34 crc kubenswrapper[4827]: I0131 04:03:34.100773 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/220c4c53-ac13-4f85-88da-38fef6ce70b1-config-data\") pod \"memcached-0\" (UID: \"220c4c53-ac13-4f85-88da-38fef6ce70b1\") " pod="openstack/memcached-0" Jan 31 04:03:34 crc kubenswrapper[4827]: I0131 04:03:34.103555 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/220c4c53-ac13-4f85-88da-38fef6ce70b1-memcached-tls-certs\") pod \"memcached-0\" (UID: \"220c4c53-ac13-4f85-88da-38fef6ce70b1\") " pod="openstack/memcached-0" Jan 31 04:03:34 crc kubenswrapper[4827]: I0131 04:03:34.104847 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/220c4c53-ac13-4f85-88da-38fef6ce70b1-combined-ca-bundle\") pod \"memcached-0\" (UID: \"220c4c53-ac13-4f85-88da-38fef6ce70b1\") " pod="openstack/memcached-0" Jan 31 04:03:34 crc kubenswrapper[4827]: I0131 04:03:34.122186 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8m8zq\" (UniqueName: \"kubernetes.io/projected/220c4c53-ac13-4f85-88da-38fef6ce70b1-kube-api-access-8m8zq\") pod \"memcached-0\" (UID: \"220c4c53-ac13-4f85-88da-38fef6ce70b1\") " pod="openstack/memcached-0" Jan 31 04:03:34 crc kubenswrapper[4827]: I0131 04:03:34.197641 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/memcached-0" Jan 31 04:03:35 crc kubenswrapper[4827]: I0131 04:03:35.995463 4827 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/kube-state-metrics-0"] Jan 31 04:03:35 crc kubenswrapper[4827]: I0131 04:03:35.996689 4827 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/kube-state-metrics-0" Jan 31 04:03:36 crc kubenswrapper[4827]: I0131 04:03:36.000945 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"telemetry-ceilometer-dockercfg-nffb8" Jan 31 04:03:36 crc kubenswrapper[4827]: I0131 04:03:36.018792 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Jan 31 04:03:36 crc kubenswrapper[4827]: I0131 04:03:36.135557 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pl7gv\" (UniqueName: \"kubernetes.io/projected/46ae0722-07b9-42d3-9ca9-d6a07cd0aa12-kube-api-access-pl7gv\") pod \"kube-state-metrics-0\" (UID: \"46ae0722-07b9-42d3-9ca9-d6a07cd0aa12\") " pod="openstack/kube-state-metrics-0" Jan 31 04:03:36 crc kubenswrapper[4827]: I0131 04:03:36.237133 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pl7gv\" (UniqueName: \"kubernetes.io/projected/46ae0722-07b9-42d3-9ca9-d6a07cd0aa12-kube-api-access-pl7gv\") pod \"kube-state-metrics-0\" (UID: \"46ae0722-07b9-42d3-9ca9-d6a07cd0aa12\") " pod="openstack/kube-state-metrics-0" Jan 31 04:03:36 crc kubenswrapper[4827]: I0131 04:03:36.270671 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pl7gv\" (UniqueName: \"kubernetes.io/projected/46ae0722-07b9-42d3-9ca9-d6a07cd0aa12-kube-api-access-pl7gv\") pod \"kube-state-metrics-0\" (UID: \"46ae0722-07b9-42d3-9ca9-d6a07cd0aa12\") " pod="openstack/kube-state-metrics-0" Jan 31 04:03:36 crc kubenswrapper[4827]: I0131 04:03:36.329869 4827 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/kube-state-metrics-0" Jan 31 04:03:39 crc kubenswrapper[4827]: I0131 04:03:39.047376 4827 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-jrrb4"] Jan 31 04:03:39 crc kubenswrapper[4827]: I0131 04:03:39.048487 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-jrrb4" Jan 31 04:03:39 crc kubenswrapper[4827]: I0131 04:03:39.051498 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovncontroller-ovndbs" Jan 31 04:03:39 crc kubenswrapper[4827]: I0131 04:03:39.052075 4827 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-scripts" Jan 31 04:03:39 crc kubenswrapper[4827]: I0131 04:03:39.052175 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncontroller-ovncontroller-dockercfg-qck9c" Jan 31 04:03:39 crc kubenswrapper[4827]: I0131 04:03:39.064035 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-jrrb4"] Jan 31 04:03:39 crc kubenswrapper[4827]: I0131 04:03:39.115892 4827 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-ovs-vhmf9"] Jan 31 04:03:39 crc kubenswrapper[4827]: I0131 04:03:39.117240 4827 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-ovs-vhmf9" Jan 31 04:03:39 crc kubenswrapper[4827]: I0131 04:03:39.184180 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-ovs-vhmf9"] Jan 31 04:03:39 crc kubenswrapper[4827]: I0131 04:03:39.203859 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/35c80c3a-29fe-4992-a421-f5ce7704ff53-var-lib\") pod \"ovn-controller-ovs-vhmf9\" (UID: \"35c80c3a-29fe-4992-a421-f5ce7704ff53\") " pod="openstack/ovn-controller-ovs-vhmf9" Jan 31 04:03:39 crc kubenswrapper[4827]: I0131 04:03:39.204187 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/35c80c3a-29fe-4992-a421-f5ce7704ff53-var-log\") pod \"ovn-controller-ovs-vhmf9\" (UID: \"35c80c3a-29fe-4992-a421-f5ce7704ff53\") " pod="openstack/ovn-controller-ovs-vhmf9" Jan 31 04:03:39 crc kubenswrapper[4827]: I0131 04:03:39.204305 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/b0a1bcac-47e2-4089-ae1e-98a2dc41d270-var-run-ovn\") pod \"ovn-controller-jrrb4\" (UID: \"b0a1bcac-47e2-4089-ae1e-98a2dc41d270\") " pod="openstack/ovn-controller-jrrb4" Jan 31 04:03:39 crc kubenswrapper[4827]: I0131 04:03:39.204407 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/b0a1bcac-47e2-4089-ae1e-98a2dc41d270-var-run\") pod \"ovn-controller-jrrb4\" (UID: \"b0a1bcac-47e2-4089-ae1e-98a2dc41d270\") " pod="openstack/ovn-controller-jrrb4" Jan 31 04:03:39 crc kubenswrapper[4827]: I0131 04:03:39.204536 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-controller-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/b0a1bcac-47e2-4089-ae1e-98a2dc41d270-ovn-controller-tls-certs\") pod \"ovn-controller-jrrb4\" (UID: \"b0a1bcac-47e2-4089-ae1e-98a2dc41d270\") " pod="openstack/ovn-controller-jrrb4" Jan 31 04:03:39 crc kubenswrapper[4827]: I0131 04:03:39.204634 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/b0a1bcac-47e2-4089-ae1e-98a2dc41d270-scripts\") pod \"ovn-controller-jrrb4\" (UID: \"b0a1bcac-47e2-4089-ae1e-98a2dc41d270\") " pod="openstack/ovn-controller-jrrb4" Jan 31 04:03:39 crc kubenswrapper[4827]: I0131 04:03:39.204745 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r6gpj\" (UniqueName: \"kubernetes.io/projected/b0a1bcac-47e2-4089-ae1e-98a2dc41d270-kube-api-access-r6gpj\") pod \"ovn-controller-jrrb4\" (UID: \"b0a1bcac-47e2-4089-ae1e-98a2dc41d270\") " pod="openstack/ovn-controller-jrrb4" Jan 31 04:03:39 crc kubenswrapper[4827]: I0131 04:03:39.204841 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/35c80c3a-29fe-4992-a421-f5ce7704ff53-etc-ovs\") pod \"ovn-controller-ovs-vhmf9\" (UID: \"35c80c3a-29fe-4992-a421-f5ce7704ff53\") " pod="openstack/ovn-controller-ovs-vhmf9" Jan 31 04:03:39 crc kubenswrapper[4827]: I0131 04:03:39.204983 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/35c80c3a-29fe-4992-a421-f5ce7704ff53-scripts\") pod \"ovn-controller-ovs-vhmf9\" (UID: \"35c80c3a-29fe-4992-a421-f5ce7704ff53\") " pod="openstack/ovn-controller-ovs-vhmf9" Jan 31 04:03:39 crc kubenswrapper[4827]: I0131 04:03:39.205091 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: 
\"kubernetes.io/host-path/b0a1bcac-47e2-4089-ae1e-98a2dc41d270-var-log-ovn\") pod \"ovn-controller-jrrb4\" (UID: \"b0a1bcac-47e2-4089-ae1e-98a2dc41d270\") " pod="openstack/ovn-controller-jrrb4" Jan 31 04:03:39 crc kubenswrapper[4827]: I0131 04:03:39.205188 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5lv8k\" (UniqueName: \"kubernetes.io/projected/35c80c3a-29fe-4992-a421-f5ce7704ff53-kube-api-access-5lv8k\") pod \"ovn-controller-ovs-vhmf9\" (UID: \"35c80c3a-29fe-4992-a421-f5ce7704ff53\") " pod="openstack/ovn-controller-ovs-vhmf9" Jan 31 04:03:39 crc kubenswrapper[4827]: I0131 04:03:39.205288 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/35c80c3a-29fe-4992-a421-f5ce7704ff53-var-run\") pod \"ovn-controller-ovs-vhmf9\" (UID: \"35c80c3a-29fe-4992-a421-f5ce7704ff53\") " pod="openstack/ovn-controller-ovs-vhmf9" Jan 31 04:03:39 crc kubenswrapper[4827]: I0131 04:03:39.205396 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b0a1bcac-47e2-4089-ae1e-98a2dc41d270-combined-ca-bundle\") pod \"ovn-controller-jrrb4\" (UID: \"b0a1bcac-47e2-4089-ae1e-98a2dc41d270\") " pod="openstack/ovn-controller-jrrb4" Jan 31 04:03:39 crc kubenswrapper[4827]: I0131 04:03:39.307235 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/b0a1bcac-47e2-4089-ae1e-98a2dc41d270-ovn-controller-tls-certs\") pod \"ovn-controller-jrrb4\" (UID: \"b0a1bcac-47e2-4089-ae1e-98a2dc41d270\") " pod="openstack/ovn-controller-jrrb4" Jan 31 04:03:39 crc kubenswrapper[4827]: I0131 04:03:39.308477 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/configmap/b0a1bcac-47e2-4089-ae1e-98a2dc41d270-scripts\") pod \"ovn-controller-jrrb4\" (UID: \"b0a1bcac-47e2-4089-ae1e-98a2dc41d270\") " pod="openstack/ovn-controller-jrrb4" Jan 31 04:03:39 crc kubenswrapper[4827]: I0131 04:03:39.308635 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r6gpj\" (UniqueName: \"kubernetes.io/projected/b0a1bcac-47e2-4089-ae1e-98a2dc41d270-kube-api-access-r6gpj\") pod \"ovn-controller-jrrb4\" (UID: \"b0a1bcac-47e2-4089-ae1e-98a2dc41d270\") " pod="openstack/ovn-controller-jrrb4" Jan 31 04:03:39 crc kubenswrapper[4827]: I0131 04:03:39.308743 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/35c80c3a-29fe-4992-a421-f5ce7704ff53-etc-ovs\") pod \"ovn-controller-ovs-vhmf9\" (UID: \"35c80c3a-29fe-4992-a421-f5ce7704ff53\") " pod="openstack/ovn-controller-ovs-vhmf9" Jan 31 04:03:39 crc kubenswrapper[4827]: I0131 04:03:39.308848 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/35c80c3a-29fe-4992-a421-f5ce7704ff53-scripts\") pod \"ovn-controller-ovs-vhmf9\" (UID: \"35c80c3a-29fe-4992-a421-f5ce7704ff53\") " pod="openstack/ovn-controller-ovs-vhmf9" Jan 31 04:03:39 crc kubenswrapper[4827]: I0131 04:03:39.309031 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/b0a1bcac-47e2-4089-ae1e-98a2dc41d270-var-log-ovn\") pod \"ovn-controller-jrrb4\" (UID: \"b0a1bcac-47e2-4089-ae1e-98a2dc41d270\") " pod="openstack/ovn-controller-jrrb4" Jan 31 04:03:39 crc kubenswrapper[4827]: I0131 04:03:39.309126 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5lv8k\" (UniqueName: \"kubernetes.io/projected/35c80c3a-29fe-4992-a421-f5ce7704ff53-kube-api-access-5lv8k\") pod \"ovn-controller-ovs-vhmf9\" (UID: 
\"35c80c3a-29fe-4992-a421-f5ce7704ff53\") " pod="openstack/ovn-controller-ovs-vhmf9" Jan 31 04:03:39 crc kubenswrapper[4827]: I0131 04:03:39.309237 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/35c80c3a-29fe-4992-a421-f5ce7704ff53-var-run\") pod \"ovn-controller-ovs-vhmf9\" (UID: \"35c80c3a-29fe-4992-a421-f5ce7704ff53\") " pod="openstack/ovn-controller-ovs-vhmf9" Jan 31 04:03:39 crc kubenswrapper[4827]: I0131 04:03:39.309327 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b0a1bcac-47e2-4089-ae1e-98a2dc41d270-combined-ca-bundle\") pod \"ovn-controller-jrrb4\" (UID: \"b0a1bcac-47e2-4089-ae1e-98a2dc41d270\") " pod="openstack/ovn-controller-jrrb4" Jan 31 04:03:39 crc kubenswrapper[4827]: I0131 04:03:39.309460 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/35c80c3a-29fe-4992-a421-f5ce7704ff53-var-lib\") pod \"ovn-controller-ovs-vhmf9\" (UID: \"35c80c3a-29fe-4992-a421-f5ce7704ff53\") " pod="openstack/ovn-controller-ovs-vhmf9" Jan 31 04:03:39 crc kubenswrapper[4827]: I0131 04:03:39.309560 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/35c80c3a-29fe-4992-a421-f5ce7704ff53-var-log\") pod \"ovn-controller-ovs-vhmf9\" (UID: \"35c80c3a-29fe-4992-a421-f5ce7704ff53\") " pod="openstack/ovn-controller-ovs-vhmf9" Jan 31 04:03:39 crc kubenswrapper[4827]: I0131 04:03:39.309657 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/b0a1bcac-47e2-4089-ae1e-98a2dc41d270-var-run-ovn\") pod \"ovn-controller-jrrb4\" (UID: \"b0a1bcac-47e2-4089-ae1e-98a2dc41d270\") " pod="openstack/ovn-controller-jrrb4" Jan 31 04:03:39 crc kubenswrapper[4827]: I0131 04:03:39.309755 4827 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/b0a1bcac-47e2-4089-ae1e-98a2dc41d270-var-run\") pod \"ovn-controller-jrrb4\" (UID: \"b0a1bcac-47e2-4089-ae1e-98a2dc41d270\") " pod="openstack/ovn-controller-jrrb4" Jan 31 04:03:39 crc kubenswrapper[4827]: I0131 04:03:39.310149 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/b0a1bcac-47e2-4089-ae1e-98a2dc41d270-var-run\") pod \"ovn-controller-jrrb4\" (UID: \"b0a1bcac-47e2-4089-ae1e-98a2dc41d270\") " pod="openstack/ovn-controller-jrrb4" Jan 31 04:03:39 crc kubenswrapper[4827]: I0131 04:03:39.309479 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/35c80c3a-29fe-4992-a421-f5ce7704ff53-etc-ovs\") pod \"ovn-controller-ovs-vhmf9\" (UID: \"35c80c3a-29fe-4992-a421-f5ce7704ff53\") " pod="openstack/ovn-controller-ovs-vhmf9" Jan 31 04:03:39 crc kubenswrapper[4827]: I0131 04:03:39.312091 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/b0a1bcac-47e2-4089-ae1e-98a2dc41d270-var-log-ovn\") pod \"ovn-controller-jrrb4\" (UID: \"b0a1bcac-47e2-4089-ae1e-98a2dc41d270\") " pod="openstack/ovn-controller-jrrb4" Jan 31 04:03:39 crc kubenswrapper[4827]: I0131 04:03:39.312173 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/35c80c3a-29fe-4992-a421-f5ce7704ff53-var-run\") pod \"ovn-controller-ovs-vhmf9\" (UID: \"35c80c3a-29fe-4992-a421-f5ce7704ff53\") " pod="openstack/ovn-controller-ovs-vhmf9" Jan 31 04:03:39 crc kubenswrapper[4827]: I0131 04:03:39.312217 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/35c80c3a-29fe-4992-a421-f5ce7704ff53-var-lib\") pod \"ovn-controller-ovs-vhmf9\" (UID: 
\"35c80c3a-29fe-4992-a421-f5ce7704ff53\") " pod="openstack/ovn-controller-ovs-vhmf9" Jan 31 04:03:39 crc kubenswrapper[4827]: I0131 04:03:39.312254 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/35c80c3a-29fe-4992-a421-f5ce7704ff53-var-log\") pod \"ovn-controller-ovs-vhmf9\" (UID: \"35c80c3a-29fe-4992-a421-f5ce7704ff53\") " pod="openstack/ovn-controller-ovs-vhmf9" Jan 31 04:03:39 crc kubenswrapper[4827]: I0131 04:03:39.312302 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/b0a1bcac-47e2-4089-ae1e-98a2dc41d270-var-run-ovn\") pod \"ovn-controller-jrrb4\" (UID: \"b0a1bcac-47e2-4089-ae1e-98a2dc41d270\") " pod="openstack/ovn-controller-jrrb4" Jan 31 04:03:39 crc kubenswrapper[4827]: I0131 04:03:39.312384 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/b0a1bcac-47e2-4089-ae1e-98a2dc41d270-scripts\") pod \"ovn-controller-jrrb4\" (UID: \"b0a1bcac-47e2-4089-ae1e-98a2dc41d270\") " pod="openstack/ovn-controller-jrrb4" Jan 31 04:03:39 crc kubenswrapper[4827]: I0131 04:03:39.314709 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b0a1bcac-47e2-4089-ae1e-98a2dc41d270-combined-ca-bundle\") pod \"ovn-controller-jrrb4\" (UID: \"b0a1bcac-47e2-4089-ae1e-98a2dc41d270\") " pod="openstack/ovn-controller-jrrb4" Jan 31 04:03:39 crc kubenswrapper[4827]: I0131 04:03:39.316795 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/b0a1bcac-47e2-4089-ae1e-98a2dc41d270-ovn-controller-tls-certs\") pod \"ovn-controller-jrrb4\" (UID: \"b0a1bcac-47e2-4089-ae1e-98a2dc41d270\") " pod="openstack/ovn-controller-jrrb4" Jan 31 04:03:39 crc kubenswrapper[4827]: I0131 04:03:39.318227 4827 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/35c80c3a-29fe-4992-a421-f5ce7704ff53-scripts\") pod \"ovn-controller-ovs-vhmf9\" (UID: \"35c80c3a-29fe-4992-a421-f5ce7704ff53\") " pod="openstack/ovn-controller-ovs-vhmf9" Jan 31 04:03:39 crc kubenswrapper[4827]: I0131 04:03:39.333059 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r6gpj\" (UniqueName: \"kubernetes.io/projected/b0a1bcac-47e2-4089-ae1e-98a2dc41d270-kube-api-access-r6gpj\") pod \"ovn-controller-jrrb4\" (UID: \"b0a1bcac-47e2-4089-ae1e-98a2dc41d270\") " pod="openstack/ovn-controller-jrrb4" Jan 31 04:03:39 crc kubenswrapper[4827]: I0131 04:03:39.334170 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5lv8k\" (UniqueName: \"kubernetes.io/projected/35c80c3a-29fe-4992-a421-f5ce7704ff53-kube-api-access-5lv8k\") pod \"ovn-controller-ovs-vhmf9\" (UID: \"35c80c3a-29fe-4992-a421-f5ce7704ff53\") " pod="openstack/ovn-controller-ovs-vhmf9" Jan 31 04:03:39 crc kubenswrapper[4827]: I0131 04:03:39.413899 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-jrrb4" Jan 31 04:03:39 crc kubenswrapper[4827]: I0131 04:03:39.432089 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-ovs-vhmf9" Jan 31 04:03:39 crc kubenswrapper[4827]: I0131 04:03:39.920869 4827 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-nb-0"] Jan 31 04:03:39 crc kubenswrapper[4827]: I0131 04:03:39.922045 4827 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovsdbserver-nb-0" Jan 31 04:03:39 crc kubenswrapper[4827]: I0131 04:03:39.923904 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovndbcluster-nb-ovndbs" Jan 31 04:03:39 crc kubenswrapper[4827]: I0131 04:03:39.924067 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovn-metrics" Jan 31 04:03:39 crc kubenswrapper[4827]: I0131 04:03:39.924113 4827 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-nb-config" Jan 31 04:03:39 crc kubenswrapper[4827]: I0131 04:03:39.924147 4827 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-nb-scripts" Jan 31 04:03:39 crc kubenswrapper[4827]: I0131 04:03:39.924850 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncluster-ovndbcluster-nb-dockercfg-b4gxr" Jan 31 04:03:39 crc kubenswrapper[4827]: I0131 04:03:39.930713 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-0"] Jan 31 04:03:40 crc kubenswrapper[4827]: I0131 04:03:40.022408 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/a5961815-808d-4f79-867c-763e2946d47f-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"a5961815-808d-4f79-867c-763e2946d47f\") " pod="openstack/ovsdbserver-nb-0" Jan 31 04:03:40 crc kubenswrapper[4827]: I0131 04:03:40.022450 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a5961815-808d-4f79-867c-763e2946d47f-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"a5961815-808d-4f79-867c-763e2946d47f\") " pod="openstack/ovsdbserver-nb-0" Jan 31 04:03:40 crc kubenswrapper[4827]: I0131 04:03:40.022504 4827 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a5961815-808d-4f79-867c-763e2946d47f-config\") pod \"ovsdbserver-nb-0\" (UID: \"a5961815-808d-4f79-867c-763e2946d47f\") " pod="openstack/ovsdbserver-nb-0" Jan 31 04:03:40 crc kubenswrapper[4827]: I0131 04:03:40.022653 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"ovsdbserver-nb-0\" (UID: \"a5961815-808d-4f79-867c-763e2946d47f\") " pod="openstack/ovsdbserver-nb-0" Jan 31 04:03:40 crc kubenswrapper[4827]: I0131 04:03:40.022696 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n6v64\" (UniqueName: \"kubernetes.io/projected/a5961815-808d-4f79-867c-763e2946d47f-kube-api-access-n6v64\") pod \"ovsdbserver-nb-0\" (UID: \"a5961815-808d-4f79-867c-763e2946d47f\") " pod="openstack/ovsdbserver-nb-0" Jan 31 04:03:40 crc kubenswrapper[4827]: I0131 04:03:40.022764 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/a5961815-808d-4f79-867c-763e2946d47f-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"a5961815-808d-4f79-867c-763e2946d47f\") " pod="openstack/ovsdbserver-nb-0" Jan 31 04:03:40 crc kubenswrapper[4827]: I0131 04:03:40.022783 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/a5961815-808d-4f79-867c-763e2946d47f-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"a5961815-808d-4f79-867c-763e2946d47f\") " pod="openstack/ovsdbserver-nb-0" Jan 31 04:03:40 crc kubenswrapper[4827]: I0131 04:03:40.022971 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: 
\"kubernetes.io/empty-dir/a5961815-808d-4f79-867c-763e2946d47f-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"a5961815-808d-4f79-867c-763e2946d47f\") " pod="openstack/ovsdbserver-nb-0" Jan 31 04:03:40 crc kubenswrapper[4827]: I0131 04:03:40.124587 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a5961815-808d-4f79-867c-763e2946d47f-config\") pod \"ovsdbserver-nb-0\" (UID: \"a5961815-808d-4f79-867c-763e2946d47f\") " pod="openstack/ovsdbserver-nb-0" Jan 31 04:03:40 crc kubenswrapper[4827]: I0131 04:03:40.124635 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"ovsdbserver-nb-0\" (UID: \"a5961815-808d-4f79-867c-763e2946d47f\") " pod="openstack/ovsdbserver-nb-0" Jan 31 04:03:40 crc kubenswrapper[4827]: I0131 04:03:40.124660 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n6v64\" (UniqueName: \"kubernetes.io/projected/a5961815-808d-4f79-867c-763e2946d47f-kube-api-access-n6v64\") pod \"ovsdbserver-nb-0\" (UID: \"a5961815-808d-4f79-867c-763e2946d47f\") " pod="openstack/ovsdbserver-nb-0" Jan 31 04:03:40 crc kubenswrapper[4827]: I0131 04:03:40.124716 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/a5961815-808d-4f79-867c-763e2946d47f-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"a5961815-808d-4f79-867c-763e2946d47f\") " pod="openstack/ovsdbserver-nb-0" Jan 31 04:03:40 crc kubenswrapper[4827]: I0131 04:03:40.124744 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/a5961815-808d-4f79-867c-763e2946d47f-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"a5961815-808d-4f79-867c-763e2946d47f\") " pod="openstack/ovsdbserver-nb-0" Jan 31 
04:03:40 crc kubenswrapper[4827]: I0131 04:03:40.124841 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/a5961815-808d-4f79-867c-763e2946d47f-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"a5961815-808d-4f79-867c-763e2946d47f\") " pod="openstack/ovsdbserver-nb-0" Jan 31 04:03:40 crc kubenswrapper[4827]: I0131 04:03:40.125016 4827 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"ovsdbserver-nb-0\" (UID: \"a5961815-808d-4f79-867c-763e2946d47f\") device mount path \"/mnt/openstack/pv09\"" pod="openstack/ovsdbserver-nb-0" Jan 31 04:03:40 crc kubenswrapper[4827]: I0131 04:03:40.125490 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/a5961815-808d-4f79-867c-763e2946d47f-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"a5961815-808d-4f79-867c-763e2946d47f\") " pod="openstack/ovsdbserver-nb-0" Jan 31 04:03:40 crc kubenswrapper[4827]: I0131 04:03:40.125490 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/a5961815-808d-4f79-867c-763e2946d47f-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"a5961815-808d-4f79-867c-763e2946d47f\") " pod="openstack/ovsdbserver-nb-0" Jan 31 04:03:40 crc kubenswrapper[4827]: I0131 04:03:40.125528 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a5961815-808d-4f79-867c-763e2946d47f-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"a5961815-808d-4f79-867c-763e2946d47f\") " pod="openstack/ovsdbserver-nb-0" Jan 31 04:03:40 crc kubenswrapper[4827]: I0131 04:03:40.125624 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/a5961815-808d-4f79-867c-763e2946d47f-config\") pod \"ovsdbserver-nb-0\" (UID: \"a5961815-808d-4f79-867c-763e2946d47f\") " pod="openstack/ovsdbserver-nb-0" Jan 31 04:03:40 crc kubenswrapper[4827]: I0131 04:03:40.126116 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/a5961815-808d-4f79-867c-763e2946d47f-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"a5961815-808d-4f79-867c-763e2946d47f\") " pod="openstack/ovsdbserver-nb-0" Jan 31 04:03:40 crc kubenswrapper[4827]: I0131 04:03:40.129755 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a5961815-808d-4f79-867c-763e2946d47f-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"a5961815-808d-4f79-867c-763e2946d47f\") " pod="openstack/ovsdbserver-nb-0" Jan 31 04:03:40 crc kubenswrapper[4827]: I0131 04:03:40.129969 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/a5961815-808d-4f79-867c-763e2946d47f-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"a5961815-808d-4f79-867c-763e2946d47f\") " pod="openstack/ovsdbserver-nb-0" Jan 31 04:03:40 crc kubenswrapper[4827]: I0131 04:03:40.131275 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/a5961815-808d-4f79-867c-763e2946d47f-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"a5961815-808d-4f79-867c-763e2946d47f\") " pod="openstack/ovsdbserver-nb-0" Jan 31 04:03:40 crc kubenswrapper[4827]: I0131 04:03:40.144636 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n6v64\" (UniqueName: \"kubernetes.io/projected/a5961815-808d-4f79-867c-763e2946d47f-kube-api-access-n6v64\") pod \"ovsdbserver-nb-0\" (UID: \"a5961815-808d-4f79-867c-763e2946d47f\") " 
pod="openstack/ovsdbserver-nb-0" Jan 31 04:03:40 crc kubenswrapper[4827]: I0131 04:03:40.146729 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"ovsdbserver-nb-0\" (UID: \"a5961815-808d-4f79-867c-763e2946d47f\") " pod="openstack/ovsdbserver-nb-0" Jan 31 04:03:40 crc kubenswrapper[4827]: I0131 04:03:40.250741 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-nb-0" Jan 31 04:03:42 crc kubenswrapper[4827]: E0131 04:03:42.375103 4827 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified" Jan 31 04:03:42 crc kubenswrapper[4827]: E0131 04:03:42.375708 4827 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries 
--test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:ndfhb5h667h568h584h5f9h58dh565h664h587h597h577h64bh5c4h66fh647hbdh68ch5c5h68dh686h5f7h64hd7hc6h55fh57bh98h57fh87h5fh57fq,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-fbg57,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-78dd6ddcc-vzmg4_openstack(3bbbda85-9e7a-49e1-910d-448fb9b798ac): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 31 04:03:42 crc kubenswrapper[4827]: E0131 04:03:42.376923 4827 pod_workers.go:1301] "Error 
syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-78dd6ddcc-vzmg4" podUID="3bbbda85-9e7a-49e1-910d-448fb9b798ac" Jan 31 04:03:42 crc kubenswrapper[4827]: E0131 04:03:42.544694 4827 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified" Jan 31 04:03:42 crc kubenswrapper[4827]: E0131 04:03:42.544842 4827 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries 
--test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:nffh5bdhf4h5f8h79h55h77h58fh56dh7bh6fh578hbch55dh68h56bhd9h65dh57ch658hc9h566h666h688h58h65dh684h5d7h6ch575h5d6h88q,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-74cnz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-675f4bcbfc-9cqp6_openstack(1779baa3-6433-4593-97d7-01545244f27e): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 31 04:03:42 crc kubenswrapper[4827]: E0131 04:03:42.546136 4827 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" 
pod="openstack/dnsmasq-dns-675f4bcbfc-9cqp6" podUID="1779baa3-6433-4593-97d7-01545244f27e" Jan 31 04:03:42 crc kubenswrapper[4827]: I0131 04:03:42.768903 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Jan 31 04:03:42 crc kubenswrapper[4827]: I0131 04:03:42.935010 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Jan 31 04:03:42 crc kubenswrapper[4827]: I0131 04:03:42.940185 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-galera-0"] Jan 31 04:03:42 crc kubenswrapper[4827]: I0131 04:03:42.946968 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/memcached-0"] Jan 31 04:03:42 crc kubenswrapper[4827]: I0131 04:03:42.957039 4827 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-sb-0"] Jan 31 04:03:42 crc kubenswrapper[4827]: I0131 04:03:42.958414 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-sb-0" Jan 31 04:03:42 crc kubenswrapper[4827]: I0131 04:03:42.960287 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncluster-ovndbcluster-sb-dockercfg-nw8d2" Jan 31 04:03:42 crc kubenswrapper[4827]: I0131 04:03:42.960527 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovndbcluster-sb-ovndbs" Jan 31 04:03:42 crc kubenswrapper[4827]: I0131 04:03:42.960586 4827 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-sb-scripts" Jan 31 04:03:42 crc kubenswrapper[4827]: I0131 04:03:42.960857 4827 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-sb-config" Jan 31 04:03:42 crc kubenswrapper[4827]: I0131 04:03:42.963671 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-0"] Jan 31 04:03:43 crc kubenswrapper[4827]: I0131 04:03:43.076111 4827 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1f6f952c-d09b-4584-b231-3fb87e5622fd-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"1f6f952c-d09b-4584-b231-3fb87e5622fd\") " pod="openstack/ovsdbserver-sb-0" Jan 31 04:03:43 crc kubenswrapper[4827]: I0131 04:03:43.076195 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/1f6f952c-d09b-4584-b231-3fb87e5622fd-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"1f6f952c-d09b-4584-b231-3fb87e5622fd\") " pod="openstack/ovsdbserver-sb-0" Jan 31 04:03:43 crc kubenswrapper[4827]: I0131 04:03:43.076230 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/1f6f952c-d09b-4584-b231-3fb87e5622fd-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"1f6f952c-d09b-4584-b231-3fb87e5622fd\") " pod="openstack/ovsdbserver-sb-0" Jan 31 04:03:43 crc kubenswrapper[4827]: I0131 04:03:43.076403 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"ovsdbserver-sb-0\" (UID: \"1f6f952c-d09b-4584-b231-3fb87e5622fd\") " pod="openstack/ovsdbserver-sb-0" Jan 31 04:03:43 crc kubenswrapper[4827]: I0131 04:03:43.076522 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vj8z9\" (UniqueName: \"kubernetes.io/projected/1f6f952c-d09b-4584-b231-3fb87e5622fd-kube-api-access-vj8z9\") pod \"ovsdbserver-sb-0\" (UID: \"1f6f952c-d09b-4584-b231-3fb87e5622fd\") " pod="openstack/ovsdbserver-sb-0" Jan 31 04:03:43 crc kubenswrapper[4827]: I0131 04:03:43.076591 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" 
(UniqueName: \"kubernetes.io/secret/1f6f952c-d09b-4584-b231-3fb87e5622fd-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"1f6f952c-d09b-4584-b231-3fb87e5622fd\") " pod="openstack/ovsdbserver-sb-0" Jan 31 04:03:43 crc kubenswrapper[4827]: I0131 04:03:43.076622 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/1f6f952c-d09b-4584-b231-3fb87e5622fd-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"1f6f952c-d09b-4584-b231-3fb87e5622fd\") " pod="openstack/ovsdbserver-sb-0" Jan 31 04:03:43 crc kubenswrapper[4827]: I0131 04:03:43.076669 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1f6f952c-d09b-4584-b231-3fb87e5622fd-config\") pod \"ovsdbserver-sb-0\" (UID: \"1f6f952c-d09b-4584-b231-3fb87e5622fd\") " pod="openstack/ovsdbserver-sb-0" Jan 31 04:03:43 crc kubenswrapper[4827]: I0131 04:03:43.167002 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-jrrb4"] Jan 31 04:03:43 crc kubenswrapper[4827]: I0131 04:03:43.179462 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1f6f952c-d09b-4584-b231-3fb87e5622fd-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"1f6f952c-d09b-4584-b231-3fb87e5622fd\") " pod="openstack/ovsdbserver-sb-0" Jan 31 04:03:43 crc kubenswrapper[4827]: I0131 04:03:43.179554 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/1f6f952c-d09b-4584-b231-3fb87e5622fd-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"1f6f952c-d09b-4584-b231-3fb87e5622fd\") " pod="openstack/ovsdbserver-sb-0" Jan 31 04:03:43 crc kubenswrapper[4827]: I0131 04:03:43.179590 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/1f6f952c-d09b-4584-b231-3fb87e5622fd-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"1f6f952c-d09b-4584-b231-3fb87e5622fd\") " pod="openstack/ovsdbserver-sb-0" Jan 31 04:03:43 crc kubenswrapper[4827]: I0131 04:03:43.179638 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"ovsdbserver-sb-0\" (UID: \"1f6f952c-d09b-4584-b231-3fb87e5622fd\") " pod="openstack/ovsdbserver-sb-0" Jan 31 04:03:43 crc kubenswrapper[4827]: I0131 04:03:43.179704 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vj8z9\" (UniqueName: \"kubernetes.io/projected/1f6f952c-d09b-4584-b231-3fb87e5622fd-kube-api-access-vj8z9\") pod \"ovsdbserver-sb-0\" (UID: \"1f6f952c-d09b-4584-b231-3fb87e5622fd\") " pod="openstack/ovsdbserver-sb-0" Jan 31 04:03:43 crc kubenswrapper[4827]: I0131 04:03:43.179761 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/1f6f952c-d09b-4584-b231-3fb87e5622fd-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"1f6f952c-d09b-4584-b231-3fb87e5622fd\") " pod="openstack/ovsdbserver-sb-0" Jan 31 04:03:43 crc kubenswrapper[4827]: I0131 04:03:43.179786 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/1f6f952c-d09b-4584-b231-3fb87e5622fd-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"1f6f952c-d09b-4584-b231-3fb87e5622fd\") " pod="openstack/ovsdbserver-sb-0" Jan 31 04:03:43 crc kubenswrapper[4827]: I0131 04:03:43.179814 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1f6f952c-d09b-4584-b231-3fb87e5622fd-config\") pod \"ovsdbserver-sb-0\" (UID: 
\"1f6f952c-d09b-4584-b231-3fb87e5622fd\") " pod="openstack/ovsdbserver-sb-0" Jan 31 04:03:43 crc kubenswrapper[4827]: I0131 04:03:43.181188 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/1f6f952c-d09b-4584-b231-3fb87e5622fd-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"1f6f952c-d09b-4584-b231-3fb87e5622fd\") " pod="openstack/ovsdbserver-sb-0" Jan 31 04:03:43 crc kubenswrapper[4827]: I0131 04:03:43.181405 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/1f6f952c-d09b-4584-b231-3fb87e5622fd-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"1f6f952c-d09b-4584-b231-3fb87e5622fd\") " pod="openstack/ovsdbserver-sb-0" Jan 31 04:03:43 crc kubenswrapper[4827]: I0131 04:03:43.182421 4827 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"ovsdbserver-sb-0\" (UID: \"1f6f952c-d09b-4584-b231-3fb87e5622fd\") device mount path \"/mnt/openstack/pv12\"" pod="openstack/ovsdbserver-sb-0" Jan 31 04:03:43 crc kubenswrapper[4827]: I0131 04:03:43.183782 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1f6f952c-d09b-4584-b231-3fb87e5622fd-config\") pod \"ovsdbserver-sb-0\" (UID: \"1f6f952c-d09b-4584-b231-3fb87e5622fd\") " pod="openstack/ovsdbserver-sb-0" Jan 31 04:03:43 crc kubenswrapper[4827]: I0131 04:03:43.187122 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-cell1-galera-0"] Jan 31 04:03:43 crc kubenswrapper[4827]: I0131 04:03:43.188497 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1f6f952c-d09b-4584-b231-3fb87e5622fd-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"1f6f952c-d09b-4584-b231-3fb87e5622fd\") " 
pod="openstack/ovsdbserver-sb-0" Jan 31 04:03:43 crc kubenswrapper[4827]: I0131 04:03:43.189001 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/1f6f952c-d09b-4584-b231-3fb87e5622fd-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"1f6f952c-d09b-4584-b231-3fb87e5622fd\") " pod="openstack/ovsdbserver-sb-0" Jan 31 04:03:43 crc kubenswrapper[4827]: I0131 04:03:43.189283 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/1f6f952c-d09b-4584-b231-3fb87e5622fd-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"1f6f952c-d09b-4584-b231-3fb87e5622fd\") " pod="openstack/ovsdbserver-sb-0" Jan 31 04:03:43 crc kubenswrapper[4827]: I0131 04:03:43.196875 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vj8z9\" (UniqueName: \"kubernetes.io/projected/1f6f952c-d09b-4584-b231-3fb87e5622fd-kube-api-access-vj8z9\") pod \"ovsdbserver-sb-0\" (UID: \"1f6f952c-d09b-4584-b231-3fb87e5622fd\") " pod="openstack/ovsdbserver-sb-0" Jan 31 04:03:43 crc kubenswrapper[4827]: I0131 04:03:43.208565 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"ovsdbserver-sb-0\" (UID: \"1f6f952c-d09b-4584-b231-3fb87e5622fd\") " pod="openstack/ovsdbserver-sb-0" Jan 31 04:03:43 crc kubenswrapper[4827]: I0131 04:03:43.224926 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/memcached-0" event={"ID":"220c4c53-ac13-4f85-88da-38fef6ce70b1","Type":"ContainerStarted","Data":"5f47a35a9ef683f9260c86a9acc5f9efea2d946a3c6be6bc8b872f68332df83b"} Jan 31 04:03:43 crc kubenswrapper[4827]: I0131 04:03:43.226069 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" 
event={"ID":"f66333b7-3406-4a69-85f5-0806b992a625","Type":"ContainerStarted","Data":"69da081d9e1e34c3960286217bec46df134d76e3c392baf65dce0fc842926173"} Jan 31 04:03:43 crc kubenswrapper[4827]: I0131 04:03:43.227571 4827 generic.go:334] "Generic (PLEG): container finished" podID="ac8573e8-0ab7-4f9b-a060-fcaf95d97e32" containerID="cd0c4916f00769c88e9beaa491a72bf77377db2726f3e3e6b910b4e9bbb86bb1" exitCode=0 Jan 31 04:03:43 crc kubenswrapper[4827]: I0131 04:03:43.227637 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-bl8wf" event={"ID":"ac8573e8-0ab7-4f9b-a060-fcaf95d97e32","Type":"ContainerDied","Data":"cd0c4916f00769c88e9beaa491a72bf77377db2726f3e3e6b910b4e9bbb86bb1"} Jan 31 04:03:43 crc kubenswrapper[4827]: I0131 04:03:43.228908 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-jrrb4" event={"ID":"b0a1bcac-47e2-4089-ae1e-98a2dc41d270","Type":"ContainerStarted","Data":"74ff16f190e3da86c28e3fc9e4cdf305c16e4ceec581ee1a2cb0663b8b312a04"} Jan 31 04:03:43 crc kubenswrapper[4827]: I0131 04:03:43.230168 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"a0d3d60f-16f2-469d-8314-9055bb91a9ce","Type":"ContainerStarted","Data":"58b903f9ce840743b4160c91e5f410d9cc0e2bb0127f6498f91ee1f62b88917a"} Jan 31 04:03:43 crc kubenswrapper[4827]: I0131 04:03:43.233012 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"237362ad-03ab-48a0-916d-1b140b4727d5","Type":"ContainerStarted","Data":"6f91a72be3f7fbaf26c7ea0f9d9478c31e909b7fb766cad9cc4b9943489f700e"} Jan 31 04:03:43 crc kubenswrapper[4827]: I0131 04:03:43.234341 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"46ae0722-07b9-42d3-9ca9-d6a07cd0aa12","Type":"ContainerStarted","Data":"d37ebb304da853edf5de57595a7c6030fe346d3e4c07e79f492ae147599fb1c1"} Jan 31 04:03:43 crc kubenswrapper[4827]: I0131 
04:03:43.236934 4827 generic.go:334] "Generic (PLEG): container finished" podID="67ad1b54-a5b6-4464-823e-4bb474694618" containerID="2aadd166260501c20848807e7d1845f75cb3781c7a3f74051607a132e4d8b1a1" exitCode=0 Jan 31 04:03:43 crc kubenswrapper[4827]: I0131 04:03:43.237511 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-666b6646f7-7g76r" event={"ID":"67ad1b54-a5b6-4464-823e-4bb474694618","Type":"ContainerDied","Data":"2aadd166260501c20848807e7d1845f75cb3781c7a3f74051607a132e4d8b1a1"} Jan 31 04:03:43 crc kubenswrapper[4827]: I0131 04:03:43.277477 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-sb-0" Jan 31 04:03:43 crc kubenswrapper[4827]: I0131 04:03:43.295098 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-ovs-vhmf9"] Jan 31 04:03:43 crc kubenswrapper[4827]: W0131 04:03:43.311611 4827 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod35c80c3a_29fe_4992_a421_f5ce7704ff53.slice/crio-1d11943f5b9efa436b7e3b7c2e6024fb6af1876c4d87654a1e68b717d1f9e494 WatchSource:0}: Error finding container 1d11943f5b9efa436b7e3b7c2e6024fb6af1876c4d87654a1e68b717d1f9e494: Status 404 returned error can't find the container with id 1d11943f5b9efa436b7e3b7c2e6024fb6af1876c4d87654a1e68b717d1f9e494 Jan 31 04:03:43 crc kubenswrapper[4827]: I0131 04:03:43.352625 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 31 04:03:43 crc kubenswrapper[4827]: I0131 04:03:43.447220 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-0"] Jan 31 04:03:43 crc kubenswrapper[4827]: W0131 04:03:43.471406 4827 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda5961815_808d_4f79_867c_763e2946d47f.slice/crio-84b526649c76e23d28a3a22ff208e69df623ccd9781f92a3b99029cc863fa3e1 WatchSource:0}: Error finding container 84b526649c76e23d28a3a22ff208e69df623ccd9781f92a3b99029cc863fa3e1: Status 404 returned error can't find the container with id 84b526649c76e23d28a3a22ff208e69df623ccd9781f92a3b99029cc863fa3e1 Jan 31 04:03:43 crc kubenswrapper[4827]: E0131 04:03:43.601439 4827 log.go:32] "CreateContainer in sandbox from runtime service failed" err=< Jan 31 04:03:43 crc kubenswrapper[4827]: rpc error: code = Unknown desc = container create failed: mount `/var/lib/kubelet/pods/67ad1b54-a5b6-4464-823e-4bb474694618/volume-subpaths/dns-svc/dnsmasq-dns/1` to `etc/dnsmasq.d/hosts/dns-svc`: No such file or directory Jan 31 04:03:43 crc kubenswrapper[4827]: > podSandboxID="9f2a79e75159412eb5ba48439ae44886aa1e7289f58f2f35ae0181bb8eb8b910" Jan 31 04:03:43 crc kubenswrapper[4827]: E0131 04:03:43.601912 4827 kuberuntime_manager.go:1274] "Unhandled Error" err=< Jan 31 04:03:43 crc kubenswrapper[4827]: container &Container{Name:dnsmasq-dns,Image:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv 
--log-queries],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n68chd6h679hbfh55fhc6h5ffh5d8h94h56ch589hb4hc5h57bh677hcdh655h8dh667h675h654h66ch567h8fh659h5b4h675h566h55bh54h67dh6dq,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-qmwpl,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:nil,TCPSocket:&TCPSocketAction{Port:{0 5353 },Host:,},GRPC:nil,},InitialDelaySeconds:3,TimeoutSeconds:5,PeriodSeconds:3,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:nil,TCPSocket:&TCPSocketAction{Port:{0 5353 
},Host:,},GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-666b6646f7-7g76r_openstack(67ad1b54-a5b6-4464-823e-4bb474694618): CreateContainerError: container create failed: mount `/var/lib/kubelet/pods/67ad1b54-a5b6-4464-823e-4bb474694618/volume-subpaths/dns-svc/dnsmasq-dns/1` to `etc/dnsmasq.d/hosts/dns-svc`: No such file or directory Jan 31 04:03:43 crc kubenswrapper[4827]: > logger="UnhandledError" Jan 31 04:03:43 crc kubenswrapper[4827]: E0131 04:03:43.604971 4827 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dnsmasq-dns\" with CreateContainerError: \"container create failed: mount `/var/lib/kubelet/pods/67ad1b54-a5b6-4464-823e-4bb474694618/volume-subpaths/dns-svc/dnsmasq-dns/1` to `etc/dnsmasq.d/hosts/dns-svc`: No such file or directory\\n\"" pod="openstack/dnsmasq-dns-666b6646f7-7g76r" podUID="67ad1b54-a5b6-4464-823e-4bb474694618" Jan 31 04:03:43 crc kubenswrapper[4827]: I0131 04:03:43.698193 4827 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-vzmg4" Jan 31 04:03:43 crc kubenswrapper[4827]: I0131 04:03:43.703332 4827 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-9cqp6" Jan 31 04:03:43 crc kubenswrapper[4827]: I0131 04:03:43.790241 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3bbbda85-9e7a-49e1-910d-448fb9b798ac-config\") pod \"3bbbda85-9e7a-49e1-910d-448fb9b798ac\" (UID: \"3bbbda85-9e7a-49e1-910d-448fb9b798ac\") " Jan 31 04:03:43 crc kubenswrapper[4827]: I0131 04:03:43.790408 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-74cnz\" (UniqueName: \"kubernetes.io/projected/1779baa3-6433-4593-97d7-01545244f27e-kube-api-access-74cnz\") pod \"1779baa3-6433-4593-97d7-01545244f27e\" (UID: \"1779baa3-6433-4593-97d7-01545244f27e\") " Jan 31 04:03:43 crc kubenswrapper[4827]: I0131 04:03:43.790454 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fbg57\" (UniqueName: \"kubernetes.io/projected/3bbbda85-9e7a-49e1-910d-448fb9b798ac-kube-api-access-fbg57\") pod \"3bbbda85-9e7a-49e1-910d-448fb9b798ac\" (UID: \"3bbbda85-9e7a-49e1-910d-448fb9b798ac\") " Jan 31 04:03:43 crc kubenswrapper[4827]: I0131 04:03:43.790490 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3bbbda85-9e7a-49e1-910d-448fb9b798ac-dns-svc\") pod \"3bbbda85-9e7a-49e1-910d-448fb9b798ac\" (UID: \"3bbbda85-9e7a-49e1-910d-448fb9b798ac\") " Jan 31 04:03:43 crc kubenswrapper[4827]: I0131 04:03:43.790535 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1779baa3-6433-4593-97d7-01545244f27e-config\") pod \"1779baa3-6433-4593-97d7-01545244f27e\" (UID: \"1779baa3-6433-4593-97d7-01545244f27e\") " Jan 31 04:03:43 crc kubenswrapper[4827]: I0131 04:03:43.790729 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/configmap/3bbbda85-9e7a-49e1-910d-448fb9b798ac-config" (OuterVolumeSpecName: "config") pod "3bbbda85-9e7a-49e1-910d-448fb9b798ac" (UID: "3bbbda85-9e7a-49e1-910d-448fb9b798ac"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 04:03:43 crc kubenswrapper[4827]: I0131 04:03:43.791121 4827 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3bbbda85-9e7a-49e1-910d-448fb9b798ac-config\") on node \"crc\" DevicePath \"\"" Jan 31 04:03:43 crc kubenswrapper[4827]: I0131 04:03:43.791597 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3bbbda85-9e7a-49e1-910d-448fb9b798ac-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "3bbbda85-9e7a-49e1-910d-448fb9b798ac" (UID: "3bbbda85-9e7a-49e1-910d-448fb9b798ac"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 04:03:43 crc kubenswrapper[4827]: I0131 04:03:43.792059 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1779baa3-6433-4593-97d7-01545244f27e-config" (OuterVolumeSpecName: "config") pod "1779baa3-6433-4593-97d7-01545244f27e" (UID: "1779baa3-6433-4593-97d7-01545244f27e"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 04:03:43 crc kubenswrapper[4827]: I0131 04:03:43.798857 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1779baa3-6433-4593-97d7-01545244f27e-kube-api-access-74cnz" (OuterVolumeSpecName: "kube-api-access-74cnz") pod "1779baa3-6433-4593-97d7-01545244f27e" (UID: "1779baa3-6433-4593-97d7-01545244f27e"). InnerVolumeSpecName "kube-api-access-74cnz". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 04:03:43 crc kubenswrapper[4827]: I0131 04:03:43.798927 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3bbbda85-9e7a-49e1-910d-448fb9b798ac-kube-api-access-fbg57" (OuterVolumeSpecName: "kube-api-access-fbg57") pod "3bbbda85-9e7a-49e1-910d-448fb9b798ac" (UID: "3bbbda85-9e7a-49e1-910d-448fb9b798ac"). InnerVolumeSpecName "kube-api-access-fbg57". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 04:03:43 crc kubenswrapper[4827]: I0131 04:03:43.848286 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-0"] Jan 31 04:03:43 crc kubenswrapper[4827]: I0131 04:03:43.893971 4827 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-74cnz\" (UniqueName: \"kubernetes.io/projected/1779baa3-6433-4593-97d7-01545244f27e-kube-api-access-74cnz\") on node \"crc\" DevicePath \"\"" Jan 31 04:03:43 crc kubenswrapper[4827]: I0131 04:03:43.894323 4827 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fbg57\" (UniqueName: \"kubernetes.io/projected/3bbbda85-9e7a-49e1-910d-448fb9b798ac-kube-api-access-fbg57\") on node \"crc\" DevicePath \"\"" Jan 31 04:03:43 crc kubenswrapper[4827]: I0131 04:03:43.894335 4827 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3bbbda85-9e7a-49e1-910d-448fb9b798ac-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 31 04:03:43 crc kubenswrapper[4827]: I0131 04:03:43.894345 4827 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1779baa3-6433-4593-97d7-01545244f27e-config\") on node \"crc\" DevicePath \"\"" Jan 31 04:03:44 crc kubenswrapper[4827]: W0131 04:03:44.048605 4827 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1f6f952c_d09b_4584_b231_3fb87e5622fd.slice/crio-f8f0508e38a227e0e2885f7efd2273a15516c3d1b5f9c0a7615178df2557b026 WatchSource:0}: Error finding container f8f0508e38a227e0e2885f7efd2273a15516c3d1b5f9c0a7615178df2557b026: Status 404 returned error can't find the container with id f8f0508e38a227e0e2885f7efd2273a15516c3d1b5f9c0a7615178df2557b026 Jan 31 04:03:44 crc kubenswrapper[4827]: I0131 04:03:44.280185 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"1f6f952c-d09b-4584-b231-3fb87e5622fd","Type":"ContainerStarted","Data":"f8f0508e38a227e0e2885f7efd2273a15516c3d1b5f9c0a7615178df2557b026"} Jan 31 04:03:44 crc kubenswrapper[4827]: I0131 04:03:44.301991 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"02f954c7-6442-4974-827a-aef4a5690e8c","Type":"ContainerStarted","Data":"a27ba579b7604f3893f963d970d878261c506965e45bbd75c68c7adfd36c31f0"} Jan 31 04:03:44 crc kubenswrapper[4827]: I0131 04:03:44.323850 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-vhmf9" event={"ID":"35c80c3a-29fe-4992-a421-f5ce7704ff53","Type":"ContainerStarted","Data":"1d11943f5b9efa436b7e3b7c2e6024fb6af1876c4d87654a1e68b717d1f9e494"} Jan 31 04:03:44 crc kubenswrapper[4827]: I0131 04:03:44.330127 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-78dd6ddcc-vzmg4" event={"ID":"3bbbda85-9e7a-49e1-910d-448fb9b798ac","Type":"ContainerDied","Data":"ecaeab5f3756f988285b0973d195a90720c01c76ae92d062cc15480a0b018a35"} Jan 31 04:03:44 crc kubenswrapper[4827]: I0131 04:03:44.330236 4827 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-vzmg4" Jan 31 04:03:44 crc kubenswrapper[4827]: I0131 04:03:44.356900 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-bl8wf" event={"ID":"ac8573e8-0ab7-4f9b-a060-fcaf95d97e32","Type":"ContainerStarted","Data":"7d65d74786a0c23075197d10796847ea919d63b81e8ca7643c07c2c947885822"} Jan 31 04:03:44 crc kubenswrapper[4827]: I0131 04:03:44.357596 4827 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-57d769cc4f-bl8wf" Jan 31 04:03:44 crc kubenswrapper[4827]: I0131 04:03:44.378095 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"a5961815-808d-4f79-867c-763e2946d47f","Type":"ContainerStarted","Data":"84b526649c76e23d28a3a22ff208e69df623ccd9781f92a3b99029cc863fa3e1"} Jan 31 04:03:44 crc kubenswrapper[4827]: I0131 04:03:44.426607 4827 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-9cqp6" Jan 31 04:03:44 crc kubenswrapper[4827]: I0131 04:03:44.427834 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-675f4bcbfc-9cqp6" event={"ID":"1779baa3-6433-4593-97d7-01545244f27e","Type":"ContainerDied","Data":"c7aa59253a640fdcc9e5db4ef13ae89a4b41e16777a24f1075812595a27abba4"} Jan 31 04:03:44 crc kubenswrapper[4827]: I0131 04:03:44.440952 4827 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-metrics-mkbw4"] Jan 31 04:03:44 crc kubenswrapper[4827]: I0131 04:03:44.443009 4827 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-metrics-mkbw4" Jan 31 04:03:44 crc kubenswrapper[4827]: I0131 04:03:44.446094 4827 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-metrics-config" Jan 31 04:03:44 crc kubenswrapper[4827]: I0131 04:03:44.500745 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-metrics-mkbw4"] Jan 31 04:03:44 crc kubenswrapper[4827]: I0131 04:03:44.549975 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/b818dd8b-a3fb-46fa-a8b2-784fb2d3169d-ovs-rundir\") pod \"ovn-controller-metrics-mkbw4\" (UID: \"b818dd8b-a3fb-46fa-a8b2-784fb2d3169d\") " pod="openstack/ovn-controller-metrics-mkbw4" Jan 31 04:03:44 crc kubenswrapper[4827]: I0131 04:03:44.550127 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/b818dd8b-a3fb-46fa-a8b2-784fb2d3169d-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-mkbw4\" (UID: \"b818dd8b-a3fb-46fa-a8b2-784fb2d3169d\") " pod="openstack/ovn-controller-metrics-mkbw4" Jan 31 04:03:44 crc kubenswrapper[4827]: I0131 04:03:44.550163 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/b818dd8b-a3fb-46fa-a8b2-784fb2d3169d-ovn-rundir\") pod \"ovn-controller-metrics-mkbw4\" (UID: \"b818dd8b-a3fb-46fa-a8b2-784fb2d3169d\") " pod="openstack/ovn-controller-metrics-mkbw4" Jan 31 04:03:44 crc kubenswrapper[4827]: I0131 04:03:44.550180 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b818dd8b-a3fb-46fa-a8b2-784fb2d3169d-config\") pod \"ovn-controller-metrics-mkbw4\" (UID: \"b818dd8b-a3fb-46fa-a8b2-784fb2d3169d\") " 
pod="openstack/ovn-controller-metrics-mkbw4" Jan 31 04:03:44 crc kubenswrapper[4827]: I0131 04:03:44.550196 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b818dd8b-a3fb-46fa-a8b2-784fb2d3169d-combined-ca-bundle\") pod \"ovn-controller-metrics-mkbw4\" (UID: \"b818dd8b-a3fb-46fa-a8b2-784fb2d3169d\") " pod="openstack/ovn-controller-metrics-mkbw4" Jan 31 04:03:44 crc kubenswrapper[4827]: I0131 04:03:44.550222 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fr2hz\" (UniqueName: \"kubernetes.io/projected/b818dd8b-a3fb-46fa-a8b2-784fb2d3169d-kube-api-access-fr2hz\") pod \"ovn-controller-metrics-mkbw4\" (UID: \"b818dd8b-a3fb-46fa-a8b2-784fb2d3169d\") " pod="openstack/ovn-controller-metrics-mkbw4" Jan 31 04:03:44 crc kubenswrapper[4827]: I0131 04:03:44.554960 4827 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-vzmg4"] Jan 31 04:03:44 crc kubenswrapper[4827]: I0131 04:03:44.566331 4827 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-vzmg4"] Jan 31 04:03:44 crc kubenswrapper[4827]: I0131 04:03:44.595040 4827 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-9cqp6"] Jan 31 04:03:44 crc kubenswrapper[4827]: I0131 04:03:44.608356 4827 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-9cqp6"] Jan 31 04:03:44 crc kubenswrapper[4827]: I0131 04:03:44.652276 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/b818dd8b-a3fb-46fa-a8b2-784fb2d3169d-ovs-rundir\") pod \"ovn-controller-metrics-mkbw4\" (UID: \"b818dd8b-a3fb-46fa-a8b2-784fb2d3169d\") " pod="openstack/ovn-controller-metrics-mkbw4" Jan 31 04:03:44 crc kubenswrapper[4827]: I0131 04:03:44.652373 4827 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/b818dd8b-a3fb-46fa-a8b2-784fb2d3169d-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-mkbw4\" (UID: \"b818dd8b-a3fb-46fa-a8b2-784fb2d3169d\") " pod="openstack/ovn-controller-metrics-mkbw4" Jan 31 04:03:44 crc kubenswrapper[4827]: I0131 04:03:44.652399 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/b818dd8b-a3fb-46fa-a8b2-784fb2d3169d-ovn-rundir\") pod \"ovn-controller-metrics-mkbw4\" (UID: \"b818dd8b-a3fb-46fa-a8b2-784fb2d3169d\") " pod="openstack/ovn-controller-metrics-mkbw4" Jan 31 04:03:44 crc kubenswrapper[4827]: I0131 04:03:44.652418 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b818dd8b-a3fb-46fa-a8b2-784fb2d3169d-config\") pod \"ovn-controller-metrics-mkbw4\" (UID: \"b818dd8b-a3fb-46fa-a8b2-784fb2d3169d\") " pod="openstack/ovn-controller-metrics-mkbw4" Jan 31 04:03:44 crc kubenswrapper[4827]: I0131 04:03:44.652433 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b818dd8b-a3fb-46fa-a8b2-784fb2d3169d-combined-ca-bundle\") pod \"ovn-controller-metrics-mkbw4\" (UID: \"b818dd8b-a3fb-46fa-a8b2-784fb2d3169d\") " pod="openstack/ovn-controller-metrics-mkbw4" Jan 31 04:03:44 crc kubenswrapper[4827]: I0131 04:03:44.652452 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fr2hz\" (UniqueName: \"kubernetes.io/projected/b818dd8b-a3fb-46fa-a8b2-784fb2d3169d-kube-api-access-fr2hz\") pod \"ovn-controller-metrics-mkbw4\" (UID: \"b818dd8b-a3fb-46fa-a8b2-784fb2d3169d\") " pod="openstack/ovn-controller-metrics-mkbw4" Jan 31 04:03:44 crc kubenswrapper[4827]: I0131 04:03:44.652637 4827 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/b818dd8b-a3fb-46fa-a8b2-784fb2d3169d-ovs-rundir\") pod \"ovn-controller-metrics-mkbw4\" (UID: \"b818dd8b-a3fb-46fa-a8b2-784fb2d3169d\") " pod="openstack/ovn-controller-metrics-mkbw4" Jan 31 04:03:44 crc kubenswrapper[4827]: I0131 04:03:44.652905 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/b818dd8b-a3fb-46fa-a8b2-784fb2d3169d-ovn-rundir\") pod \"ovn-controller-metrics-mkbw4\" (UID: \"b818dd8b-a3fb-46fa-a8b2-784fb2d3169d\") " pod="openstack/ovn-controller-metrics-mkbw4" Jan 31 04:03:44 crc kubenswrapper[4827]: I0131 04:03:44.654135 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b818dd8b-a3fb-46fa-a8b2-784fb2d3169d-config\") pod \"ovn-controller-metrics-mkbw4\" (UID: \"b818dd8b-a3fb-46fa-a8b2-784fb2d3169d\") " pod="openstack/ovn-controller-metrics-mkbw4" Jan 31 04:03:44 crc kubenswrapper[4827]: I0131 04:03:44.662129 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/b818dd8b-a3fb-46fa-a8b2-784fb2d3169d-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-mkbw4\" (UID: \"b818dd8b-a3fb-46fa-a8b2-784fb2d3169d\") " pod="openstack/ovn-controller-metrics-mkbw4" Jan 31 04:03:44 crc kubenswrapper[4827]: I0131 04:03:44.662434 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b818dd8b-a3fb-46fa-a8b2-784fb2d3169d-combined-ca-bundle\") pod \"ovn-controller-metrics-mkbw4\" (UID: \"b818dd8b-a3fb-46fa-a8b2-784fb2d3169d\") " pod="openstack/ovn-controller-metrics-mkbw4" Jan 31 04:03:44 crc kubenswrapper[4827]: I0131 04:03:44.669026 4827 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-bl8wf"] Jan 31 04:03:44 crc 
kubenswrapper[4827]: I0131 04:03:44.669307 4827 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-57d769cc4f-bl8wf" podStartSLOduration=3.64875901 podStartE2EDuration="15.669291007s" podCreationTimestamp="2026-01-31 04:03:29 +0000 UTC" firstStartedPulling="2026-01-31 04:03:30.472615139 +0000 UTC m=+1003.159695598" lastFinishedPulling="2026-01-31 04:03:42.493147146 +0000 UTC m=+1015.180227595" observedRunningTime="2026-01-31 04:03:44.615805387 +0000 UTC m=+1017.302885836" watchObservedRunningTime="2026-01-31 04:03:44.669291007 +0000 UTC m=+1017.356371456" Jan 31 04:03:44 crc kubenswrapper[4827]: I0131 04:03:44.682137 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fr2hz\" (UniqueName: \"kubernetes.io/projected/b818dd8b-a3fb-46fa-a8b2-784fb2d3169d-kube-api-access-fr2hz\") pod \"ovn-controller-metrics-mkbw4\" (UID: \"b818dd8b-a3fb-46fa-a8b2-784fb2d3169d\") " pod="openstack/ovn-controller-metrics-mkbw4" Jan 31 04:03:44 crc kubenswrapper[4827]: I0131 04:03:44.685721 4827 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-7f896c8c65-d9tcc"] Jan 31 04:03:44 crc kubenswrapper[4827]: I0131 04:03:44.689098 4827 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7f896c8c65-d9tcc" Jan 31 04:03:44 crc kubenswrapper[4827]: I0131 04:03:44.692560 4827 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovsdbserver-sb" Jan 31 04:03:44 crc kubenswrapper[4827]: I0131 04:03:44.702453 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7f896c8c65-d9tcc"] Jan 31 04:03:44 crc kubenswrapper[4827]: I0131 04:03:44.743623 4827 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-7g76r"] Jan 31 04:03:44 crc kubenswrapper[4827]: I0131 04:03:44.767932 4827 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-86db49b7ff-wzx4f"] Jan 31 04:03:44 crc kubenswrapper[4827]: I0131 04:03:44.774791 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-metrics-mkbw4" Jan 31 04:03:44 crc kubenswrapper[4827]: I0131 04:03:44.775255 4827 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-86db49b7ff-wzx4f" Jan 31 04:03:44 crc kubenswrapper[4827]: I0131 04:03:44.778465 4827 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovsdbserver-nb" Jan 31 04:03:44 crc kubenswrapper[4827]: I0131 04:03:44.785358 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-86db49b7ff-wzx4f"] Jan 31 04:03:44 crc kubenswrapper[4827]: I0131 04:03:44.861432 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2240f9c7-2ffe-44cb-9a4b-e350bdd3bfa7-config\") pod \"dnsmasq-dns-7f896c8c65-d9tcc\" (UID: \"2240f9c7-2ffe-44cb-9a4b-e350bdd3bfa7\") " pod="openstack/dnsmasq-dns-7f896c8c65-d9tcc" Jan 31 04:03:44 crc kubenswrapper[4827]: I0131 04:03:44.861472 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2240f9c7-2ffe-44cb-9a4b-e350bdd3bfa7-dns-svc\") pod \"dnsmasq-dns-7f896c8c65-d9tcc\" (UID: \"2240f9c7-2ffe-44cb-9a4b-e350bdd3bfa7\") " pod="openstack/dnsmasq-dns-7f896c8c65-d9tcc" Jan 31 04:03:44 crc kubenswrapper[4827]: I0131 04:03:44.861799 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hxw8h\" (UniqueName: \"kubernetes.io/projected/2240f9c7-2ffe-44cb-9a4b-e350bdd3bfa7-kube-api-access-hxw8h\") pod \"dnsmasq-dns-7f896c8c65-d9tcc\" (UID: \"2240f9c7-2ffe-44cb-9a4b-e350bdd3bfa7\") " pod="openstack/dnsmasq-dns-7f896c8c65-d9tcc" Jan 31 04:03:44 crc kubenswrapper[4827]: I0131 04:03:44.861945 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/2240f9c7-2ffe-44cb-9a4b-e350bdd3bfa7-ovsdbserver-sb\") pod \"dnsmasq-dns-7f896c8c65-d9tcc\" (UID: \"2240f9c7-2ffe-44cb-9a4b-e350bdd3bfa7\") " 
pod="openstack/dnsmasq-dns-7f896c8c65-d9tcc" Jan 31 04:03:44 crc kubenswrapper[4827]: I0131 04:03:44.963741 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7ff119c1-4543-4413-94df-a2cf5ca523d5-config\") pod \"dnsmasq-dns-86db49b7ff-wzx4f\" (UID: \"7ff119c1-4543-4413-94df-a2cf5ca523d5\") " pod="openstack/dnsmasq-dns-86db49b7ff-wzx4f" Jan 31 04:03:44 crc kubenswrapper[4827]: I0131 04:03:44.963792 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/7ff119c1-4543-4413-94df-a2cf5ca523d5-ovsdbserver-nb\") pod \"dnsmasq-dns-86db49b7ff-wzx4f\" (UID: \"7ff119c1-4543-4413-94df-a2cf5ca523d5\") " pod="openstack/dnsmasq-dns-86db49b7ff-wzx4f" Jan 31 04:03:44 crc kubenswrapper[4827]: I0131 04:03:44.964005 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/7ff119c1-4543-4413-94df-a2cf5ca523d5-dns-svc\") pod \"dnsmasq-dns-86db49b7ff-wzx4f\" (UID: \"7ff119c1-4543-4413-94df-a2cf5ca523d5\") " pod="openstack/dnsmasq-dns-86db49b7ff-wzx4f" Jan 31 04:03:44 crc kubenswrapper[4827]: I0131 04:03:44.964080 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2240f9c7-2ffe-44cb-9a4b-e350bdd3bfa7-config\") pod \"dnsmasq-dns-7f896c8c65-d9tcc\" (UID: \"2240f9c7-2ffe-44cb-9a4b-e350bdd3bfa7\") " pod="openstack/dnsmasq-dns-7f896c8c65-d9tcc" Jan 31 04:03:44 crc kubenswrapper[4827]: I0131 04:03:44.964113 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2240f9c7-2ffe-44cb-9a4b-e350bdd3bfa7-dns-svc\") pod \"dnsmasq-dns-7f896c8c65-d9tcc\" (UID: \"2240f9c7-2ffe-44cb-9a4b-e350bdd3bfa7\") " pod="openstack/dnsmasq-dns-7f896c8c65-d9tcc" Jan 31 
04:03:44 crc kubenswrapper[4827]: I0131 04:03:44.964171 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/7ff119c1-4543-4413-94df-a2cf5ca523d5-ovsdbserver-sb\") pod \"dnsmasq-dns-86db49b7ff-wzx4f\" (UID: \"7ff119c1-4543-4413-94df-a2cf5ca523d5\") " pod="openstack/dnsmasq-dns-86db49b7ff-wzx4f" Jan 31 04:03:44 crc kubenswrapper[4827]: I0131 04:03:44.964308 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hxw8h\" (UniqueName: \"kubernetes.io/projected/2240f9c7-2ffe-44cb-9a4b-e350bdd3bfa7-kube-api-access-hxw8h\") pod \"dnsmasq-dns-7f896c8c65-d9tcc\" (UID: \"2240f9c7-2ffe-44cb-9a4b-e350bdd3bfa7\") " pod="openstack/dnsmasq-dns-7f896c8c65-d9tcc" Jan 31 04:03:44 crc kubenswrapper[4827]: I0131 04:03:44.964356 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/2240f9c7-2ffe-44cb-9a4b-e350bdd3bfa7-ovsdbserver-sb\") pod \"dnsmasq-dns-7f896c8c65-d9tcc\" (UID: \"2240f9c7-2ffe-44cb-9a4b-e350bdd3bfa7\") " pod="openstack/dnsmasq-dns-7f896c8c65-d9tcc" Jan 31 04:03:44 crc kubenswrapper[4827]: I0131 04:03:44.964444 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rqpz5\" (UniqueName: \"kubernetes.io/projected/7ff119c1-4543-4413-94df-a2cf5ca523d5-kube-api-access-rqpz5\") pod \"dnsmasq-dns-86db49b7ff-wzx4f\" (UID: \"7ff119c1-4543-4413-94df-a2cf5ca523d5\") " pod="openstack/dnsmasq-dns-86db49b7ff-wzx4f" Jan 31 04:03:44 crc kubenswrapper[4827]: I0131 04:03:44.964942 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2240f9c7-2ffe-44cb-9a4b-e350bdd3bfa7-config\") pod \"dnsmasq-dns-7f896c8c65-d9tcc\" (UID: \"2240f9c7-2ffe-44cb-9a4b-e350bdd3bfa7\") " pod="openstack/dnsmasq-dns-7f896c8c65-d9tcc" Jan 31 04:03:44 crc 
kubenswrapper[4827]: I0131 04:03:44.965515 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2240f9c7-2ffe-44cb-9a4b-e350bdd3bfa7-dns-svc\") pod \"dnsmasq-dns-7f896c8c65-d9tcc\" (UID: \"2240f9c7-2ffe-44cb-9a4b-e350bdd3bfa7\") " pod="openstack/dnsmasq-dns-7f896c8c65-d9tcc" Jan 31 04:03:44 crc kubenswrapper[4827]: I0131 04:03:44.965759 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/2240f9c7-2ffe-44cb-9a4b-e350bdd3bfa7-ovsdbserver-sb\") pod \"dnsmasq-dns-7f896c8c65-d9tcc\" (UID: \"2240f9c7-2ffe-44cb-9a4b-e350bdd3bfa7\") " pod="openstack/dnsmasq-dns-7f896c8c65-d9tcc" Jan 31 04:03:44 crc kubenswrapper[4827]: I0131 04:03:44.985034 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hxw8h\" (UniqueName: \"kubernetes.io/projected/2240f9c7-2ffe-44cb-9a4b-e350bdd3bfa7-kube-api-access-hxw8h\") pod \"dnsmasq-dns-7f896c8c65-d9tcc\" (UID: \"2240f9c7-2ffe-44cb-9a4b-e350bdd3bfa7\") " pod="openstack/dnsmasq-dns-7f896c8c65-d9tcc" Jan 31 04:03:45 crc kubenswrapper[4827]: I0131 04:03:45.061154 4827 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7f896c8c65-d9tcc" Jan 31 04:03:45 crc kubenswrapper[4827]: I0131 04:03:45.066160 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rqpz5\" (UniqueName: \"kubernetes.io/projected/7ff119c1-4543-4413-94df-a2cf5ca523d5-kube-api-access-rqpz5\") pod \"dnsmasq-dns-86db49b7ff-wzx4f\" (UID: \"7ff119c1-4543-4413-94df-a2cf5ca523d5\") " pod="openstack/dnsmasq-dns-86db49b7ff-wzx4f" Jan 31 04:03:45 crc kubenswrapper[4827]: I0131 04:03:45.066291 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7ff119c1-4543-4413-94df-a2cf5ca523d5-config\") pod \"dnsmasq-dns-86db49b7ff-wzx4f\" (UID: \"7ff119c1-4543-4413-94df-a2cf5ca523d5\") " pod="openstack/dnsmasq-dns-86db49b7ff-wzx4f" Jan 31 04:03:45 crc kubenswrapper[4827]: I0131 04:03:45.066318 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/7ff119c1-4543-4413-94df-a2cf5ca523d5-ovsdbserver-nb\") pod \"dnsmasq-dns-86db49b7ff-wzx4f\" (UID: \"7ff119c1-4543-4413-94df-a2cf5ca523d5\") " pod="openstack/dnsmasq-dns-86db49b7ff-wzx4f" Jan 31 04:03:45 crc kubenswrapper[4827]: I0131 04:03:45.066428 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/7ff119c1-4543-4413-94df-a2cf5ca523d5-dns-svc\") pod \"dnsmasq-dns-86db49b7ff-wzx4f\" (UID: \"7ff119c1-4543-4413-94df-a2cf5ca523d5\") " pod="openstack/dnsmasq-dns-86db49b7ff-wzx4f" Jan 31 04:03:45 crc kubenswrapper[4827]: I0131 04:03:45.066473 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/7ff119c1-4543-4413-94df-a2cf5ca523d5-ovsdbserver-sb\") pod \"dnsmasq-dns-86db49b7ff-wzx4f\" (UID: \"7ff119c1-4543-4413-94df-a2cf5ca523d5\") " pod="openstack/dnsmasq-dns-86db49b7ff-wzx4f" 
Jan 31 04:03:45 crc kubenswrapper[4827]: I0131 04:03:45.067189 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/7ff119c1-4543-4413-94df-a2cf5ca523d5-ovsdbserver-nb\") pod \"dnsmasq-dns-86db49b7ff-wzx4f\" (UID: \"7ff119c1-4543-4413-94df-a2cf5ca523d5\") " pod="openstack/dnsmasq-dns-86db49b7ff-wzx4f" Jan 31 04:03:45 crc kubenswrapper[4827]: I0131 04:03:45.067735 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/7ff119c1-4543-4413-94df-a2cf5ca523d5-ovsdbserver-sb\") pod \"dnsmasq-dns-86db49b7ff-wzx4f\" (UID: \"7ff119c1-4543-4413-94df-a2cf5ca523d5\") " pod="openstack/dnsmasq-dns-86db49b7ff-wzx4f" Jan 31 04:03:45 crc kubenswrapper[4827]: I0131 04:03:45.068434 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/7ff119c1-4543-4413-94df-a2cf5ca523d5-dns-svc\") pod \"dnsmasq-dns-86db49b7ff-wzx4f\" (UID: \"7ff119c1-4543-4413-94df-a2cf5ca523d5\") " pod="openstack/dnsmasq-dns-86db49b7ff-wzx4f" Jan 31 04:03:45 crc kubenswrapper[4827]: I0131 04:03:45.068836 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7ff119c1-4543-4413-94df-a2cf5ca523d5-config\") pod \"dnsmasq-dns-86db49b7ff-wzx4f\" (UID: \"7ff119c1-4543-4413-94df-a2cf5ca523d5\") " pod="openstack/dnsmasq-dns-86db49b7ff-wzx4f" Jan 31 04:03:45 crc kubenswrapper[4827]: I0131 04:03:45.083637 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rqpz5\" (UniqueName: \"kubernetes.io/projected/7ff119c1-4543-4413-94df-a2cf5ca523d5-kube-api-access-rqpz5\") pod \"dnsmasq-dns-86db49b7ff-wzx4f\" (UID: \"7ff119c1-4543-4413-94df-a2cf5ca523d5\") " pod="openstack/dnsmasq-dns-86db49b7ff-wzx4f" Jan 31 04:03:45 crc kubenswrapper[4827]: I0131 04:03:45.088420 4827 util.go:30] "No sandbox for pod can be 
found. Need to start a new one" pod="openstack/dnsmasq-dns-86db49b7ff-wzx4f" Jan 31 04:03:46 crc kubenswrapper[4827]: I0131 04:03:46.120217 4827 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1779baa3-6433-4593-97d7-01545244f27e" path="/var/lib/kubelet/pods/1779baa3-6433-4593-97d7-01545244f27e/volumes" Jan 31 04:03:46 crc kubenswrapper[4827]: I0131 04:03:46.121070 4827 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3bbbda85-9e7a-49e1-910d-448fb9b798ac" path="/var/lib/kubelet/pods/3bbbda85-9e7a-49e1-910d-448fb9b798ac/volumes" Jan 31 04:03:46 crc kubenswrapper[4827]: I0131 04:03:46.435729 4827 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-57d769cc4f-bl8wf" podUID="ac8573e8-0ab7-4f9b-a060-fcaf95d97e32" containerName="dnsmasq-dns" containerID="cri-o://7d65d74786a0c23075197d10796847ea919d63b81e8ca7643c07c2c947885822" gracePeriod=10 Jan 31 04:03:47 crc kubenswrapper[4827]: I0131 04:03:47.444148 4827 generic.go:334] "Generic (PLEG): container finished" podID="ac8573e8-0ab7-4f9b-a060-fcaf95d97e32" containerID="7d65d74786a0c23075197d10796847ea919d63b81e8ca7643c07c2c947885822" exitCode=0 Jan 31 04:03:47 crc kubenswrapper[4827]: I0131 04:03:47.444197 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-bl8wf" event={"ID":"ac8573e8-0ab7-4f9b-a060-fcaf95d97e32","Type":"ContainerDied","Data":"7d65d74786a0c23075197d10796847ea919d63b81e8ca7643c07c2c947885822"} Jan 31 04:03:50 crc kubenswrapper[4827]: I0131 04:03:50.839834 4827 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-bl8wf" Jan 31 04:03:50 crc kubenswrapper[4827]: I0131 04:03:50.899641 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ac8573e8-0ab7-4f9b-a060-fcaf95d97e32-dns-svc\") pod \"ac8573e8-0ab7-4f9b-a060-fcaf95d97e32\" (UID: \"ac8573e8-0ab7-4f9b-a060-fcaf95d97e32\") " Jan 31 04:03:50 crc kubenswrapper[4827]: I0131 04:03:50.899866 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-h2rzd\" (UniqueName: \"kubernetes.io/projected/ac8573e8-0ab7-4f9b-a060-fcaf95d97e32-kube-api-access-h2rzd\") pod \"ac8573e8-0ab7-4f9b-a060-fcaf95d97e32\" (UID: \"ac8573e8-0ab7-4f9b-a060-fcaf95d97e32\") " Jan 31 04:03:50 crc kubenswrapper[4827]: I0131 04:03:50.900117 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ac8573e8-0ab7-4f9b-a060-fcaf95d97e32-config\") pod \"ac8573e8-0ab7-4f9b-a060-fcaf95d97e32\" (UID: \"ac8573e8-0ab7-4f9b-a060-fcaf95d97e32\") " Jan 31 04:03:50 crc kubenswrapper[4827]: I0131 04:03:50.917076 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ac8573e8-0ab7-4f9b-a060-fcaf95d97e32-kube-api-access-h2rzd" (OuterVolumeSpecName: "kube-api-access-h2rzd") pod "ac8573e8-0ab7-4f9b-a060-fcaf95d97e32" (UID: "ac8573e8-0ab7-4f9b-a060-fcaf95d97e32"). InnerVolumeSpecName "kube-api-access-h2rzd". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 04:03:50 crc kubenswrapper[4827]: I0131 04:03:50.941439 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ac8573e8-0ab7-4f9b-a060-fcaf95d97e32-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "ac8573e8-0ab7-4f9b-a060-fcaf95d97e32" (UID: "ac8573e8-0ab7-4f9b-a060-fcaf95d97e32"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 04:03:50 crc kubenswrapper[4827]: I0131 04:03:50.945676 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ac8573e8-0ab7-4f9b-a060-fcaf95d97e32-config" (OuterVolumeSpecName: "config") pod "ac8573e8-0ab7-4f9b-a060-fcaf95d97e32" (UID: "ac8573e8-0ab7-4f9b-a060-fcaf95d97e32"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 04:03:51 crc kubenswrapper[4827]: I0131 04:03:51.001847 4827 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-h2rzd\" (UniqueName: \"kubernetes.io/projected/ac8573e8-0ab7-4f9b-a060-fcaf95d97e32-kube-api-access-h2rzd\") on node \"crc\" DevicePath \"\"" Jan 31 04:03:51 crc kubenswrapper[4827]: I0131 04:03:51.001898 4827 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ac8573e8-0ab7-4f9b-a060-fcaf95d97e32-config\") on node \"crc\" DevicePath \"\"" Jan 31 04:03:51 crc kubenswrapper[4827]: I0131 04:03:51.001910 4827 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ac8573e8-0ab7-4f9b-a060-fcaf95d97e32-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 31 04:03:51 crc kubenswrapper[4827]: I0131 04:03:51.457497 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-metrics-mkbw4"] Jan 31 04:03:51 crc kubenswrapper[4827]: I0131 04:03:51.478095 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-bl8wf" event={"ID":"ac8573e8-0ab7-4f9b-a060-fcaf95d97e32","Type":"ContainerDied","Data":"a0c3580a2758bbc2e80d7b402ef4e1f80a293dbc02c7f0a85d5517e265de1e81"} Jan 31 04:03:51 crc kubenswrapper[4827]: I0131 04:03:51.478156 4827 scope.go:117] "RemoveContainer" containerID="7d65d74786a0c23075197d10796847ea919d63b81e8ca7643c07c2c947885822" Jan 31 04:03:51 crc kubenswrapper[4827]: I0131 04:03:51.478298 4827 util.go:48] 
"No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-bl8wf" Jan 31 04:03:51 crc kubenswrapper[4827]: I0131 04:03:51.531545 4827 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-bl8wf"] Jan 31 04:03:51 crc kubenswrapper[4827]: I0131 04:03:51.536747 4827 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-bl8wf"] Jan 31 04:03:51 crc kubenswrapper[4827]: I0131 04:03:51.594705 4827 scope.go:117] "RemoveContainer" containerID="cd0c4916f00769c88e9beaa491a72bf77377db2726f3e3e6b910b4e9bbb86bb1" Jan 31 04:03:51 crc kubenswrapper[4827]: I0131 04:03:51.853956 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-86db49b7ff-wzx4f"] Jan 31 04:03:51 crc kubenswrapper[4827]: I0131 04:03:51.895685 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7f896c8c65-d9tcc"] Jan 31 04:03:51 crc kubenswrapper[4827]: W0131 04:03:51.955145 4827 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2240f9c7_2ffe_44cb_9a4b_e350bdd3bfa7.slice/crio-fdc4f1bd7cab304ed17ae5ee9c4c44df655ad421d6647b96bbbfa231baf9c207 WatchSource:0}: Error finding container fdc4f1bd7cab304ed17ae5ee9c4c44df655ad421d6647b96bbbfa231baf9c207: Status 404 returned error can't find the container with id fdc4f1bd7cab304ed17ae5ee9c4c44df655ad421d6647b96bbbfa231baf9c207 Jan 31 04:03:52 crc kubenswrapper[4827]: I0131 04:03:52.116850 4827 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ac8573e8-0ab7-4f9b-a060-fcaf95d97e32" path="/var/lib/kubelet/pods/ac8573e8-0ab7-4f9b-a060-fcaf95d97e32/volumes" Jan 31 04:03:52 crc kubenswrapper[4827]: I0131 04:03:52.487788 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-86db49b7ff-wzx4f" 
event={"ID":"7ff119c1-4543-4413-94df-a2cf5ca523d5","Type":"ContainerStarted","Data":"df737f0210576a98463d31ea497083dfc0c94a4489288b6a61467b9b29d0ee5d"} Jan 31 04:03:52 crc kubenswrapper[4827]: I0131 04:03:52.489359 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-metrics-mkbw4" event={"ID":"b818dd8b-a3fb-46fa-a8b2-784fb2d3169d","Type":"ContainerStarted","Data":"b975f924cc2d9b55ca7e4bccd6ca1301f05040f47eea53ce8b8ad0d2214a5148"} Jan 31 04:03:52 crc kubenswrapper[4827]: I0131 04:03:52.491661 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-666b6646f7-7g76r" event={"ID":"67ad1b54-a5b6-4464-823e-4bb474694618","Type":"ContainerStarted","Data":"f2dd71b141b337760b385c6cc19e49c9525442c5340fd1446335abbb0d104557"} Jan 31 04:03:52 crc kubenswrapper[4827]: I0131 04:03:52.491788 4827 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-666b6646f7-7g76r" podUID="67ad1b54-a5b6-4464-823e-4bb474694618" containerName="dnsmasq-dns" containerID="cri-o://f2dd71b141b337760b385c6cc19e49c9525442c5340fd1446335abbb0d104557" gracePeriod=10 Jan 31 04:03:52 crc kubenswrapper[4827]: I0131 04:03:52.492067 4827 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-666b6646f7-7g76r" Jan 31 04:03:52 crc kubenswrapper[4827]: I0131 04:03:52.493507 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7f896c8c65-d9tcc" event={"ID":"2240f9c7-2ffe-44cb-9a4b-e350bdd3bfa7","Type":"ContainerStarted","Data":"fdc4f1bd7cab304ed17ae5ee9c4c44df655ad421d6647b96bbbfa231baf9c207"} Jan 31 04:03:52 crc kubenswrapper[4827]: I0131 04:03:52.496318 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-vhmf9" event={"ID":"35c80c3a-29fe-4992-a421-f5ce7704ff53","Type":"ContainerStarted","Data":"1c52ba734c14628413014582994a29a86b31bdb375191b2bcb9f6c13d5a1eb8a"} Jan 31 04:03:52 crc kubenswrapper[4827]: I0131 
04:03:52.517914 4827 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-666b6646f7-7g76r" podStartSLOduration=11.367945035 podStartE2EDuration="23.517894699s" podCreationTimestamp="2026-01-31 04:03:29 +0000 UTC" firstStartedPulling="2026-01-31 04:03:30.336980351 +0000 UTC m=+1003.024060800" lastFinishedPulling="2026-01-31 04:03:42.486930015 +0000 UTC m=+1015.174010464" observedRunningTime="2026-01-31 04:03:52.513093792 +0000 UTC m=+1025.200174241" watchObservedRunningTime="2026-01-31 04:03:52.517894699 +0000 UTC m=+1025.204975148" Jan 31 04:03:53 crc kubenswrapper[4827]: I0131 04:03:53.266083 4827 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-666b6646f7-7g76r" Jan 31 04:03:53 crc kubenswrapper[4827]: I0131 04:03:53.459711 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qmwpl\" (UniqueName: \"kubernetes.io/projected/67ad1b54-a5b6-4464-823e-4bb474694618-kube-api-access-qmwpl\") pod \"67ad1b54-a5b6-4464-823e-4bb474694618\" (UID: \"67ad1b54-a5b6-4464-823e-4bb474694618\") " Jan 31 04:03:53 crc kubenswrapper[4827]: I0131 04:03:53.459806 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/67ad1b54-a5b6-4464-823e-4bb474694618-config\") pod \"67ad1b54-a5b6-4464-823e-4bb474694618\" (UID: \"67ad1b54-a5b6-4464-823e-4bb474694618\") " Jan 31 04:03:53 crc kubenswrapper[4827]: I0131 04:03:53.459838 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/67ad1b54-a5b6-4464-823e-4bb474694618-dns-svc\") pod \"67ad1b54-a5b6-4464-823e-4bb474694618\" (UID: \"67ad1b54-a5b6-4464-823e-4bb474694618\") " Jan 31 04:03:53 crc kubenswrapper[4827]: I0131 04:03:53.486286 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/projected/67ad1b54-a5b6-4464-823e-4bb474694618-kube-api-access-qmwpl" (OuterVolumeSpecName: "kube-api-access-qmwpl") pod "67ad1b54-a5b6-4464-823e-4bb474694618" (UID: "67ad1b54-a5b6-4464-823e-4bb474694618"). InnerVolumeSpecName "kube-api-access-qmwpl". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 04:03:53 crc kubenswrapper[4827]: I0131 04:03:53.506226 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/memcached-0" event={"ID":"220c4c53-ac13-4f85-88da-38fef6ce70b1","Type":"ContainerStarted","Data":"d734b35715a6cb079ebb086a74d2e4599d9990a3d740a06baf816fbc6a2f568c"} Jan 31 04:03:53 crc kubenswrapper[4827]: I0131 04:03:53.506357 4827 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/memcached-0" Jan 31 04:03:53 crc kubenswrapper[4827]: I0131 04:03:53.509093 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"1f6f952c-d09b-4584-b231-3fb87e5622fd","Type":"ContainerStarted","Data":"48167a3c3909d73d126fbe3b1bf5b37025007dec52bec42c719846a60b25b0e0"} Jan 31 04:03:53 crc kubenswrapper[4827]: I0131 04:03:53.516291 4827 generic.go:334] "Generic (PLEG): container finished" podID="67ad1b54-a5b6-4464-823e-4bb474694618" containerID="f2dd71b141b337760b385c6cc19e49c9525442c5340fd1446335abbb0d104557" exitCode=0 Jan 31 04:03:53 crc kubenswrapper[4827]: I0131 04:03:53.516377 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-666b6646f7-7g76r" event={"ID":"67ad1b54-a5b6-4464-823e-4bb474694618","Type":"ContainerDied","Data":"f2dd71b141b337760b385c6cc19e49c9525442c5340fd1446335abbb0d104557"} Jan 31 04:03:53 crc kubenswrapper[4827]: I0131 04:03:53.516406 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-666b6646f7-7g76r" event={"ID":"67ad1b54-a5b6-4464-823e-4bb474694618","Type":"ContainerDied","Data":"9f2a79e75159412eb5ba48439ae44886aa1e7289f58f2f35ae0181bb8eb8b910"} Jan 31 04:03:53 crc 
kubenswrapper[4827]: I0131 04:03:53.516426 4827 scope.go:117] "RemoveContainer" containerID="f2dd71b141b337760b385c6cc19e49c9525442c5340fd1446335abbb0d104557" Jan 31 04:03:53 crc kubenswrapper[4827]: I0131 04:03:53.516554 4827 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-666b6646f7-7g76r" Jan 31 04:03:53 crc kubenswrapper[4827]: I0131 04:03:53.528423 4827 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/memcached-0" podStartSLOduration=12.179777652 podStartE2EDuration="20.528404407s" podCreationTimestamp="2026-01-31 04:03:33 +0000 UTC" firstStartedPulling="2026-01-31 04:03:42.944756264 +0000 UTC m=+1015.631836723" lastFinishedPulling="2026-01-31 04:03:51.293383029 +0000 UTC m=+1023.980463478" observedRunningTime="2026-01-31 04:03:53.525925481 +0000 UTC m=+1026.213005940" watchObservedRunningTime="2026-01-31 04:03:53.528404407 +0000 UTC m=+1026.215484856" Jan 31 04:03:53 crc kubenswrapper[4827]: I0131 04:03:53.528583 4827 generic.go:334] "Generic (PLEG): container finished" podID="35c80c3a-29fe-4992-a421-f5ce7704ff53" containerID="1c52ba734c14628413014582994a29a86b31bdb375191b2bcb9f6c13d5a1eb8a" exitCode=0 Jan 31 04:03:53 crc kubenswrapper[4827]: I0131 04:03:53.528633 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-vhmf9" event={"ID":"35c80c3a-29fe-4992-a421-f5ce7704ff53","Type":"ContainerDied","Data":"1c52ba734c14628413014582994a29a86b31bdb375191b2bcb9f6c13d5a1eb8a"} Jan 31 04:03:53 crc kubenswrapper[4827]: I0131 04:03:53.560777 4827 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qmwpl\" (UniqueName: \"kubernetes.io/projected/67ad1b54-a5b6-4464-823e-4bb474694618-kube-api-access-qmwpl\") on node \"crc\" DevicePath \"\"" Jan 31 04:03:54 crc kubenswrapper[4827]: I0131 04:03:54.390051 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/configmap/67ad1b54-a5b6-4464-823e-4bb474694618-config" (OuterVolumeSpecName: "config") pod "67ad1b54-a5b6-4464-823e-4bb474694618" (UID: "67ad1b54-a5b6-4464-823e-4bb474694618"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 04:03:54 crc kubenswrapper[4827]: I0131 04:03:54.474999 4827 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/67ad1b54-a5b6-4464-823e-4bb474694618-config\") on node \"crc\" DevicePath \"\"" Jan 31 04:03:54 crc kubenswrapper[4827]: I0131 04:03:54.546060 4827 generic.go:334] "Generic (PLEG): container finished" podID="2240f9c7-2ffe-44cb-9a4b-e350bdd3bfa7" containerID="33515604c1dd1b56ca4d3339c3d998b428abb27951ae28b7cabed4c64998e50a" exitCode=0 Jan 31 04:03:54 crc kubenswrapper[4827]: I0131 04:03:54.552679 4827 generic.go:334] "Generic (PLEG): container finished" podID="7ff119c1-4543-4413-94df-a2cf5ca523d5" containerID="457857ebe4d516c4060bb88882652c2ff255647be0eb2aeb02649a6af506c702" exitCode=0 Jan 31 04:03:54 crc kubenswrapper[4827]: I0131 04:03:54.560805 4827 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-jrrb4" podStartSLOduration=7.348330185 podStartE2EDuration="15.560779255s" podCreationTimestamp="2026-01-31 04:03:39 +0000 UTC" firstStartedPulling="2026-01-31 04:03:43.18628033 +0000 UTC m=+1015.873360779" lastFinishedPulling="2026-01-31 04:03:51.39872941 +0000 UTC m=+1024.085809849" observedRunningTime="2026-01-31 04:03:54.555166723 +0000 UTC m=+1027.242247182" watchObservedRunningTime="2026-01-31 04:03:54.560779255 +0000 UTC m=+1027.247859714" Jan 31 04:03:54 crc kubenswrapper[4827]: I0131 04:03:54.623734 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/67ad1b54-a5b6-4464-823e-4bb474694618-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "67ad1b54-a5b6-4464-823e-4bb474694618" (UID: "67ad1b54-a5b6-4464-823e-4bb474694618"). 
InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 04:03:54 crc kubenswrapper[4827]: I0131 04:03:54.694556 4827 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/67ad1b54-a5b6-4464-823e-4bb474694618-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 31 04:03:54 crc kubenswrapper[4827]: I0131 04:03:54.703290 4827 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/kube-state-metrics-0" podStartSLOduration=9.884000552 podStartE2EDuration="19.703264514s" podCreationTimestamp="2026-01-31 04:03:35 +0000 UTC" firstStartedPulling="2026-01-31 04:03:42.942782484 +0000 UTC m=+1015.629862933" lastFinishedPulling="2026-01-31 04:03:52.762046436 +0000 UTC m=+1025.449126895" observedRunningTime="2026-01-31 04:03:54.699252752 +0000 UTC m=+1027.386333231" watchObservedRunningTime="2026-01-31 04:03:54.703264514 +0000 UTC m=+1027.390344973" Jan 31 04:03:54 crc kubenswrapper[4827]: I0131 04:03:54.738367 4827 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-jrrb4" Jan 31 04:03:54 crc kubenswrapper[4827]: I0131 04:03:54.738408 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-jrrb4" event={"ID":"b0a1bcac-47e2-4089-ae1e-98a2dc41d270","Type":"ContainerStarted","Data":"a29ed0c5615cd97698ca15c90724cfb55174d28414b4a9089d91b68ef7eefff2"} Jan 31 04:03:54 crc kubenswrapper[4827]: I0131 04:03:54.738434 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"a0d3d60f-16f2-469d-8314-9055bb91a9ce","Type":"ContainerStarted","Data":"4b9d0720291d8a3aca054f46ddb2fc8386b50ddf6d6c6fd9ff7622e9873faa41"} Jan 31 04:03:54 crc kubenswrapper[4827]: I0131 04:03:54.738467 4827 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/kube-state-metrics-0" Jan 31 04:03:54 crc kubenswrapper[4827]: I0131 04:03:54.738494 4827 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"46ae0722-07b9-42d3-9ca9-d6a07cd0aa12","Type":"ContainerStarted","Data":"48c89a72ab34c2cc54e43adfe24b2018621efb76c90000d5efd07d87c5ed1a44"} Jan 31 04:03:54 crc kubenswrapper[4827]: I0131 04:03:54.738508 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"02f954c7-6442-4974-827a-aef4a5690e8c","Type":"ContainerStarted","Data":"a3c94b6b53a56855eba7c9871d6fbc9d197f581c259d27c512976ebe225e2f9f"} Jan 31 04:03:54 crc kubenswrapper[4827]: I0131 04:03:54.738553 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7f896c8c65-d9tcc" event={"ID":"2240f9c7-2ffe-44cb-9a4b-e350bdd3bfa7","Type":"ContainerDied","Data":"33515604c1dd1b56ca4d3339c3d998b428abb27951ae28b7cabed4c64998e50a"} Jan 31 04:03:54 crc kubenswrapper[4827]: I0131 04:03:54.738572 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"f66333b7-3406-4a69-85f5-0806b992a625","Type":"ContainerStarted","Data":"408b4cf086c11dfa44a43b478ab8bcff091c43a5e6b7d193bfc3684767005382"} Jan 31 04:03:54 crc kubenswrapper[4827]: I0131 04:03:54.738586 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"a5961815-808d-4f79-867c-763e2946d47f","Type":"ContainerStarted","Data":"413bc37518bb0076f602c39e963ecc652b59bbdd46bd32670f0b8757dddc562e"} Jan 31 04:03:54 crc kubenswrapper[4827]: I0131 04:03:54.738604 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-86db49b7ff-wzx4f" event={"ID":"7ff119c1-4543-4413-94df-a2cf5ca523d5","Type":"ContainerDied","Data":"457857ebe4d516c4060bb88882652c2ff255647be0eb2aeb02649a6af506c702"} Jan 31 04:03:54 crc kubenswrapper[4827]: I0131 04:03:54.773108 4827 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-7g76r"] Jan 31 04:03:54 crc kubenswrapper[4827]: I0131 
04:03:54.779995 4827 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-7g76r"] Jan 31 04:03:54 crc kubenswrapper[4827]: I0131 04:03:54.890675 4827 scope.go:117] "RemoveContainer" containerID="2aadd166260501c20848807e7d1845f75cb3781c7a3f74051607a132e4d8b1a1" Jan 31 04:03:55 crc kubenswrapper[4827]: I0131 04:03:55.105939 4827 scope.go:117] "RemoveContainer" containerID="f2dd71b141b337760b385c6cc19e49c9525442c5340fd1446335abbb0d104557" Jan 31 04:03:55 crc kubenswrapper[4827]: E0131 04:03:55.106485 4827 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f2dd71b141b337760b385c6cc19e49c9525442c5340fd1446335abbb0d104557\": container with ID starting with f2dd71b141b337760b385c6cc19e49c9525442c5340fd1446335abbb0d104557 not found: ID does not exist" containerID="f2dd71b141b337760b385c6cc19e49c9525442c5340fd1446335abbb0d104557" Jan 31 04:03:55 crc kubenswrapper[4827]: I0131 04:03:55.106560 4827 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f2dd71b141b337760b385c6cc19e49c9525442c5340fd1446335abbb0d104557"} err="failed to get container status \"f2dd71b141b337760b385c6cc19e49c9525442c5340fd1446335abbb0d104557\": rpc error: code = NotFound desc = could not find container \"f2dd71b141b337760b385c6cc19e49c9525442c5340fd1446335abbb0d104557\": container with ID starting with f2dd71b141b337760b385c6cc19e49c9525442c5340fd1446335abbb0d104557 not found: ID does not exist" Jan 31 04:03:55 crc kubenswrapper[4827]: I0131 04:03:55.106593 4827 scope.go:117] "RemoveContainer" containerID="2aadd166260501c20848807e7d1845f75cb3781c7a3f74051607a132e4d8b1a1" Jan 31 04:03:55 crc kubenswrapper[4827]: E0131 04:03:55.107165 4827 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2aadd166260501c20848807e7d1845f75cb3781c7a3f74051607a132e4d8b1a1\": container with ID starting 
with 2aadd166260501c20848807e7d1845f75cb3781c7a3f74051607a132e4d8b1a1 not found: ID does not exist" containerID="2aadd166260501c20848807e7d1845f75cb3781c7a3f74051607a132e4d8b1a1" Jan 31 04:03:55 crc kubenswrapper[4827]: I0131 04:03:55.107195 4827 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2aadd166260501c20848807e7d1845f75cb3781c7a3f74051607a132e4d8b1a1"} err="failed to get container status \"2aadd166260501c20848807e7d1845f75cb3781c7a3f74051607a132e4d8b1a1\": rpc error: code = NotFound desc = could not find container \"2aadd166260501c20848807e7d1845f75cb3781c7a3f74051607a132e4d8b1a1\": container with ID starting with 2aadd166260501c20848807e7d1845f75cb3781c7a3f74051607a132e4d8b1a1 not found: ID does not exist" Jan 31 04:03:55 crc kubenswrapper[4827]: I0131 04:03:55.126415 4827 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-57d769cc4f-bl8wf" podUID="ac8573e8-0ab7-4f9b-a060-fcaf95d97e32" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.98:5353: i/o timeout" Jan 31 04:03:55 crc kubenswrapper[4827]: I0131 04:03:55.565714 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"1f6f952c-d09b-4584-b231-3fb87e5622fd","Type":"ContainerStarted","Data":"05d84cd6e763b0ee48bf094951b6ecd3d84ca62d8db47f20f4a87cd7f319461d"} Jan 31 04:03:55 crc kubenswrapper[4827]: I0131 04:03:55.569413 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7f896c8c65-d9tcc" event={"ID":"2240f9c7-2ffe-44cb-9a4b-e350bdd3bfa7","Type":"ContainerStarted","Data":"c0b5c45b7df090aabfcd8fdbd4bf522edad03bc6170b82d7b50cc08bfd95b626"} Jan 31 04:03:55 crc kubenswrapper[4827]: I0131 04:03:55.569548 4827 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-7f896c8c65-d9tcc" Jan 31 04:03:55 crc kubenswrapper[4827]: I0131 04:03:55.571582 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/ovn-controller-ovs-vhmf9" event={"ID":"35c80c3a-29fe-4992-a421-f5ce7704ff53","Type":"ContainerStarted","Data":"4707ba16340e9feee88925b3c94738054695ff475a4a84db661f5f9e8451db72"} Jan 31 04:03:55 crc kubenswrapper[4827]: I0131 04:03:55.571715 4827 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-ovs-vhmf9" Jan 31 04:03:55 crc kubenswrapper[4827]: I0131 04:03:55.571802 4827 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-ovs-vhmf9" Jan 31 04:03:55 crc kubenswrapper[4827]: I0131 04:03:55.571896 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-vhmf9" event={"ID":"35c80c3a-29fe-4992-a421-f5ce7704ff53","Type":"ContainerStarted","Data":"f2254d092552cf745d11dbd6b3c91c22e3e687b7109fe4f298e91a378007cc03"} Jan 31 04:03:55 crc kubenswrapper[4827]: I0131 04:03:55.573868 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"a5961815-808d-4f79-867c-763e2946d47f","Type":"ContainerStarted","Data":"00dac4c62d27b67804fe5072288724a85248b7dc33698f86e4e79aedaf990735"} Jan 31 04:03:55 crc kubenswrapper[4827]: I0131 04:03:55.575453 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-86db49b7ff-wzx4f" event={"ID":"7ff119c1-4543-4413-94df-a2cf5ca523d5","Type":"ContainerStarted","Data":"6df7da264918a0e3d7e1508ecf6ea25c7ba35e6162996f3075ed63e3d577a1a1"} Jan 31 04:03:55 crc kubenswrapper[4827]: I0131 04:03:55.575552 4827 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-86db49b7ff-wzx4f" Jan 31 04:03:55 crc kubenswrapper[4827]: I0131 04:03:55.577024 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-metrics-mkbw4" event={"ID":"b818dd8b-a3fb-46fa-a8b2-784fb2d3169d","Type":"ContainerStarted","Data":"2b55f658f7b97f2275ce04613d6e3a3498ff22990fae5d8876ff6726322d8c89"} Jan 31 04:03:55 crc 
kubenswrapper[4827]: I0131 04:03:55.578197 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"237362ad-03ab-48a0-916d-1b140b4727d5","Type":"ContainerStarted","Data":"12fe2b54071f9726320953b5647ef853505a4030977779bf309ca7bddd85c632"} Jan 31 04:03:55 crc kubenswrapper[4827]: I0131 04:03:55.592396 4827 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-sb-0" podStartSLOduration=3.653320789 podStartE2EDuration="14.59237422s" podCreationTimestamp="2026-01-31 04:03:41 +0000 UTC" firstStartedPulling="2026-01-31 04:03:44.060674374 +0000 UTC m=+1016.747754823" lastFinishedPulling="2026-01-31 04:03:54.999727805 +0000 UTC m=+1027.686808254" observedRunningTime="2026-01-31 04:03:55.58651719 +0000 UTC m=+1028.273597659" watchObservedRunningTime="2026-01-31 04:03:55.59237422 +0000 UTC m=+1028.279454669" Jan 31 04:03:55 crc kubenswrapper[4827]: I0131 04:03:55.604679 4827 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-7f896c8c65-d9tcc" podStartSLOduration=11.604662966 podStartE2EDuration="11.604662966s" podCreationTimestamp="2026-01-31 04:03:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 04:03:55.602058536 +0000 UTC m=+1028.289139005" watchObservedRunningTime="2026-01-31 04:03:55.604662966 +0000 UTC m=+1028.291743415" Jan 31 04:03:55 crc kubenswrapper[4827]: I0131 04:03:55.630434 4827 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-ovs-vhmf9" podStartSLOduration=8.5537221 podStartE2EDuration="16.630411906s" podCreationTimestamp="2026-01-31 04:03:39 +0000 UTC" firstStartedPulling="2026-01-31 04:03:43.31997175 +0000 UTC m=+1016.007052199" lastFinishedPulling="2026-01-31 04:03:51.396661556 +0000 UTC m=+1024.083742005" observedRunningTime="2026-01-31 04:03:55.626172436 +0000 UTC 
m=+1028.313252925" watchObservedRunningTime="2026-01-31 04:03:55.630411906 +0000 UTC m=+1028.317492355" Jan 31 04:03:55 crc kubenswrapper[4827]: I0131 04:03:55.669060 4827 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-86db49b7ff-wzx4f" podStartSLOduration=11.66904563 podStartE2EDuration="11.66904563s" podCreationTimestamp="2026-01-31 04:03:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 04:03:55.646060705 +0000 UTC m=+1028.333141154" watchObservedRunningTime="2026-01-31 04:03:55.66904563 +0000 UTC m=+1028.356126079" Jan 31 04:03:55 crc kubenswrapper[4827]: I0131 04:03:55.714552 4827 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-metrics-mkbw4" podStartSLOduration=8.340543241 podStartE2EDuration="11.714533945s" podCreationTimestamp="2026-01-31 04:03:44 +0000 UTC" firstStartedPulling="2026-01-31 04:03:51.594767541 +0000 UTC m=+1024.281847990" lastFinishedPulling="2026-01-31 04:03:54.968758245 +0000 UTC m=+1027.655838694" observedRunningTime="2026-01-31 04:03:55.684765373 +0000 UTC m=+1028.371845842" watchObservedRunningTime="2026-01-31 04:03:55.714533945 +0000 UTC m=+1028.401614394" Jan 31 04:03:56 crc kubenswrapper[4827]: I0131 04:03:56.122800 4827 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="67ad1b54-a5b6-4464-823e-4bb474694618" path="/var/lib/kubelet/pods/67ad1b54-a5b6-4464-823e-4bb474694618/volumes" Jan 31 04:03:57 crc kubenswrapper[4827]: I0131 04:03:57.593927 4827 generic.go:334] "Generic (PLEG): container finished" podID="f66333b7-3406-4a69-85f5-0806b992a625" containerID="408b4cf086c11dfa44a43b478ab8bcff091c43a5e6b7d193bfc3684767005382" exitCode=0 Jan 31 04:03:57 crc kubenswrapper[4827]: I0131 04:03:57.594005 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" 
event={"ID":"f66333b7-3406-4a69-85f5-0806b992a625","Type":"ContainerDied","Data":"408b4cf086c11dfa44a43b478ab8bcff091c43a5e6b7d193bfc3684767005382"} Jan 31 04:03:57 crc kubenswrapper[4827]: I0131 04:03:57.621037 4827 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-nb-0" podStartSLOduration=8.125509334 podStartE2EDuration="19.621016058s" podCreationTimestamp="2026-01-31 04:03:38 +0000 UTC" firstStartedPulling="2026-01-31 04:03:43.47321879 +0000 UTC m=+1016.160299239" lastFinishedPulling="2026-01-31 04:03:54.968725514 +0000 UTC m=+1027.655805963" observedRunningTime="2026-01-31 04:03:55.716291649 +0000 UTC m=+1028.403372118" watchObservedRunningTime="2026-01-31 04:03:57.621016058 +0000 UTC m=+1030.308096507" Jan 31 04:03:58 crc kubenswrapper[4827]: I0131 04:03:58.251270 4827 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-nb-0" Jan 31 04:03:58 crc kubenswrapper[4827]: I0131 04:03:58.277617 4827 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-sb-0" Jan 31 04:03:58 crc kubenswrapper[4827]: I0131 04:03:58.277675 4827 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-sb-0" Jan 31 04:03:58 crc kubenswrapper[4827]: I0131 04:03:58.285805 4827 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-nb-0" Jan 31 04:03:58 crc kubenswrapper[4827]: I0131 04:03:58.318522 4827 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-sb-0" Jan 31 04:03:58 crc kubenswrapper[4827]: I0131 04:03:58.600653 4827 generic.go:334] "Generic (PLEG): container finished" podID="a0d3d60f-16f2-469d-8314-9055bb91a9ce" containerID="4b9d0720291d8a3aca054f46ddb2fc8386b50ddf6d6c6fd9ff7622e9873faa41" exitCode=0 Jan 31 04:03:58 crc kubenswrapper[4827]: I0131 04:03:58.601726 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/openstack-cell1-galera-0" event={"ID":"a0d3d60f-16f2-469d-8314-9055bb91a9ce","Type":"ContainerDied","Data":"4b9d0720291d8a3aca054f46ddb2fc8386b50ddf6d6c6fd9ff7622e9873faa41"} Jan 31 04:03:58 crc kubenswrapper[4827]: I0131 04:03:58.608770 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"f66333b7-3406-4a69-85f5-0806b992a625","Type":"ContainerStarted","Data":"3b57453c0769f7cb87ca0c9ea0787d0efad968a2e8f085247d0f5a99aac16585"} Jan 31 04:03:58 crc kubenswrapper[4827]: I0131 04:03:58.609066 4827 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-nb-0" Jan 31 04:03:58 crc kubenswrapper[4827]: I0131 04:03:58.659137 4827 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-sb-0" Jan 31 04:03:58 crc kubenswrapper[4827]: I0131 04:03:58.661016 4827 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstack-galera-0" podStartSLOduration=18.945999672 podStartE2EDuration="27.661001671s" podCreationTimestamp="2026-01-31 04:03:31 +0000 UTC" firstStartedPulling="2026-01-31 04:03:42.942813885 +0000 UTC m=+1015.629894344" lastFinishedPulling="2026-01-31 04:03:51.657815894 +0000 UTC m=+1024.344896343" observedRunningTime="2026-01-31 04:03:58.647855077 +0000 UTC m=+1031.334935546" watchObservedRunningTime="2026-01-31 04:03:58.661001671 +0000 UTC m=+1031.348082120" Jan 31 04:03:59 crc kubenswrapper[4827]: I0131 04:03:59.200046 4827 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/memcached-0" Jan 31 04:03:59 crc kubenswrapper[4827]: I0131 04:03:59.615525 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"a0d3d60f-16f2-469d-8314-9055bb91a9ce","Type":"ContainerStarted","Data":"455ab35041f9a3d8aa4f2c1a38a2d5709edbbc797e434f2ee79b58be9ed89da2"} Jan 31 04:03:59 crc kubenswrapper[4827]: I0131 04:03:59.636077 4827 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstack-cell1-galera-0" podStartSLOduration=19.39420899 podStartE2EDuration="27.63605474s" podCreationTimestamp="2026-01-31 04:03:32 +0000 UTC" firstStartedPulling="2026-01-31 04:03:43.194390749 +0000 UTC m=+1015.881471198" lastFinishedPulling="2026-01-31 04:03:51.436236479 +0000 UTC m=+1024.123316948" observedRunningTime="2026-01-31 04:03:59.635854644 +0000 UTC m=+1032.322935093" watchObservedRunningTime="2026-01-31 04:03:59.63605474 +0000 UTC m=+1032.323135189" Jan 31 04:03:59 crc kubenswrapper[4827]: I0131 04:03:59.657345 4827 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-nb-0" Jan 31 04:03:59 crc kubenswrapper[4827]: I0131 04:03:59.817000 4827 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-northd-0"] Jan 31 04:03:59 crc kubenswrapper[4827]: E0131 04:03:59.817615 4827 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ac8573e8-0ab7-4f9b-a060-fcaf95d97e32" containerName="init" Jan 31 04:03:59 crc kubenswrapper[4827]: I0131 04:03:59.817632 4827 state_mem.go:107] "Deleted CPUSet assignment" podUID="ac8573e8-0ab7-4f9b-a060-fcaf95d97e32" containerName="init" Jan 31 04:03:59 crc kubenswrapper[4827]: E0131 04:03:59.817660 4827 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="67ad1b54-a5b6-4464-823e-4bb474694618" containerName="dnsmasq-dns" Jan 31 04:03:59 crc kubenswrapper[4827]: I0131 04:03:59.817667 4827 state_mem.go:107] "Deleted CPUSet assignment" podUID="67ad1b54-a5b6-4464-823e-4bb474694618" containerName="dnsmasq-dns" Jan 31 04:03:59 crc kubenswrapper[4827]: E0131 04:03:59.817684 4827 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="67ad1b54-a5b6-4464-823e-4bb474694618" containerName="init" Jan 31 04:03:59 crc kubenswrapper[4827]: I0131 04:03:59.817690 4827 state_mem.go:107] "Deleted CPUSet assignment" podUID="67ad1b54-a5b6-4464-823e-4bb474694618" 
containerName="init" Jan 31 04:03:59 crc kubenswrapper[4827]: E0131 04:03:59.817709 4827 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ac8573e8-0ab7-4f9b-a060-fcaf95d97e32" containerName="dnsmasq-dns" Jan 31 04:03:59 crc kubenswrapper[4827]: I0131 04:03:59.817717 4827 state_mem.go:107] "Deleted CPUSet assignment" podUID="ac8573e8-0ab7-4f9b-a060-fcaf95d97e32" containerName="dnsmasq-dns" Jan 31 04:03:59 crc kubenswrapper[4827]: I0131 04:03:59.817899 4827 memory_manager.go:354] "RemoveStaleState removing state" podUID="67ad1b54-a5b6-4464-823e-4bb474694618" containerName="dnsmasq-dns" Jan 31 04:03:59 crc kubenswrapper[4827]: I0131 04:03:59.817929 4827 memory_manager.go:354] "RemoveStaleState removing state" podUID="ac8573e8-0ab7-4f9b-a060-fcaf95d97e32" containerName="dnsmasq-dns" Jan 31 04:03:59 crc kubenswrapper[4827]: I0131 04:03:59.818767 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-northd-0" Jan 31 04:03:59 crc kubenswrapper[4827]: I0131 04:03:59.826508 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovnnorthd-ovnnorthd-dockercfg-2b8gq" Jan 31 04:03:59 crc kubenswrapper[4827]: I0131 04:03:59.826646 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovnnorthd-ovndbs" Jan 31 04:03:59 crc kubenswrapper[4827]: I0131 04:03:59.829113 4827 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovnnorthd-scripts" Jan 31 04:03:59 crc kubenswrapper[4827]: I0131 04:03:59.829225 4827 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovnnorthd-config" Jan 31 04:03:59 crc kubenswrapper[4827]: I0131 04:03:59.834893 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-northd-0"] Jan 31 04:03:59 crc kubenswrapper[4827]: I0131 04:03:59.894662 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" 
(UniqueName: \"kubernetes.io/secret/e08df3ea-bbcb-4a8e-9de0-39b86fa6672d-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"e08df3ea-bbcb-4a8e-9de0-39b86fa6672d\") " pod="openstack/ovn-northd-0" Jan 31 04:03:59 crc kubenswrapper[4827]: I0131 04:03:59.894720 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/e08df3ea-bbcb-4a8e-9de0-39b86fa6672d-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"e08df3ea-bbcb-4a8e-9de0-39b86fa6672d\") " pod="openstack/ovn-northd-0" Jan 31 04:03:59 crc kubenswrapper[4827]: I0131 04:03:59.894750 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4rx62\" (UniqueName: \"kubernetes.io/projected/e08df3ea-bbcb-4a8e-9de0-39b86fa6672d-kube-api-access-4rx62\") pod \"ovn-northd-0\" (UID: \"e08df3ea-bbcb-4a8e-9de0-39b86fa6672d\") " pod="openstack/ovn-northd-0" Jan 31 04:03:59 crc kubenswrapper[4827]: I0131 04:03:59.895003 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e08df3ea-bbcb-4a8e-9de0-39b86fa6672d-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"e08df3ea-bbcb-4a8e-9de0-39b86fa6672d\") " pod="openstack/ovn-northd-0" Jan 31 04:03:59 crc kubenswrapper[4827]: I0131 04:03:59.895065 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e08df3ea-bbcb-4a8e-9de0-39b86fa6672d-config\") pod \"ovn-northd-0\" (UID: \"e08df3ea-bbcb-4a8e-9de0-39b86fa6672d\") " pod="openstack/ovn-northd-0" Jan 31 04:03:59 crc kubenswrapper[4827]: I0131 04:03:59.895131 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/e08df3ea-bbcb-4a8e-9de0-39b86fa6672d-ovn-northd-tls-certs\") pod 
\"ovn-northd-0\" (UID: \"e08df3ea-bbcb-4a8e-9de0-39b86fa6672d\") " pod="openstack/ovn-northd-0" Jan 31 04:03:59 crc kubenswrapper[4827]: I0131 04:03:59.895154 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/e08df3ea-bbcb-4a8e-9de0-39b86fa6672d-scripts\") pod \"ovn-northd-0\" (UID: \"e08df3ea-bbcb-4a8e-9de0-39b86fa6672d\") " pod="openstack/ovn-northd-0" Jan 31 04:03:59 crc kubenswrapper[4827]: I0131 04:03:59.996853 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/e08df3ea-bbcb-4a8e-9de0-39b86fa6672d-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"e08df3ea-bbcb-4a8e-9de0-39b86fa6672d\") " pod="openstack/ovn-northd-0" Jan 31 04:03:59 crc kubenswrapper[4827]: I0131 04:03:59.996952 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/e08df3ea-bbcb-4a8e-9de0-39b86fa6672d-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"e08df3ea-bbcb-4a8e-9de0-39b86fa6672d\") " pod="openstack/ovn-northd-0" Jan 31 04:03:59 crc kubenswrapper[4827]: I0131 04:03:59.996985 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4rx62\" (UniqueName: \"kubernetes.io/projected/e08df3ea-bbcb-4a8e-9de0-39b86fa6672d-kube-api-access-4rx62\") pod \"ovn-northd-0\" (UID: \"e08df3ea-bbcb-4a8e-9de0-39b86fa6672d\") " pod="openstack/ovn-northd-0" Jan 31 04:03:59 crc kubenswrapper[4827]: I0131 04:03:59.997039 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e08df3ea-bbcb-4a8e-9de0-39b86fa6672d-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"e08df3ea-bbcb-4a8e-9de0-39b86fa6672d\") " pod="openstack/ovn-northd-0" Jan 31 04:03:59 crc kubenswrapper[4827]: I0131 04:03:59.997065 4827 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e08df3ea-bbcb-4a8e-9de0-39b86fa6672d-config\") pod \"ovn-northd-0\" (UID: \"e08df3ea-bbcb-4a8e-9de0-39b86fa6672d\") " pod="openstack/ovn-northd-0" Jan 31 04:03:59 crc kubenswrapper[4827]: I0131 04:03:59.997102 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/e08df3ea-bbcb-4a8e-9de0-39b86fa6672d-scripts\") pod \"ovn-northd-0\" (UID: \"e08df3ea-bbcb-4a8e-9de0-39b86fa6672d\") " pod="openstack/ovn-northd-0" Jan 31 04:03:59 crc kubenswrapper[4827]: I0131 04:03:59.997124 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/e08df3ea-bbcb-4a8e-9de0-39b86fa6672d-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"e08df3ea-bbcb-4a8e-9de0-39b86fa6672d\") " pod="openstack/ovn-northd-0" Jan 31 04:03:59 crc kubenswrapper[4827]: I0131 04:03:59.997664 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/e08df3ea-bbcb-4a8e-9de0-39b86fa6672d-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"e08df3ea-bbcb-4a8e-9de0-39b86fa6672d\") " pod="openstack/ovn-northd-0" Jan 31 04:03:59 crc kubenswrapper[4827]: I0131 04:03:59.998317 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e08df3ea-bbcb-4a8e-9de0-39b86fa6672d-config\") pod \"ovn-northd-0\" (UID: \"e08df3ea-bbcb-4a8e-9de0-39b86fa6672d\") " pod="openstack/ovn-northd-0" Jan 31 04:03:59 crc kubenswrapper[4827]: I0131 04:03:59.998336 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/e08df3ea-bbcb-4a8e-9de0-39b86fa6672d-scripts\") pod \"ovn-northd-0\" (UID: \"e08df3ea-bbcb-4a8e-9de0-39b86fa6672d\") " pod="openstack/ovn-northd-0" Jan 31 
04:04:00 crc kubenswrapper[4827]: I0131 04:04:00.004065 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e08df3ea-bbcb-4a8e-9de0-39b86fa6672d-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"e08df3ea-bbcb-4a8e-9de0-39b86fa6672d\") " pod="openstack/ovn-northd-0" Jan 31 04:04:00 crc kubenswrapper[4827]: I0131 04:04:00.011538 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/e08df3ea-bbcb-4a8e-9de0-39b86fa6672d-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"e08df3ea-bbcb-4a8e-9de0-39b86fa6672d\") " pod="openstack/ovn-northd-0" Jan 31 04:04:00 crc kubenswrapper[4827]: I0131 04:04:00.014732 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/e08df3ea-bbcb-4a8e-9de0-39b86fa6672d-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"e08df3ea-bbcb-4a8e-9de0-39b86fa6672d\") " pod="openstack/ovn-northd-0" Jan 31 04:04:00 crc kubenswrapper[4827]: I0131 04:04:00.016841 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4rx62\" (UniqueName: \"kubernetes.io/projected/e08df3ea-bbcb-4a8e-9de0-39b86fa6672d-kube-api-access-4rx62\") pod \"ovn-northd-0\" (UID: \"e08df3ea-bbcb-4a8e-9de0-39b86fa6672d\") " pod="openstack/ovn-northd-0" Jan 31 04:04:00 crc kubenswrapper[4827]: I0131 04:04:00.067077 4827 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-7f896c8c65-d9tcc" Jan 31 04:04:00 crc kubenswrapper[4827]: I0131 04:04:00.094276 4827 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-86db49b7ff-wzx4f" Jan 31 04:04:00 crc kubenswrapper[4827]: I0131 04:04:00.144305 4827 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-northd-0" Jan 31 04:04:00 crc kubenswrapper[4827]: I0131 04:04:00.194301 4827 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7f896c8c65-d9tcc"] Jan 31 04:04:00 crc kubenswrapper[4827]: I0131 04:04:00.632787 4827 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-7f896c8c65-d9tcc" podUID="2240f9c7-2ffe-44cb-9a4b-e350bdd3bfa7" containerName="dnsmasq-dns" containerID="cri-o://c0b5c45b7df090aabfcd8fdbd4bf522edad03bc6170b82d7b50cc08bfd95b626" gracePeriod=10 Jan 31 04:04:00 crc kubenswrapper[4827]: I0131 04:04:00.705164 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-northd-0"] Jan 31 04:04:00 crc kubenswrapper[4827]: W0131 04:04:00.730048 4827 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode08df3ea_bbcb_4a8e_9de0_39b86fa6672d.slice/crio-d771936c32d48977146a8d57a0546896c9a555fc0d2d0d226a254cdded618cbc WatchSource:0}: Error finding container d771936c32d48977146a8d57a0546896c9a555fc0d2d0d226a254cdded618cbc: Status 404 returned error can't find the container with id d771936c32d48977146a8d57a0546896c9a555fc0d2d0d226a254cdded618cbc Jan 31 04:04:00 crc kubenswrapper[4827]: I0131 04:04:00.981598 4827 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7f896c8c65-d9tcc" Jan 31 04:04:01 crc kubenswrapper[4827]: I0131 04:04:01.116088 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2240f9c7-2ffe-44cb-9a4b-e350bdd3bfa7-dns-svc\") pod \"2240f9c7-2ffe-44cb-9a4b-e350bdd3bfa7\" (UID: \"2240f9c7-2ffe-44cb-9a4b-e350bdd3bfa7\") " Jan 31 04:04:01 crc kubenswrapper[4827]: I0131 04:04:01.116175 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hxw8h\" (UniqueName: \"kubernetes.io/projected/2240f9c7-2ffe-44cb-9a4b-e350bdd3bfa7-kube-api-access-hxw8h\") pod \"2240f9c7-2ffe-44cb-9a4b-e350bdd3bfa7\" (UID: \"2240f9c7-2ffe-44cb-9a4b-e350bdd3bfa7\") " Jan 31 04:04:01 crc kubenswrapper[4827]: I0131 04:04:01.116251 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/2240f9c7-2ffe-44cb-9a4b-e350bdd3bfa7-ovsdbserver-sb\") pod \"2240f9c7-2ffe-44cb-9a4b-e350bdd3bfa7\" (UID: \"2240f9c7-2ffe-44cb-9a4b-e350bdd3bfa7\") " Jan 31 04:04:01 crc kubenswrapper[4827]: I0131 04:04:01.116288 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2240f9c7-2ffe-44cb-9a4b-e350bdd3bfa7-config\") pod \"2240f9c7-2ffe-44cb-9a4b-e350bdd3bfa7\" (UID: \"2240f9c7-2ffe-44cb-9a4b-e350bdd3bfa7\") " Jan 31 04:04:01 crc kubenswrapper[4827]: I0131 04:04:01.122955 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2240f9c7-2ffe-44cb-9a4b-e350bdd3bfa7-kube-api-access-hxw8h" (OuterVolumeSpecName: "kube-api-access-hxw8h") pod "2240f9c7-2ffe-44cb-9a4b-e350bdd3bfa7" (UID: "2240f9c7-2ffe-44cb-9a4b-e350bdd3bfa7"). InnerVolumeSpecName "kube-api-access-hxw8h". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 04:04:01 crc kubenswrapper[4827]: I0131 04:04:01.151994 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2240f9c7-2ffe-44cb-9a4b-e350bdd3bfa7-config" (OuterVolumeSpecName: "config") pod "2240f9c7-2ffe-44cb-9a4b-e350bdd3bfa7" (UID: "2240f9c7-2ffe-44cb-9a4b-e350bdd3bfa7"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 04:04:01 crc kubenswrapper[4827]: I0131 04:04:01.152328 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2240f9c7-2ffe-44cb-9a4b-e350bdd3bfa7-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "2240f9c7-2ffe-44cb-9a4b-e350bdd3bfa7" (UID: "2240f9c7-2ffe-44cb-9a4b-e350bdd3bfa7"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 04:04:01 crc kubenswrapper[4827]: I0131 04:04:01.153191 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2240f9c7-2ffe-44cb-9a4b-e350bdd3bfa7-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "2240f9c7-2ffe-44cb-9a4b-e350bdd3bfa7" (UID: "2240f9c7-2ffe-44cb-9a4b-e350bdd3bfa7"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 04:04:01 crc kubenswrapper[4827]: I0131 04:04:01.217943 4827 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/2240f9c7-2ffe-44cb-9a4b-e350bdd3bfa7-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 31 04:04:01 crc kubenswrapper[4827]: I0131 04:04:01.218012 4827 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2240f9c7-2ffe-44cb-9a4b-e350bdd3bfa7-config\") on node \"crc\" DevicePath \"\"" Jan 31 04:04:01 crc kubenswrapper[4827]: I0131 04:04:01.218025 4827 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2240f9c7-2ffe-44cb-9a4b-e350bdd3bfa7-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 31 04:04:01 crc kubenswrapper[4827]: I0131 04:04:01.218036 4827 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hxw8h\" (UniqueName: \"kubernetes.io/projected/2240f9c7-2ffe-44cb-9a4b-e350bdd3bfa7-kube-api-access-hxw8h\") on node \"crc\" DevicePath \"\"" Jan 31 04:04:01 crc kubenswrapper[4827]: I0131 04:04:01.640805 4827 generic.go:334] "Generic (PLEG): container finished" podID="2240f9c7-2ffe-44cb-9a4b-e350bdd3bfa7" containerID="c0b5c45b7df090aabfcd8fdbd4bf522edad03bc6170b82d7b50cc08bfd95b626" exitCode=0 Jan 31 04:04:01 crc kubenswrapper[4827]: I0131 04:04:01.640859 4827 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7f896c8c65-d9tcc" Jan 31 04:04:01 crc kubenswrapper[4827]: I0131 04:04:01.640901 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7f896c8c65-d9tcc" event={"ID":"2240f9c7-2ffe-44cb-9a4b-e350bdd3bfa7","Type":"ContainerDied","Data":"c0b5c45b7df090aabfcd8fdbd4bf522edad03bc6170b82d7b50cc08bfd95b626"} Jan 31 04:04:01 crc kubenswrapper[4827]: I0131 04:04:01.641321 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7f896c8c65-d9tcc" event={"ID":"2240f9c7-2ffe-44cb-9a4b-e350bdd3bfa7","Type":"ContainerDied","Data":"fdc4f1bd7cab304ed17ae5ee9c4c44df655ad421d6647b96bbbfa231baf9c207"} Jan 31 04:04:01 crc kubenswrapper[4827]: I0131 04:04:01.641342 4827 scope.go:117] "RemoveContainer" containerID="c0b5c45b7df090aabfcd8fdbd4bf522edad03bc6170b82d7b50cc08bfd95b626" Jan 31 04:04:01 crc kubenswrapper[4827]: I0131 04:04:01.645580 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"e08df3ea-bbcb-4a8e-9de0-39b86fa6672d","Type":"ContainerStarted","Data":"d771936c32d48977146a8d57a0546896c9a555fc0d2d0d226a254cdded618cbc"} Jan 31 04:04:01 crc kubenswrapper[4827]: I0131 04:04:01.671272 4827 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7f896c8c65-d9tcc"] Jan 31 04:04:01 crc kubenswrapper[4827]: I0131 04:04:01.678541 4827 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-7f896c8c65-d9tcc"] Jan 31 04:04:01 crc kubenswrapper[4827]: I0131 04:04:01.722135 4827 scope.go:117] "RemoveContainer" containerID="33515604c1dd1b56ca4d3339c3d998b428abb27951ae28b7cabed4c64998e50a" Jan 31 04:04:01 crc kubenswrapper[4827]: I0131 04:04:01.741200 4827 scope.go:117] "RemoveContainer" containerID="c0b5c45b7df090aabfcd8fdbd4bf522edad03bc6170b82d7b50cc08bfd95b626" Jan 31 04:04:01 crc kubenswrapper[4827]: E0131 04:04:01.741782 4827 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: 
code = NotFound desc = could not find container \"c0b5c45b7df090aabfcd8fdbd4bf522edad03bc6170b82d7b50cc08bfd95b626\": container with ID starting with c0b5c45b7df090aabfcd8fdbd4bf522edad03bc6170b82d7b50cc08bfd95b626 not found: ID does not exist" containerID="c0b5c45b7df090aabfcd8fdbd4bf522edad03bc6170b82d7b50cc08bfd95b626" Jan 31 04:04:01 crc kubenswrapper[4827]: I0131 04:04:01.741821 4827 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c0b5c45b7df090aabfcd8fdbd4bf522edad03bc6170b82d7b50cc08bfd95b626"} err="failed to get container status \"c0b5c45b7df090aabfcd8fdbd4bf522edad03bc6170b82d7b50cc08bfd95b626\": rpc error: code = NotFound desc = could not find container \"c0b5c45b7df090aabfcd8fdbd4bf522edad03bc6170b82d7b50cc08bfd95b626\": container with ID starting with c0b5c45b7df090aabfcd8fdbd4bf522edad03bc6170b82d7b50cc08bfd95b626 not found: ID does not exist" Jan 31 04:04:01 crc kubenswrapper[4827]: I0131 04:04:01.741850 4827 scope.go:117] "RemoveContainer" containerID="33515604c1dd1b56ca4d3339c3d998b428abb27951ae28b7cabed4c64998e50a" Jan 31 04:04:01 crc kubenswrapper[4827]: E0131 04:04:01.742149 4827 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"33515604c1dd1b56ca4d3339c3d998b428abb27951ae28b7cabed4c64998e50a\": container with ID starting with 33515604c1dd1b56ca4d3339c3d998b428abb27951ae28b7cabed4c64998e50a not found: ID does not exist" containerID="33515604c1dd1b56ca4d3339c3d998b428abb27951ae28b7cabed4c64998e50a" Jan 31 04:04:01 crc kubenswrapper[4827]: I0131 04:04:01.742176 4827 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"33515604c1dd1b56ca4d3339c3d998b428abb27951ae28b7cabed4c64998e50a"} err="failed to get container status \"33515604c1dd1b56ca4d3339c3d998b428abb27951ae28b7cabed4c64998e50a\": rpc error: code = NotFound desc = could not find container 
\"33515604c1dd1b56ca4d3339c3d998b428abb27951ae28b7cabed4c64998e50a\": container with ID starting with 33515604c1dd1b56ca4d3339c3d998b428abb27951ae28b7cabed4c64998e50a not found: ID does not exist" Jan 31 04:04:02 crc kubenswrapper[4827]: I0131 04:04:02.119648 4827 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2240f9c7-2ffe-44cb-9a4b-e350bdd3bfa7" path="/var/lib/kubelet/pods/2240f9c7-2ffe-44cb-9a4b-e350bdd3bfa7/volumes" Jan 31 04:04:02 crc kubenswrapper[4827]: E0131 04:04:02.362487 4827 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 38.102.83.80:53060->38.102.83.80:42075: write tcp 38.102.83.80:53060->38.102.83.80:42075: write: connection reset by peer Jan 31 04:04:02 crc kubenswrapper[4827]: I0131 04:04:02.521133 4827 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-galera-0" Jan 31 04:04:02 crc kubenswrapper[4827]: I0131 04:04:02.521474 4827 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/openstack-galera-0" Jan 31 04:04:02 crc kubenswrapper[4827]: I0131 04:04:02.603845 4827 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/openstack-galera-0" Jan 31 04:04:02 crc kubenswrapper[4827]: I0131 04:04:02.665649 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"e08df3ea-bbcb-4a8e-9de0-39b86fa6672d","Type":"ContainerStarted","Data":"3b4afbf42d70f85633139ad6046f041c27474591f56204551b0d773e440e0bcf"} Jan 31 04:04:02 crc kubenswrapper[4827]: I0131 04:04:02.665706 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"e08df3ea-bbcb-4a8e-9de0-39b86fa6672d","Type":"ContainerStarted","Data":"9ced24090cdb32a624b379ca661e04571232afa1d2c5783bbb17ffac1514a972"} Jan 31 04:04:02 crc kubenswrapper[4827]: I0131 04:04:02.665803 4827 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-northd-0" Jan 31 
04:04:02 crc kubenswrapper[4827]: I0131 04:04:02.693617 4827 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-northd-0" podStartSLOduration=2.650355501 podStartE2EDuration="3.693602322s" podCreationTimestamp="2026-01-31 04:03:59 +0000 UTC" firstStartedPulling="2026-01-31 04:04:00.732393311 +0000 UTC m=+1033.419473760" lastFinishedPulling="2026-01-31 04:04:01.775640132 +0000 UTC m=+1034.462720581" observedRunningTime="2026-01-31 04:04:02.688304089 +0000 UTC m=+1035.375384548" watchObservedRunningTime="2026-01-31 04:04:02.693602322 +0000 UTC m=+1035.380682771" Jan 31 04:04:02 crc kubenswrapper[4827]: I0131 04:04:02.736084 4827 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/openstack-galera-0" Jan 31 04:04:03 crc kubenswrapper[4827]: I0131 04:04:03.856533 4827 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-d367-account-create-update-9kflk"] Jan 31 04:04:03 crc kubenswrapper[4827]: E0131 04:04:03.856822 4827 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2240f9c7-2ffe-44cb-9a4b-e350bdd3bfa7" containerName="dnsmasq-dns" Jan 31 04:04:03 crc kubenswrapper[4827]: I0131 04:04:03.856833 4827 state_mem.go:107] "Deleted CPUSet assignment" podUID="2240f9c7-2ffe-44cb-9a4b-e350bdd3bfa7" containerName="dnsmasq-dns" Jan 31 04:04:03 crc kubenswrapper[4827]: E0131 04:04:03.856851 4827 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2240f9c7-2ffe-44cb-9a4b-e350bdd3bfa7" containerName="init" Jan 31 04:04:03 crc kubenswrapper[4827]: I0131 04:04:03.856856 4827 state_mem.go:107] "Deleted CPUSet assignment" podUID="2240f9c7-2ffe-44cb-9a4b-e350bdd3bfa7" containerName="init" Jan 31 04:04:03 crc kubenswrapper[4827]: I0131 04:04:03.857029 4827 memory_manager.go:354] "RemoveStaleState removing state" podUID="2240f9c7-2ffe-44cb-9a4b-e350bdd3bfa7" containerName="dnsmasq-dns" Jan 31 04:04:03 crc kubenswrapper[4827]: I0131 04:04:03.857524 4827 util.go:30] "No 
sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-d367-account-create-update-9kflk" Jan 31 04:04:03 crc kubenswrapper[4827]: I0131 04:04:03.859645 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-db-secret" Jan 31 04:04:03 crc kubenswrapper[4827]: I0131 04:04:03.868820 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-d367-account-create-update-9kflk"] Jan 31 04:04:03 crc kubenswrapper[4827]: I0131 04:04:03.889971 4827 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/openstack-cell1-galera-0" Jan 31 04:04:03 crc kubenswrapper[4827]: I0131 04:04:03.889995 4827 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-cell1-galera-0" Jan 31 04:04:03 crc kubenswrapper[4827]: I0131 04:04:03.945262 4827 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-db-create-k79vw"] Jan 31 04:04:03 crc kubenswrapper[4827]: I0131 04:04:03.946602 4827 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-create-k79vw" Jan 31 04:04:03 crc kubenswrapper[4827]: I0131 04:04:03.948526 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-create-k79vw"] Jan 31 04:04:03 crc kubenswrapper[4827]: I0131 04:04:03.962691 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/08a102ee-af73-49b8-8c30-094871ea6ae8-operator-scripts\") pod \"keystone-d367-account-create-update-9kflk\" (UID: \"08a102ee-af73-49b8-8c30-094871ea6ae8\") " pod="openstack/keystone-d367-account-create-update-9kflk" Jan 31 04:04:03 crc kubenswrapper[4827]: I0131 04:04:03.963329 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pcbmw\" (UniqueName: \"kubernetes.io/projected/08a102ee-af73-49b8-8c30-094871ea6ae8-kube-api-access-pcbmw\") pod \"keystone-d367-account-create-update-9kflk\" (UID: \"08a102ee-af73-49b8-8c30-094871ea6ae8\") " pod="openstack/keystone-d367-account-create-update-9kflk" Jan 31 04:04:03 crc kubenswrapper[4827]: I0131 04:04:03.970251 4827 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/openstack-cell1-galera-0" Jan 31 04:04:04 crc kubenswrapper[4827]: I0131 04:04:04.067945 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pcbmw\" (UniqueName: \"kubernetes.io/projected/08a102ee-af73-49b8-8c30-094871ea6ae8-kube-api-access-pcbmw\") pod \"keystone-d367-account-create-update-9kflk\" (UID: \"08a102ee-af73-49b8-8c30-094871ea6ae8\") " pod="openstack/keystone-d367-account-create-update-9kflk" Jan 31 04:04:04 crc kubenswrapper[4827]: I0131 04:04:04.068021 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e737ec5b-5af1-4082-86b7-f6571ce8bd36-operator-scripts\") 
pod \"keystone-db-create-k79vw\" (UID: \"e737ec5b-5af1-4082-86b7-f6571ce8bd36\") " pod="openstack/keystone-db-create-k79vw" Jan 31 04:04:04 crc kubenswrapper[4827]: I0131 04:04:04.068048 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/08a102ee-af73-49b8-8c30-094871ea6ae8-operator-scripts\") pod \"keystone-d367-account-create-update-9kflk\" (UID: \"08a102ee-af73-49b8-8c30-094871ea6ae8\") " pod="openstack/keystone-d367-account-create-update-9kflk" Jan 31 04:04:04 crc kubenswrapper[4827]: I0131 04:04:04.068122 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tdpwr\" (UniqueName: \"kubernetes.io/projected/e737ec5b-5af1-4082-86b7-f6571ce8bd36-kube-api-access-tdpwr\") pod \"keystone-db-create-k79vw\" (UID: \"e737ec5b-5af1-4082-86b7-f6571ce8bd36\") " pod="openstack/keystone-db-create-k79vw" Jan 31 04:04:04 crc kubenswrapper[4827]: I0131 04:04:04.070246 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/08a102ee-af73-49b8-8c30-094871ea6ae8-operator-scripts\") pod \"keystone-d367-account-create-update-9kflk\" (UID: \"08a102ee-af73-49b8-8c30-094871ea6ae8\") " pod="openstack/keystone-d367-account-create-update-9kflk" Jan 31 04:04:04 crc kubenswrapper[4827]: I0131 04:04:04.092404 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pcbmw\" (UniqueName: \"kubernetes.io/projected/08a102ee-af73-49b8-8c30-094871ea6ae8-kube-api-access-pcbmw\") pod \"keystone-d367-account-create-update-9kflk\" (UID: \"08a102ee-af73-49b8-8c30-094871ea6ae8\") " pod="openstack/keystone-d367-account-create-update-9kflk" Jan 31 04:04:04 crc kubenswrapper[4827]: I0131 04:04:04.094647 4827 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-db-create-st2vd"] Jan 31 04:04:04 crc kubenswrapper[4827]: I0131 
04:04:04.095930 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-st2vd" Jan 31 04:04:04 crc kubenswrapper[4827]: I0131 04:04:04.101211 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-create-st2vd"] Jan 31 04:04:04 crc kubenswrapper[4827]: I0131 04:04:04.170264 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8kh7t\" (UniqueName: \"kubernetes.io/projected/d641d15b-7085-430a-9adb-69c0e94d52e3-kube-api-access-8kh7t\") pod \"placement-db-create-st2vd\" (UID: \"d641d15b-7085-430a-9adb-69c0e94d52e3\") " pod="openstack/placement-db-create-st2vd" Jan 31 04:04:04 crc kubenswrapper[4827]: I0131 04:04:04.170401 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tdpwr\" (UniqueName: \"kubernetes.io/projected/e737ec5b-5af1-4082-86b7-f6571ce8bd36-kube-api-access-tdpwr\") pod \"keystone-db-create-k79vw\" (UID: \"e737ec5b-5af1-4082-86b7-f6571ce8bd36\") " pod="openstack/keystone-db-create-k79vw" Jan 31 04:04:04 crc kubenswrapper[4827]: I0131 04:04:04.173999 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d641d15b-7085-430a-9adb-69c0e94d52e3-operator-scripts\") pod \"placement-db-create-st2vd\" (UID: \"d641d15b-7085-430a-9adb-69c0e94d52e3\") " pod="openstack/placement-db-create-st2vd" Jan 31 04:04:04 crc kubenswrapper[4827]: I0131 04:04:04.174172 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e737ec5b-5af1-4082-86b7-f6571ce8bd36-operator-scripts\") pod \"keystone-db-create-k79vw\" (UID: \"e737ec5b-5af1-4082-86b7-f6571ce8bd36\") " pod="openstack/keystone-db-create-k79vw" Jan 31 04:04:04 crc kubenswrapper[4827]: I0131 04:04:04.175148 4827 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e737ec5b-5af1-4082-86b7-f6571ce8bd36-operator-scripts\") pod \"keystone-db-create-k79vw\" (UID: \"e737ec5b-5af1-4082-86b7-f6571ce8bd36\") " pod="openstack/keystone-db-create-k79vw" Jan 31 04:04:04 crc kubenswrapper[4827]: I0131 04:04:04.193782 4827 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-5b1a-account-create-update-2dfbx"] Jan 31 04:04:04 crc kubenswrapper[4827]: I0131 04:04:04.195116 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-5b1a-account-create-update-2dfbx" Jan 31 04:04:04 crc kubenswrapper[4827]: I0131 04:04:04.195927 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tdpwr\" (UniqueName: \"kubernetes.io/projected/e737ec5b-5af1-4082-86b7-f6571ce8bd36-kube-api-access-tdpwr\") pod \"keystone-db-create-k79vw\" (UID: \"e737ec5b-5af1-4082-86b7-f6571ce8bd36\") " pod="openstack/keystone-db-create-k79vw" Jan 31 04:04:04 crc kubenswrapper[4827]: I0131 04:04:04.197439 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-db-secret" Jan 31 04:04:04 crc kubenswrapper[4827]: I0131 04:04:04.202350 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-5b1a-account-create-update-2dfbx"] Jan 31 04:04:04 crc kubenswrapper[4827]: I0131 04:04:04.205237 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-d367-account-create-update-9kflk" Jan 31 04:04:04 crc kubenswrapper[4827]: I0131 04:04:04.267130 4827 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-create-k79vw" Jan 31 04:04:04 crc kubenswrapper[4827]: I0131 04:04:04.275497 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8kh7t\" (UniqueName: \"kubernetes.io/projected/d641d15b-7085-430a-9adb-69c0e94d52e3-kube-api-access-8kh7t\") pod \"placement-db-create-st2vd\" (UID: \"d641d15b-7085-430a-9adb-69c0e94d52e3\") " pod="openstack/placement-db-create-st2vd" Jan 31 04:04:04 crc kubenswrapper[4827]: I0131 04:04:04.275592 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f2d19d1f-3276-4409-8071-ccdac4eb4e6c-operator-scripts\") pod \"placement-5b1a-account-create-update-2dfbx\" (UID: \"f2d19d1f-3276-4409-8071-ccdac4eb4e6c\") " pod="openstack/placement-5b1a-account-create-update-2dfbx" Jan 31 04:04:04 crc kubenswrapper[4827]: I0131 04:04:04.275676 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lllf4\" (UniqueName: \"kubernetes.io/projected/f2d19d1f-3276-4409-8071-ccdac4eb4e6c-kube-api-access-lllf4\") pod \"placement-5b1a-account-create-update-2dfbx\" (UID: \"f2d19d1f-3276-4409-8071-ccdac4eb4e6c\") " pod="openstack/placement-5b1a-account-create-update-2dfbx" Jan 31 04:04:04 crc kubenswrapper[4827]: I0131 04:04:04.275714 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d641d15b-7085-430a-9adb-69c0e94d52e3-operator-scripts\") pod \"placement-db-create-st2vd\" (UID: \"d641d15b-7085-430a-9adb-69c0e94d52e3\") " pod="openstack/placement-db-create-st2vd" Jan 31 04:04:04 crc kubenswrapper[4827]: I0131 04:04:04.276397 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d641d15b-7085-430a-9adb-69c0e94d52e3-operator-scripts\") pod 
\"placement-db-create-st2vd\" (UID: \"d641d15b-7085-430a-9adb-69c0e94d52e3\") " pod="openstack/placement-db-create-st2vd" Jan 31 04:04:04 crc kubenswrapper[4827]: I0131 04:04:04.301867 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8kh7t\" (UniqueName: \"kubernetes.io/projected/d641d15b-7085-430a-9adb-69c0e94d52e3-kube-api-access-8kh7t\") pod \"placement-db-create-st2vd\" (UID: \"d641d15b-7085-430a-9adb-69c0e94d52e3\") " pod="openstack/placement-db-create-st2vd" Jan 31 04:04:04 crc kubenswrapper[4827]: I0131 04:04:04.377959 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f2d19d1f-3276-4409-8071-ccdac4eb4e6c-operator-scripts\") pod \"placement-5b1a-account-create-update-2dfbx\" (UID: \"f2d19d1f-3276-4409-8071-ccdac4eb4e6c\") " pod="openstack/placement-5b1a-account-create-update-2dfbx" Jan 31 04:04:04 crc kubenswrapper[4827]: I0131 04:04:04.378042 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lllf4\" (UniqueName: \"kubernetes.io/projected/f2d19d1f-3276-4409-8071-ccdac4eb4e6c-kube-api-access-lllf4\") pod \"placement-5b1a-account-create-update-2dfbx\" (UID: \"f2d19d1f-3276-4409-8071-ccdac4eb4e6c\") " pod="openstack/placement-5b1a-account-create-update-2dfbx" Jan 31 04:04:04 crc kubenswrapper[4827]: I0131 04:04:04.382156 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f2d19d1f-3276-4409-8071-ccdac4eb4e6c-operator-scripts\") pod \"placement-5b1a-account-create-update-2dfbx\" (UID: \"f2d19d1f-3276-4409-8071-ccdac4eb4e6c\") " pod="openstack/placement-5b1a-account-create-update-2dfbx" Jan 31 04:04:04 crc kubenswrapper[4827]: I0131 04:04:04.401291 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lllf4\" (UniqueName: 
\"kubernetes.io/projected/f2d19d1f-3276-4409-8071-ccdac4eb4e6c-kube-api-access-lllf4\") pod \"placement-5b1a-account-create-update-2dfbx\" (UID: \"f2d19d1f-3276-4409-8071-ccdac4eb4e6c\") " pod="openstack/placement-5b1a-account-create-update-2dfbx" Jan 31 04:04:04 crc kubenswrapper[4827]: I0131 04:04:04.446888 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-st2vd" Jan 31 04:04:04 crc kubenswrapper[4827]: I0131 04:04:04.627577 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-5b1a-account-create-update-2dfbx" Jan 31 04:04:04 crc kubenswrapper[4827]: I0131 04:04:04.669990 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-d367-account-create-update-9kflk"] Jan 31 04:04:04 crc kubenswrapper[4827]: W0131 04:04:04.692777 4827 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod08a102ee_af73_49b8_8c30_094871ea6ae8.slice/crio-20011366ac5802fc4961f8bdcc28c914057fe1a7e266c288d2fc6b7be5a09bd2 WatchSource:0}: Error finding container 20011366ac5802fc4961f8bdcc28c914057fe1a7e266c288d2fc6b7be5a09bd2: Status 404 returned error can't find the container with id 20011366ac5802fc4961f8bdcc28c914057fe1a7e266c288d2fc6b7be5a09bd2 Jan 31 04:04:04 crc kubenswrapper[4827]: I0131 04:04:04.700999 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-create-st2vd"] Jan 31 04:04:04 crc kubenswrapper[4827]: W0131 04:04:04.702073 4827 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd641d15b_7085_430a_9adb_69c0e94d52e3.slice/crio-17c37b065f1b3ea28867a393d793aa95be6990ed8585f7cd762e5d8963ae89ba WatchSource:0}: Error finding container 17c37b065f1b3ea28867a393d793aa95be6990ed8585f7cd762e5d8963ae89ba: Status 404 returned error can't find the container with id 
17c37b065f1b3ea28867a393d793aa95be6990ed8585f7cd762e5d8963ae89ba Jan 31 04:04:04 crc kubenswrapper[4827]: I0131 04:04:04.745712 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-create-k79vw"] Jan 31 04:04:04 crc kubenswrapper[4827]: I0131 04:04:04.794073 4827 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/openstack-cell1-galera-0" Jan 31 04:04:05 crc kubenswrapper[4827]: I0131 04:04:05.105252 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-5b1a-account-create-update-2dfbx"] Jan 31 04:04:05 crc kubenswrapper[4827]: I0131 04:04:05.706602 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-st2vd" event={"ID":"d641d15b-7085-430a-9adb-69c0e94d52e3","Type":"ContainerStarted","Data":"17c37b065f1b3ea28867a393d793aa95be6990ed8585f7cd762e5d8963ae89ba"} Jan 31 04:04:05 crc kubenswrapper[4827]: I0131 04:04:05.709490 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-k79vw" event={"ID":"e737ec5b-5af1-4082-86b7-f6571ce8bd36","Type":"ContainerStarted","Data":"93fd163604cc4e9e9df580118b80c79ec45a24548b5ae3f42cdbf7989562f66a"} Jan 31 04:04:05 crc kubenswrapper[4827]: I0131 04:04:05.710739 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-d367-account-create-update-9kflk" event={"ID":"08a102ee-af73-49b8-8c30-094871ea6ae8","Type":"ContainerStarted","Data":"20011366ac5802fc4961f8bdcc28c914057fe1a7e266c288d2fc6b7be5a09bd2"} Jan 31 04:04:05 crc kubenswrapper[4827]: I0131 04:04:05.714216 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-5b1a-account-create-update-2dfbx" event={"ID":"f2d19d1f-3276-4409-8071-ccdac4eb4e6c","Type":"ContainerStarted","Data":"40167229db6e03a179f35c0a389cbf0a7e6f60e1321d0b946693bf4cd33ecf2e"} Jan 31 04:04:06 crc kubenswrapper[4827]: I0131 04:04:06.336782 4827 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openstack/kube-state-metrics-0" Jan 31 04:04:09 crc kubenswrapper[4827]: I0131 04:04:09.321039 4827 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-db-create-787wv"] Jan 31 04:04:09 crc kubenswrapper[4827]: I0131 04:04:09.323513 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-787wv" Jan 31 04:04:09 crc kubenswrapper[4827]: I0131 04:04:09.330392 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-create-787wv"] Jan 31 04:04:09 crc kubenswrapper[4827]: I0131 04:04:09.350033 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fqrlc\" (UniqueName: \"kubernetes.io/projected/8e05cffc-7368-4346-9b36-fbe0c99c2397-kube-api-access-fqrlc\") pod \"glance-db-create-787wv\" (UID: \"8e05cffc-7368-4346-9b36-fbe0c99c2397\") " pod="openstack/glance-db-create-787wv" Jan 31 04:04:09 crc kubenswrapper[4827]: I0131 04:04:09.350090 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8e05cffc-7368-4346-9b36-fbe0c99c2397-operator-scripts\") pod \"glance-db-create-787wv\" (UID: \"8e05cffc-7368-4346-9b36-fbe0c99c2397\") " pod="openstack/glance-db-create-787wv" Jan 31 04:04:09 crc kubenswrapper[4827]: I0131 04:04:09.446292 4827 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-ff35-account-create-update-bcgxx"] Jan 31 04:04:09 crc kubenswrapper[4827]: I0131 04:04:09.448752 4827 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-ff35-account-create-update-bcgxx" Jan 31 04:04:09 crc kubenswrapper[4827]: I0131 04:04:09.451487 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b95b46e8-a0b5-4916-a724-41f7f25f0cd3-operator-scripts\") pod \"glance-ff35-account-create-update-bcgxx\" (UID: \"b95b46e8-a0b5-4916-a724-41f7f25f0cd3\") " pod="openstack/glance-ff35-account-create-update-bcgxx" Jan 31 04:04:09 crc kubenswrapper[4827]: I0131 04:04:09.451576 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fqrlc\" (UniqueName: \"kubernetes.io/projected/8e05cffc-7368-4346-9b36-fbe0c99c2397-kube-api-access-fqrlc\") pod \"glance-db-create-787wv\" (UID: \"8e05cffc-7368-4346-9b36-fbe0c99c2397\") " pod="openstack/glance-db-create-787wv" Jan 31 04:04:09 crc kubenswrapper[4827]: I0131 04:04:09.451631 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c9t4d\" (UniqueName: \"kubernetes.io/projected/b95b46e8-a0b5-4916-a724-41f7f25f0cd3-kube-api-access-c9t4d\") pod \"glance-ff35-account-create-update-bcgxx\" (UID: \"b95b46e8-a0b5-4916-a724-41f7f25f0cd3\") " pod="openstack/glance-ff35-account-create-update-bcgxx" Jan 31 04:04:09 crc kubenswrapper[4827]: I0131 04:04:09.451680 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8e05cffc-7368-4346-9b36-fbe0c99c2397-operator-scripts\") pod \"glance-db-create-787wv\" (UID: \"8e05cffc-7368-4346-9b36-fbe0c99c2397\") " pod="openstack/glance-db-create-787wv" Jan 31 04:04:09 crc kubenswrapper[4827]: I0131 04:04:09.452687 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8e05cffc-7368-4346-9b36-fbe0c99c2397-operator-scripts\") pod 
\"glance-db-create-787wv\" (UID: \"8e05cffc-7368-4346-9b36-fbe0c99c2397\") " pod="openstack/glance-db-create-787wv" Jan 31 04:04:09 crc kubenswrapper[4827]: I0131 04:04:09.452847 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-db-secret" Jan 31 04:04:09 crc kubenswrapper[4827]: I0131 04:04:09.453665 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-ff35-account-create-update-bcgxx"] Jan 31 04:04:09 crc kubenswrapper[4827]: I0131 04:04:09.475513 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fqrlc\" (UniqueName: \"kubernetes.io/projected/8e05cffc-7368-4346-9b36-fbe0c99c2397-kube-api-access-fqrlc\") pod \"glance-db-create-787wv\" (UID: \"8e05cffc-7368-4346-9b36-fbe0c99c2397\") " pod="openstack/glance-db-create-787wv" Jan 31 04:04:09 crc kubenswrapper[4827]: I0131 04:04:09.553546 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b95b46e8-a0b5-4916-a724-41f7f25f0cd3-operator-scripts\") pod \"glance-ff35-account-create-update-bcgxx\" (UID: \"b95b46e8-a0b5-4916-a724-41f7f25f0cd3\") " pod="openstack/glance-ff35-account-create-update-bcgxx" Jan 31 04:04:09 crc kubenswrapper[4827]: I0131 04:04:09.553644 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c9t4d\" (UniqueName: \"kubernetes.io/projected/b95b46e8-a0b5-4916-a724-41f7f25f0cd3-kube-api-access-c9t4d\") pod \"glance-ff35-account-create-update-bcgxx\" (UID: \"b95b46e8-a0b5-4916-a724-41f7f25f0cd3\") " pod="openstack/glance-ff35-account-create-update-bcgxx" Jan 31 04:04:09 crc kubenswrapper[4827]: I0131 04:04:09.554342 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b95b46e8-a0b5-4916-a724-41f7f25f0cd3-operator-scripts\") pod \"glance-ff35-account-create-update-bcgxx\" (UID: 
\"b95b46e8-a0b5-4916-a724-41f7f25f0cd3\") " pod="openstack/glance-ff35-account-create-update-bcgxx" Jan 31 04:04:09 crc kubenswrapper[4827]: I0131 04:04:09.571110 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c9t4d\" (UniqueName: \"kubernetes.io/projected/b95b46e8-a0b5-4916-a724-41f7f25f0cd3-kube-api-access-c9t4d\") pod \"glance-ff35-account-create-update-bcgxx\" (UID: \"b95b46e8-a0b5-4916-a724-41f7f25f0cd3\") " pod="openstack/glance-ff35-account-create-update-bcgxx" Jan 31 04:04:09 crc kubenswrapper[4827]: I0131 04:04:09.644583 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-787wv" Jan 31 04:04:09 crc kubenswrapper[4827]: I0131 04:04:09.771204 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-ff35-account-create-update-bcgxx" Jan 31 04:04:09 crc kubenswrapper[4827]: I0131 04:04:09.977145 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-ff35-account-create-update-bcgxx"] Jan 31 04:04:09 crc kubenswrapper[4827]: W0131 04:04:09.983228 4827 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb95b46e8_a0b5_4916_a724_41f7f25f0cd3.slice/crio-252a2f5f09e8d6c82217dfa71981f19ea4de98f13d819a22ff9a3a70a61b0016 WatchSource:0}: Error finding container 252a2f5f09e8d6c82217dfa71981f19ea4de98f13d819a22ff9a3a70a61b0016: Status 404 returned error can't find the container with id 252a2f5f09e8d6c82217dfa71981f19ea4de98f13d819a22ff9a3a70a61b0016 Jan 31 04:04:10 crc kubenswrapper[4827]: I0131 04:04:10.043262 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-create-787wv"] Jan 31 04:04:10 crc kubenswrapper[4827]: W0131 04:04:10.048892 4827 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8e05cffc_7368_4346_9b36_fbe0c99c2397.slice/crio-9d3176c2eb6f6c2b1c09a96f60e6dfb230bca6f766dc328381d12a5c2d6457f2 WatchSource:0}: Error finding container 9d3176c2eb6f6c2b1c09a96f60e6dfb230bca6f766dc328381d12a5c2d6457f2: Status 404 returned error can't find the container with id 9d3176c2eb6f6c2b1c09a96f60e6dfb230bca6f766dc328381d12a5c2d6457f2 Jan 31 04:04:10 crc kubenswrapper[4827]: I0131 04:04:10.750619 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-d367-account-create-update-9kflk" event={"ID":"08a102ee-af73-49b8-8c30-094871ea6ae8","Type":"ContainerStarted","Data":"0a79feeb727ae99032b7fc3b9e16586a3ef594f1b2389c6e0cd409f292a5c7f4"} Jan 31 04:04:10 crc kubenswrapper[4827]: I0131 04:04:10.751770 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-787wv" event={"ID":"8e05cffc-7368-4346-9b36-fbe0c99c2397","Type":"ContainerStarted","Data":"9d3176c2eb6f6c2b1c09a96f60e6dfb230bca6f766dc328381d12a5c2d6457f2"} Jan 31 04:04:10 crc kubenswrapper[4827]: I0131 04:04:10.753630 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-ff35-account-create-update-bcgxx" event={"ID":"b95b46e8-a0b5-4916-a724-41f7f25f0cd3","Type":"ContainerStarted","Data":"252a2f5f09e8d6c82217dfa71981f19ea4de98f13d819a22ff9a3a70a61b0016"} Jan 31 04:04:10 crc kubenswrapper[4827]: I0131 04:04:10.755428 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-st2vd" event={"ID":"d641d15b-7085-430a-9adb-69c0e94d52e3","Type":"ContainerStarted","Data":"e0b19c9fa7ed0479190b4ce86c1c9fbafd19b330aed0fdca7cc4835ac2a362d5"} Jan 31 04:04:11 crc kubenswrapper[4827]: I0131 04:04:11.147964 4827 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/root-account-create-update-hk2cj"] Jan 31 04:04:11 crc kubenswrapper[4827]: I0131 04:04:11.149264 4827 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-hk2cj" Jan 31 04:04:11 crc kubenswrapper[4827]: I0131 04:04:11.151822 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-mariadb-root-db-secret" Jan 31 04:04:11 crc kubenswrapper[4827]: I0131 04:04:11.154969 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-hk2cj"] Jan 31 04:04:11 crc kubenswrapper[4827]: I0131 04:04:11.186553 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jhhpw\" (UniqueName: \"kubernetes.io/projected/f8fe016d-b5e1-49a0-8f21-583a3fcbfe26-kube-api-access-jhhpw\") pod \"root-account-create-update-hk2cj\" (UID: \"f8fe016d-b5e1-49a0-8f21-583a3fcbfe26\") " pod="openstack/root-account-create-update-hk2cj" Jan 31 04:04:11 crc kubenswrapper[4827]: I0131 04:04:11.186678 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f8fe016d-b5e1-49a0-8f21-583a3fcbfe26-operator-scripts\") pod \"root-account-create-update-hk2cj\" (UID: \"f8fe016d-b5e1-49a0-8f21-583a3fcbfe26\") " pod="openstack/root-account-create-update-hk2cj" Jan 31 04:04:11 crc kubenswrapper[4827]: I0131 04:04:11.288106 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jhhpw\" (UniqueName: \"kubernetes.io/projected/f8fe016d-b5e1-49a0-8f21-583a3fcbfe26-kube-api-access-jhhpw\") pod \"root-account-create-update-hk2cj\" (UID: \"f8fe016d-b5e1-49a0-8f21-583a3fcbfe26\") " pod="openstack/root-account-create-update-hk2cj" Jan 31 04:04:11 crc kubenswrapper[4827]: I0131 04:04:11.288255 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f8fe016d-b5e1-49a0-8f21-583a3fcbfe26-operator-scripts\") pod \"root-account-create-update-hk2cj\" (UID: 
\"f8fe016d-b5e1-49a0-8f21-583a3fcbfe26\") " pod="openstack/root-account-create-update-hk2cj" Jan 31 04:04:11 crc kubenswrapper[4827]: I0131 04:04:11.289055 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f8fe016d-b5e1-49a0-8f21-583a3fcbfe26-operator-scripts\") pod \"root-account-create-update-hk2cj\" (UID: \"f8fe016d-b5e1-49a0-8f21-583a3fcbfe26\") " pod="openstack/root-account-create-update-hk2cj" Jan 31 04:04:11 crc kubenswrapper[4827]: I0131 04:04:11.306565 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jhhpw\" (UniqueName: \"kubernetes.io/projected/f8fe016d-b5e1-49a0-8f21-583a3fcbfe26-kube-api-access-jhhpw\") pod \"root-account-create-update-hk2cj\" (UID: \"f8fe016d-b5e1-49a0-8f21-583a3fcbfe26\") " pod="openstack/root-account-create-update-hk2cj" Jan 31 04:04:11 crc kubenswrapper[4827]: I0131 04:04:11.468264 4827 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-hk2cj" Jan 31 04:04:11 crc kubenswrapper[4827]: I0131 04:04:11.763026 4827 generic.go:334] "Generic (PLEG): container finished" podID="b95b46e8-a0b5-4916-a724-41f7f25f0cd3" containerID="c1a8c1dfd49df110b86e5b5f437cb6c87b475cdd130fa713a2e2b1edcb64dfb6" exitCode=0 Jan 31 04:04:11 crc kubenswrapper[4827]: I0131 04:04:11.763082 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-ff35-account-create-update-bcgxx" event={"ID":"b95b46e8-a0b5-4916-a724-41f7f25f0cd3","Type":"ContainerDied","Data":"c1a8c1dfd49df110b86e5b5f437cb6c87b475cdd130fa713a2e2b1edcb64dfb6"} Jan 31 04:04:11 crc kubenswrapper[4827]: I0131 04:04:11.765917 4827 generic.go:334] "Generic (PLEG): container finished" podID="d641d15b-7085-430a-9adb-69c0e94d52e3" containerID="e0b19c9fa7ed0479190b4ce86c1c9fbafd19b330aed0fdca7cc4835ac2a362d5" exitCode=0 Jan 31 04:04:11 crc kubenswrapper[4827]: I0131 04:04:11.765983 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-st2vd" event={"ID":"d641d15b-7085-430a-9adb-69c0e94d52e3","Type":"ContainerDied","Data":"e0b19c9fa7ed0479190b4ce86c1c9fbafd19b330aed0fdca7cc4835ac2a362d5"} Jan 31 04:04:11 crc kubenswrapper[4827]: I0131 04:04:11.767716 4827 generic.go:334] "Generic (PLEG): container finished" podID="e737ec5b-5af1-4082-86b7-f6571ce8bd36" containerID="d610e0ad6beb400271025f49fff576e77fa1ce0ac6b5d971fe67ee33e8800f43" exitCode=0 Jan 31 04:04:11 crc kubenswrapper[4827]: I0131 04:04:11.767767 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-k79vw" event={"ID":"e737ec5b-5af1-4082-86b7-f6571ce8bd36","Type":"ContainerDied","Data":"d610e0ad6beb400271025f49fff576e77fa1ce0ac6b5d971fe67ee33e8800f43"} Jan 31 04:04:11 crc kubenswrapper[4827]: I0131 04:04:11.769364 4827 generic.go:334] "Generic (PLEG): container finished" podID="08a102ee-af73-49b8-8c30-094871ea6ae8" 
containerID="0a79feeb727ae99032b7fc3b9e16586a3ef594f1b2389c6e0cd409f292a5c7f4" exitCode=0 Jan 31 04:04:11 crc kubenswrapper[4827]: I0131 04:04:11.769456 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-d367-account-create-update-9kflk" event={"ID":"08a102ee-af73-49b8-8c30-094871ea6ae8","Type":"ContainerDied","Data":"0a79feeb727ae99032b7fc3b9e16586a3ef594f1b2389c6e0cd409f292a5c7f4"} Jan 31 04:04:11 crc kubenswrapper[4827]: I0131 04:04:11.771147 4827 generic.go:334] "Generic (PLEG): container finished" podID="8e05cffc-7368-4346-9b36-fbe0c99c2397" containerID="bb3f3f878a6228d0abf9de95bac3285472ae598fceb10d256bbb1004ef2408f8" exitCode=0 Jan 31 04:04:11 crc kubenswrapper[4827]: I0131 04:04:11.771201 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-787wv" event={"ID":"8e05cffc-7368-4346-9b36-fbe0c99c2397","Type":"ContainerDied","Data":"bb3f3f878a6228d0abf9de95bac3285472ae598fceb10d256bbb1004ef2408f8"} Jan 31 04:04:11 crc kubenswrapper[4827]: I0131 04:04:11.773340 4827 generic.go:334] "Generic (PLEG): container finished" podID="f2d19d1f-3276-4409-8071-ccdac4eb4e6c" containerID="aa0eb20ba16e9776433901ea2f6918809c86f50adc9a419017677f88edcd976c" exitCode=0 Jan 31 04:04:11 crc kubenswrapper[4827]: I0131 04:04:11.773423 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-5b1a-account-create-update-2dfbx" event={"ID":"f2d19d1f-3276-4409-8071-ccdac4eb4e6c","Type":"ContainerDied","Data":"aa0eb20ba16e9776433901ea2f6918809c86f50adc9a419017677f88edcd976c"} Jan 31 04:04:11 crc kubenswrapper[4827]: I0131 04:04:11.881478 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-hk2cj"] Jan 31 04:04:11 crc kubenswrapper[4827]: W0131 04:04:11.885473 4827 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf8fe016d_b5e1_49a0_8f21_583a3fcbfe26.slice/crio-720ceefb33ad004b2d27c53f4fb31fad5f7340573a111ebee5717be5c9f76dad WatchSource:0}: Error finding container 720ceefb33ad004b2d27c53f4fb31fad5f7340573a111ebee5717be5c9f76dad: Status 404 returned error can't find the container with id 720ceefb33ad004b2d27c53f4fb31fad5f7340573a111ebee5717be5c9f76dad Jan 31 04:04:12 crc kubenswrapper[4827]: I0131 04:04:12.781672 4827 generic.go:334] "Generic (PLEG): container finished" podID="f8fe016d-b5e1-49a0-8f21-583a3fcbfe26" containerID="eaca7c5fe002a871bef4d3e604059e1cd3aea64dd58015175cd887eb1497b35c" exitCode=0 Jan 31 04:04:12 crc kubenswrapper[4827]: I0131 04:04:12.781816 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-hk2cj" event={"ID":"f8fe016d-b5e1-49a0-8f21-583a3fcbfe26","Type":"ContainerDied","Data":"eaca7c5fe002a871bef4d3e604059e1cd3aea64dd58015175cd887eb1497b35c"} Jan 31 04:04:12 crc kubenswrapper[4827]: I0131 04:04:12.781863 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-hk2cj" event={"ID":"f8fe016d-b5e1-49a0-8f21-583a3fcbfe26","Type":"ContainerStarted","Data":"720ceefb33ad004b2d27c53f4fb31fad5f7340573a111ebee5717be5c9f76dad"} Jan 31 04:04:13 crc kubenswrapper[4827]: I0131 04:04:13.322269 4827 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-5b1a-account-create-update-2dfbx" Jan 31 04:04:13 crc kubenswrapper[4827]: I0131 04:04:13.323272 4827 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-d367-account-create-update-9kflk" Jan 31 04:04:13 crc kubenswrapper[4827]: I0131 04:04:13.329640 4827 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-create-k79vw" Jan 31 04:04:13 crc kubenswrapper[4827]: I0131 04:04:13.381712 4827 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-st2vd" Jan 31 04:04:13 crc kubenswrapper[4827]: I0131 04:04:13.463040 4827 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-787wv" Jan 31 04:04:13 crc kubenswrapper[4827]: I0131 04:04:13.469830 4827 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-ff35-account-create-update-bcgxx" Jan 31 04:04:13 crc kubenswrapper[4827]: I0131 04:04:13.520510 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f2d19d1f-3276-4409-8071-ccdac4eb4e6c-operator-scripts\") pod \"f2d19d1f-3276-4409-8071-ccdac4eb4e6c\" (UID: \"f2d19d1f-3276-4409-8071-ccdac4eb4e6c\") " Jan 31 04:04:13 crc kubenswrapper[4827]: I0131 04:04:13.520594 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e737ec5b-5af1-4082-86b7-f6571ce8bd36-operator-scripts\") pod \"e737ec5b-5af1-4082-86b7-f6571ce8bd36\" (UID: \"e737ec5b-5af1-4082-86b7-f6571ce8bd36\") " Jan 31 04:04:13 crc kubenswrapper[4827]: I0131 04:04:13.520637 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d641d15b-7085-430a-9adb-69c0e94d52e3-operator-scripts\") pod \"d641d15b-7085-430a-9adb-69c0e94d52e3\" (UID: \"d641d15b-7085-430a-9adb-69c0e94d52e3\") " Jan 31 04:04:13 crc kubenswrapper[4827]: I0131 04:04:13.520696 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tdpwr\" (UniqueName: \"kubernetes.io/projected/e737ec5b-5af1-4082-86b7-f6571ce8bd36-kube-api-access-tdpwr\") pod 
\"e737ec5b-5af1-4082-86b7-f6571ce8bd36\" (UID: \"e737ec5b-5af1-4082-86b7-f6571ce8bd36\") " Jan 31 04:04:13 crc kubenswrapper[4827]: I0131 04:04:13.520770 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lllf4\" (UniqueName: \"kubernetes.io/projected/f2d19d1f-3276-4409-8071-ccdac4eb4e6c-kube-api-access-lllf4\") pod \"f2d19d1f-3276-4409-8071-ccdac4eb4e6c\" (UID: \"f2d19d1f-3276-4409-8071-ccdac4eb4e6c\") " Jan 31 04:04:13 crc kubenswrapper[4827]: I0131 04:04:13.520834 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8kh7t\" (UniqueName: \"kubernetes.io/projected/d641d15b-7085-430a-9adb-69c0e94d52e3-kube-api-access-8kh7t\") pod \"d641d15b-7085-430a-9adb-69c0e94d52e3\" (UID: \"d641d15b-7085-430a-9adb-69c0e94d52e3\") " Jan 31 04:04:13 crc kubenswrapper[4827]: I0131 04:04:13.521083 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/08a102ee-af73-49b8-8c30-094871ea6ae8-operator-scripts\") pod \"08a102ee-af73-49b8-8c30-094871ea6ae8\" (UID: \"08a102ee-af73-49b8-8c30-094871ea6ae8\") " Jan 31 04:04:13 crc kubenswrapper[4827]: I0131 04:04:13.521138 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pcbmw\" (UniqueName: \"kubernetes.io/projected/08a102ee-af73-49b8-8c30-094871ea6ae8-kube-api-access-pcbmw\") pod \"08a102ee-af73-49b8-8c30-094871ea6ae8\" (UID: \"08a102ee-af73-49b8-8c30-094871ea6ae8\") " Jan 31 04:04:13 crc kubenswrapper[4827]: I0131 04:04:13.521622 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e737ec5b-5af1-4082-86b7-f6571ce8bd36-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "e737ec5b-5af1-4082-86b7-f6571ce8bd36" (UID: "e737ec5b-5af1-4082-86b7-f6571ce8bd36"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 04:04:13 crc kubenswrapper[4827]: I0131 04:04:13.521618 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f2d19d1f-3276-4409-8071-ccdac4eb4e6c-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "f2d19d1f-3276-4409-8071-ccdac4eb4e6c" (UID: "f2d19d1f-3276-4409-8071-ccdac4eb4e6c"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 04:04:13 crc kubenswrapper[4827]: I0131 04:04:13.521727 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/08a102ee-af73-49b8-8c30-094871ea6ae8-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "08a102ee-af73-49b8-8c30-094871ea6ae8" (UID: "08a102ee-af73-49b8-8c30-094871ea6ae8"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 04:04:13 crc kubenswrapper[4827]: I0131 04:04:13.521739 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d641d15b-7085-430a-9adb-69c0e94d52e3-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "d641d15b-7085-430a-9adb-69c0e94d52e3" (UID: "d641d15b-7085-430a-9adb-69c0e94d52e3"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 04:04:13 crc kubenswrapper[4827]: I0131 04:04:13.527303 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d641d15b-7085-430a-9adb-69c0e94d52e3-kube-api-access-8kh7t" (OuterVolumeSpecName: "kube-api-access-8kh7t") pod "d641d15b-7085-430a-9adb-69c0e94d52e3" (UID: "d641d15b-7085-430a-9adb-69c0e94d52e3"). InnerVolumeSpecName "kube-api-access-8kh7t". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 04:04:13 crc kubenswrapper[4827]: I0131 04:04:13.527454 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e737ec5b-5af1-4082-86b7-f6571ce8bd36-kube-api-access-tdpwr" (OuterVolumeSpecName: "kube-api-access-tdpwr") pod "e737ec5b-5af1-4082-86b7-f6571ce8bd36" (UID: "e737ec5b-5af1-4082-86b7-f6571ce8bd36"). InnerVolumeSpecName "kube-api-access-tdpwr". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 04:04:13 crc kubenswrapper[4827]: I0131 04:04:13.527535 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/08a102ee-af73-49b8-8c30-094871ea6ae8-kube-api-access-pcbmw" (OuterVolumeSpecName: "kube-api-access-pcbmw") pod "08a102ee-af73-49b8-8c30-094871ea6ae8" (UID: "08a102ee-af73-49b8-8c30-094871ea6ae8"). InnerVolumeSpecName "kube-api-access-pcbmw". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 04:04:13 crc kubenswrapper[4827]: I0131 04:04:13.532626 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f2d19d1f-3276-4409-8071-ccdac4eb4e6c-kube-api-access-lllf4" (OuterVolumeSpecName: "kube-api-access-lllf4") pod "f2d19d1f-3276-4409-8071-ccdac4eb4e6c" (UID: "f2d19d1f-3276-4409-8071-ccdac4eb4e6c"). InnerVolumeSpecName "kube-api-access-lllf4". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 04:04:13 crc kubenswrapper[4827]: I0131 04:04:13.622254 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8e05cffc-7368-4346-9b36-fbe0c99c2397-operator-scripts\") pod \"8e05cffc-7368-4346-9b36-fbe0c99c2397\" (UID: \"8e05cffc-7368-4346-9b36-fbe0c99c2397\") " Jan 31 04:04:13 crc kubenswrapper[4827]: I0131 04:04:13.622313 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fqrlc\" (UniqueName: \"kubernetes.io/projected/8e05cffc-7368-4346-9b36-fbe0c99c2397-kube-api-access-fqrlc\") pod \"8e05cffc-7368-4346-9b36-fbe0c99c2397\" (UID: \"8e05cffc-7368-4346-9b36-fbe0c99c2397\") " Jan 31 04:04:13 crc kubenswrapper[4827]: I0131 04:04:13.622381 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b95b46e8-a0b5-4916-a724-41f7f25f0cd3-operator-scripts\") pod \"b95b46e8-a0b5-4916-a724-41f7f25f0cd3\" (UID: \"b95b46e8-a0b5-4916-a724-41f7f25f0cd3\") " Jan 31 04:04:13 crc kubenswrapper[4827]: I0131 04:04:13.622427 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-c9t4d\" (UniqueName: \"kubernetes.io/projected/b95b46e8-a0b5-4916-a724-41f7f25f0cd3-kube-api-access-c9t4d\") pod \"b95b46e8-a0b5-4916-a724-41f7f25f0cd3\" (UID: \"b95b46e8-a0b5-4916-a724-41f7f25f0cd3\") " Jan 31 04:04:13 crc kubenswrapper[4827]: I0131 04:04:13.622748 4827 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/08a102ee-af73-49b8-8c30-094871ea6ae8-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 31 04:04:13 crc kubenswrapper[4827]: I0131 04:04:13.622766 4827 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pcbmw\" (UniqueName: 
\"kubernetes.io/projected/08a102ee-af73-49b8-8c30-094871ea6ae8-kube-api-access-pcbmw\") on node \"crc\" DevicePath \"\"" Jan 31 04:04:13 crc kubenswrapper[4827]: I0131 04:04:13.622776 4827 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f2d19d1f-3276-4409-8071-ccdac4eb4e6c-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 31 04:04:13 crc kubenswrapper[4827]: I0131 04:04:13.622786 4827 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e737ec5b-5af1-4082-86b7-f6571ce8bd36-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 31 04:04:13 crc kubenswrapper[4827]: I0131 04:04:13.622794 4827 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d641d15b-7085-430a-9adb-69c0e94d52e3-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 31 04:04:13 crc kubenswrapper[4827]: I0131 04:04:13.622803 4827 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tdpwr\" (UniqueName: \"kubernetes.io/projected/e737ec5b-5af1-4082-86b7-f6571ce8bd36-kube-api-access-tdpwr\") on node \"crc\" DevicePath \"\"" Jan 31 04:04:13 crc kubenswrapper[4827]: I0131 04:04:13.622813 4827 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lllf4\" (UniqueName: \"kubernetes.io/projected/f2d19d1f-3276-4409-8071-ccdac4eb4e6c-kube-api-access-lllf4\") on node \"crc\" DevicePath \"\"" Jan 31 04:04:13 crc kubenswrapper[4827]: I0131 04:04:13.622822 4827 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8kh7t\" (UniqueName: \"kubernetes.io/projected/d641d15b-7085-430a-9adb-69c0e94d52e3-kube-api-access-8kh7t\") on node \"crc\" DevicePath \"\"" Jan 31 04:04:13 crc kubenswrapper[4827]: I0131 04:04:13.623074 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/configmap/8e05cffc-7368-4346-9b36-fbe0c99c2397-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "8e05cffc-7368-4346-9b36-fbe0c99c2397" (UID: "8e05cffc-7368-4346-9b36-fbe0c99c2397"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 04:04:13 crc kubenswrapper[4827]: I0131 04:04:13.623442 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b95b46e8-a0b5-4916-a724-41f7f25f0cd3-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "b95b46e8-a0b5-4916-a724-41f7f25f0cd3" (UID: "b95b46e8-a0b5-4916-a724-41f7f25f0cd3"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 04:04:13 crc kubenswrapper[4827]: I0131 04:04:13.625414 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8e05cffc-7368-4346-9b36-fbe0c99c2397-kube-api-access-fqrlc" (OuterVolumeSpecName: "kube-api-access-fqrlc") pod "8e05cffc-7368-4346-9b36-fbe0c99c2397" (UID: "8e05cffc-7368-4346-9b36-fbe0c99c2397"). InnerVolumeSpecName "kube-api-access-fqrlc". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 04:04:13 crc kubenswrapper[4827]: I0131 04:04:13.625899 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b95b46e8-a0b5-4916-a724-41f7f25f0cd3-kube-api-access-c9t4d" (OuterVolumeSpecName: "kube-api-access-c9t4d") pod "b95b46e8-a0b5-4916-a724-41f7f25f0cd3" (UID: "b95b46e8-a0b5-4916-a724-41f7f25f0cd3"). InnerVolumeSpecName "kube-api-access-c9t4d". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 04:04:13 crc kubenswrapper[4827]: I0131 04:04:13.724079 4827 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b95b46e8-a0b5-4916-a724-41f7f25f0cd3-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 31 04:04:13 crc kubenswrapper[4827]: I0131 04:04:13.724117 4827 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-c9t4d\" (UniqueName: \"kubernetes.io/projected/b95b46e8-a0b5-4916-a724-41f7f25f0cd3-kube-api-access-c9t4d\") on node \"crc\" DevicePath \"\"" Jan 31 04:04:13 crc kubenswrapper[4827]: I0131 04:04:13.724129 4827 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8e05cffc-7368-4346-9b36-fbe0c99c2397-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 31 04:04:13 crc kubenswrapper[4827]: I0131 04:04:13.724141 4827 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fqrlc\" (UniqueName: \"kubernetes.io/projected/8e05cffc-7368-4346-9b36-fbe0c99c2397-kube-api-access-fqrlc\") on node \"crc\" DevicePath \"\"" Jan 31 04:04:13 crc kubenswrapper[4827]: I0131 04:04:13.803796 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-k79vw" event={"ID":"e737ec5b-5af1-4082-86b7-f6571ce8bd36","Type":"ContainerDied","Data":"93fd163604cc4e9e9df580118b80c79ec45a24548b5ae3f42cdbf7989562f66a"} Jan 31 04:04:13 crc kubenswrapper[4827]: I0131 04:04:13.803841 4827 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="93fd163604cc4e9e9df580118b80c79ec45a24548b5ae3f42cdbf7989562f66a" Jan 31 04:04:13 crc kubenswrapper[4827]: I0131 04:04:13.803870 4827 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-create-k79vw" Jan 31 04:04:13 crc kubenswrapper[4827]: I0131 04:04:13.805453 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-d367-account-create-update-9kflk" event={"ID":"08a102ee-af73-49b8-8c30-094871ea6ae8","Type":"ContainerDied","Data":"20011366ac5802fc4961f8bdcc28c914057fe1a7e266c288d2fc6b7be5a09bd2"} Jan 31 04:04:13 crc kubenswrapper[4827]: I0131 04:04:13.805475 4827 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="20011366ac5802fc4961f8bdcc28c914057fe1a7e266c288d2fc6b7be5a09bd2" Jan 31 04:04:13 crc kubenswrapper[4827]: I0131 04:04:13.805526 4827 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-d367-account-create-update-9kflk" Jan 31 04:04:13 crc kubenswrapper[4827]: I0131 04:04:13.808130 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-787wv" event={"ID":"8e05cffc-7368-4346-9b36-fbe0c99c2397","Type":"ContainerDied","Data":"9d3176c2eb6f6c2b1c09a96f60e6dfb230bca6f766dc328381d12a5c2d6457f2"} Jan 31 04:04:13 crc kubenswrapper[4827]: I0131 04:04:13.808158 4827 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9d3176c2eb6f6c2b1c09a96f60e6dfb230bca6f766dc328381d12a5c2d6457f2" Jan 31 04:04:13 crc kubenswrapper[4827]: I0131 04:04:13.808162 4827 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-787wv" Jan 31 04:04:13 crc kubenswrapper[4827]: I0131 04:04:13.809227 4827 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-5b1a-account-create-update-2dfbx" Jan 31 04:04:13 crc kubenswrapper[4827]: I0131 04:04:13.809844 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-5b1a-account-create-update-2dfbx" event={"ID":"f2d19d1f-3276-4409-8071-ccdac4eb4e6c","Type":"ContainerDied","Data":"40167229db6e03a179f35c0a389cbf0a7e6f60e1321d0b946693bf4cd33ecf2e"} Jan 31 04:04:13 crc kubenswrapper[4827]: I0131 04:04:13.809915 4827 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="40167229db6e03a179f35c0a389cbf0a7e6f60e1321d0b946693bf4cd33ecf2e" Jan 31 04:04:13 crc kubenswrapper[4827]: I0131 04:04:13.810808 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-ff35-account-create-update-bcgxx" event={"ID":"b95b46e8-a0b5-4916-a724-41f7f25f0cd3","Type":"ContainerDied","Data":"252a2f5f09e8d6c82217dfa71981f19ea4de98f13d819a22ff9a3a70a61b0016"} Jan 31 04:04:13 crc kubenswrapper[4827]: I0131 04:04:13.810831 4827 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="252a2f5f09e8d6c82217dfa71981f19ea4de98f13d819a22ff9a3a70a61b0016" Jan 31 04:04:13 crc kubenswrapper[4827]: I0131 04:04:13.810874 4827 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-ff35-account-create-update-bcgxx" Jan 31 04:04:13 crc kubenswrapper[4827]: I0131 04:04:13.816002 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-st2vd" event={"ID":"d641d15b-7085-430a-9adb-69c0e94d52e3","Type":"ContainerDied","Data":"17c37b065f1b3ea28867a393d793aa95be6990ed8585f7cd762e5d8963ae89ba"} Jan 31 04:04:13 crc kubenswrapper[4827]: I0131 04:04:13.816037 4827 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="17c37b065f1b3ea28867a393d793aa95be6990ed8585f7cd762e5d8963ae89ba" Jan 31 04:04:13 crc kubenswrapper[4827]: I0131 04:04:13.816072 4827 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-st2vd" Jan 31 04:04:14 crc kubenswrapper[4827]: I0131 04:04:14.109393 4827 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-hk2cj" Jan 31 04:04:14 crc kubenswrapper[4827]: I0131 04:04:14.233508 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jhhpw\" (UniqueName: \"kubernetes.io/projected/f8fe016d-b5e1-49a0-8f21-583a3fcbfe26-kube-api-access-jhhpw\") pod \"f8fe016d-b5e1-49a0-8f21-583a3fcbfe26\" (UID: \"f8fe016d-b5e1-49a0-8f21-583a3fcbfe26\") " Jan 31 04:04:14 crc kubenswrapper[4827]: I0131 04:04:14.233710 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f8fe016d-b5e1-49a0-8f21-583a3fcbfe26-operator-scripts\") pod \"f8fe016d-b5e1-49a0-8f21-583a3fcbfe26\" (UID: \"f8fe016d-b5e1-49a0-8f21-583a3fcbfe26\") " Jan 31 04:04:14 crc kubenswrapper[4827]: I0131 04:04:14.234345 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f8fe016d-b5e1-49a0-8f21-583a3fcbfe26-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod 
"f8fe016d-b5e1-49a0-8f21-583a3fcbfe26" (UID: "f8fe016d-b5e1-49a0-8f21-583a3fcbfe26"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 04:04:14 crc kubenswrapper[4827]: I0131 04:04:14.237043 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f8fe016d-b5e1-49a0-8f21-583a3fcbfe26-kube-api-access-jhhpw" (OuterVolumeSpecName: "kube-api-access-jhhpw") pod "f8fe016d-b5e1-49a0-8f21-583a3fcbfe26" (UID: "f8fe016d-b5e1-49a0-8f21-583a3fcbfe26"). InnerVolumeSpecName "kube-api-access-jhhpw". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 04:04:14 crc kubenswrapper[4827]: I0131 04:04:14.335267 4827 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jhhpw\" (UniqueName: \"kubernetes.io/projected/f8fe016d-b5e1-49a0-8f21-583a3fcbfe26-kube-api-access-jhhpw\") on node \"crc\" DevicePath \"\"" Jan 31 04:04:14 crc kubenswrapper[4827]: I0131 04:04:14.335314 4827 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f8fe016d-b5e1-49a0-8f21-583a3fcbfe26-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 31 04:04:14 crc kubenswrapper[4827]: I0131 04:04:14.826668 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-hk2cj" event={"ID":"f8fe016d-b5e1-49a0-8f21-583a3fcbfe26","Type":"ContainerDied","Data":"720ceefb33ad004b2d27c53f4fb31fad5f7340573a111ebee5717be5c9f76dad"} Jan 31 04:04:14 crc kubenswrapper[4827]: I0131 04:04:14.826703 4827 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="720ceefb33ad004b2d27c53f4fb31fad5f7340573a111ebee5717be5c9f76dad" Jan 31 04:04:14 crc kubenswrapper[4827]: I0131 04:04:14.827803 4827 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-hk2cj" Jan 31 04:04:17 crc kubenswrapper[4827]: I0131 04:04:17.622861 4827 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/root-account-create-update-hk2cj"] Jan 31 04:04:17 crc kubenswrapper[4827]: I0131 04:04:17.636521 4827 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/root-account-create-update-hk2cj"] Jan 31 04:04:18 crc kubenswrapper[4827]: I0131 04:04:18.127110 4827 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f8fe016d-b5e1-49a0-8f21-583a3fcbfe26" path="/var/lib/kubelet/pods/f8fe016d-b5e1-49a0-8f21-583a3fcbfe26/volumes" Jan 31 04:04:19 crc kubenswrapper[4827]: I0131 04:04:19.628762 4827 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-db-sync-k45zm"] Jan 31 04:04:19 crc kubenswrapper[4827]: E0131 04:04:19.629349 4827 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b95b46e8-a0b5-4916-a724-41f7f25f0cd3" containerName="mariadb-account-create-update" Jan 31 04:04:19 crc kubenswrapper[4827]: I0131 04:04:19.629362 4827 state_mem.go:107] "Deleted CPUSet assignment" podUID="b95b46e8-a0b5-4916-a724-41f7f25f0cd3" containerName="mariadb-account-create-update" Jan 31 04:04:19 crc kubenswrapper[4827]: E0131 04:04:19.629379 4827 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d641d15b-7085-430a-9adb-69c0e94d52e3" containerName="mariadb-database-create" Jan 31 04:04:19 crc kubenswrapper[4827]: I0131 04:04:19.629387 4827 state_mem.go:107] "Deleted CPUSet assignment" podUID="d641d15b-7085-430a-9adb-69c0e94d52e3" containerName="mariadb-database-create" Jan 31 04:04:19 crc kubenswrapper[4827]: E0131 04:04:19.629402 4827 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e737ec5b-5af1-4082-86b7-f6571ce8bd36" containerName="mariadb-database-create" Jan 31 04:04:19 crc kubenswrapper[4827]: I0131 04:04:19.629411 4827 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="e737ec5b-5af1-4082-86b7-f6571ce8bd36" containerName="mariadb-database-create" Jan 31 04:04:19 crc kubenswrapper[4827]: E0131 04:04:19.629423 4827 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="08a102ee-af73-49b8-8c30-094871ea6ae8" containerName="mariadb-account-create-update" Jan 31 04:04:19 crc kubenswrapper[4827]: I0131 04:04:19.629431 4827 state_mem.go:107] "Deleted CPUSet assignment" podUID="08a102ee-af73-49b8-8c30-094871ea6ae8" containerName="mariadb-account-create-update" Jan 31 04:04:19 crc kubenswrapper[4827]: E0131 04:04:19.629439 4827 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f2d19d1f-3276-4409-8071-ccdac4eb4e6c" containerName="mariadb-account-create-update" Jan 31 04:04:19 crc kubenswrapper[4827]: I0131 04:04:19.629445 4827 state_mem.go:107] "Deleted CPUSet assignment" podUID="f2d19d1f-3276-4409-8071-ccdac4eb4e6c" containerName="mariadb-account-create-update" Jan 31 04:04:19 crc kubenswrapper[4827]: E0131 04:04:19.629454 4827 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f8fe016d-b5e1-49a0-8f21-583a3fcbfe26" containerName="mariadb-account-create-update" Jan 31 04:04:19 crc kubenswrapper[4827]: I0131 04:04:19.629459 4827 state_mem.go:107] "Deleted CPUSet assignment" podUID="f8fe016d-b5e1-49a0-8f21-583a3fcbfe26" containerName="mariadb-account-create-update" Jan 31 04:04:19 crc kubenswrapper[4827]: E0131 04:04:19.629472 4827 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8e05cffc-7368-4346-9b36-fbe0c99c2397" containerName="mariadb-database-create" Jan 31 04:04:19 crc kubenswrapper[4827]: I0131 04:04:19.629477 4827 state_mem.go:107] "Deleted CPUSet assignment" podUID="8e05cffc-7368-4346-9b36-fbe0c99c2397" containerName="mariadb-database-create" Jan 31 04:04:19 crc kubenswrapper[4827]: I0131 04:04:19.629628 4827 memory_manager.go:354] "RemoveStaleState removing state" podUID="e737ec5b-5af1-4082-86b7-f6571ce8bd36" containerName="mariadb-database-create" Jan 31 
04:04:19 crc kubenswrapper[4827]: I0131 04:04:19.629638 4827 memory_manager.go:354] "RemoveStaleState removing state" podUID="b95b46e8-a0b5-4916-a724-41f7f25f0cd3" containerName="mariadb-account-create-update" Jan 31 04:04:19 crc kubenswrapper[4827]: I0131 04:04:19.629651 4827 memory_manager.go:354] "RemoveStaleState removing state" podUID="f8fe016d-b5e1-49a0-8f21-583a3fcbfe26" containerName="mariadb-account-create-update" Jan 31 04:04:19 crc kubenswrapper[4827]: I0131 04:04:19.629665 4827 memory_manager.go:354] "RemoveStaleState removing state" podUID="08a102ee-af73-49b8-8c30-094871ea6ae8" containerName="mariadb-account-create-update" Jan 31 04:04:19 crc kubenswrapper[4827]: I0131 04:04:19.629676 4827 memory_manager.go:354] "RemoveStaleState removing state" podUID="d641d15b-7085-430a-9adb-69c0e94d52e3" containerName="mariadb-database-create" Jan 31 04:04:19 crc kubenswrapper[4827]: I0131 04:04:19.629683 4827 memory_manager.go:354] "RemoveStaleState removing state" podUID="f2d19d1f-3276-4409-8071-ccdac4eb4e6c" containerName="mariadb-account-create-update" Jan 31 04:04:19 crc kubenswrapper[4827]: I0131 04:04:19.629690 4827 memory_manager.go:354] "RemoveStaleState removing state" podUID="8e05cffc-7368-4346-9b36-fbe0c99c2397" containerName="mariadb-database-create" Jan 31 04:04:19 crc kubenswrapper[4827]: I0131 04:04:19.630185 4827 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-sync-k45zm" Jan 31 04:04:19 crc kubenswrapper[4827]: I0131 04:04:19.634022 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-m4mld" Jan 31 04:04:19 crc kubenswrapper[4827]: I0131 04:04:19.635203 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-config-data" Jan 31 04:04:19 crc kubenswrapper[4827]: I0131 04:04:19.635607 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-sync-k45zm"] Jan 31 04:04:19 crc kubenswrapper[4827]: I0131 04:04:19.727615 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/2566a364-c569-475e-b757-81be89061c81-db-sync-config-data\") pod \"glance-db-sync-k45zm\" (UID: \"2566a364-c569-475e-b757-81be89061c81\") " pod="openstack/glance-db-sync-k45zm" Jan 31 04:04:19 crc kubenswrapper[4827]: I0131 04:04:19.727912 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tt52m\" (UniqueName: \"kubernetes.io/projected/2566a364-c569-475e-b757-81be89061c81-kube-api-access-tt52m\") pod \"glance-db-sync-k45zm\" (UID: \"2566a364-c569-475e-b757-81be89061c81\") " pod="openstack/glance-db-sync-k45zm" Jan 31 04:04:19 crc kubenswrapper[4827]: I0131 04:04:19.728278 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2566a364-c569-475e-b757-81be89061c81-config-data\") pod \"glance-db-sync-k45zm\" (UID: \"2566a364-c569-475e-b757-81be89061c81\") " pod="openstack/glance-db-sync-k45zm" Jan 31 04:04:19 crc kubenswrapper[4827]: I0131 04:04:19.728367 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/2566a364-c569-475e-b757-81be89061c81-combined-ca-bundle\") pod \"glance-db-sync-k45zm\" (UID: \"2566a364-c569-475e-b757-81be89061c81\") " pod="openstack/glance-db-sync-k45zm" Jan 31 04:04:19 crc kubenswrapper[4827]: I0131 04:04:19.829715 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/2566a364-c569-475e-b757-81be89061c81-db-sync-config-data\") pod \"glance-db-sync-k45zm\" (UID: \"2566a364-c569-475e-b757-81be89061c81\") " pod="openstack/glance-db-sync-k45zm" Jan 31 04:04:19 crc kubenswrapper[4827]: I0131 04:04:19.829841 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tt52m\" (UniqueName: \"kubernetes.io/projected/2566a364-c569-475e-b757-81be89061c81-kube-api-access-tt52m\") pod \"glance-db-sync-k45zm\" (UID: \"2566a364-c569-475e-b757-81be89061c81\") " pod="openstack/glance-db-sync-k45zm" Jan 31 04:04:19 crc kubenswrapper[4827]: I0131 04:04:19.830018 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2566a364-c569-475e-b757-81be89061c81-config-data\") pod \"glance-db-sync-k45zm\" (UID: \"2566a364-c569-475e-b757-81be89061c81\") " pod="openstack/glance-db-sync-k45zm" Jan 31 04:04:19 crc kubenswrapper[4827]: I0131 04:04:19.830062 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2566a364-c569-475e-b757-81be89061c81-combined-ca-bundle\") pod \"glance-db-sync-k45zm\" (UID: \"2566a364-c569-475e-b757-81be89061c81\") " pod="openstack/glance-db-sync-k45zm" Jan 31 04:04:19 crc kubenswrapper[4827]: I0131 04:04:19.836379 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2566a364-c569-475e-b757-81be89061c81-config-data\") pod \"glance-db-sync-k45zm\" (UID: 
\"2566a364-c569-475e-b757-81be89061c81\") " pod="openstack/glance-db-sync-k45zm" Jan 31 04:04:19 crc kubenswrapper[4827]: I0131 04:04:19.836734 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/2566a364-c569-475e-b757-81be89061c81-db-sync-config-data\") pod \"glance-db-sync-k45zm\" (UID: \"2566a364-c569-475e-b757-81be89061c81\") " pod="openstack/glance-db-sync-k45zm" Jan 31 04:04:19 crc kubenswrapper[4827]: I0131 04:04:19.839027 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2566a364-c569-475e-b757-81be89061c81-combined-ca-bundle\") pod \"glance-db-sync-k45zm\" (UID: \"2566a364-c569-475e-b757-81be89061c81\") " pod="openstack/glance-db-sync-k45zm" Jan 31 04:04:19 crc kubenswrapper[4827]: I0131 04:04:19.846185 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tt52m\" (UniqueName: \"kubernetes.io/projected/2566a364-c569-475e-b757-81be89061c81-kube-api-access-tt52m\") pod \"glance-db-sync-k45zm\" (UID: \"2566a364-c569-475e-b757-81be89061c81\") " pod="openstack/glance-db-sync-k45zm" Jan 31 04:04:19 crc kubenswrapper[4827]: I0131 04:04:19.951431 4827 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-sync-k45zm" Jan 31 04:04:20 crc kubenswrapper[4827]: I0131 04:04:20.206436 4827 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-northd-0" Jan 31 04:04:20 crc kubenswrapper[4827]: I0131 04:04:20.635333 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-sync-k45zm"] Jan 31 04:04:20 crc kubenswrapper[4827]: W0131 04:04:20.639670 4827 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2566a364_c569_475e_b757_81be89061c81.slice/crio-8d231367d8d9582655360d1594aab8f4131f60ffc30dd0c3a96571e0f1b66846 WatchSource:0}: Error finding container 8d231367d8d9582655360d1594aab8f4131f60ffc30dd0c3a96571e0f1b66846: Status 404 returned error can't find the container with id 8d231367d8d9582655360d1594aab8f4131f60ffc30dd0c3a96571e0f1b66846 Jan 31 04:04:20 crc kubenswrapper[4827]: I0131 04:04:20.880809 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-k45zm" event={"ID":"2566a364-c569-475e-b757-81be89061c81","Type":"ContainerStarted","Data":"8d231367d8d9582655360d1594aab8f4131f60ffc30dd0c3a96571e0f1b66846"} Jan 31 04:04:22 crc kubenswrapper[4827]: I0131 04:04:22.603110 4827 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/root-account-create-update-jdjsc"] Jan 31 04:04:22 crc kubenswrapper[4827]: I0131 04:04:22.604317 4827 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-jdjsc" Jan 31 04:04:22 crc kubenswrapper[4827]: I0131 04:04:22.607671 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-cell1-mariadb-root-db-secret" Jan 31 04:04:22 crc kubenswrapper[4827]: I0131 04:04:22.620709 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-jdjsc"] Jan 31 04:04:22 crc kubenswrapper[4827]: I0131 04:04:22.774626 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fh95x\" (UniqueName: \"kubernetes.io/projected/4fbae680-6791-4843-a38d-dce4d7531d9a-kube-api-access-fh95x\") pod \"root-account-create-update-jdjsc\" (UID: \"4fbae680-6791-4843-a38d-dce4d7531d9a\") " pod="openstack/root-account-create-update-jdjsc" Jan 31 04:04:22 crc kubenswrapper[4827]: I0131 04:04:22.774700 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4fbae680-6791-4843-a38d-dce4d7531d9a-operator-scripts\") pod \"root-account-create-update-jdjsc\" (UID: \"4fbae680-6791-4843-a38d-dce4d7531d9a\") " pod="openstack/root-account-create-update-jdjsc" Jan 31 04:04:22 crc kubenswrapper[4827]: I0131 04:04:22.876028 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fh95x\" (UniqueName: \"kubernetes.io/projected/4fbae680-6791-4843-a38d-dce4d7531d9a-kube-api-access-fh95x\") pod \"root-account-create-update-jdjsc\" (UID: \"4fbae680-6791-4843-a38d-dce4d7531d9a\") " pod="openstack/root-account-create-update-jdjsc" Jan 31 04:04:22 crc kubenswrapper[4827]: I0131 04:04:22.876123 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4fbae680-6791-4843-a38d-dce4d7531d9a-operator-scripts\") pod \"root-account-create-update-jdjsc\" (UID: 
\"4fbae680-6791-4843-a38d-dce4d7531d9a\") " pod="openstack/root-account-create-update-jdjsc" Jan 31 04:04:22 crc kubenswrapper[4827]: I0131 04:04:22.877813 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4fbae680-6791-4843-a38d-dce4d7531d9a-operator-scripts\") pod \"root-account-create-update-jdjsc\" (UID: \"4fbae680-6791-4843-a38d-dce4d7531d9a\") " pod="openstack/root-account-create-update-jdjsc" Jan 31 04:04:22 crc kubenswrapper[4827]: I0131 04:04:22.899116 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fh95x\" (UniqueName: \"kubernetes.io/projected/4fbae680-6791-4843-a38d-dce4d7531d9a-kube-api-access-fh95x\") pod \"root-account-create-update-jdjsc\" (UID: \"4fbae680-6791-4843-a38d-dce4d7531d9a\") " pod="openstack/root-account-create-update-jdjsc" Jan 31 04:04:22 crc kubenswrapper[4827]: I0131 04:04:22.929657 4827 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-jdjsc" Jan 31 04:04:23 crc kubenswrapper[4827]: I0131 04:04:23.380917 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-jdjsc"] Jan 31 04:04:23 crc kubenswrapper[4827]: I0131 04:04:23.905755 4827 generic.go:334] "Generic (PLEG): container finished" podID="4fbae680-6791-4843-a38d-dce4d7531d9a" containerID="3425b4ad9f4096a237dad07d34c50fc0d9a61f16f35854be754b868fa0cdee28" exitCode=0 Jan 31 04:04:23 crc kubenswrapper[4827]: I0131 04:04:23.905856 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-jdjsc" event={"ID":"4fbae680-6791-4843-a38d-dce4d7531d9a","Type":"ContainerDied","Data":"3425b4ad9f4096a237dad07d34c50fc0d9a61f16f35854be754b868fa0cdee28"} Jan 31 04:04:23 crc kubenswrapper[4827]: I0131 04:04:23.906198 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-jdjsc" event={"ID":"4fbae680-6791-4843-a38d-dce4d7531d9a","Type":"ContainerStarted","Data":"9ceb6ab0167ece257ce7f6d9461564d767ca49096f4f077d319ced98770b7527"} Jan 31 04:04:24 crc kubenswrapper[4827]: I0131 04:04:24.451894 4827 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ovn-controller-jrrb4" podUID="b0a1bcac-47e2-4089-ae1e-98a2dc41d270" containerName="ovn-controller" probeResult="failure" output=< Jan 31 04:04:24 crc kubenswrapper[4827]: ERROR - ovn-controller connection status is 'not connected', expecting 'connected' status Jan 31 04:04:24 crc kubenswrapper[4827]: > Jan 31 04:04:25 crc kubenswrapper[4827]: I0131 04:04:25.247247 4827 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-jdjsc" Jan 31 04:04:25 crc kubenswrapper[4827]: I0131 04:04:25.331221 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4fbae680-6791-4843-a38d-dce4d7531d9a-operator-scripts\") pod \"4fbae680-6791-4843-a38d-dce4d7531d9a\" (UID: \"4fbae680-6791-4843-a38d-dce4d7531d9a\") " Jan 31 04:04:25 crc kubenswrapper[4827]: I0131 04:04:25.331740 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fh95x\" (UniqueName: \"kubernetes.io/projected/4fbae680-6791-4843-a38d-dce4d7531d9a-kube-api-access-fh95x\") pod \"4fbae680-6791-4843-a38d-dce4d7531d9a\" (UID: \"4fbae680-6791-4843-a38d-dce4d7531d9a\") " Jan 31 04:04:25 crc kubenswrapper[4827]: I0131 04:04:25.332467 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4fbae680-6791-4843-a38d-dce4d7531d9a-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "4fbae680-6791-4843-a38d-dce4d7531d9a" (UID: "4fbae680-6791-4843-a38d-dce4d7531d9a"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 04:04:25 crc kubenswrapper[4827]: I0131 04:04:25.337967 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4fbae680-6791-4843-a38d-dce4d7531d9a-kube-api-access-fh95x" (OuterVolumeSpecName: "kube-api-access-fh95x") pod "4fbae680-6791-4843-a38d-dce4d7531d9a" (UID: "4fbae680-6791-4843-a38d-dce4d7531d9a"). InnerVolumeSpecName "kube-api-access-fh95x". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 04:04:25 crc kubenswrapper[4827]: I0131 04:04:25.433185 4827 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4fbae680-6791-4843-a38d-dce4d7531d9a-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 31 04:04:25 crc kubenswrapper[4827]: I0131 04:04:25.433221 4827 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fh95x\" (UniqueName: \"kubernetes.io/projected/4fbae680-6791-4843-a38d-dce4d7531d9a-kube-api-access-fh95x\") on node \"crc\" DevicePath \"\"" Jan 31 04:04:25 crc kubenswrapper[4827]: I0131 04:04:25.921615 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-jdjsc" event={"ID":"4fbae680-6791-4843-a38d-dce4d7531d9a","Type":"ContainerDied","Data":"9ceb6ab0167ece257ce7f6d9461564d767ca49096f4f077d319ced98770b7527"} Jan 31 04:04:25 crc kubenswrapper[4827]: I0131 04:04:25.921721 4827 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9ceb6ab0167ece257ce7f6d9461564d767ca49096f4f077d319ced98770b7527" Jan 31 04:04:25 crc kubenswrapper[4827]: I0131 04:04:25.922123 4827 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-jdjsc" Jan 31 04:04:26 crc kubenswrapper[4827]: I0131 04:04:26.930153 4827 generic.go:334] "Generic (PLEG): container finished" podID="237362ad-03ab-48a0-916d-1b140b4727d5" containerID="12fe2b54071f9726320953b5647ef853505a4030977779bf309ca7bddd85c632" exitCode=0 Jan 31 04:04:26 crc kubenswrapper[4827]: I0131 04:04:26.930204 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"237362ad-03ab-48a0-916d-1b140b4727d5","Type":"ContainerDied","Data":"12fe2b54071f9726320953b5647ef853505a4030977779bf309ca7bddd85c632"} Jan 31 04:04:26 crc kubenswrapper[4827]: I0131 04:04:26.932154 4827 generic.go:334] "Generic (PLEG): container finished" podID="02f954c7-6442-4974-827a-aef4a5690e8c" containerID="a3c94b6b53a56855eba7c9871d6fbc9d197f581c259d27c512976ebe225e2f9f" exitCode=0 Jan 31 04:04:26 crc kubenswrapper[4827]: I0131 04:04:26.932179 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"02f954c7-6442-4974-827a-aef4a5690e8c","Type":"ContainerDied","Data":"a3c94b6b53a56855eba7c9871d6fbc9d197f581c259d27c512976ebe225e2f9f"} Jan 31 04:04:29 crc kubenswrapper[4827]: I0131 04:04:29.444612 4827 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ovn-controller-jrrb4" podUID="b0a1bcac-47e2-4089-ae1e-98a2dc41d270" containerName="ovn-controller" probeResult="failure" output=< Jan 31 04:04:29 crc kubenswrapper[4827]: ERROR - ovn-controller connection status is 'not connected', expecting 'connected' status Jan 31 04:04:29 crc kubenswrapper[4827]: > Jan 31 04:04:29 crc kubenswrapper[4827]: I0131 04:04:29.480604 4827 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-ovs-vhmf9" Jan 31 04:04:29 crc kubenswrapper[4827]: I0131 04:04:29.493213 4827 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-ovs-vhmf9" Jan 31 
04:04:29 crc kubenswrapper[4827]: I0131 04:04:29.721252 4827 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-jrrb4-config-bcbfd"] Jan 31 04:04:29 crc kubenswrapper[4827]: E0131 04:04:29.721648 4827 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4fbae680-6791-4843-a38d-dce4d7531d9a" containerName="mariadb-account-create-update" Jan 31 04:04:29 crc kubenswrapper[4827]: I0131 04:04:29.721665 4827 state_mem.go:107] "Deleted CPUSet assignment" podUID="4fbae680-6791-4843-a38d-dce4d7531d9a" containerName="mariadb-account-create-update" Jan 31 04:04:29 crc kubenswrapper[4827]: I0131 04:04:29.721911 4827 memory_manager.go:354] "RemoveStaleState removing state" podUID="4fbae680-6791-4843-a38d-dce4d7531d9a" containerName="mariadb-account-create-update" Jan 31 04:04:29 crc kubenswrapper[4827]: I0131 04:04:29.722441 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-jrrb4-config-bcbfd" Jan 31 04:04:29 crc kubenswrapper[4827]: I0131 04:04:29.724405 4827 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-extra-scripts" Jan 31 04:04:29 crc kubenswrapper[4827]: I0131 04:04:29.735859 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-jrrb4-config-bcbfd"] Jan 31 04:04:29 crc kubenswrapper[4827]: I0131 04:04:29.904149 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lzhtv\" (UniqueName: \"kubernetes.io/projected/7db8dcc4-3475-4143-8651-d58b5fbde4fd-kube-api-access-lzhtv\") pod \"ovn-controller-jrrb4-config-bcbfd\" (UID: \"7db8dcc4-3475-4143-8651-d58b5fbde4fd\") " pod="openstack/ovn-controller-jrrb4-config-bcbfd" Jan 31 04:04:29 crc kubenswrapper[4827]: I0131 04:04:29.904218 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: 
\"kubernetes.io/host-path/7db8dcc4-3475-4143-8651-d58b5fbde4fd-var-run\") pod \"ovn-controller-jrrb4-config-bcbfd\" (UID: \"7db8dcc4-3475-4143-8651-d58b5fbde4fd\") " pod="openstack/ovn-controller-jrrb4-config-bcbfd" Jan 31 04:04:29 crc kubenswrapper[4827]: I0131 04:04:29.904305 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/7db8dcc4-3475-4143-8651-d58b5fbde4fd-var-log-ovn\") pod \"ovn-controller-jrrb4-config-bcbfd\" (UID: \"7db8dcc4-3475-4143-8651-d58b5fbde4fd\") " pod="openstack/ovn-controller-jrrb4-config-bcbfd" Jan 31 04:04:29 crc kubenswrapper[4827]: I0131 04:04:29.904338 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/7db8dcc4-3475-4143-8651-d58b5fbde4fd-scripts\") pod \"ovn-controller-jrrb4-config-bcbfd\" (UID: \"7db8dcc4-3475-4143-8651-d58b5fbde4fd\") " pod="openstack/ovn-controller-jrrb4-config-bcbfd" Jan 31 04:04:29 crc kubenswrapper[4827]: I0131 04:04:29.904367 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/7db8dcc4-3475-4143-8651-d58b5fbde4fd-additional-scripts\") pod \"ovn-controller-jrrb4-config-bcbfd\" (UID: \"7db8dcc4-3475-4143-8651-d58b5fbde4fd\") " pod="openstack/ovn-controller-jrrb4-config-bcbfd" Jan 31 04:04:29 crc kubenswrapper[4827]: I0131 04:04:29.904406 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/7db8dcc4-3475-4143-8651-d58b5fbde4fd-var-run-ovn\") pod \"ovn-controller-jrrb4-config-bcbfd\" (UID: \"7db8dcc4-3475-4143-8651-d58b5fbde4fd\") " pod="openstack/ovn-controller-jrrb4-config-bcbfd" Jan 31 04:04:30 crc kubenswrapper[4827]: I0131 04:04:30.006172 4827 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"kube-api-access-lzhtv\" (UniqueName: \"kubernetes.io/projected/7db8dcc4-3475-4143-8651-d58b5fbde4fd-kube-api-access-lzhtv\") pod \"ovn-controller-jrrb4-config-bcbfd\" (UID: \"7db8dcc4-3475-4143-8651-d58b5fbde4fd\") " pod="openstack/ovn-controller-jrrb4-config-bcbfd" Jan 31 04:04:30 crc kubenswrapper[4827]: I0131 04:04:30.006236 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/7db8dcc4-3475-4143-8651-d58b5fbde4fd-var-run\") pod \"ovn-controller-jrrb4-config-bcbfd\" (UID: \"7db8dcc4-3475-4143-8651-d58b5fbde4fd\") " pod="openstack/ovn-controller-jrrb4-config-bcbfd" Jan 31 04:04:30 crc kubenswrapper[4827]: I0131 04:04:30.006299 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/7db8dcc4-3475-4143-8651-d58b5fbde4fd-var-log-ovn\") pod \"ovn-controller-jrrb4-config-bcbfd\" (UID: \"7db8dcc4-3475-4143-8651-d58b5fbde4fd\") " pod="openstack/ovn-controller-jrrb4-config-bcbfd" Jan 31 04:04:30 crc kubenswrapper[4827]: I0131 04:04:30.006320 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/7db8dcc4-3475-4143-8651-d58b5fbde4fd-scripts\") pod \"ovn-controller-jrrb4-config-bcbfd\" (UID: \"7db8dcc4-3475-4143-8651-d58b5fbde4fd\") " pod="openstack/ovn-controller-jrrb4-config-bcbfd" Jan 31 04:04:30 crc kubenswrapper[4827]: I0131 04:04:30.006347 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/7db8dcc4-3475-4143-8651-d58b5fbde4fd-additional-scripts\") pod \"ovn-controller-jrrb4-config-bcbfd\" (UID: \"7db8dcc4-3475-4143-8651-d58b5fbde4fd\") " pod="openstack/ovn-controller-jrrb4-config-bcbfd" Jan 31 04:04:30 crc kubenswrapper[4827]: I0131 04:04:30.006372 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/7db8dcc4-3475-4143-8651-d58b5fbde4fd-var-run-ovn\") pod \"ovn-controller-jrrb4-config-bcbfd\" (UID: \"7db8dcc4-3475-4143-8651-d58b5fbde4fd\") " pod="openstack/ovn-controller-jrrb4-config-bcbfd" Jan 31 04:04:30 crc kubenswrapper[4827]: I0131 04:04:30.006657 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/7db8dcc4-3475-4143-8651-d58b5fbde4fd-var-run-ovn\") pod \"ovn-controller-jrrb4-config-bcbfd\" (UID: \"7db8dcc4-3475-4143-8651-d58b5fbde4fd\") " pod="openstack/ovn-controller-jrrb4-config-bcbfd" Jan 31 04:04:30 crc kubenswrapper[4827]: I0131 04:04:30.006927 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/7db8dcc4-3475-4143-8651-d58b5fbde4fd-var-run\") pod \"ovn-controller-jrrb4-config-bcbfd\" (UID: \"7db8dcc4-3475-4143-8651-d58b5fbde4fd\") " pod="openstack/ovn-controller-jrrb4-config-bcbfd" Jan 31 04:04:30 crc kubenswrapper[4827]: I0131 04:04:30.007106 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/7db8dcc4-3475-4143-8651-d58b5fbde4fd-var-log-ovn\") pod \"ovn-controller-jrrb4-config-bcbfd\" (UID: \"7db8dcc4-3475-4143-8651-d58b5fbde4fd\") " pod="openstack/ovn-controller-jrrb4-config-bcbfd" Jan 31 04:04:30 crc kubenswrapper[4827]: I0131 04:04:30.008110 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/7db8dcc4-3475-4143-8651-d58b5fbde4fd-additional-scripts\") pod \"ovn-controller-jrrb4-config-bcbfd\" (UID: \"7db8dcc4-3475-4143-8651-d58b5fbde4fd\") " pod="openstack/ovn-controller-jrrb4-config-bcbfd" Jan 31 04:04:30 crc kubenswrapper[4827]: I0131 04:04:30.009385 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: 
\"kubernetes.io/configmap/7db8dcc4-3475-4143-8651-d58b5fbde4fd-scripts\") pod \"ovn-controller-jrrb4-config-bcbfd\" (UID: \"7db8dcc4-3475-4143-8651-d58b5fbde4fd\") " pod="openstack/ovn-controller-jrrb4-config-bcbfd" Jan 31 04:04:30 crc kubenswrapper[4827]: I0131 04:04:30.026920 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lzhtv\" (UniqueName: \"kubernetes.io/projected/7db8dcc4-3475-4143-8651-d58b5fbde4fd-kube-api-access-lzhtv\") pod \"ovn-controller-jrrb4-config-bcbfd\" (UID: \"7db8dcc4-3475-4143-8651-d58b5fbde4fd\") " pod="openstack/ovn-controller-jrrb4-config-bcbfd" Jan 31 04:04:30 crc kubenswrapper[4827]: I0131 04:04:30.045862 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-jrrb4-config-bcbfd" Jan 31 04:04:34 crc kubenswrapper[4827]: I0131 04:04:34.179298 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-jrrb4-config-bcbfd"] Jan 31 04:04:34 crc kubenswrapper[4827]: W0131 04:04:34.186633 4827 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7db8dcc4_3475_4143_8651_d58b5fbde4fd.slice/crio-fb18d29be5b9c851f52c8bc721465b9e040a1dddf6b66051dbde495a17cd2420 WatchSource:0}: Error finding container fb18d29be5b9c851f52c8bc721465b9e040a1dddf6b66051dbde495a17cd2420: Status 404 returned error can't find the container with id fb18d29be5b9c851f52c8bc721465b9e040a1dddf6b66051dbde495a17cd2420 Jan 31 04:04:34 crc kubenswrapper[4827]: I0131 04:04:34.455433 4827 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ovn-controller-jrrb4" podUID="b0a1bcac-47e2-4089-ae1e-98a2dc41d270" containerName="ovn-controller" probeResult="failure" output=< Jan 31 04:04:34 crc kubenswrapper[4827]: ERROR - ovn-controller connection status is 'not connected', expecting 'connected' status Jan 31 04:04:34 crc kubenswrapper[4827]: > Jan 31 04:04:35 crc kubenswrapper[4827]: 
I0131 04:04:35.014328 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"237362ad-03ab-48a0-916d-1b140b4727d5","Type":"ContainerStarted","Data":"3bef3482c6b5a8881586f34defacaf25b3b594be37c355711b648b85efc6694b"} Jan 31 04:04:35 crc kubenswrapper[4827]: I0131 04:04:35.015864 4827 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-server-0" Jan 31 04:04:35 crc kubenswrapper[4827]: I0131 04:04:35.018532 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-k45zm" event={"ID":"2566a364-c569-475e-b757-81be89061c81","Type":"ContainerStarted","Data":"61b91764218d0f1084637f991df0fcccda7b250ca8fa610e4b470dfec48d4775"} Jan 31 04:04:35 crc kubenswrapper[4827]: I0131 04:04:35.025112 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"02f954c7-6442-4974-827a-aef4a5690e8c","Type":"ContainerStarted","Data":"2fcae6a3381f9d20d9bab34e4ab0e634c6a6c572a086dad05b16e7438d43479a"} Jan 31 04:04:35 crc kubenswrapper[4827]: I0131 04:04:35.025940 4827 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-cell1-server-0" Jan 31 04:04:35 crc kubenswrapper[4827]: I0131 04:04:35.028125 4827 generic.go:334] "Generic (PLEG): container finished" podID="7db8dcc4-3475-4143-8651-d58b5fbde4fd" containerID="9c8a06745330142ba4cebfe3b7c45ec95048a1917a3d60fc36d35421468a23f6" exitCode=0 Jan 31 04:04:35 crc kubenswrapper[4827]: I0131 04:04:35.028176 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-jrrb4-config-bcbfd" event={"ID":"7db8dcc4-3475-4143-8651-d58b5fbde4fd","Type":"ContainerDied","Data":"9c8a06745330142ba4cebfe3b7c45ec95048a1917a3d60fc36d35421468a23f6"} Jan 31 04:04:35 crc kubenswrapper[4827]: I0131 04:04:35.028205 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-jrrb4-config-bcbfd" 
event={"ID":"7db8dcc4-3475-4143-8651-d58b5fbde4fd","Type":"ContainerStarted","Data":"fb18d29be5b9c851f52c8bc721465b9e040a1dddf6b66051dbde495a17cd2420"} Jan 31 04:04:35 crc kubenswrapper[4827]: I0131 04:04:35.048905 4827 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-server-0" podStartSLOduration=57.530851523 podStartE2EDuration="1m6.048862053s" podCreationTimestamp="2026-01-31 04:03:29 +0000 UTC" firstStartedPulling="2026-01-31 04:03:42.908473021 +0000 UTC m=+1015.595553470" lastFinishedPulling="2026-01-31 04:03:51.426483551 +0000 UTC m=+1024.113564000" observedRunningTime="2026-01-31 04:04:35.043997915 +0000 UTC m=+1067.731078424" watchObservedRunningTime="2026-01-31 04:04:35.048862053 +0000 UTC m=+1067.735942542" Jan 31 04:04:35 crc kubenswrapper[4827]: I0131 04:04:35.093839 4827 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-db-sync-k45zm" podStartSLOduration=2.900438102 podStartE2EDuration="16.093810542s" podCreationTimestamp="2026-01-31 04:04:19 +0000 UTC" firstStartedPulling="2026-01-31 04:04:20.641791104 +0000 UTC m=+1053.328871563" lastFinishedPulling="2026-01-31 04:04:33.835163564 +0000 UTC m=+1066.522244003" observedRunningTime="2026-01-31 04:04:35.090479069 +0000 UTC m=+1067.777559558" watchObservedRunningTime="2026-01-31 04:04:35.093810542 +0000 UTC m=+1067.780891031" Jan 31 04:04:35 crc kubenswrapper[4827]: I0131 04:04:35.122639 4827 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-cell1-server-0" podStartSLOduration=58.058057882 podStartE2EDuration="1m6.122620326s" podCreationTimestamp="2026-01-31 04:03:29 +0000 UTC" firstStartedPulling="2026-01-31 04:03:43.37182044 +0000 UTC m=+1016.058900879" lastFinishedPulling="2026-01-31 04:03:51.436382874 +0000 UTC m=+1024.123463323" observedRunningTime="2026-01-31 04:04:35.119243882 +0000 UTC m=+1067.806324351" watchObservedRunningTime="2026-01-31 04:04:35.122620326 +0000 UTC 
m=+1067.809700775" Jan 31 04:04:36 crc kubenswrapper[4827]: I0131 04:04:36.382095 4827 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-jrrb4-config-bcbfd" Jan 31 04:04:36 crc kubenswrapper[4827]: I0131 04:04:36.541099 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/7db8dcc4-3475-4143-8651-d58b5fbde4fd-var-run\") pod \"7db8dcc4-3475-4143-8651-d58b5fbde4fd\" (UID: \"7db8dcc4-3475-4143-8651-d58b5fbde4fd\") " Jan 31 04:04:36 crc kubenswrapper[4827]: I0131 04:04:36.541445 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/7db8dcc4-3475-4143-8651-d58b5fbde4fd-scripts\") pod \"7db8dcc4-3475-4143-8651-d58b5fbde4fd\" (UID: \"7db8dcc4-3475-4143-8651-d58b5fbde4fd\") " Jan 31 04:04:36 crc kubenswrapper[4827]: I0131 04:04:36.541483 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/7db8dcc4-3475-4143-8651-d58b5fbde4fd-additional-scripts\") pod \"7db8dcc4-3475-4143-8651-d58b5fbde4fd\" (UID: \"7db8dcc4-3475-4143-8651-d58b5fbde4fd\") " Jan 31 04:04:36 crc kubenswrapper[4827]: I0131 04:04:36.541234 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7db8dcc4-3475-4143-8651-d58b5fbde4fd-var-run" (OuterVolumeSpecName: "var-run") pod "7db8dcc4-3475-4143-8651-d58b5fbde4fd" (UID: "7db8dcc4-3475-4143-8651-d58b5fbde4fd"). InnerVolumeSpecName "var-run". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 31 04:04:36 crc kubenswrapper[4827]: I0131 04:04:36.541527 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lzhtv\" (UniqueName: \"kubernetes.io/projected/7db8dcc4-3475-4143-8651-d58b5fbde4fd-kube-api-access-lzhtv\") pod \"7db8dcc4-3475-4143-8651-d58b5fbde4fd\" (UID: \"7db8dcc4-3475-4143-8651-d58b5fbde4fd\") " Jan 31 04:04:36 crc kubenswrapper[4827]: I0131 04:04:36.541574 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/7db8dcc4-3475-4143-8651-d58b5fbde4fd-var-run-ovn\") pod \"7db8dcc4-3475-4143-8651-d58b5fbde4fd\" (UID: \"7db8dcc4-3475-4143-8651-d58b5fbde4fd\") " Jan 31 04:04:36 crc kubenswrapper[4827]: I0131 04:04:36.541633 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/7db8dcc4-3475-4143-8651-d58b5fbde4fd-var-log-ovn\") pod \"7db8dcc4-3475-4143-8651-d58b5fbde4fd\" (UID: \"7db8dcc4-3475-4143-8651-d58b5fbde4fd\") " Jan 31 04:04:36 crc kubenswrapper[4827]: I0131 04:04:36.541672 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7db8dcc4-3475-4143-8651-d58b5fbde4fd-var-run-ovn" (OuterVolumeSpecName: "var-run-ovn") pod "7db8dcc4-3475-4143-8651-d58b5fbde4fd" (UID: "7db8dcc4-3475-4143-8651-d58b5fbde4fd"). InnerVolumeSpecName "var-run-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 31 04:04:36 crc kubenswrapper[4827]: I0131 04:04:36.541765 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7db8dcc4-3475-4143-8651-d58b5fbde4fd-var-log-ovn" (OuterVolumeSpecName: "var-log-ovn") pod "7db8dcc4-3475-4143-8651-d58b5fbde4fd" (UID: "7db8dcc4-3475-4143-8651-d58b5fbde4fd"). InnerVolumeSpecName "var-log-ovn". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 31 04:04:36 crc kubenswrapper[4827]: I0131 04:04:36.541991 4827 reconciler_common.go:293] "Volume detached for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/7db8dcc4-3475-4143-8651-d58b5fbde4fd-var-run\") on node \"crc\" DevicePath \"\"" Jan 31 04:04:36 crc kubenswrapper[4827]: I0131 04:04:36.542012 4827 reconciler_common.go:293] "Volume detached for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/7db8dcc4-3475-4143-8651-d58b5fbde4fd-var-run-ovn\") on node \"crc\" DevicePath \"\"" Jan 31 04:04:36 crc kubenswrapper[4827]: I0131 04:04:36.542023 4827 reconciler_common.go:293] "Volume detached for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/7db8dcc4-3475-4143-8651-d58b5fbde4fd-var-log-ovn\") on node \"crc\" DevicePath \"\"" Jan 31 04:04:36 crc kubenswrapper[4827]: I0131 04:04:36.542351 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7db8dcc4-3475-4143-8651-d58b5fbde4fd-additional-scripts" (OuterVolumeSpecName: "additional-scripts") pod "7db8dcc4-3475-4143-8651-d58b5fbde4fd" (UID: "7db8dcc4-3475-4143-8651-d58b5fbde4fd"). InnerVolumeSpecName "additional-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 04:04:36 crc kubenswrapper[4827]: I0131 04:04:36.542533 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7db8dcc4-3475-4143-8651-d58b5fbde4fd-scripts" (OuterVolumeSpecName: "scripts") pod "7db8dcc4-3475-4143-8651-d58b5fbde4fd" (UID: "7db8dcc4-3475-4143-8651-d58b5fbde4fd"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 04:04:36 crc kubenswrapper[4827]: I0131 04:04:36.553821 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7db8dcc4-3475-4143-8651-d58b5fbde4fd-kube-api-access-lzhtv" (OuterVolumeSpecName: "kube-api-access-lzhtv") pod "7db8dcc4-3475-4143-8651-d58b5fbde4fd" (UID: "7db8dcc4-3475-4143-8651-d58b5fbde4fd"). InnerVolumeSpecName "kube-api-access-lzhtv". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 04:04:36 crc kubenswrapper[4827]: I0131 04:04:36.643445 4827 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/7db8dcc4-3475-4143-8651-d58b5fbde4fd-scripts\") on node \"crc\" DevicePath \"\"" Jan 31 04:04:36 crc kubenswrapper[4827]: I0131 04:04:36.643479 4827 reconciler_common.go:293] "Volume detached for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/7db8dcc4-3475-4143-8651-d58b5fbde4fd-additional-scripts\") on node \"crc\" DevicePath \"\"" Jan 31 04:04:36 crc kubenswrapper[4827]: I0131 04:04:36.643490 4827 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lzhtv\" (UniqueName: \"kubernetes.io/projected/7db8dcc4-3475-4143-8651-d58b5fbde4fd-kube-api-access-lzhtv\") on node \"crc\" DevicePath \"\"" Jan 31 04:04:37 crc kubenswrapper[4827]: I0131 04:04:37.071668 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-jrrb4-config-bcbfd" event={"ID":"7db8dcc4-3475-4143-8651-d58b5fbde4fd","Type":"ContainerDied","Data":"fb18d29be5b9c851f52c8bc721465b9e040a1dddf6b66051dbde495a17cd2420"} Jan 31 04:04:37 crc kubenswrapper[4827]: I0131 04:04:37.071820 4827 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fb18d29be5b9c851f52c8bc721465b9e040a1dddf6b66051dbde495a17cd2420" Jan 31 04:04:37 crc kubenswrapper[4827]: I0131 04:04:37.072216 4827 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-jrrb4-config-bcbfd" Jan 31 04:04:37 crc kubenswrapper[4827]: I0131 04:04:37.484463 4827 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-controller-jrrb4-config-bcbfd"] Jan 31 04:04:37 crc kubenswrapper[4827]: I0131 04:04:37.490246 4827 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovn-controller-jrrb4-config-bcbfd"] Jan 31 04:04:38 crc kubenswrapper[4827]: I0131 04:04:38.123272 4827 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7db8dcc4-3475-4143-8651-d58b5fbde4fd" path="/var/lib/kubelet/pods/7db8dcc4-3475-4143-8651-d58b5fbde4fd/volumes" Jan 31 04:04:39 crc kubenswrapper[4827]: I0131 04:04:39.501538 4827 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-jrrb4" Jan 31 04:04:40 crc kubenswrapper[4827]: I0131 04:04:40.097442 4827 generic.go:334] "Generic (PLEG): container finished" podID="2566a364-c569-475e-b757-81be89061c81" containerID="61b91764218d0f1084637f991df0fcccda7b250ca8fa610e4b470dfec48d4775" exitCode=0 Jan 31 04:04:40 crc kubenswrapper[4827]: I0131 04:04:40.097559 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-k45zm" event={"ID":"2566a364-c569-475e-b757-81be89061c81","Type":"ContainerDied","Data":"61b91764218d0f1084637f991df0fcccda7b250ca8fa610e4b470dfec48d4775"} Jan 31 04:04:41 crc kubenswrapper[4827]: I0131 04:04:41.527130 4827 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-sync-k45zm" Jan 31 04:04:41 crc kubenswrapper[4827]: I0131 04:04:41.671353 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2566a364-c569-475e-b757-81be89061c81-combined-ca-bundle\") pod \"2566a364-c569-475e-b757-81be89061c81\" (UID: \"2566a364-c569-475e-b757-81be89061c81\") " Jan 31 04:04:41 crc kubenswrapper[4827]: I0131 04:04:41.671496 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2566a364-c569-475e-b757-81be89061c81-config-data\") pod \"2566a364-c569-475e-b757-81be89061c81\" (UID: \"2566a364-c569-475e-b757-81be89061c81\") " Jan 31 04:04:41 crc kubenswrapper[4827]: I0131 04:04:41.671563 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tt52m\" (UniqueName: \"kubernetes.io/projected/2566a364-c569-475e-b757-81be89061c81-kube-api-access-tt52m\") pod \"2566a364-c569-475e-b757-81be89061c81\" (UID: \"2566a364-c569-475e-b757-81be89061c81\") " Jan 31 04:04:41 crc kubenswrapper[4827]: I0131 04:04:41.671614 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/2566a364-c569-475e-b757-81be89061c81-db-sync-config-data\") pod \"2566a364-c569-475e-b757-81be89061c81\" (UID: \"2566a364-c569-475e-b757-81be89061c81\") " Jan 31 04:04:41 crc kubenswrapper[4827]: I0131 04:04:41.680720 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2566a364-c569-475e-b757-81be89061c81-kube-api-access-tt52m" (OuterVolumeSpecName: "kube-api-access-tt52m") pod "2566a364-c569-475e-b757-81be89061c81" (UID: "2566a364-c569-475e-b757-81be89061c81"). InnerVolumeSpecName "kube-api-access-tt52m". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 04:04:41 crc kubenswrapper[4827]: I0131 04:04:41.681096 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2566a364-c569-475e-b757-81be89061c81-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "2566a364-c569-475e-b757-81be89061c81" (UID: "2566a364-c569-475e-b757-81be89061c81"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 04:04:41 crc kubenswrapper[4827]: I0131 04:04:41.715077 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2566a364-c569-475e-b757-81be89061c81-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "2566a364-c569-475e-b757-81be89061c81" (UID: "2566a364-c569-475e-b757-81be89061c81"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 04:04:41 crc kubenswrapper[4827]: I0131 04:04:41.749983 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2566a364-c569-475e-b757-81be89061c81-config-data" (OuterVolumeSpecName: "config-data") pod "2566a364-c569-475e-b757-81be89061c81" (UID: "2566a364-c569-475e-b757-81be89061c81"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 04:04:41 crc kubenswrapper[4827]: I0131 04:04:41.774568 4827 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2566a364-c569-475e-b757-81be89061c81-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 31 04:04:41 crc kubenswrapper[4827]: I0131 04:04:41.774617 4827 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2566a364-c569-475e-b757-81be89061c81-config-data\") on node \"crc\" DevicePath \"\"" Jan 31 04:04:41 crc kubenswrapper[4827]: I0131 04:04:41.774633 4827 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tt52m\" (UniqueName: \"kubernetes.io/projected/2566a364-c569-475e-b757-81be89061c81-kube-api-access-tt52m\") on node \"crc\" DevicePath \"\"" Jan 31 04:04:41 crc kubenswrapper[4827]: I0131 04:04:41.774651 4827 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/2566a364-c569-475e-b757-81be89061c81-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Jan 31 04:04:42 crc kubenswrapper[4827]: I0131 04:04:42.117091 4827 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-sync-k45zm" Jan 31 04:04:42 crc kubenswrapper[4827]: I0131 04:04:42.119104 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-k45zm" event={"ID":"2566a364-c569-475e-b757-81be89061c81","Type":"ContainerDied","Data":"8d231367d8d9582655360d1594aab8f4131f60ffc30dd0c3a96571e0f1b66846"} Jan 31 04:04:42 crc kubenswrapper[4827]: I0131 04:04:42.119143 4827 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8d231367d8d9582655360d1594aab8f4131f60ffc30dd0c3a96571e0f1b66846" Jan 31 04:04:42 crc kubenswrapper[4827]: I0131 04:04:42.555931 4827 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-54f9b7b8d9-jmjqr"] Jan 31 04:04:42 crc kubenswrapper[4827]: E0131 04:04:42.556199 4827 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7db8dcc4-3475-4143-8651-d58b5fbde4fd" containerName="ovn-config" Jan 31 04:04:42 crc kubenswrapper[4827]: I0131 04:04:42.556211 4827 state_mem.go:107] "Deleted CPUSet assignment" podUID="7db8dcc4-3475-4143-8651-d58b5fbde4fd" containerName="ovn-config" Jan 31 04:04:42 crc kubenswrapper[4827]: E0131 04:04:42.556233 4827 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2566a364-c569-475e-b757-81be89061c81" containerName="glance-db-sync" Jan 31 04:04:42 crc kubenswrapper[4827]: I0131 04:04:42.556240 4827 state_mem.go:107] "Deleted CPUSet assignment" podUID="2566a364-c569-475e-b757-81be89061c81" containerName="glance-db-sync" Jan 31 04:04:42 crc kubenswrapper[4827]: I0131 04:04:42.556404 4827 memory_manager.go:354] "RemoveStaleState removing state" podUID="7db8dcc4-3475-4143-8651-d58b5fbde4fd" containerName="ovn-config" Jan 31 04:04:42 crc kubenswrapper[4827]: I0131 04:04:42.556424 4827 memory_manager.go:354] "RemoveStaleState removing state" podUID="2566a364-c569-475e-b757-81be89061c81" containerName="glance-db-sync" Jan 31 04:04:42 crc kubenswrapper[4827]: I0131 04:04:42.557121 4827 
util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-54f9b7b8d9-jmjqr" Jan 31 04:04:42 crc kubenswrapper[4827]: I0131 04:04:42.586701 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c5b9b010-7f89-4783-8dbe-d99a70ed06dc-dns-svc\") pod \"dnsmasq-dns-54f9b7b8d9-jmjqr\" (UID: \"c5b9b010-7f89-4783-8dbe-d99a70ed06dc\") " pod="openstack/dnsmasq-dns-54f9b7b8d9-jmjqr" Jan 31 04:04:42 crc kubenswrapper[4827]: I0131 04:04:42.586781 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c5b9b010-7f89-4783-8dbe-d99a70ed06dc-ovsdbserver-nb\") pod \"dnsmasq-dns-54f9b7b8d9-jmjqr\" (UID: \"c5b9b010-7f89-4783-8dbe-d99a70ed06dc\") " pod="openstack/dnsmasq-dns-54f9b7b8d9-jmjqr" Jan 31 04:04:42 crc kubenswrapper[4827]: I0131 04:04:42.586809 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c5b9b010-7f89-4783-8dbe-d99a70ed06dc-ovsdbserver-sb\") pod \"dnsmasq-dns-54f9b7b8d9-jmjqr\" (UID: \"c5b9b010-7f89-4783-8dbe-d99a70ed06dc\") " pod="openstack/dnsmasq-dns-54f9b7b8d9-jmjqr" Jan 31 04:04:42 crc kubenswrapper[4827]: I0131 04:04:42.586850 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z4tkp\" (UniqueName: \"kubernetes.io/projected/c5b9b010-7f89-4783-8dbe-d99a70ed06dc-kube-api-access-z4tkp\") pod \"dnsmasq-dns-54f9b7b8d9-jmjqr\" (UID: \"c5b9b010-7f89-4783-8dbe-d99a70ed06dc\") " pod="openstack/dnsmasq-dns-54f9b7b8d9-jmjqr" Jan 31 04:04:42 crc kubenswrapper[4827]: I0131 04:04:42.586904 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/c5b9b010-7f89-4783-8dbe-d99a70ed06dc-config\") pod \"dnsmasq-dns-54f9b7b8d9-jmjqr\" (UID: \"c5b9b010-7f89-4783-8dbe-d99a70ed06dc\") " pod="openstack/dnsmasq-dns-54f9b7b8d9-jmjqr" Jan 31 04:04:42 crc kubenswrapper[4827]: I0131 04:04:42.588816 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-54f9b7b8d9-jmjqr"] Jan 31 04:04:42 crc kubenswrapper[4827]: I0131 04:04:42.687734 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z4tkp\" (UniqueName: \"kubernetes.io/projected/c5b9b010-7f89-4783-8dbe-d99a70ed06dc-kube-api-access-z4tkp\") pod \"dnsmasq-dns-54f9b7b8d9-jmjqr\" (UID: \"c5b9b010-7f89-4783-8dbe-d99a70ed06dc\") " pod="openstack/dnsmasq-dns-54f9b7b8d9-jmjqr" Jan 31 04:04:42 crc kubenswrapper[4827]: I0131 04:04:42.687813 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c5b9b010-7f89-4783-8dbe-d99a70ed06dc-config\") pod \"dnsmasq-dns-54f9b7b8d9-jmjqr\" (UID: \"c5b9b010-7f89-4783-8dbe-d99a70ed06dc\") " pod="openstack/dnsmasq-dns-54f9b7b8d9-jmjqr" Jan 31 04:04:42 crc kubenswrapper[4827]: I0131 04:04:42.687848 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c5b9b010-7f89-4783-8dbe-d99a70ed06dc-dns-svc\") pod \"dnsmasq-dns-54f9b7b8d9-jmjqr\" (UID: \"c5b9b010-7f89-4783-8dbe-d99a70ed06dc\") " pod="openstack/dnsmasq-dns-54f9b7b8d9-jmjqr" Jan 31 04:04:42 crc kubenswrapper[4827]: I0131 04:04:42.687912 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c5b9b010-7f89-4783-8dbe-d99a70ed06dc-ovsdbserver-nb\") pod \"dnsmasq-dns-54f9b7b8d9-jmjqr\" (UID: \"c5b9b010-7f89-4783-8dbe-d99a70ed06dc\") " pod="openstack/dnsmasq-dns-54f9b7b8d9-jmjqr" Jan 31 04:04:42 crc kubenswrapper[4827]: I0131 04:04:42.687934 4827 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c5b9b010-7f89-4783-8dbe-d99a70ed06dc-ovsdbserver-sb\") pod \"dnsmasq-dns-54f9b7b8d9-jmjqr\" (UID: \"c5b9b010-7f89-4783-8dbe-d99a70ed06dc\") " pod="openstack/dnsmasq-dns-54f9b7b8d9-jmjqr" Jan 31 04:04:42 crc kubenswrapper[4827]: I0131 04:04:42.689237 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c5b9b010-7f89-4783-8dbe-d99a70ed06dc-config\") pod \"dnsmasq-dns-54f9b7b8d9-jmjqr\" (UID: \"c5b9b010-7f89-4783-8dbe-d99a70ed06dc\") " pod="openstack/dnsmasq-dns-54f9b7b8d9-jmjqr" Jan 31 04:04:42 crc kubenswrapper[4827]: I0131 04:04:42.689262 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c5b9b010-7f89-4783-8dbe-d99a70ed06dc-ovsdbserver-nb\") pod \"dnsmasq-dns-54f9b7b8d9-jmjqr\" (UID: \"c5b9b010-7f89-4783-8dbe-d99a70ed06dc\") " pod="openstack/dnsmasq-dns-54f9b7b8d9-jmjqr" Jan 31 04:04:42 crc kubenswrapper[4827]: I0131 04:04:42.689269 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c5b9b010-7f89-4783-8dbe-d99a70ed06dc-dns-svc\") pod \"dnsmasq-dns-54f9b7b8d9-jmjqr\" (UID: \"c5b9b010-7f89-4783-8dbe-d99a70ed06dc\") " pod="openstack/dnsmasq-dns-54f9b7b8d9-jmjqr" Jan 31 04:04:42 crc kubenswrapper[4827]: I0131 04:04:42.689321 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c5b9b010-7f89-4783-8dbe-d99a70ed06dc-ovsdbserver-sb\") pod \"dnsmasq-dns-54f9b7b8d9-jmjqr\" (UID: \"c5b9b010-7f89-4783-8dbe-d99a70ed06dc\") " pod="openstack/dnsmasq-dns-54f9b7b8d9-jmjqr" Jan 31 04:04:42 crc kubenswrapper[4827]: I0131 04:04:42.719441 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z4tkp\" (UniqueName: 
\"kubernetes.io/projected/c5b9b010-7f89-4783-8dbe-d99a70ed06dc-kube-api-access-z4tkp\") pod \"dnsmasq-dns-54f9b7b8d9-jmjqr\" (UID: \"c5b9b010-7f89-4783-8dbe-d99a70ed06dc\") " pod="openstack/dnsmasq-dns-54f9b7b8d9-jmjqr" Jan 31 04:04:42 crc kubenswrapper[4827]: I0131 04:04:42.880134 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-54f9b7b8d9-jmjqr" Jan 31 04:04:43 crc kubenswrapper[4827]: I0131 04:04:43.157909 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-54f9b7b8d9-jmjqr"] Jan 31 04:04:44 crc kubenswrapper[4827]: I0131 04:04:44.137789 4827 generic.go:334] "Generic (PLEG): container finished" podID="c5b9b010-7f89-4783-8dbe-d99a70ed06dc" containerID="e9e050e91be2e39f24268cfdaccc9695df83bb4d89156713e783315b16fb5419" exitCode=0 Jan 31 04:04:44 crc kubenswrapper[4827]: I0131 04:04:44.138192 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-54f9b7b8d9-jmjqr" event={"ID":"c5b9b010-7f89-4783-8dbe-d99a70ed06dc","Type":"ContainerDied","Data":"e9e050e91be2e39f24268cfdaccc9695df83bb4d89156713e783315b16fb5419"} Jan 31 04:04:44 crc kubenswrapper[4827]: I0131 04:04:44.138230 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-54f9b7b8d9-jmjqr" event={"ID":"c5b9b010-7f89-4783-8dbe-d99a70ed06dc","Type":"ContainerStarted","Data":"5b61536ac0037eb444f91d17aef243de79df784bb1cc5bd93c45edcf77898664"} Jan 31 04:04:45 crc kubenswrapper[4827]: I0131 04:04:45.153417 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-54f9b7b8d9-jmjqr" event={"ID":"c5b9b010-7f89-4783-8dbe-d99a70ed06dc","Type":"ContainerStarted","Data":"d27d8c56ccddb6fefc46a276a6c005c968705931bad7da365a2488c039067002"} Jan 31 04:04:45 crc kubenswrapper[4827]: I0131 04:04:45.154075 4827 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-54f9b7b8d9-jmjqr" Jan 31 04:04:45 crc kubenswrapper[4827]: I0131 
04:04:45.180438 4827 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-54f9b7b8d9-jmjqr" podStartSLOduration=3.180422545 podStartE2EDuration="3.180422545s" podCreationTimestamp="2026-01-31 04:04:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 04:04:45.177145684 +0000 UTC m=+1077.864226133" watchObservedRunningTime="2026-01-31 04:04:45.180422545 +0000 UTC m=+1077.867502994" Jan 31 04:04:47 crc kubenswrapper[4827]: I0131 04:04:47.371470 4827 patch_prober.go:28] interesting pod/machine-config-daemon-jxh94 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 31 04:04:47 crc kubenswrapper[4827]: I0131 04:04:47.371872 4827 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-jxh94" podUID="e63dbb73-e1a2-4796-83c5-2a88e55566b5" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 31 04:04:50 crc kubenswrapper[4827]: I0131 04:04:50.981657 4827 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-server-0" Jan 31 04:04:51 crc kubenswrapper[4827]: I0131 04:04:51.307941 4827 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-cell1-server-0" Jan 31 04:04:51 crc kubenswrapper[4827]: I0131 04:04:51.360657 4827 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-db-create-sgdlb"] Jan 31 04:04:51 crc kubenswrapper[4827]: I0131 04:04:51.363756 4827 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-create-sgdlb" Jan 31 04:04:51 crc kubenswrapper[4827]: I0131 04:04:51.376872 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-create-sgdlb"] Jan 31 04:04:51 crc kubenswrapper[4827]: I0131 04:04:51.460400 4827 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-db-create-c5fwt"] Jan 31 04:04:51 crc kubenswrapper[4827]: I0131 04:04:51.461840 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-c5fwt" Jan 31 04:04:51 crc kubenswrapper[4827]: I0131 04:04:51.463288 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-848rs\" (UniqueName: \"kubernetes.io/projected/afb22e97-f599-49e5-8cde-ddb7bb682dd4-kube-api-access-848rs\") pod \"cinder-db-create-sgdlb\" (UID: \"afb22e97-f599-49e5-8cde-ddb7bb682dd4\") " pod="openstack/cinder-db-create-sgdlb" Jan 31 04:04:51 crc kubenswrapper[4827]: I0131 04:04:51.463328 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/afb22e97-f599-49e5-8cde-ddb7bb682dd4-operator-scripts\") pod \"cinder-db-create-sgdlb\" (UID: \"afb22e97-f599-49e5-8cde-ddb7bb682dd4\") " pod="openstack/cinder-db-create-sgdlb" Jan 31 04:04:51 crc kubenswrapper[4827]: I0131 04:04:51.506568 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-create-c5fwt"] Jan 31 04:04:51 crc kubenswrapper[4827]: I0131 04:04:51.565255 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-848rs\" (UniqueName: \"kubernetes.io/projected/afb22e97-f599-49e5-8cde-ddb7bb682dd4-kube-api-access-848rs\") pod \"cinder-db-create-sgdlb\" (UID: \"afb22e97-f599-49e5-8cde-ddb7bb682dd4\") " pod="openstack/cinder-db-create-sgdlb" Jan 31 04:04:51 crc kubenswrapper[4827]: I0131 04:04:51.565319 4827 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/afb22e97-f599-49e5-8cde-ddb7bb682dd4-operator-scripts\") pod \"cinder-db-create-sgdlb\" (UID: \"afb22e97-f599-49e5-8cde-ddb7bb682dd4\") " pod="openstack/cinder-db-create-sgdlb" Jan 31 04:04:51 crc kubenswrapper[4827]: I0131 04:04:51.565377 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7kxjb\" (UniqueName: \"kubernetes.io/projected/b9330fdf-710f-4fff-b064-d5ab07f73cb2-kube-api-access-7kxjb\") pod \"barbican-db-create-c5fwt\" (UID: \"b9330fdf-710f-4fff-b064-d5ab07f73cb2\") " pod="openstack/barbican-db-create-c5fwt" Jan 31 04:04:51 crc kubenswrapper[4827]: I0131 04:04:51.565466 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b9330fdf-710f-4fff-b064-d5ab07f73cb2-operator-scripts\") pod \"barbican-db-create-c5fwt\" (UID: \"b9330fdf-710f-4fff-b064-d5ab07f73cb2\") " pod="openstack/barbican-db-create-c5fwt" Jan 31 04:04:51 crc kubenswrapper[4827]: I0131 04:04:51.566080 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/afb22e97-f599-49e5-8cde-ddb7bb682dd4-operator-scripts\") pod \"cinder-db-create-sgdlb\" (UID: \"afb22e97-f599-49e5-8cde-ddb7bb682dd4\") " pod="openstack/cinder-db-create-sgdlb" Jan 31 04:04:51 crc kubenswrapper[4827]: I0131 04:04:51.571466 4827 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-e795-account-create-update-6vlcb"] Jan 31 04:04:51 crc kubenswrapper[4827]: I0131 04:04:51.572707 4827 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-e795-account-create-update-6vlcb" Jan 31 04:04:51 crc kubenswrapper[4827]: I0131 04:04:51.574639 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-db-secret" Jan 31 04:04:51 crc kubenswrapper[4827]: I0131 04:04:51.583564 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-e795-account-create-update-6vlcb"] Jan 31 04:04:51 crc kubenswrapper[4827]: I0131 04:04:51.586226 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-848rs\" (UniqueName: \"kubernetes.io/projected/afb22e97-f599-49e5-8cde-ddb7bb682dd4-kube-api-access-848rs\") pod \"cinder-db-create-sgdlb\" (UID: \"afb22e97-f599-49e5-8cde-ddb7bb682dd4\") " pod="openstack/cinder-db-create-sgdlb" Jan 31 04:04:51 crc kubenswrapper[4827]: I0131 04:04:51.659174 4827 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-db-sync-xmrgk"] Jan 31 04:04:51 crc kubenswrapper[4827]: I0131 04:04:51.660364 4827 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-sync-xmrgk" Jan 31 04:04:51 crc kubenswrapper[4827]: I0131 04:04:51.663576 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Jan 31 04:04:51 crc kubenswrapper[4827]: I0131 04:04:51.663777 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Jan 31 04:04:51 crc kubenswrapper[4827]: I0131 04:04:51.663927 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-6bbzt" Jan 31 04:04:51 crc kubenswrapper[4827]: I0131 04:04:51.664052 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Jan 31 04:04:51 crc kubenswrapper[4827]: I0131 04:04:51.665247 4827 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-d351-account-create-update-jnlqc"] Jan 31 04:04:51 crc kubenswrapper[4827]: I0131 04:04:51.666140 4827 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-d351-account-create-update-jnlqc" Jan 31 04:04:51 crc kubenswrapper[4827]: I0131 04:04:51.666453 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b9330fdf-710f-4fff-b064-d5ab07f73cb2-operator-scripts\") pod \"barbican-db-create-c5fwt\" (UID: \"b9330fdf-710f-4fff-b064-d5ab07f73cb2\") " pod="openstack/barbican-db-create-c5fwt" Jan 31 04:04:51 crc kubenswrapper[4827]: I0131 04:04:51.666545 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h7nxx\" (UniqueName: \"kubernetes.io/projected/8cba650c-b1cb-43e0-b831-d2289e50036f-kube-api-access-h7nxx\") pod \"cinder-e795-account-create-update-6vlcb\" (UID: \"8cba650c-b1cb-43e0-b831-d2289e50036f\") " pod="openstack/cinder-e795-account-create-update-6vlcb" Jan 31 04:04:51 crc kubenswrapper[4827]: I0131 04:04:51.666585 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8cba650c-b1cb-43e0-b831-d2289e50036f-operator-scripts\") pod \"cinder-e795-account-create-update-6vlcb\" (UID: \"8cba650c-b1cb-43e0-b831-d2289e50036f\") " pod="openstack/cinder-e795-account-create-update-6vlcb" Jan 31 04:04:51 crc kubenswrapper[4827]: I0131 04:04:51.666608 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7kxjb\" (UniqueName: \"kubernetes.io/projected/b9330fdf-710f-4fff-b064-d5ab07f73cb2-kube-api-access-7kxjb\") pod \"barbican-db-create-c5fwt\" (UID: \"b9330fdf-710f-4fff-b064-d5ab07f73cb2\") " pod="openstack/barbican-db-create-c5fwt" Jan 31 04:04:51 crc kubenswrapper[4827]: I0131 04:04:51.667311 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b9330fdf-710f-4fff-b064-d5ab07f73cb2-operator-scripts\") pod 
\"barbican-db-create-c5fwt\" (UID: \"b9330fdf-710f-4fff-b064-d5ab07f73cb2\") " pod="openstack/barbican-db-create-c5fwt" Jan 31 04:04:51 crc kubenswrapper[4827]: I0131 04:04:51.667560 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-db-secret" Jan 31 04:04:51 crc kubenswrapper[4827]: I0131 04:04:51.681065 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-sgdlb" Jan 31 04:04:51 crc kubenswrapper[4827]: I0131 04:04:51.684709 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-sync-xmrgk"] Jan 31 04:04:51 crc kubenswrapper[4827]: I0131 04:04:51.688675 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7kxjb\" (UniqueName: \"kubernetes.io/projected/b9330fdf-710f-4fff-b064-d5ab07f73cb2-kube-api-access-7kxjb\") pod \"barbican-db-create-c5fwt\" (UID: \"b9330fdf-710f-4fff-b064-d5ab07f73cb2\") " pod="openstack/barbican-db-create-c5fwt" Jan 31 04:04:51 crc kubenswrapper[4827]: I0131 04:04:51.746757 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-d351-account-create-update-jnlqc"] Jan 31 04:04:51 crc kubenswrapper[4827]: I0131 04:04:51.765925 4827 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-db-create-5kr5x"] Jan 31 04:04:51 crc kubenswrapper[4827]: I0131 04:04:51.766912 4827 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-create-5kr5x" Jan 31 04:04:51 crc kubenswrapper[4827]: I0131 04:04:51.768681 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/75ab4449-6cdb-4b40-a0f0-432667f4ca97-config-data\") pod \"keystone-db-sync-xmrgk\" (UID: \"75ab4449-6cdb-4b40-a0f0-432667f4ca97\") " pod="openstack/keystone-db-sync-xmrgk" Jan 31 04:04:51 crc kubenswrapper[4827]: I0131 04:04:51.768763 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h7nxx\" (UniqueName: \"kubernetes.io/projected/8cba650c-b1cb-43e0-b831-d2289e50036f-kube-api-access-h7nxx\") pod \"cinder-e795-account-create-update-6vlcb\" (UID: \"8cba650c-b1cb-43e0-b831-d2289e50036f\") " pod="openstack/cinder-e795-account-create-update-6vlcb" Jan 31 04:04:51 crc kubenswrapper[4827]: I0131 04:04:51.768784 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/75ab4449-6cdb-4b40-a0f0-432667f4ca97-combined-ca-bundle\") pod \"keystone-db-sync-xmrgk\" (UID: \"75ab4449-6cdb-4b40-a0f0-432667f4ca97\") " pod="openstack/keystone-db-sync-xmrgk" Jan 31 04:04:51 crc kubenswrapper[4827]: I0131 04:04:51.768803 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5tl5k\" (UniqueName: \"kubernetes.io/projected/1c08b19d-c1e1-4d21-ad8a-03207b8ac8c2-kube-api-access-5tl5k\") pod \"barbican-d351-account-create-update-jnlqc\" (UID: \"1c08b19d-c1e1-4d21-ad8a-03207b8ac8c2\") " pod="openstack/barbican-d351-account-create-update-jnlqc" Jan 31 04:04:51 crc kubenswrapper[4827]: I0131 04:04:51.768831 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8cba650c-b1cb-43e0-b831-d2289e50036f-operator-scripts\") pod 
\"cinder-e795-account-create-update-6vlcb\" (UID: \"8cba650c-b1cb-43e0-b831-d2289e50036f\") " pod="openstack/cinder-e795-account-create-update-6vlcb" Jan 31 04:04:51 crc kubenswrapper[4827]: I0131 04:04:51.768901 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1c08b19d-c1e1-4d21-ad8a-03207b8ac8c2-operator-scripts\") pod \"barbican-d351-account-create-update-jnlqc\" (UID: \"1c08b19d-c1e1-4d21-ad8a-03207b8ac8c2\") " pod="openstack/barbican-d351-account-create-update-jnlqc" Jan 31 04:04:51 crc kubenswrapper[4827]: I0131 04:04:51.768943 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tjqfn\" (UniqueName: \"kubernetes.io/projected/75ab4449-6cdb-4b40-a0f0-432667f4ca97-kube-api-access-tjqfn\") pod \"keystone-db-sync-xmrgk\" (UID: \"75ab4449-6cdb-4b40-a0f0-432667f4ca97\") " pod="openstack/keystone-db-sync-xmrgk" Jan 31 04:04:51 crc kubenswrapper[4827]: I0131 04:04:51.769382 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8cba650c-b1cb-43e0-b831-d2289e50036f-operator-scripts\") pod \"cinder-e795-account-create-update-6vlcb\" (UID: \"8cba650c-b1cb-43e0-b831-d2289e50036f\") " pod="openstack/cinder-e795-account-create-update-6vlcb" Jan 31 04:04:51 crc kubenswrapper[4827]: I0131 04:04:51.776044 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-create-5kr5x"] Jan 31 04:04:51 crc kubenswrapper[4827]: I0131 04:04:51.784668 4827 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-create-c5fwt" Jan 31 04:04:51 crc kubenswrapper[4827]: I0131 04:04:51.812627 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h7nxx\" (UniqueName: \"kubernetes.io/projected/8cba650c-b1cb-43e0-b831-d2289e50036f-kube-api-access-h7nxx\") pod \"cinder-e795-account-create-update-6vlcb\" (UID: \"8cba650c-b1cb-43e0-b831-d2289e50036f\") " pod="openstack/cinder-e795-account-create-update-6vlcb" Jan 31 04:04:51 crc kubenswrapper[4827]: I0131 04:04:51.870157 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/75ab4449-6cdb-4b40-a0f0-432667f4ca97-combined-ca-bundle\") pod \"keystone-db-sync-xmrgk\" (UID: \"75ab4449-6cdb-4b40-a0f0-432667f4ca97\") " pod="openstack/keystone-db-sync-xmrgk" Jan 31 04:04:51 crc kubenswrapper[4827]: I0131 04:04:51.870208 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5tl5k\" (UniqueName: \"kubernetes.io/projected/1c08b19d-c1e1-4d21-ad8a-03207b8ac8c2-kube-api-access-5tl5k\") pod \"barbican-d351-account-create-update-jnlqc\" (UID: \"1c08b19d-c1e1-4d21-ad8a-03207b8ac8c2\") " pod="openstack/barbican-d351-account-create-update-jnlqc" Jan 31 04:04:51 crc kubenswrapper[4827]: I0131 04:04:51.870268 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jncxv\" (UniqueName: \"kubernetes.io/projected/470ad1f2-aae7-4a4c-8258-648066d14ec9-kube-api-access-jncxv\") pod \"neutron-db-create-5kr5x\" (UID: \"470ad1f2-aae7-4a4c-8258-648066d14ec9\") " pod="openstack/neutron-db-create-5kr5x" Jan 31 04:04:51 crc kubenswrapper[4827]: I0131 04:04:51.870293 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1c08b19d-c1e1-4d21-ad8a-03207b8ac8c2-operator-scripts\") pod 
\"barbican-d351-account-create-update-jnlqc\" (UID: \"1c08b19d-c1e1-4d21-ad8a-03207b8ac8c2\") " pod="openstack/barbican-d351-account-create-update-jnlqc" Jan 31 04:04:51 crc kubenswrapper[4827]: I0131 04:04:51.870334 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tjqfn\" (UniqueName: \"kubernetes.io/projected/75ab4449-6cdb-4b40-a0f0-432667f4ca97-kube-api-access-tjqfn\") pod \"keystone-db-sync-xmrgk\" (UID: \"75ab4449-6cdb-4b40-a0f0-432667f4ca97\") " pod="openstack/keystone-db-sync-xmrgk" Jan 31 04:04:51 crc kubenswrapper[4827]: I0131 04:04:51.870355 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/470ad1f2-aae7-4a4c-8258-648066d14ec9-operator-scripts\") pod \"neutron-db-create-5kr5x\" (UID: \"470ad1f2-aae7-4a4c-8258-648066d14ec9\") " pod="openstack/neutron-db-create-5kr5x" Jan 31 04:04:51 crc kubenswrapper[4827]: I0131 04:04:51.870394 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/75ab4449-6cdb-4b40-a0f0-432667f4ca97-config-data\") pod \"keystone-db-sync-xmrgk\" (UID: \"75ab4449-6cdb-4b40-a0f0-432667f4ca97\") " pod="openstack/keystone-db-sync-xmrgk" Jan 31 04:04:51 crc kubenswrapper[4827]: I0131 04:04:51.871807 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1c08b19d-c1e1-4d21-ad8a-03207b8ac8c2-operator-scripts\") pod \"barbican-d351-account-create-update-jnlqc\" (UID: \"1c08b19d-c1e1-4d21-ad8a-03207b8ac8c2\") " pod="openstack/barbican-d351-account-create-update-jnlqc" Jan 31 04:04:51 crc kubenswrapper[4827]: I0131 04:04:51.875289 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/75ab4449-6cdb-4b40-a0f0-432667f4ca97-config-data\") pod \"keystone-db-sync-xmrgk\" 
(UID: \"75ab4449-6cdb-4b40-a0f0-432667f4ca97\") " pod="openstack/keystone-db-sync-xmrgk" Jan 31 04:04:51 crc kubenswrapper[4827]: I0131 04:04:51.879624 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/75ab4449-6cdb-4b40-a0f0-432667f4ca97-combined-ca-bundle\") pod \"keystone-db-sync-xmrgk\" (UID: \"75ab4449-6cdb-4b40-a0f0-432667f4ca97\") " pod="openstack/keystone-db-sync-xmrgk" Jan 31 04:04:51 crc kubenswrapper[4827]: I0131 04:04:51.887154 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-e795-account-create-update-6vlcb" Jan 31 04:04:51 crc kubenswrapper[4827]: I0131 04:04:51.891963 4827 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-3471-account-create-update-g5xhk"] Jan 31 04:04:51 crc kubenswrapper[4827]: I0131 04:04:51.892943 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-3471-account-create-update-g5xhk" Jan 31 04:04:51 crc kubenswrapper[4827]: I0131 04:04:51.898406 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tjqfn\" (UniqueName: \"kubernetes.io/projected/75ab4449-6cdb-4b40-a0f0-432667f4ca97-kube-api-access-tjqfn\") pod \"keystone-db-sync-xmrgk\" (UID: \"75ab4449-6cdb-4b40-a0f0-432667f4ca97\") " pod="openstack/keystone-db-sync-xmrgk" Jan 31 04:04:51 crc kubenswrapper[4827]: I0131 04:04:51.898559 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-db-secret" Jan 31 04:04:51 crc kubenswrapper[4827]: I0131 04:04:51.904010 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5tl5k\" (UniqueName: \"kubernetes.io/projected/1c08b19d-c1e1-4d21-ad8a-03207b8ac8c2-kube-api-access-5tl5k\") pod \"barbican-d351-account-create-update-jnlqc\" (UID: \"1c08b19d-c1e1-4d21-ad8a-03207b8ac8c2\") " pod="openstack/barbican-d351-account-create-update-jnlqc" Jan 31 
04:04:51 crc kubenswrapper[4827]: I0131 04:04:51.907295 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-3471-account-create-update-g5xhk"] Jan 31 04:04:51 crc kubenswrapper[4827]: I0131 04:04:51.972672 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1f0110cb-d427-4d1b-a2d1-551270a63093-operator-scripts\") pod \"neutron-3471-account-create-update-g5xhk\" (UID: \"1f0110cb-d427-4d1b-a2d1-551270a63093\") " pod="openstack/neutron-3471-account-create-update-g5xhk" Jan 31 04:04:51 crc kubenswrapper[4827]: I0131 04:04:51.972763 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jncxv\" (UniqueName: \"kubernetes.io/projected/470ad1f2-aae7-4a4c-8258-648066d14ec9-kube-api-access-jncxv\") pod \"neutron-db-create-5kr5x\" (UID: \"470ad1f2-aae7-4a4c-8258-648066d14ec9\") " pod="openstack/neutron-db-create-5kr5x" Jan 31 04:04:51 crc kubenswrapper[4827]: I0131 04:04:51.972806 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/470ad1f2-aae7-4a4c-8258-648066d14ec9-operator-scripts\") pod \"neutron-db-create-5kr5x\" (UID: \"470ad1f2-aae7-4a4c-8258-648066d14ec9\") " pod="openstack/neutron-db-create-5kr5x" Jan 31 04:04:51 crc kubenswrapper[4827]: I0131 04:04:51.972839 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h6wvg\" (UniqueName: \"kubernetes.io/projected/1f0110cb-d427-4d1b-a2d1-551270a63093-kube-api-access-h6wvg\") pod \"neutron-3471-account-create-update-g5xhk\" (UID: \"1f0110cb-d427-4d1b-a2d1-551270a63093\") " pod="openstack/neutron-3471-account-create-update-g5xhk" Jan 31 04:04:51 crc kubenswrapper[4827]: I0131 04:04:51.973742 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: 
\"kubernetes.io/configmap/470ad1f2-aae7-4a4c-8258-648066d14ec9-operator-scripts\") pod \"neutron-db-create-5kr5x\" (UID: \"470ad1f2-aae7-4a4c-8258-648066d14ec9\") " pod="openstack/neutron-db-create-5kr5x" Jan 31 04:04:51 crc kubenswrapper[4827]: I0131 04:04:51.988446 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jncxv\" (UniqueName: \"kubernetes.io/projected/470ad1f2-aae7-4a4c-8258-648066d14ec9-kube-api-access-jncxv\") pod \"neutron-db-create-5kr5x\" (UID: \"470ad1f2-aae7-4a4c-8258-648066d14ec9\") " pod="openstack/neutron-db-create-5kr5x" Jan 31 04:04:52 crc kubenswrapper[4827]: I0131 04:04:52.054449 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-xmrgk" Jan 31 04:04:52 crc kubenswrapper[4827]: I0131 04:04:52.068936 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-d351-account-create-update-jnlqc" Jan 31 04:04:52 crc kubenswrapper[4827]: I0131 04:04:52.074117 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h6wvg\" (UniqueName: \"kubernetes.io/projected/1f0110cb-d427-4d1b-a2d1-551270a63093-kube-api-access-h6wvg\") pod \"neutron-3471-account-create-update-g5xhk\" (UID: \"1f0110cb-d427-4d1b-a2d1-551270a63093\") " pod="openstack/neutron-3471-account-create-update-g5xhk" Jan 31 04:04:52 crc kubenswrapper[4827]: I0131 04:04:52.074178 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1f0110cb-d427-4d1b-a2d1-551270a63093-operator-scripts\") pod \"neutron-3471-account-create-update-g5xhk\" (UID: \"1f0110cb-d427-4d1b-a2d1-551270a63093\") " pod="openstack/neutron-3471-account-create-update-g5xhk" Jan 31 04:04:52 crc kubenswrapper[4827]: I0131 04:04:52.074860 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: 
\"kubernetes.io/configmap/1f0110cb-d427-4d1b-a2d1-551270a63093-operator-scripts\") pod \"neutron-3471-account-create-update-g5xhk\" (UID: \"1f0110cb-d427-4d1b-a2d1-551270a63093\") " pod="openstack/neutron-3471-account-create-update-g5xhk" Jan 31 04:04:52 crc kubenswrapper[4827]: I0131 04:04:52.089470 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h6wvg\" (UniqueName: \"kubernetes.io/projected/1f0110cb-d427-4d1b-a2d1-551270a63093-kube-api-access-h6wvg\") pod \"neutron-3471-account-create-update-g5xhk\" (UID: \"1f0110cb-d427-4d1b-a2d1-551270a63093\") " pod="openstack/neutron-3471-account-create-update-g5xhk" Jan 31 04:04:52 crc kubenswrapper[4827]: I0131 04:04:52.101580 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-5kr5x" Jan 31 04:04:52 crc kubenswrapper[4827]: I0131 04:04:52.241654 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-3471-account-create-update-g5xhk" Jan 31 04:04:52 crc kubenswrapper[4827]: I0131 04:04:52.245802 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-create-sgdlb"] Jan 31 04:04:52 crc kubenswrapper[4827]: I0131 04:04:52.383980 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-create-c5fwt"] Jan 31 04:04:52 crc kubenswrapper[4827]: W0131 04:04:52.394401 4827 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb9330fdf_710f_4fff_b064_d5ab07f73cb2.slice/crio-31ccf664659ca7f9bbc17ee7c03965d13399fce3bdba5ba9e2383dc925a8eb72 WatchSource:0}: Error finding container 31ccf664659ca7f9bbc17ee7c03965d13399fce3bdba5ba9e2383dc925a8eb72: Status 404 returned error can't find the container with id 31ccf664659ca7f9bbc17ee7c03965d13399fce3bdba5ba9e2383dc925a8eb72 Jan 31 04:04:52 crc kubenswrapper[4827]: I0131 04:04:52.420815 4827 kubelet.go:2428] "SyncLoop UPDATE" 
source="api" pods=["openstack/cinder-e795-account-create-update-6vlcb"] Jan 31 04:04:52 crc kubenswrapper[4827]: W0131 04:04:52.427934 4827 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8cba650c_b1cb_43e0_b831_d2289e50036f.slice/crio-59bb480793a73db85123aef5fb7815d0833e21c13afa5d179f7e67925d23c59b WatchSource:0}: Error finding container 59bb480793a73db85123aef5fb7815d0833e21c13afa5d179f7e67925d23c59b: Status 404 returned error can't find the container with id 59bb480793a73db85123aef5fb7815d0833e21c13afa5d179f7e67925d23c59b Jan 31 04:04:52 crc kubenswrapper[4827]: I0131 04:04:52.586723 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-sync-xmrgk"] Jan 31 04:04:52 crc kubenswrapper[4827]: I0131 04:04:52.658670 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-create-5kr5x"] Jan 31 04:04:52 crc kubenswrapper[4827]: I0131 04:04:52.682746 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-d351-account-create-update-jnlqc"] Jan 31 04:04:52 crc kubenswrapper[4827]: W0131 04:04:52.702119 4827 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1c08b19d_c1e1_4d21_ad8a_03207b8ac8c2.slice/crio-0b6c5e7476e1b4d8a302942faa5582803ee1e2f881cab3c518269cd79b3ce6e3 WatchSource:0}: Error finding container 0b6c5e7476e1b4d8a302942faa5582803ee1e2f881cab3c518269cd79b3ce6e3: Status 404 returned error can't find the container with id 0b6c5e7476e1b4d8a302942faa5582803ee1e2f881cab3c518269cd79b3ce6e3 Jan 31 04:04:52 crc kubenswrapper[4827]: I0131 04:04:52.858979 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-3471-account-create-update-g5xhk"] Jan 31 04:04:52 crc kubenswrapper[4827]: I0131 04:04:52.882112 4827 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openstack/dnsmasq-dns-54f9b7b8d9-jmjqr" Jan 31 04:04:52 crc kubenswrapper[4827]: I0131 04:04:52.936927 4827 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-86db49b7ff-wzx4f"] Jan 31 04:04:52 crc kubenswrapper[4827]: I0131 04:04:52.937149 4827 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-86db49b7ff-wzx4f" podUID="7ff119c1-4543-4413-94df-a2cf5ca523d5" containerName="dnsmasq-dns" containerID="cri-o://6df7da264918a0e3d7e1508ecf6ea25c7ba35e6162996f3075ed63e3d577a1a1" gracePeriod=10 Jan 31 04:04:52 crc kubenswrapper[4827]: W0131 04:04:52.989855 4827 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1f0110cb_d427_4d1b_a2d1_551270a63093.slice/crio-02e7819832d496f813afb055b9e9af617f4fc32e45830c749ecd7bd09033cab5 WatchSource:0}: Error finding container 02e7819832d496f813afb055b9e9af617f4fc32e45830c749ecd7bd09033cab5: Status 404 returned error can't find the container with id 02e7819832d496f813afb055b9e9af617f4fc32e45830c749ecd7bd09033cab5 Jan 31 04:04:53 crc kubenswrapper[4827]: I0131 04:04:53.249252 4827 generic.go:334] "Generic (PLEG): container finished" podID="7ff119c1-4543-4413-94df-a2cf5ca523d5" containerID="6df7da264918a0e3d7e1508ecf6ea25c7ba35e6162996f3075ed63e3d577a1a1" exitCode=0 Jan 31 04:04:53 crc kubenswrapper[4827]: I0131 04:04:53.249296 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-86db49b7ff-wzx4f" event={"ID":"7ff119c1-4543-4413-94df-a2cf5ca523d5","Type":"ContainerDied","Data":"6df7da264918a0e3d7e1508ecf6ea25c7ba35e6162996f3075ed63e3d577a1a1"} Jan 31 04:04:53 crc kubenswrapper[4827]: I0131 04:04:53.251284 4827 generic.go:334] "Generic (PLEG): container finished" podID="afb22e97-f599-49e5-8cde-ddb7bb682dd4" containerID="01277b2c363bd9580fdd4a81bb6cd090fd028e4a20da528f6792ae0f63637624" exitCode=0 Jan 31 04:04:53 crc kubenswrapper[4827]: I0131 04:04:53.251373 
4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-sgdlb" event={"ID":"afb22e97-f599-49e5-8cde-ddb7bb682dd4","Type":"ContainerDied","Data":"01277b2c363bd9580fdd4a81bb6cd090fd028e4a20da528f6792ae0f63637624"} Jan 31 04:04:53 crc kubenswrapper[4827]: I0131 04:04:53.251406 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-sgdlb" event={"ID":"afb22e97-f599-49e5-8cde-ddb7bb682dd4","Type":"ContainerStarted","Data":"a2fece041444f01ae7955567d29cec4cbdd3221d84fb3c6c3931ceab7852fb7b"} Jan 31 04:04:53 crc kubenswrapper[4827]: I0131 04:04:53.253068 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-xmrgk" event={"ID":"75ab4449-6cdb-4b40-a0f0-432667f4ca97","Type":"ContainerStarted","Data":"8f02be205e8496a2a160e496b43b89d583cc638cfdf855576abfebcf74b3521c"} Jan 31 04:04:53 crc kubenswrapper[4827]: I0131 04:04:53.254237 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-d351-account-create-update-jnlqc" event={"ID":"1c08b19d-c1e1-4d21-ad8a-03207b8ac8c2","Type":"ContainerStarted","Data":"a48bebd4f582bea4e2f90c202c9b2cd36a43f399bb577e4ed5e03a3125500f04"} Jan 31 04:04:53 crc kubenswrapper[4827]: I0131 04:04:53.254261 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-d351-account-create-update-jnlqc" event={"ID":"1c08b19d-c1e1-4d21-ad8a-03207b8ac8c2","Type":"ContainerStarted","Data":"0b6c5e7476e1b4d8a302942faa5582803ee1e2f881cab3c518269cd79b3ce6e3"} Jan 31 04:04:53 crc kubenswrapper[4827]: I0131 04:04:53.256038 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-5kr5x" event={"ID":"470ad1f2-aae7-4a4c-8258-648066d14ec9","Type":"ContainerStarted","Data":"5ecdd62caa924e4695a271469e1bfa0aa564f12c5446805dea04bf66dc454e54"} Jan 31 04:04:53 crc kubenswrapper[4827]: I0131 04:04:53.256073 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-5kr5x" 
event={"ID":"470ad1f2-aae7-4a4c-8258-648066d14ec9","Type":"ContainerStarted","Data":"e98c266729a749dc67f92bc5119107879f7a703c209a566155053cbb28b60417"} Jan 31 04:04:53 crc kubenswrapper[4827]: I0131 04:04:53.257306 4827 generic.go:334] "Generic (PLEG): container finished" podID="8cba650c-b1cb-43e0-b831-d2289e50036f" containerID="020c483397b073b236c826b3b98d8cb8fabdc81507992175547b9368210475c1" exitCode=0 Jan 31 04:04:53 crc kubenswrapper[4827]: I0131 04:04:53.257347 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-e795-account-create-update-6vlcb" event={"ID":"8cba650c-b1cb-43e0-b831-d2289e50036f","Type":"ContainerDied","Data":"020c483397b073b236c826b3b98d8cb8fabdc81507992175547b9368210475c1"} Jan 31 04:04:53 crc kubenswrapper[4827]: I0131 04:04:53.257363 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-e795-account-create-update-6vlcb" event={"ID":"8cba650c-b1cb-43e0-b831-d2289e50036f","Type":"ContainerStarted","Data":"59bb480793a73db85123aef5fb7815d0833e21c13afa5d179f7e67925d23c59b"} Jan 31 04:04:53 crc kubenswrapper[4827]: I0131 04:04:53.258285 4827 generic.go:334] "Generic (PLEG): container finished" podID="b9330fdf-710f-4fff-b064-d5ab07f73cb2" containerID="230da5d0e72eb6429b9cc8e891da0be6ab3c52cf56690b6e4548caa8c9748259" exitCode=0 Jan 31 04:04:53 crc kubenswrapper[4827]: I0131 04:04:53.258317 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-c5fwt" event={"ID":"b9330fdf-710f-4fff-b064-d5ab07f73cb2","Type":"ContainerDied","Data":"230da5d0e72eb6429b9cc8e891da0be6ab3c52cf56690b6e4548caa8c9748259"} Jan 31 04:04:53 crc kubenswrapper[4827]: I0131 04:04:53.258330 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-c5fwt" event={"ID":"b9330fdf-710f-4fff-b064-d5ab07f73cb2","Type":"ContainerStarted","Data":"31ccf664659ca7f9bbc17ee7c03965d13399fce3bdba5ba9e2383dc925a8eb72"} Jan 31 04:04:53 crc kubenswrapper[4827]: I0131 04:04:53.259044 4827 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-3471-account-create-update-g5xhk" event={"ID":"1f0110cb-d427-4d1b-a2d1-551270a63093","Type":"ContainerStarted","Data":"02e7819832d496f813afb055b9e9af617f4fc32e45830c749ecd7bd09033cab5"} Jan 31 04:04:53 crc kubenswrapper[4827]: I0131 04:04:53.422806 4827 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-db-create-5kr5x" podStartSLOduration=2.42279291 podStartE2EDuration="2.42279291s" podCreationTimestamp="2026-01-31 04:04:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 04:04:53.421601503 +0000 UTC m=+1086.108681962" watchObservedRunningTime="2026-01-31 04:04:53.42279291 +0000 UTC m=+1086.109873349" Jan 31 04:04:53 crc kubenswrapper[4827]: I0131 04:04:53.466629 4827 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-d351-account-create-update-jnlqc" podStartSLOduration=2.466608263 podStartE2EDuration="2.466608263s" podCreationTimestamp="2026-01-31 04:04:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 04:04:53.449898891 +0000 UTC m=+1086.136979340" watchObservedRunningTime="2026-01-31 04:04:53.466608263 +0000 UTC m=+1086.153688722" Jan 31 04:04:53 crc kubenswrapper[4827]: I0131 04:04:53.629503 4827 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-86db49b7ff-wzx4f" Jan 31 04:04:53 crc kubenswrapper[4827]: I0131 04:04:53.722915 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/7ff119c1-4543-4413-94df-a2cf5ca523d5-ovsdbserver-nb\") pod \"7ff119c1-4543-4413-94df-a2cf5ca523d5\" (UID: \"7ff119c1-4543-4413-94df-a2cf5ca523d5\") " Jan 31 04:04:53 crc kubenswrapper[4827]: I0131 04:04:53.722977 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/7ff119c1-4543-4413-94df-a2cf5ca523d5-ovsdbserver-sb\") pod \"7ff119c1-4543-4413-94df-a2cf5ca523d5\" (UID: \"7ff119c1-4543-4413-94df-a2cf5ca523d5\") " Jan 31 04:04:53 crc kubenswrapper[4827]: I0131 04:04:53.723009 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7ff119c1-4543-4413-94df-a2cf5ca523d5-config\") pod \"7ff119c1-4543-4413-94df-a2cf5ca523d5\" (UID: \"7ff119c1-4543-4413-94df-a2cf5ca523d5\") " Jan 31 04:04:53 crc kubenswrapper[4827]: I0131 04:04:53.723052 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/7ff119c1-4543-4413-94df-a2cf5ca523d5-dns-svc\") pod \"7ff119c1-4543-4413-94df-a2cf5ca523d5\" (UID: \"7ff119c1-4543-4413-94df-a2cf5ca523d5\") " Jan 31 04:04:53 crc kubenswrapper[4827]: I0131 04:04:53.723101 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rqpz5\" (UniqueName: \"kubernetes.io/projected/7ff119c1-4543-4413-94df-a2cf5ca523d5-kube-api-access-rqpz5\") pod \"7ff119c1-4543-4413-94df-a2cf5ca523d5\" (UID: \"7ff119c1-4543-4413-94df-a2cf5ca523d5\") " Jan 31 04:04:53 crc kubenswrapper[4827]: I0131 04:04:53.748261 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/projected/7ff119c1-4543-4413-94df-a2cf5ca523d5-kube-api-access-rqpz5" (OuterVolumeSpecName: "kube-api-access-rqpz5") pod "7ff119c1-4543-4413-94df-a2cf5ca523d5" (UID: "7ff119c1-4543-4413-94df-a2cf5ca523d5"). InnerVolumeSpecName "kube-api-access-rqpz5". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 04:04:53 crc kubenswrapper[4827]: I0131 04:04:53.766553 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7ff119c1-4543-4413-94df-a2cf5ca523d5-config" (OuterVolumeSpecName: "config") pod "7ff119c1-4543-4413-94df-a2cf5ca523d5" (UID: "7ff119c1-4543-4413-94df-a2cf5ca523d5"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 04:04:53 crc kubenswrapper[4827]: I0131 04:04:53.782264 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7ff119c1-4543-4413-94df-a2cf5ca523d5-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "7ff119c1-4543-4413-94df-a2cf5ca523d5" (UID: "7ff119c1-4543-4413-94df-a2cf5ca523d5"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 04:04:53 crc kubenswrapper[4827]: I0131 04:04:53.783288 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7ff119c1-4543-4413-94df-a2cf5ca523d5-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "7ff119c1-4543-4413-94df-a2cf5ca523d5" (UID: "7ff119c1-4543-4413-94df-a2cf5ca523d5"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 04:04:53 crc kubenswrapper[4827]: I0131 04:04:53.787353 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7ff119c1-4543-4413-94df-a2cf5ca523d5-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "7ff119c1-4543-4413-94df-a2cf5ca523d5" (UID: "7ff119c1-4543-4413-94df-a2cf5ca523d5"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 04:04:53 crc kubenswrapper[4827]: I0131 04:04:53.825262 4827 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/7ff119c1-4543-4413-94df-a2cf5ca523d5-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 31 04:04:53 crc kubenswrapper[4827]: I0131 04:04:53.825297 4827 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/7ff119c1-4543-4413-94df-a2cf5ca523d5-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 31 04:04:53 crc kubenswrapper[4827]: I0131 04:04:53.825307 4827 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7ff119c1-4543-4413-94df-a2cf5ca523d5-config\") on node \"crc\" DevicePath \"\"" Jan 31 04:04:53 crc kubenswrapper[4827]: I0131 04:04:53.825319 4827 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/7ff119c1-4543-4413-94df-a2cf5ca523d5-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 31 04:04:53 crc kubenswrapper[4827]: I0131 04:04:53.825329 4827 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rqpz5\" (UniqueName: \"kubernetes.io/projected/7ff119c1-4543-4413-94df-a2cf5ca523d5-kube-api-access-rqpz5\") on node \"crc\" DevicePath \"\"" Jan 31 04:04:54 crc kubenswrapper[4827]: I0131 04:04:54.265844 4827 generic.go:334] "Generic (PLEG): container finished" podID="1c08b19d-c1e1-4d21-ad8a-03207b8ac8c2" containerID="a48bebd4f582bea4e2f90c202c9b2cd36a43f399bb577e4ed5e03a3125500f04" exitCode=0 Jan 31 04:04:54 crc kubenswrapper[4827]: I0131 04:04:54.265940 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-d351-account-create-update-jnlqc" event={"ID":"1c08b19d-c1e1-4d21-ad8a-03207b8ac8c2","Type":"ContainerDied","Data":"a48bebd4f582bea4e2f90c202c9b2cd36a43f399bb577e4ed5e03a3125500f04"} Jan 31 04:04:54 crc 
kubenswrapper[4827]: I0131 04:04:54.267403 4827 generic.go:334] "Generic (PLEG): container finished" podID="470ad1f2-aae7-4a4c-8258-648066d14ec9" containerID="5ecdd62caa924e4695a271469e1bfa0aa564f12c5446805dea04bf66dc454e54" exitCode=0 Jan 31 04:04:54 crc kubenswrapper[4827]: I0131 04:04:54.267452 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-5kr5x" event={"ID":"470ad1f2-aae7-4a4c-8258-648066d14ec9","Type":"ContainerDied","Data":"5ecdd62caa924e4695a271469e1bfa0aa564f12c5446805dea04bf66dc454e54"} Jan 31 04:04:54 crc kubenswrapper[4827]: I0131 04:04:54.268936 4827 generic.go:334] "Generic (PLEG): container finished" podID="1f0110cb-d427-4d1b-a2d1-551270a63093" containerID="88885a6baf2794c005ccf9af116f6577e1c6d37522aef85b4eca6b3d3f10179d" exitCode=0 Jan 31 04:04:54 crc kubenswrapper[4827]: I0131 04:04:54.268988 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-3471-account-create-update-g5xhk" event={"ID":"1f0110cb-d427-4d1b-a2d1-551270a63093","Type":"ContainerDied","Data":"88885a6baf2794c005ccf9af116f6577e1c6d37522aef85b4eca6b3d3f10179d"} Jan 31 04:04:54 crc kubenswrapper[4827]: I0131 04:04:54.272669 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-86db49b7ff-wzx4f" event={"ID":"7ff119c1-4543-4413-94df-a2cf5ca523d5","Type":"ContainerDied","Data":"df737f0210576a98463d31ea497083dfc0c94a4489288b6a61467b9b29d0ee5d"} Jan 31 04:04:54 crc kubenswrapper[4827]: I0131 04:04:54.272717 4827 scope.go:117] "RemoveContainer" containerID="6df7da264918a0e3d7e1508ecf6ea25c7ba35e6162996f3075ed63e3d577a1a1" Jan 31 04:04:54 crc kubenswrapper[4827]: I0131 04:04:54.272830 4827 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-86db49b7ff-wzx4f" Jan 31 04:04:54 crc kubenswrapper[4827]: E0131 04:04:54.283508 4827 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7ff119c1_4543_4413_94df_a2cf5ca523d5.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7ff119c1_4543_4413_94df_a2cf5ca523d5.slice/crio-df737f0210576a98463d31ea497083dfc0c94a4489288b6a61467b9b29d0ee5d\": RecentStats: unable to find data in memory cache]" Jan 31 04:04:54 crc kubenswrapper[4827]: I0131 04:04:54.304430 4827 scope.go:117] "RemoveContainer" containerID="457857ebe4d516c4060bb88882652c2ff255647be0eb2aeb02649a6af506c702" Jan 31 04:04:54 crc kubenswrapper[4827]: I0131 04:04:54.341988 4827 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-86db49b7ff-wzx4f"] Jan 31 04:04:54 crc kubenswrapper[4827]: I0131 04:04:54.347074 4827 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-86db49b7ff-wzx4f"] Jan 31 04:04:54 crc kubenswrapper[4827]: I0131 04:04:54.706004 4827 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-e795-account-create-update-6vlcb" Jan 31 04:04:54 crc kubenswrapper[4827]: I0131 04:04:54.748507 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8cba650c-b1cb-43e0-b831-d2289e50036f-operator-scripts\") pod \"8cba650c-b1cb-43e0-b831-d2289e50036f\" (UID: \"8cba650c-b1cb-43e0-b831-d2289e50036f\") " Jan 31 04:04:54 crc kubenswrapper[4827]: I0131 04:04:54.748630 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-h7nxx\" (UniqueName: \"kubernetes.io/projected/8cba650c-b1cb-43e0-b831-d2289e50036f-kube-api-access-h7nxx\") pod \"8cba650c-b1cb-43e0-b831-d2289e50036f\" (UID: \"8cba650c-b1cb-43e0-b831-d2289e50036f\") " Jan 31 04:04:54 crc kubenswrapper[4827]: I0131 04:04:54.750476 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8cba650c-b1cb-43e0-b831-d2289e50036f-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "8cba650c-b1cb-43e0-b831-d2289e50036f" (UID: "8cba650c-b1cb-43e0-b831-d2289e50036f"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 04:04:54 crc kubenswrapper[4827]: I0131 04:04:54.753502 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8cba650c-b1cb-43e0-b831-d2289e50036f-kube-api-access-h7nxx" (OuterVolumeSpecName: "kube-api-access-h7nxx") pod "8cba650c-b1cb-43e0-b831-d2289e50036f" (UID: "8cba650c-b1cb-43e0-b831-d2289e50036f"). InnerVolumeSpecName "kube-api-access-h7nxx". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 04:04:54 crc kubenswrapper[4827]: I0131 04:04:54.826283 4827 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-create-c5fwt" Jan 31 04:04:54 crc kubenswrapper[4827]: I0131 04:04:54.835014 4827 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-sgdlb" Jan 31 04:04:54 crc kubenswrapper[4827]: I0131 04:04:54.850747 4827 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8cba650c-b1cb-43e0-b831-d2289e50036f-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 31 04:04:54 crc kubenswrapper[4827]: I0131 04:04:54.850789 4827 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-h7nxx\" (UniqueName: \"kubernetes.io/projected/8cba650c-b1cb-43e0-b831-d2289e50036f-kube-api-access-h7nxx\") on node \"crc\" DevicePath \"\"" Jan 31 04:04:54 crc kubenswrapper[4827]: I0131 04:04:54.952199 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/afb22e97-f599-49e5-8cde-ddb7bb682dd4-operator-scripts\") pod \"afb22e97-f599-49e5-8cde-ddb7bb682dd4\" (UID: \"afb22e97-f599-49e5-8cde-ddb7bb682dd4\") " Jan 31 04:04:54 crc kubenswrapper[4827]: I0131 04:04:54.952339 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7kxjb\" (UniqueName: \"kubernetes.io/projected/b9330fdf-710f-4fff-b064-d5ab07f73cb2-kube-api-access-7kxjb\") pod \"b9330fdf-710f-4fff-b064-d5ab07f73cb2\" (UID: \"b9330fdf-710f-4fff-b064-d5ab07f73cb2\") " Jan 31 04:04:54 crc kubenswrapper[4827]: I0131 04:04:54.952386 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b9330fdf-710f-4fff-b064-d5ab07f73cb2-operator-scripts\") pod \"b9330fdf-710f-4fff-b064-d5ab07f73cb2\" (UID: \"b9330fdf-710f-4fff-b064-d5ab07f73cb2\") " Jan 31 04:04:54 crc kubenswrapper[4827]: I0131 04:04:54.952501 4827 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"kube-api-access-848rs\" (UniqueName: \"kubernetes.io/projected/afb22e97-f599-49e5-8cde-ddb7bb682dd4-kube-api-access-848rs\") pod \"afb22e97-f599-49e5-8cde-ddb7bb682dd4\" (UID: \"afb22e97-f599-49e5-8cde-ddb7bb682dd4\") " Jan 31 04:04:54 crc kubenswrapper[4827]: I0131 04:04:54.952632 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/afb22e97-f599-49e5-8cde-ddb7bb682dd4-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "afb22e97-f599-49e5-8cde-ddb7bb682dd4" (UID: "afb22e97-f599-49e5-8cde-ddb7bb682dd4"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 04:04:54 crc kubenswrapper[4827]: I0131 04:04:54.952860 4827 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/afb22e97-f599-49e5-8cde-ddb7bb682dd4-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 31 04:04:54 crc kubenswrapper[4827]: I0131 04:04:54.952988 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b9330fdf-710f-4fff-b064-d5ab07f73cb2-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "b9330fdf-710f-4fff-b064-d5ab07f73cb2" (UID: "b9330fdf-710f-4fff-b064-d5ab07f73cb2"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 04:04:54 crc kubenswrapper[4827]: I0131 04:04:54.955713 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b9330fdf-710f-4fff-b064-d5ab07f73cb2-kube-api-access-7kxjb" (OuterVolumeSpecName: "kube-api-access-7kxjb") pod "b9330fdf-710f-4fff-b064-d5ab07f73cb2" (UID: "b9330fdf-710f-4fff-b064-d5ab07f73cb2"). InnerVolumeSpecName "kube-api-access-7kxjb". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 04:04:54 crc kubenswrapper[4827]: I0131 04:04:54.957036 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/afb22e97-f599-49e5-8cde-ddb7bb682dd4-kube-api-access-848rs" (OuterVolumeSpecName: "kube-api-access-848rs") pod "afb22e97-f599-49e5-8cde-ddb7bb682dd4" (UID: "afb22e97-f599-49e5-8cde-ddb7bb682dd4"). InnerVolumeSpecName "kube-api-access-848rs". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 04:04:55 crc kubenswrapper[4827]: I0131 04:04:55.054909 4827 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-848rs\" (UniqueName: \"kubernetes.io/projected/afb22e97-f599-49e5-8cde-ddb7bb682dd4-kube-api-access-848rs\") on node \"crc\" DevicePath \"\"" Jan 31 04:04:55 crc kubenswrapper[4827]: I0131 04:04:55.054936 4827 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7kxjb\" (UniqueName: \"kubernetes.io/projected/b9330fdf-710f-4fff-b064-d5ab07f73cb2-kube-api-access-7kxjb\") on node \"crc\" DevicePath \"\"" Jan 31 04:04:55 crc kubenswrapper[4827]: I0131 04:04:55.054948 4827 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b9330fdf-710f-4fff-b064-d5ab07f73cb2-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 31 04:04:55 crc kubenswrapper[4827]: I0131 04:04:55.289633 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-c5fwt" event={"ID":"b9330fdf-710f-4fff-b064-d5ab07f73cb2","Type":"ContainerDied","Data":"31ccf664659ca7f9bbc17ee7c03965d13399fce3bdba5ba9e2383dc925a8eb72"} Jan 31 04:04:55 crc kubenswrapper[4827]: I0131 04:04:55.290001 4827 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="31ccf664659ca7f9bbc17ee7c03965d13399fce3bdba5ba9e2383dc925a8eb72" Jan 31 04:04:55 crc kubenswrapper[4827]: I0131 04:04:55.290104 4827 util.go:48] "No ready sandbox for pod can be 
found. Need to start a new one" pod="openstack/barbican-db-create-c5fwt" Jan 31 04:04:55 crc kubenswrapper[4827]: I0131 04:04:55.292600 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-sgdlb" event={"ID":"afb22e97-f599-49e5-8cde-ddb7bb682dd4","Type":"ContainerDied","Data":"a2fece041444f01ae7955567d29cec4cbdd3221d84fb3c6c3931ceab7852fb7b"} Jan 31 04:04:55 crc kubenswrapper[4827]: I0131 04:04:55.292626 4827 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a2fece041444f01ae7955567d29cec4cbdd3221d84fb3c6c3931ceab7852fb7b" Jan 31 04:04:55 crc kubenswrapper[4827]: I0131 04:04:55.292673 4827 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-sgdlb" Jan 31 04:04:55 crc kubenswrapper[4827]: I0131 04:04:55.298084 4827 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-e795-account-create-update-6vlcb" Jan 31 04:04:55 crc kubenswrapper[4827]: I0131 04:04:55.300277 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-e795-account-create-update-6vlcb" event={"ID":"8cba650c-b1cb-43e0-b831-d2289e50036f","Type":"ContainerDied","Data":"59bb480793a73db85123aef5fb7815d0833e21c13afa5d179f7e67925d23c59b"} Jan 31 04:04:55 crc kubenswrapper[4827]: I0131 04:04:55.300324 4827 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="59bb480793a73db85123aef5fb7815d0833e21c13afa5d179f7e67925d23c59b" Jan 31 04:04:56 crc kubenswrapper[4827]: I0131 04:04:56.121852 4827 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7ff119c1-4543-4413-94df-a2cf5ca523d5" path="/var/lib/kubelet/pods/7ff119c1-4543-4413-94df-a2cf5ca523d5/volumes" Jan 31 04:04:57 crc kubenswrapper[4827]: I0131 04:04:57.951727 4827 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-d351-account-create-update-jnlqc" Jan 31 04:04:57 crc kubenswrapper[4827]: I0131 04:04:57.992536 4827 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-5kr5x" Jan 31 04:04:58 crc kubenswrapper[4827]: I0131 04:04:58.011790 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5tl5k\" (UniqueName: \"kubernetes.io/projected/1c08b19d-c1e1-4d21-ad8a-03207b8ac8c2-kube-api-access-5tl5k\") pod \"1c08b19d-c1e1-4d21-ad8a-03207b8ac8c2\" (UID: \"1c08b19d-c1e1-4d21-ad8a-03207b8ac8c2\") " Jan 31 04:04:58 crc kubenswrapper[4827]: I0131 04:04:58.011904 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1c08b19d-c1e1-4d21-ad8a-03207b8ac8c2-operator-scripts\") pod \"1c08b19d-c1e1-4d21-ad8a-03207b8ac8c2\" (UID: \"1c08b19d-c1e1-4d21-ad8a-03207b8ac8c2\") " Jan 31 04:04:58 crc kubenswrapper[4827]: I0131 04:04:58.014134 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1c08b19d-c1e1-4d21-ad8a-03207b8ac8c2-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "1c08b19d-c1e1-4d21-ad8a-03207b8ac8c2" (UID: "1c08b19d-c1e1-4d21-ad8a-03207b8ac8c2"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 04:04:58 crc kubenswrapper[4827]: I0131 04:04:58.015634 4827 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-3471-account-create-update-g5xhk" Jan 31 04:04:58 crc kubenswrapper[4827]: I0131 04:04:58.016126 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1c08b19d-c1e1-4d21-ad8a-03207b8ac8c2-kube-api-access-5tl5k" (OuterVolumeSpecName: "kube-api-access-5tl5k") pod "1c08b19d-c1e1-4d21-ad8a-03207b8ac8c2" (UID: "1c08b19d-c1e1-4d21-ad8a-03207b8ac8c2"). InnerVolumeSpecName "kube-api-access-5tl5k". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 04:04:58 crc kubenswrapper[4827]: I0131 04:04:58.113852 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jncxv\" (UniqueName: \"kubernetes.io/projected/470ad1f2-aae7-4a4c-8258-648066d14ec9-kube-api-access-jncxv\") pod \"470ad1f2-aae7-4a4c-8258-648066d14ec9\" (UID: \"470ad1f2-aae7-4a4c-8258-648066d14ec9\") " Jan 31 04:04:58 crc kubenswrapper[4827]: I0131 04:04:58.113966 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1f0110cb-d427-4d1b-a2d1-551270a63093-operator-scripts\") pod \"1f0110cb-d427-4d1b-a2d1-551270a63093\" (UID: \"1f0110cb-d427-4d1b-a2d1-551270a63093\") " Jan 31 04:04:58 crc kubenswrapper[4827]: I0131 04:04:58.114004 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-h6wvg\" (UniqueName: \"kubernetes.io/projected/1f0110cb-d427-4d1b-a2d1-551270a63093-kube-api-access-h6wvg\") pod \"1f0110cb-d427-4d1b-a2d1-551270a63093\" (UID: \"1f0110cb-d427-4d1b-a2d1-551270a63093\") " Jan 31 04:04:58 crc kubenswrapper[4827]: I0131 04:04:58.114039 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/470ad1f2-aae7-4a4c-8258-648066d14ec9-operator-scripts\") pod \"470ad1f2-aae7-4a4c-8258-648066d14ec9\" (UID: \"470ad1f2-aae7-4a4c-8258-648066d14ec9\") " 
Jan 31 04:04:58 crc kubenswrapper[4827]: I0131 04:04:58.114704 4827 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5tl5k\" (UniqueName: \"kubernetes.io/projected/1c08b19d-c1e1-4d21-ad8a-03207b8ac8c2-kube-api-access-5tl5k\") on node \"crc\" DevicePath \"\"" Jan 31 04:04:58 crc kubenswrapper[4827]: I0131 04:04:58.114737 4827 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1c08b19d-c1e1-4d21-ad8a-03207b8ac8c2-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 31 04:04:58 crc kubenswrapper[4827]: I0131 04:04:58.114743 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1f0110cb-d427-4d1b-a2d1-551270a63093-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "1f0110cb-d427-4d1b-a2d1-551270a63093" (UID: "1f0110cb-d427-4d1b-a2d1-551270a63093"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 04:04:58 crc kubenswrapper[4827]: I0131 04:04:58.114836 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/470ad1f2-aae7-4a4c-8258-648066d14ec9-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "470ad1f2-aae7-4a4c-8258-648066d14ec9" (UID: "470ad1f2-aae7-4a4c-8258-648066d14ec9"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 04:04:58 crc kubenswrapper[4827]: I0131 04:04:58.117995 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/470ad1f2-aae7-4a4c-8258-648066d14ec9-kube-api-access-jncxv" (OuterVolumeSpecName: "kube-api-access-jncxv") pod "470ad1f2-aae7-4a4c-8258-648066d14ec9" (UID: "470ad1f2-aae7-4a4c-8258-648066d14ec9"). InnerVolumeSpecName "kube-api-access-jncxv". 
PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 31 04:04:58 crc kubenswrapper[4827]: I0131 04:04:58.119204 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1f0110cb-d427-4d1b-a2d1-551270a63093-kube-api-access-h6wvg" (OuterVolumeSpecName: "kube-api-access-h6wvg") pod "1f0110cb-d427-4d1b-a2d1-551270a63093" (UID: "1f0110cb-d427-4d1b-a2d1-551270a63093"). InnerVolumeSpecName "kube-api-access-h6wvg". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 31 04:04:58 crc kubenswrapper[4827]: I0131 04:04:58.216996 4827 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jncxv\" (UniqueName: \"kubernetes.io/projected/470ad1f2-aae7-4a4c-8258-648066d14ec9-kube-api-access-jncxv\") on node \"crc\" DevicePath \"\""
Jan 31 04:04:58 crc kubenswrapper[4827]: I0131 04:04:58.217040 4827 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1f0110cb-d427-4d1b-a2d1-551270a63093-operator-scripts\") on node \"crc\" DevicePath \"\""
Jan 31 04:04:58 crc kubenswrapper[4827]: I0131 04:04:58.217067 4827 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-h6wvg\" (UniqueName: \"kubernetes.io/projected/1f0110cb-d427-4d1b-a2d1-551270a63093-kube-api-access-h6wvg\") on node \"crc\" DevicePath \"\""
Jan 31 04:04:58 crc kubenswrapper[4827]: I0131 04:04:58.217092 4827 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/470ad1f2-aae7-4a4c-8258-648066d14ec9-operator-scripts\") on node \"crc\" DevicePath \"\""
Jan 31 04:04:58 crc kubenswrapper[4827]: I0131 04:04:58.325341 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-d351-account-create-update-jnlqc" event={"ID":"1c08b19d-c1e1-4d21-ad8a-03207b8ac8c2","Type":"ContainerDied","Data":"0b6c5e7476e1b4d8a302942faa5582803ee1e2f881cab3c518269cd79b3ce6e3"}
Jan 31 04:04:58 crc kubenswrapper[4827]: I0131 04:04:58.325453 4827 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0b6c5e7476e1b4d8a302942faa5582803ee1e2f881cab3c518269cd79b3ce6e3"
Jan 31 04:04:58 crc kubenswrapper[4827]: I0131 04:04:58.325357 4827 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-d351-account-create-update-jnlqc"
Jan 31 04:04:58 crc kubenswrapper[4827]: I0131 04:04:58.330817 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-5kr5x" event={"ID":"470ad1f2-aae7-4a4c-8258-648066d14ec9","Type":"ContainerDied","Data":"e98c266729a749dc67f92bc5119107879f7a703c209a566155053cbb28b60417"}
Jan 31 04:04:58 crc kubenswrapper[4827]: I0131 04:04:58.330939 4827 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e98c266729a749dc67f92bc5119107879f7a703c209a566155053cbb28b60417"
Jan 31 04:04:58 crc kubenswrapper[4827]: I0131 04:04:58.330967 4827 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-5kr5x"
Jan 31 04:04:58 crc kubenswrapper[4827]: I0131 04:04:58.333058 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-3471-account-create-update-g5xhk" event={"ID":"1f0110cb-d427-4d1b-a2d1-551270a63093","Type":"ContainerDied","Data":"02e7819832d496f813afb055b9e9af617f4fc32e45830c749ecd7bd09033cab5"}
Jan 31 04:04:58 crc kubenswrapper[4827]: I0131 04:04:58.333100 4827 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="02e7819832d496f813afb055b9e9af617f4fc32e45830c749ecd7bd09033cab5"
Jan 31 04:04:58 crc kubenswrapper[4827]: I0131 04:04:58.333136 4827 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-3471-account-create-update-g5xhk"
Jan 31 04:04:58 crc kubenswrapper[4827]: I0131 04:04:58.335484 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-xmrgk" event={"ID":"75ab4449-6cdb-4b40-a0f0-432667f4ca97","Type":"ContainerStarted","Data":"53c3ff3ff458c19a130e4530e618a2d88b266a4427db74ff67bff15d8710de91"}
Jan 31 04:04:58 crc kubenswrapper[4827]: I0131 04:04:58.356737 4827 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-db-sync-xmrgk" podStartSLOduration=2.165377476 podStartE2EDuration="7.356652498s" podCreationTimestamp="2026-01-31 04:04:51 +0000 UTC" firstStartedPulling="2026-01-31 04:04:52.605459196 +0000 UTC m=+1085.292539645" lastFinishedPulling="2026-01-31 04:04:57.796734218 +0000 UTC m=+1090.483814667" observedRunningTime="2026-01-31 04:04:58.352977486 +0000 UTC m=+1091.040057995" watchObservedRunningTime="2026-01-31 04:04:58.356652498 +0000 UTC m=+1091.043732967"
Jan 31 04:05:03 crc kubenswrapper[4827]: I0131 04:05:03.380129 4827 generic.go:334] "Generic (PLEG): container finished" podID="75ab4449-6cdb-4b40-a0f0-432667f4ca97" containerID="53c3ff3ff458c19a130e4530e618a2d88b266a4427db74ff67bff15d8710de91" exitCode=0
Jan 31 04:05:03 crc kubenswrapper[4827]: I0131 04:05:03.380269 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-xmrgk" event={"ID":"75ab4449-6cdb-4b40-a0f0-432667f4ca97","Type":"ContainerDied","Data":"53c3ff3ff458c19a130e4530e618a2d88b266a4427db74ff67bff15d8710de91"}
Jan 31 04:05:04 crc kubenswrapper[4827]: I0131 04:05:04.769716 4827 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-xmrgk"
Jan 31 04:05:04 crc kubenswrapper[4827]: I0131 04:05:04.841563 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/75ab4449-6cdb-4b40-a0f0-432667f4ca97-config-data\") pod \"75ab4449-6cdb-4b40-a0f0-432667f4ca97\" (UID: \"75ab4449-6cdb-4b40-a0f0-432667f4ca97\") "
Jan 31 04:05:04 crc kubenswrapper[4827]: I0131 04:05:04.841605 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tjqfn\" (UniqueName: \"kubernetes.io/projected/75ab4449-6cdb-4b40-a0f0-432667f4ca97-kube-api-access-tjqfn\") pod \"75ab4449-6cdb-4b40-a0f0-432667f4ca97\" (UID: \"75ab4449-6cdb-4b40-a0f0-432667f4ca97\") "
Jan 31 04:05:04 crc kubenswrapper[4827]: I0131 04:05:04.841740 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/75ab4449-6cdb-4b40-a0f0-432667f4ca97-combined-ca-bundle\") pod \"75ab4449-6cdb-4b40-a0f0-432667f4ca97\" (UID: \"75ab4449-6cdb-4b40-a0f0-432667f4ca97\") "
Jan 31 04:05:04 crc kubenswrapper[4827]: I0131 04:05:04.846876 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/75ab4449-6cdb-4b40-a0f0-432667f4ca97-kube-api-access-tjqfn" (OuterVolumeSpecName: "kube-api-access-tjqfn") pod "75ab4449-6cdb-4b40-a0f0-432667f4ca97" (UID: "75ab4449-6cdb-4b40-a0f0-432667f4ca97"). InnerVolumeSpecName "kube-api-access-tjqfn". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 31 04:05:04 crc kubenswrapper[4827]: I0131 04:05:04.877718 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/75ab4449-6cdb-4b40-a0f0-432667f4ca97-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "75ab4449-6cdb-4b40-a0f0-432667f4ca97" (UID: "75ab4449-6cdb-4b40-a0f0-432667f4ca97"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 31 04:05:04 crc kubenswrapper[4827]: I0131 04:05:04.879677 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/75ab4449-6cdb-4b40-a0f0-432667f4ca97-config-data" (OuterVolumeSpecName: "config-data") pod "75ab4449-6cdb-4b40-a0f0-432667f4ca97" (UID: "75ab4449-6cdb-4b40-a0f0-432667f4ca97"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 31 04:05:04 crc kubenswrapper[4827]: I0131 04:05:04.943502 4827 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/75ab4449-6cdb-4b40-a0f0-432667f4ca97-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 31 04:05:04 crc kubenswrapper[4827]: I0131 04:05:04.943529 4827 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/75ab4449-6cdb-4b40-a0f0-432667f4ca97-config-data\") on node \"crc\" DevicePath \"\""
Jan 31 04:05:04 crc kubenswrapper[4827]: I0131 04:05:04.943539 4827 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tjqfn\" (UniqueName: \"kubernetes.io/projected/75ab4449-6cdb-4b40-a0f0-432667f4ca97-kube-api-access-tjqfn\") on node \"crc\" DevicePath \"\""
Jan 31 04:05:05 crc kubenswrapper[4827]: I0131 04:05:05.403416 4827 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-xmrgk"
Jan 31 04:05:05 crc kubenswrapper[4827]: I0131 04:05:05.403455 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-xmrgk" event={"ID":"75ab4449-6cdb-4b40-a0f0-432667f4ca97","Type":"ContainerDied","Data":"8f02be205e8496a2a160e496b43b89d583cc638cfdf855576abfebcf74b3521c"}
Jan 31 04:05:05 crc kubenswrapper[4827]: I0131 04:05:05.403565 4827 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8f02be205e8496a2a160e496b43b89d583cc638cfdf855576abfebcf74b3521c"
Jan 31 04:05:05 crc kubenswrapper[4827]: I0131 04:05:05.749802 4827 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-6546db6db7-hq5pm"]
Jan 31 04:05:05 crc kubenswrapper[4827]: E0131 04:05:05.750117 4827 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1c08b19d-c1e1-4d21-ad8a-03207b8ac8c2" containerName="mariadb-account-create-update"
Jan 31 04:05:05 crc kubenswrapper[4827]: I0131 04:05:05.750129 4827 state_mem.go:107] "Deleted CPUSet assignment" podUID="1c08b19d-c1e1-4d21-ad8a-03207b8ac8c2" containerName="mariadb-account-create-update"
Jan 31 04:05:05 crc kubenswrapper[4827]: E0131 04:05:05.750140 4827 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7ff119c1-4543-4413-94df-a2cf5ca523d5" containerName="dnsmasq-dns"
Jan 31 04:05:05 crc kubenswrapper[4827]: I0131 04:05:05.750145 4827 state_mem.go:107] "Deleted CPUSet assignment" podUID="7ff119c1-4543-4413-94df-a2cf5ca523d5" containerName="dnsmasq-dns"
Jan 31 04:05:05 crc kubenswrapper[4827]: E0131 04:05:05.750161 4827 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8cba650c-b1cb-43e0-b831-d2289e50036f" containerName="mariadb-account-create-update"
Jan 31 04:05:05 crc kubenswrapper[4827]: I0131 04:05:05.750167 4827 state_mem.go:107] "Deleted CPUSet assignment" podUID="8cba650c-b1cb-43e0-b831-d2289e50036f" containerName="mariadb-account-create-update"
Jan 31 04:05:05 crc kubenswrapper[4827]: E0131 04:05:05.750181 4827 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1f0110cb-d427-4d1b-a2d1-551270a63093" containerName="mariadb-account-create-update"
Jan 31 04:05:05 crc kubenswrapper[4827]: I0131 04:05:05.750187 4827 state_mem.go:107] "Deleted CPUSet assignment" podUID="1f0110cb-d427-4d1b-a2d1-551270a63093" containerName="mariadb-account-create-update"
Jan 31 04:05:05 crc kubenswrapper[4827]: E0131 04:05:05.750202 4827 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b9330fdf-710f-4fff-b064-d5ab07f73cb2" containerName="mariadb-database-create"
Jan 31 04:05:05 crc kubenswrapper[4827]: I0131 04:05:05.750208 4827 state_mem.go:107] "Deleted CPUSet assignment" podUID="b9330fdf-710f-4fff-b064-d5ab07f73cb2" containerName="mariadb-database-create"
Jan 31 04:05:05 crc kubenswrapper[4827]: E0131 04:05:05.750220 4827 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="470ad1f2-aae7-4a4c-8258-648066d14ec9" containerName="mariadb-database-create"
Jan 31 04:05:05 crc kubenswrapper[4827]: I0131 04:05:05.750225 4827 state_mem.go:107] "Deleted CPUSet assignment" podUID="470ad1f2-aae7-4a4c-8258-648066d14ec9" containerName="mariadb-database-create"
Jan 31 04:05:05 crc kubenswrapper[4827]: E0131 04:05:05.750236 4827 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="75ab4449-6cdb-4b40-a0f0-432667f4ca97" containerName="keystone-db-sync"
Jan 31 04:05:05 crc kubenswrapper[4827]: I0131 04:05:05.750241 4827 state_mem.go:107] "Deleted CPUSet assignment" podUID="75ab4449-6cdb-4b40-a0f0-432667f4ca97" containerName="keystone-db-sync"
Jan 31 04:05:05 crc kubenswrapper[4827]: E0131 04:05:05.750251 4827 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7ff119c1-4543-4413-94df-a2cf5ca523d5" containerName="init"
Jan 31 04:05:05 crc kubenswrapper[4827]: I0131 04:05:05.750256 4827 state_mem.go:107] "Deleted CPUSet assignment" podUID="7ff119c1-4543-4413-94df-a2cf5ca523d5" containerName="init"
Jan 31 04:05:05 crc kubenswrapper[4827]: E0131 04:05:05.750267 4827 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="afb22e97-f599-49e5-8cde-ddb7bb682dd4" containerName="mariadb-database-create"
Jan 31 04:05:05 crc kubenswrapper[4827]: I0131 04:05:05.750274 4827 state_mem.go:107] "Deleted CPUSet assignment" podUID="afb22e97-f599-49e5-8cde-ddb7bb682dd4" containerName="mariadb-database-create"
Jan 31 04:05:05 crc kubenswrapper[4827]: I0131 04:05:05.750404 4827 memory_manager.go:354] "RemoveStaleState removing state" podUID="1f0110cb-d427-4d1b-a2d1-551270a63093" containerName="mariadb-account-create-update"
Jan 31 04:05:05 crc kubenswrapper[4827]: I0131 04:05:05.750413 4827 memory_manager.go:354] "RemoveStaleState removing state" podUID="7ff119c1-4543-4413-94df-a2cf5ca523d5" containerName="dnsmasq-dns"
Jan 31 04:05:05 crc kubenswrapper[4827]: I0131 04:05:05.750423 4827 memory_manager.go:354] "RemoveStaleState removing state" podUID="8cba650c-b1cb-43e0-b831-d2289e50036f" containerName="mariadb-account-create-update"
Jan 31 04:05:05 crc kubenswrapper[4827]: I0131 04:05:05.750435 4827 memory_manager.go:354] "RemoveStaleState removing state" podUID="b9330fdf-710f-4fff-b064-d5ab07f73cb2" containerName="mariadb-database-create"
Jan 31 04:05:05 crc kubenswrapper[4827]: I0131 04:05:05.750444 4827 memory_manager.go:354] "RemoveStaleState removing state" podUID="afb22e97-f599-49e5-8cde-ddb7bb682dd4" containerName="mariadb-database-create"
Jan 31 04:05:05 crc kubenswrapper[4827]: I0131 04:05:05.750468 4827 memory_manager.go:354] "RemoveStaleState removing state" podUID="470ad1f2-aae7-4a4c-8258-648066d14ec9" containerName="mariadb-database-create"
Jan 31 04:05:05 crc kubenswrapper[4827]: I0131 04:05:05.750480 4827 memory_manager.go:354] "RemoveStaleState removing state" podUID="75ab4449-6cdb-4b40-a0f0-432667f4ca97" containerName="keystone-db-sync"
Jan 31 04:05:05 crc kubenswrapper[4827]: I0131 04:05:05.750489 4827 memory_manager.go:354] "RemoveStaleState removing state" podUID="1c08b19d-c1e1-4d21-ad8a-03207b8ac8c2" containerName="mariadb-account-create-update"
Jan 31 04:05:05 crc kubenswrapper[4827]: I0131 04:05:05.751443 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6546db6db7-hq5pm"
Jan 31 04:05:05 crc kubenswrapper[4827]: I0131 04:05:05.768626 4827 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-bootstrap-5pz6x"]
Jan 31 04:05:05 crc kubenswrapper[4827]: I0131 04:05:05.769593 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-5pz6x"
Jan 31 04:05:05 crc kubenswrapper[4827]: I0131 04:05:05.780223 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-5pz6x"]
Jan 31 04:05:05 crc kubenswrapper[4827]: I0131 04:05:05.785140 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data"
Jan 31 04:05:05 crc kubenswrapper[4827]: I0131 04:05:05.785334 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone"
Jan 31 04:05:05 crc kubenswrapper[4827]: I0131 04:05:05.787028 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts"
Jan 31 04:05:05 crc kubenswrapper[4827]: I0131 04:05:05.795208 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-6bbzt"
Jan 31 04:05:05 crc kubenswrapper[4827]: I0131 04:05:05.795376 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret"
Jan 31 04:05:05 crc kubenswrapper[4827]: I0131 04:05:05.797235 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6546db6db7-hq5pm"]
Jan 31 04:05:05 crc kubenswrapper[4827]: I0131 04:05:05.863382 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/77bd3d45-e656-4e13-94c5-1ef4bb703af6-config-data\") pod \"keystone-bootstrap-5pz6x\" (UID: \"77bd3d45-e656-4e13-94c5-1ef4bb703af6\") " pod="openstack/keystone-bootstrap-5pz6x"
Jan 31 04:05:05 crc kubenswrapper[4827]: I0131 04:05:05.863425 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e20183b9-5cab-49ce-96b1-fa763a4a4d60-config\") pod \"dnsmasq-dns-6546db6db7-hq5pm\" (UID: \"e20183b9-5cab-49ce-96b1-fa763a4a4d60\") " pod="openstack/dnsmasq-dns-6546db6db7-hq5pm"
Jan 31 04:05:05 crc kubenswrapper[4827]: I0131 04:05:05.863453 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fhgpf\" (UniqueName: \"kubernetes.io/projected/77bd3d45-e656-4e13-94c5-1ef4bb703af6-kube-api-access-fhgpf\") pod \"keystone-bootstrap-5pz6x\" (UID: \"77bd3d45-e656-4e13-94c5-1ef4bb703af6\") " pod="openstack/keystone-bootstrap-5pz6x"
Jan 31 04:05:05 crc kubenswrapper[4827]: I0131 04:05:05.863479 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e20183b9-5cab-49ce-96b1-fa763a4a4d60-dns-svc\") pod \"dnsmasq-dns-6546db6db7-hq5pm\" (UID: \"e20183b9-5cab-49ce-96b1-fa763a4a4d60\") " pod="openstack/dnsmasq-dns-6546db6db7-hq5pm"
Jan 31 04:05:05 crc kubenswrapper[4827]: I0131 04:05:05.863499 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/77bd3d45-e656-4e13-94c5-1ef4bb703af6-scripts\") pod \"keystone-bootstrap-5pz6x\" (UID: \"77bd3d45-e656-4e13-94c5-1ef4bb703af6\") " pod="openstack/keystone-bootstrap-5pz6x"
Jan 31 04:05:05 crc kubenswrapper[4827]: I0131 04:05:05.863515 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/e20183b9-5cab-49ce-96b1-fa763a4a4d60-ovsdbserver-nb\") pod \"dnsmasq-dns-6546db6db7-hq5pm\" (UID: \"e20183b9-5cab-49ce-96b1-fa763a4a4d60\") " pod="openstack/dnsmasq-dns-6546db6db7-hq5pm"
Jan 31 04:05:05 crc kubenswrapper[4827]: I0131 04:05:05.863538 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/e20183b9-5cab-49ce-96b1-fa763a4a4d60-ovsdbserver-sb\") pod \"dnsmasq-dns-6546db6db7-hq5pm\" (UID: \"e20183b9-5cab-49ce-96b1-fa763a4a4d60\") " pod="openstack/dnsmasq-dns-6546db6db7-hq5pm"
Jan 31 04:05:05 crc kubenswrapper[4827]: I0131 04:05:05.863652 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/77bd3d45-e656-4e13-94c5-1ef4bb703af6-fernet-keys\") pod \"keystone-bootstrap-5pz6x\" (UID: \"77bd3d45-e656-4e13-94c5-1ef4bb703af6\") " pod="openstack/keystone-bootstrap-5pz6x"
Jan 31 04:05:05 crc kubenswrapper[4827]: I0131 04:05:05.863670 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xxvzh\" (UniqueName: \"kubernetes.io/projected/e20183b9-5cab-49ce-96b1-fa763a4a4d60-kube-api-access-xxvzh\") pod \"dnsmasq-dns-6546db6db7-hq5pm\" (UID: \"e20183b9-5cab-49ce-96b1-fa763a4a4d60\") " pod="openstack/dnsmasq-dns-6546db6db7-hq5pm"
Jan 31 04:05:05 crc kubenswrapper[4827]: I0131 04:05:05.863701 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/77bd3d45-e656-4e13-94c5-1ef4bb703af6-combined-ca-bundle\") pod \"keystone-bootstrap-5pz6x\" (UID: \"77bd3d45-e656-4e13-94c5-1ef4bb703af6\") " pod="openstack/keystone-bootstrap-5pz6x"
Jan 31 04:05:05 crc kubenswrapper[4827]: I0131 04:05:05.863720 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/77bd3d45-e656-4e13-94c5-1ef4bb703af6-credential-keys\") pod \"keystone-bootstrap-5pz6x\" (UID: \"77bd3d45-e656-4e13-94c5-1ef4bb703af6\") " pod="openstack/keystone-bootstrap-5pz6x"
Jan 31 04:05:05 crc kubenswrapper[4827]: I0131 04:05:05.936978 4827 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-db-sync-kk8q2"]
Jan 31 04:05:05 crc kubenswrapper[4827]: I0131 04:05:05.937916 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-kk8q2"
Jan 31 04:05:05 crc kubenswrapper[4827]: I0131 04:05:05.943548 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-httpd-config"
Jan 31 04:05:05 crc kubenswrapper[4827]: I0131 04:05:05.943936 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-config"
Jan 31 04:05:05 crc kubenswrapper[4827]: I0131 04:05:05.944767 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-neutron-dockercfg-fkcp5"
Jan 31 04:05:05 crc kubenswrapper[4827]: I0131 04:05:05.960476 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-sync-kk8q2"]
Jan 31 04:05:05 crc kubenswrapper[4827]: I0131 04:05:05.966178 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/e20183b9-5cab-49ce-96b1-fa763a4a4d60-ovsdbserver-nb\") pod \"dnsmasq-dns-6546db6db7-hq5pm\" (UID: \"e20183b9-5cab-49ce-96b1-fa763a4a4d60\") " pod="openstack/dnsmasq-dns-6546db6db7-hq5pm"
Jan 31 04:05:05 crc kubenswrapper[4827]: I0131 04:05:05.966226 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/e20183b9-5cab-49ce-96b1-fa763a4a4d60-ovsdbserver-sb\") pod \"dnsmasq-dns-6546db6db7-hq5pm\" (UID: \"e20183b9-5cab-49ce-96b1-fa763a4a4d60\") " pod="openstack/dnsmasq-dns-6546db6db7-hq5pm"
Jan 31 04:05:05 crc kubenswrapper[4827]: I0131 04:05:05.966282 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/77bd3d45-e656-4e13-94c5-1ef4bb703af6-fernet-keys\") pod \"keystone-bootstrap-5pz6x\" (UID: \"77bd3d45-e656-4e13-94c5-1ef4bb703af6\") " pod="openstack/keystone-bootstrap-5pz6x"
Jan 31 04:05:05 crc kubenswrapper[4827]: I0131 04:05:05.966298 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xxvzh\" (UniqueName: \"kubernetes.io/projected/e20183b9-5cab-49ce-96b1-fa763a4a4d60-kube-api-access-xxvzh\") pod \"dnsmasq-dns-6546db6db7-hq5pm\" (UID: \"e20183b9-5cab-49ce-96b1-fa763a4a4d60\") " pod="openstack/dnsmasq-dns-6546db6db7-hq5pm"
Jan 31 04:05:05 crc kubenswrapper[4827]: I0131 04:05:05.966333 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/77bd3d45-e656-4e13-94c5-1ef4bb703af6-combined-ca-bundle\") pod \"keystone-bootstrap-5pz6x\" (UID: \"77bd3d45-e656-4e13-94c5-1ef4bb703af6\") " pod="openstack/keystone-bootstrap-5pz6x"
Jan 31 04:05:05 crc kubenswrapper[4827]: I0131 04:05:05.966352 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/77bd3d45-e656-4e13-94c5-1ef4bb703af6-credential-keys\") pod \"keystone-bootstrap-5pz6x\" (UID: \"77bd3d45-e656-4e13-94c5-1ef4bb703af6\") " pod="openstack/keystone-bootstrap-5pz6x"
Jan 31 04:05:05 crc kubenswrapper[4827]: I0131 04:05:05.966380 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/77bd3d45-e656-4e13-94c5-1ef4bb703af6-config-data\") pod \"keystone-bootstrap-5pz6x\" (UID: \"77bd3d45-e656-4e13-94c5-1ef4bb703af6\") " pod="openstack/keystone-bootstrap-5pz6x"
Jan 31 04:05:05 crc kubenswrapper[4827]: I0131 04:05:05.966398 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e20183b9-5cab-49ce-96b1-fa763a4a4d60-config\") pod \"dnsmasq-dns-6546db6db7-hq5pm\" (UID: \"e20183b9-5cab-49ce-96b1-fa763a4a4d60\") " pod="openstack/dnsmasq-dns-6546db6db7-hq5pm"
Jan 31 04:05:05 crc kubenswrapper[4827]: I0131 04:05:05.966421 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fhgpf\" (UniqueName: \"kubernetes.io/projected/77bd3d45-e656-4e13-94c5-1ef4bb703af6-kube-api-access-fhgpf\") pod \"keystone-bootstrap-5pz6x\" (UID: \"77bd3d45-e656-4e13-94c5-1ef4bb703af6\") " pod="openstack/keystone-bootstrap-5pz6x"
Jan 31 04:05:05 crc kubenswrapper[4827]: I0131 04:05:05.966446 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e20183b9-5cab-49ce-96b1-fa763a4a4d60-dns-svc\") pod \"dnsmasq-dns-6546db6db7-hq5pm\" (UID: \"e20183b9-5cab-49ce-96b1-fa763a4a4d60\") " pod="openstack/dnsmasq-dns-6546db6db7-hq5pm"
Jan 31 04:05:05 crc kubenswrapper[4827]: I0131 04:05:05.966479 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/77bd3d45-e656-4e13-94c5-1ef4bb703af6-scripts\") pod \"keystone-bootstrap-5pz6x\" (UID: \"77bd3d45-e656-4e13-94c5-1ef4bb703af6\") " pod="openstack/keystone-bootstrap-5pz6x"
Jan 31 04:05:05 crc kubenswrapper[4827]: I0131 04:05:05.970804 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/e20183b9-5cab-49ce-96b1-fa763a4a4d60-ovsdbserver-sb\") pod \"dnsmasq-dns-6546db6db7-hq5pm\" (UID: \"e20183b9-5cab-49ce-96b1-fa763a4a4d60\") " pod="openstack/dnsmasq-dns-6546db6db7-hq5pm"
Jan 31 04:05:05 crc kubenswrapper[4827]: I0131 04:05:05.970828 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e20183b9-5cab-49ce-96b1-fa763a4a4d60-config\") pod \"dnsmasq-dns-6546db6db7-hq5pm\" (UID: \"e20183b9-5cab-49ce-96b1-fa763a4a4d60\") " pod="openstack/dnsmasq-dns-6546db6db7-hq5pm"
Jan 31 04:05:05 crc kubenswrapper[4827]: I0131 04:05:05.971592 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e20183b9-5cab-49ce-96b1-fa763a4a4d60-dns-svc\") pod \"dnsmasq-dns-6546db6db7-hq5pm\" (UID: \"e20183b9-5cab-49ce-96b1-fa763a4a4d60\") " pod="openstack/dnsmasq-dns-6546db6db7-hq5pm"
Jan 31 04:05:05 crc kubenswrapper[4827]: I0131 04:05:05.971891 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/77bd3d45-e656-4e13-94c5-1ef4bb703af6-scripts\") pod \"keystone-bootstrap-5pz6x\" (UID: \"77bd3d45-e656-4e13-94c5-1ef4bb703af6\") " pod="openstack/keystone-bootstrap-5pz6x"
Jan 31 04:05:05 crc kubenswrapper[4827]: I0131 04:05:05.974898 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/77bd3d45-e656-4e13-94c5-1ef4bb703af6-combined-ca-bundle\") pod \"keystone-bootstrap-5pz6x\" (UID: \"77bd3d45-e656-4e13-94c5-1ef4bb703af6\") " pod="openstack/keystone-bootstrap-5pz6x"
Jan 31 04:05:05 crc kubenswrapper[4827]: I0131 04:05:05.975487 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/e20183b9-5cab-49ce-96b1-fa763a4a4d60-ovsdbserver-nb\") pod \"dnsmasq-dns-6546db6db7-hq5pm\" (UID: \"e20183b9-5cab-49ce-96b1-fa763a4a4d60\") " pod="openstack/dnsmasq-dns-6546db6db7-hq5pm"
Jan 31 04:05:05 crc kubenswrapper[4827]: I0131 04:05:05.977841 4827 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-db-sync-48bfh"]
Jan 31 04:05:05 crc kubenswrapper[4827]: I0131 04:05:05.979357 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-48bfh"
Jan 31 04:05:05 crc kubenswrapper[4827]: I0131 04:05:05.985215 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/77bd3d45-e656-4e13-94c5-1ef4bb703af6-fernet-keys\") pod \"keystone-bootstrap-5pz6x\" (UID: \"77bd3d45-e656-4e13-94c5-1ef4bb703af6\") " pod="openstack/keystone-bootstrap-5pz6x"
Jan 31 04:05:05 crc kubenswrapper[4827]: I0131 04:05:05.987275 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scripts"
Jan 31 04:05:05 crc kubenswrapper[4827]: I0131 04:05:05.987460 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-cinder-dockercfg-j7tvl"
Jan 31 04:05:05 crc kubenswrapper[4827]: I0131 04:05:05.987581 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-config-data"
Jan 31 04:05:05 crc kubenswrapper[4827]: I0131 04:05:05.987624 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/77bd3d45-e656-4e13-94c5-1ef4bb703af6-credential-keys\") pod \"keystone-bootstrap-5pz6x\" (UID: \"77bd3d45-e656-4e13-94c5-1ef4bb703af6\") " pod="openstack/keystone-bootstrap-5pz6x"
Jan 31 04:05:05 crc kubenswrapper[4827]: I0131 04:05:05.988658 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/77bd3d45-e656-4e13-94c5-1ef4bb703af6-config-data\") pod \"keystone-bootstrap-5pz6x\" (UID: \"77bd3d45-e656-4e13-94c5-1ef4bb703af6\") " pod="openstack/keystone-bootstrap-5pz6x"
Jan 31 04:05:06 crc kubenswrapper[4827]: I0131 04:05:06.001628 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fhgpf\" (UniqueName: \"kubernetes.io/projected/77bd3d45-e656-4e13-94c5-1ef4bb703af6-kube-api-access-fhgpf\") pod \"keystone-bootstrap-5pz6x\" (UID: \"77bd3d45-e656-4e13-94c5-1ef4bb703af6\") " pod="openstack/keystone-bootstrap-5pz6x"
Jan 31 04:05:06 crc kubenswrapper[4827]: I0131 04:05:06.024635 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-sync-48bfh"]
Jan 31 04:05:06 crc kubenswrapper[4827]: I0131 04:05:06.036608 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xxvzh\" (UniqueName: \"kubernetes.io/projected/e20183b9-5cab-49ce-96b1-fa763a4a4d60-kube-api-access-xxvzh\") pod \"dnsmasq-dns-6546db6db7-hq5pm\" (UID: \"e20183b9-5cab-49ce-96b1-fa763a4a4d60\") " pod="openstack/dnsmasq-dns-6546db6db7-hq5pm"
Jan 31 04:05:06 crc kubenswrapper[4827]: I0131 04:05:06.067914 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3da5eeb9-641c-4b43-a3c9-eb4860e9995b-scripts\") pod \"cinder-db-sync-48bfh\" (UID: \"3da5eeb9-641c-4b43-a3c9-eb4860e9995b\") " pod="openstack/cinder-db-sync-48bfh"
Jan 31 04:05:06 crc kubenswrapper[4827]: I0131 04:05:06.067951 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3034593d-68df-4223-a3d5-f1cd46f49398-combined-ca-bundle\") pod \"neutron-db-sync-kk8q2\" (UID: \"3034593d-68df-4223-a3d5-f1cd46f49398\") " pod="openstack/neutron-db-sync-kk8q2"
Jan 31 04:05:06 crc kubenswrapper[4827]: I0131 04:05:06.067982 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/3da5eeb9-641c-4b43-a3c9-eb4860e9995b-etc-machine-id\") pod \"cinder-db-sync-48bfh\" (UID: \"3da5eeb9-641c-4b43-a3c9-eb4860e9995b\") " pod="openstack/cinder-db-sync-48bfh"
Jan 31 04:05:06 crc kubenswrapper[4827]: I0131 04:05:06.068003 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3da5eeb9-641c-4b43-a3c9-eb4860e9995b-config-data\") pod \"cinder-db-sync-48bfh\" (UID: \"3da5eeb9-641c-4b43-a3c9-eb4860e9995b\") " pod="openstack/cinder-db-sync-48bfh"
Jan 31 04:05:06 crc kubenswrapper[4827]: I0131 04:05:06.068035 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nd277\" (UniqueName: \"kubernetes.io/projected/3034593d-68df-4223-a3d5-f1cd46f49398-kube-api-access-nd277\") pod \"neutron-db-sync-kk8q2\" (UID: \"3034593d-68df-4223-a3d5-f1cd46f49398\") " pod="openstack/neutron-db-sync-kk8q2"
Jan 31 04:05:06 crc kubenswrapper[4827]: I0131 04:05:06.068054 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/3034593d-68df-4223-a3d5-f1cd46f49398-config\") pod \"neutron-db-sync-kk8q2\" (UID: \"3034593d-68df-4223-a3d5-f1cd46f49398\") " pod="openstack/neutron-db-sync-kk8q2"
Jan 31 04:05:06 crc kubenswrapper[4827]: I0131 04:05:06.068071 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/3da5eeb9-641c-4b43-a3c9-eb4860e9995b-db-sync-config-data\") pod \"cinder-db-sync-48bfh\" (UID: \"3da5eeb9-641c-4b43-a3c9-eb4860e9995b\") " pod="openstack/cinder-db-sync-48bfh"
Jan 31 04:05:06 crc kubenswrapper[4827]: I0131 04:05:06.068091 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3da5eeb9-641c-4b43-a3c9-eb4860e9995b-combined-ca-bundle\") pod \"cinder-db-sync-48bfh\" (UID: \"3da5eeb9-641c-4b43-a3c9-eb4860e9995b\") " pod="openstack/cinder-db-sync-48bfh"
Jan 31 04:05:06 crc kubenswrapper[4827]: I0131 04:05:06.068116 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fv7s2\" (UniqueName: \"kubernetes.io/projected/3da5eeb9-641c-4b43-a3c9-eb4860e9995b-kube-api-access-fv7s2\") pod \"cinder-db-sync-48bfh\" (UID: \"3da5eeb9-641c-4b43-a3c9-eb4860e9995b\") " pod="openstack/cinder-db-sync-48bfh"
Jan 31 04:05:06 crc kubenswrapper[4827]: I0131 04:05:06.073387 4827 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-db-sync-5xrmx"]
Jan 31 04:05:06 crc kubenswrapper[4827]: I0131 04:05:06.074293 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-5xrmx"
Jan 31 04:05:06 crc kubenswrapper[4827]: I0131 04:05:06.082229 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-config-data"
Jan 31 04:05:06 crc kubenswrapper[4827]: I0131 04:05:06.082429 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-barbican-dockercfg-dbncs"
Jan 31 04:05:06 crc kubenswrapper[4827]: I0131 04:05:06.083153 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6546db6db7-hq5pm"
Jan 31 04:05:06 crc kubenswrapper[4827]: I0131 04:05:06.088956 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-sync-5xrmx"]
Jan 31 04:05:06 crc kubenswrapper[4827]: I0131 04:05:06.093673 4827 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6546db6db7-hq5pm"]
Jan 31 04:05:06 crc kubenswrapper[4827]: I0131 04:05:06.106390 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-5pz6x"
Jan 31 04:05:06 crc kubenswrapper[4827]: I0131 04:05:06.174536 4827 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"]
Jan 31 04:05:06 crc kubenswrapper[4827]: I0131 04:05:06.180591 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Jan 31 04:05:06 crc kubenswrapper[4827]: I0131 04:05:06.180865 4827 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-7987f74bbc-65cfr"]
Jan 31 04:05:06 crc kubenswrapper[4827]: I0131 04:05:06.187779 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data"
Jan 31 04:05:06 crc kubenswrapper[4827]: I0131 04:05:06.188058 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nd277\" (UniqueName: \"kubernetes.io/projected/3034593d-68df-4223-a3d5-f1cd46f49398-kube-api-access-nd277\") pod \"neutron-db-sync-kk8q2\" (UID: \"3034593d-68df-4223-a3d5-f1cd46f49398\") " pod="openstack/neutron-db-sync-kk8q2"
Jan 31 04:05:06 crc kubenswrapper[4827]: I0131 04:05:06.188121 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts"
Jan 31 04:05:06 crc kubenswrapper[4827]: I0131 04:05:06.188121 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kdbrt\" (UniqueName: \"kubernetes.io/projected/921fb66d-0be5-4614-9974-86da117973d1-kube-api-access-kdbrt\") pod \"barbican-db-sync-5xrmx\" (UID: \"921fb66d-0be5-4614-9974-86da117973d1\") " pod="openstack/barbican-db-sync-5xrmx"
Jan 31 04:05:06 crc kubenswrapper[4827]: I0131 04:05:06.188153 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/3034593d-68df-4223-a3d5-f1cd46f49398-config\") pod \"neutron-db-sync-kk8q2\" (UID: \"3034593d-68df-4223-a3d5-f1cd46f49398\") " pod="openstack/neutron-db-sync-kk8q2"
Jan 31 04:05:06 crc kubenswrapper[4827]: I0131 04:05:06.188177 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/921fb66d-0be5-4614-9974-86da117973d1-combined-ca-bundle\") pod \"barbican-db-sync-5xrmx\" (UID: \"921fb66d-0be5-4614-9974-86da117973d1\") " pod="openstack/barbican-db-sync-5xrmx"
Jan 31 04:05:06 crc kubenswrapper[4827]: I0131 04:05:06.188199 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/3da5eeb9-641c-4b43-a3c9-eb4860e9995b-db-sync-config-data\") pod \"cinder-db-sync-48bfh\" (UID: \"3da5eeb9-641c-4b43-a3c9-eb4860e9995b\") " pod="openstack/cinder-db-sync-48bfh"
Jan 31 04:05:06 crc kubenswrapper[4827]: I0131 04:05:06.188247 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3da5eeb9-641c-4b43-a3c9-eb4860e9995b-combined-ca-bundle\") pod \"cinder-db-sync-48bfh\" (UID: \"3da5eeb9-641c-4b43-a3c9-eb4860e9995b\") " pod="openstack/cinder-db-sync-48bfh"
Jan 31 04:05:06 crc kubenswrapper[4827]: I0131 04:05:06.188296 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fv7s2\" (UniqueName: \"kubernetes.io/projected/3da5eeb9-641c-4b43-a3c9-eb4860e9995b-kube-api-access-fv7s2\") pod \"cinder-db-sync-48bfh\" (UID: \"3da5eeb9-641c-4b43-a3c9-eb4860e9995b\") " pod="openstack/cinder-db-sync-48bfh"
Jan 31 04:05:06 crc kubenswrapper[4827]: I0131 04:05:06.188403 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/921fb66d-0be5-4614-9974-86da117973d1-db-sync-config-data\") pod \"barbican-db-sync-5xrmx\" (UID: \"921fb66d-0be5-4614-9974-86da117973d1\") " pod="openstack/barbican-db-sync-5xrmx"
Jan 31 04:05:06 crc kubenswrapper[4827]: I0131 04:05:06.188474 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3da5eeb9-641c-4b43-a3c9-eb4860e9995b-scripts\") pod \"cinder-db-sync-48bfh\" (UID: \"3da5eeb9-641c-4b43-a3c9-eb4860e9995b\") "
pod="openstack/cinder-db-sync-48bfh" Jan 31 04:05:06 crc kubenswrapper[4827]: I0131 04:05:06.188500 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3034593d-68df-4223-a3d5-f1cd46f49398-combined-ca-bundle\") pod \"neutron-db-sync-kk8q2\" (UID: \"3034593d-68df-4223-a3d5-f1cd46f49398\") " pod="openstack/neutron-db-sync-kk8q2" Jan 31 04:05:06 crc kubenswrapper[4827]: I0131 04:05:06.188559 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/3da5eeb9-641c-4b43-a3c9-eb4860e9995b-etc-machine-id\") pod \"cinder-db-sync-48bfh\" (UID: \"3da5eeb9-641c-4b43-a3c9-eb4860e9995b\") " pod="openstack/cinder-db-sync-48bfh" Jan 31 04:05:06 crc kubenswrapper[4827]: I0131 04:05:06.188614 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3da5eeb9-641c-4b43-a3c9-eb4860e9995b-config-data\") pod \"cinder-db-sync-48bfh\" (UID: \"3da5eeb9-641c-4b43-a3c9-eb4860e9995b\") " pod="openstack/cinder-db-sync-48bfh" Jan 31 04:05:06 crc kubenswrapper[4827]: I0131 04:05:06.192376 4827 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7987f74bbc-65cfr" Jan 31 04:05:06 crc kubenswrapper[4827]: I0131 04:05:06.193127 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/3da5eeb9-641c-4b43-a3c9-eb4860e9995b-etc-machine-id\") pod \"cinder-db-sync-48bfh\" (UID: \"3da5eeb9-641c-4b43-a3c9-eb4860e9995b\") " pod="openstack/cinder-db-sync-48bfh" Jan 31 04:05:06 crc kubenswrapper[4827]: I0131 04:05:06.194513 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/3034593d-68df-4223-a3d5-f1cd46f49398-config\") pod \"neutron-db-sync-kk8q2\" (UID: \"3034593d-68df-4223-a3d5-f1cd46f49398\") " pod="openstack/neutron-db-sync-kk8q2" Jan 31 04:05:06 crc kubenswrapper[4827]: I0131 04:05:06.195083 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3da5eeb9-641c-4b43-a3c9-eb4860e9995b-config-data\") pod \"cinder-db-sync-48bfh\" (UID: \"3da5eeb9-641c-4b43-a3c9-eb4860e9995b\") " pod="openstack/cinder-db-sync-48bfh" Jan 31 04:05:06 crc kubenswrapper[4827]: I0131 04:05:06.205552 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3da5eeb9-641c-4b43-a3c9-eb4860e9995b-scripts\") pod \"cinder-db-sync-48bfh\" (UID: \"3da5eeb9-641c-4b43-a3c9-eb4860e9995b\") " pod="openstack/cinder-db-sync-48bfh" Jan 31 04:05:06 crc kubenswrapper[4827]: I0131 04:05:06.206128 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3034593d-68df-4223-a3d5-f1cd46f49398-combined-ca-bundle\") pod \"neutron-db-sync-kk8q2\" (UID: \"3034593d-68df-4223-a3d5-f1cd46f49398\") " pod="openstack/neutron-db-sync-kk8q2" Jan 31 04:05:06 crc kubenswrapper[4827]: I0131 04:05:06.207448 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3da5eeb9-641c-4b43-a3c9-eb4860e9995b-combined-ca-bundle\") pod \"cinder-db-sync-48bfh\" (UID: \"3da5eeb9-641c-4b43-a3c9-eb4860e9995b\") " pod="openstack/cinder-db-sync-48bfh" Jan 31 04:05:06 crc kubenswrapper[4827]: I0131 04:05:06.214714 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/3da5eeb9-641c-4b43-a3c9-eb4860e9995b-db-sync-config-data\") pod \"cinder-db-sync-48bfh\" (UID: \"3da5eeb9-641c-4b43-a3c9-eb4860e9995b\") " pod="openstack/cinder-db-sync-48bfh" Jan 31 04:05:06 crc kubenswrapper[4827]: I0131 04:05:06.215338 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fv7s2\" (UniqueName: \"kubernetes.io/projected/3da5eeb9-641c-4b43-a3c9-eb4860e9995b-kube-api-access-fv7s2\") pod \"cinder-db-sync-48bfh\" (UID: \"3da5eeb9-641c-4b43-a3c9-eb4860e9995b\") " pod="openstack/cinder-db-sync-48bfh" Jan 31 04:05:06 crc kubenswrapper[4827]: I0131 04:05:06.229726 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nd277\" (UniqueName: \"kubernetes.io/projected/3034593d-68df-4223-a3d5-f1cd46f49398-kube-api-access-nd277\") pod \"neutron-db-sync-kk8q2\" (UID: \"3034593d-68df-4223-a3d5-f1cd46f49398\") " pod="openstack/neutron-db-sync-kk8q2" Jan 31 04:05:06 crc kubenswrapper[4827]: I0131 04:05:06.259923 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 31 04:05:06 crc kubenswrapper[4827]: I0131 04:05:06.264341 4827 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-sync-kk8q2" Jan 31 04:05:06 crc kubenswrapper[4827]: I0131 04:05:06.290269 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/921fb66d-0be5-4614-9974-86da117973d1-db-sync-config-data\") pod \"barbican-db-sync-5xrmx\" (UID: \"921fb66d-0be5-4614-9974-86da117973d1\") " pod="openstack/barbican-db-sync-5xrmx" Jan 31 04:05:06 crc kubenswrapper[4827]: I0131 04:05:06.290326 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lpp94\" (UniqueName: \"kubernetes.io/projected/74d7118a-77ce-4f65-b0c3-a28c70623d2d-kube-api-access-lpp94\") pod \"dnsmasq-dns-7987f74bbc-65cfr\" (UID: \"74d7118a-77ce-4f65-b0c3-a28c70623d2d\") " pod="openstack/dnsmasq-dns-7987f74bbc-65cfr" Jan 31 04:05:06 crc kubenswrapper[4827]: I0131 04:05:06.290373 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d07cd025-0fc1-4536-978d-f7f7b0df2dbc-run-httpd\") pod \"ceilometer-0\" (UID: \"d07cd025-0fc1-4536-978d-f7f7b0df2dbc\") " pod="openstack/ceilometer-0" Jan 31 04:05:06 crc kubenswrapper[4827]: I0131 04:05:06.290407 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d07cd025-0fc1-4536-978d-f7f7b0df2dbc-config-data\") pod \"ceilometer-0\" (UID: \"d07cd025-0fc1-4536-978d-f7f7b0df2dbc\") " pod="openstack/ceilometer-0" Jan 31 04:05:06 crc kubenswrapper[4827]: I0131 04:05:06.290425 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/d07cd025-0fc1-4536-978d-f7f7b0df2dbc-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"d07cd025-0fc1-4536-978d-f7f7b0df2dbc\") " pod="openstack/ceilometer-0" Jan 31 04:05:06 
crc kubenswrapper[4827]: I0131 04:05:06.290441 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/74d7118a-77ce-4f65-b0c3-a28c70623d2d-config\") pod \"dnsmasq-dns-7987f74bbc-65cfr\" (UID: \"74d7118a-77ce-4f65-b0c3-a28c70623d2d\") " pod="openstack/dnsmasq-dns-7987f74bbc-65cfr" Jan 31 04:05:06 crc kubenswrapper[4827]: I0131 04:05:06.290474 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d07cd025-0fc1-4536-978d-f7f7b0df2dbc-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"d07cd025-0fc1-4536-978d-f7f7b0df2dbc\") " pod="openstack/ceilometer-0" Jan 31 04:05:06 crc kubenswrapper[4827]: I0131 04:05:06.290496 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/74d7118a-77ce-4f65-b0c3-a28c70623d2d-ovsdbserver-nb\") pod \"dnsmasq-dns-7987f74bbc-65cfr\" (UID: \"74d7118a-77ce-4f65-b0c3-a28c70623d2d\") " pod="openstack/dnsmasq-dns-7987f74bbc-65cfr" Jan 31 04:05:06 crc kubenswrapper[4827]: I0131 04:05:06.290511 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d07cd025-0fc1-4536-978d-f7f7b0df2dbc-scripts\") pod \"ceilometer-0\" (UID: \"d07cd025-0fc1-4536-978d-f7f7b0df2dbc\") " pod="openstack/ceilometer-0" Jan 31 04:05:06 crc kubenswrapper[4827]: I0131 04:05:06.290528 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d07cd025-0fc1-4536-978d-f7f7b0df2dbc-log-httpd\") pod \"ceilometer-0\" (UID: \"d07cd025-0fc1-4536-978d-f7f7b0df2dbc\") " pod="openstack/ceilometer-0" Jan 31 04:05:06 crc kubenswrapper[4827]: I0131 04:05:06.290649 4827 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-kdbrt\" (UniqueName: \"kubernetes.io/projected/921fb66d-0be5-4614-9974-86da117973d1-kube-api-access-kdbrt\") pod \"barbican-db-sync-5xrmx\" (UID: \"921fb66d-0be5-4614-9974-86da117973d1\") " pod="openstack/barbican-db-sync-5xrmx" Jan 31 04:05:06 crc kubenswrapper[4827]: I0131 04:05:06.290673 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/921fb66d-0be5-4614-9974-86da117973d1-combined-ca-bundle\") pod \"barbican-db-sync-5xrmx\" (UID: \"921fb66d-0be5-4614-9974-86da117973d1\") " pod="openstack/barbican-db-sync-5xrmx" Jan 31 04:05:06 crc kubenswrapper[4827]: I0131 04:05:06.290707 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/74d7118a-77ce-4f65-b0c3-a28c70623d2d-dns-svc\") pod \"dnsmasq-dns-7987f74bbc-65cfr\" (UID: \"74d7118a-77ce-4f65-b0c3-a28c70623d2d\") " pod="openstack/dnsmasq-dns-7987f74bbc-65cfr" Jan 31 04:05:06 crc kubenswrapper[4827]: I0131 04:05:06.290798 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/74d7118a-77ce-4f65-b0c3-a28c70623d2d-ovsdbserver-sb\") pod \"dnsmasq-dns-7987f74bbc-65cfr\" (UID: \"74d7118a-77ce-4f65-b0c3-a28c70623d2d\") " pod="openstack/dnsmasq-dns-7987f74bbc-65cfr" Jan 31 04:05:06 crc kubenswrapper[4827]: I0131 04:05:06.290820 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cpjnd\" (UniqueName: \"kubernetes.io/projected/d07cd025-0fc1-4536-978d-f7f7b0df2dbc-kube-api-access-cpjnd\") pod \"ceilometer-0\" (UID: \"d07cd025-0fc1-4536-978d-f7f7b0df2dbc\") " pod="openstack/ceilometer-0" Jan 31 04:05:06 crc kubenswrapper[4827]: I0131 04:05:06.295155 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/921fb66d-0be5-4614-9974-86da117973d1-db-sync-config-data\") pod \"barbican-db-sync-5xrmx\" (UID: \"921fb66d-0be5-4614-9974-86da117973d1\") " pod="openstack/barbican-db-sync-5xrmx" Jan 31 04:05:06 crc kubenswrapper[4827]: I0131 04:05:06.308381 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7987f74bbc-65cfr"] Jan 31 04:05:06 crc kubenswrapper[4827]: I0131 04:05:06.310990 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/921fb66d-0be5-4614-9974-86da117973d1-combined-ca-bundle\") pod \"barbican-db-sync-5xrmx\" (UID: \"921fb66d-0be5-4614-9974-86da117973d1\") " pod="openstack/barbican-db-sync-5xrmx" Jan 31 04:05:06 crc kubenswrapper[4827]: I0131 04:05:06.340598 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kdbrt\" (UniqueName: \"kubernetes.io/projected/921fb66d-0be5-4614-9974-86da117973d1-kube-api-access-kdbrt\") pod \"barbican-db-sync-5xrmx\" (UID: \"921fb66d-0be5-4614-9974-86da117973d1\") " pod="openstack/barbican-db-sync-5xrmx" Jan 31 04:05:06 crc kubenswrapper[4827]: I0131 04:05:06.376760 4827 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-db-sync-jhfnf"] Jan 31 04:05:06 crc kubenswrapper[4827]: I0131 04:05:06.379442 4827 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-sync-jhfnf" Jan 31 04:05:06 crc kubenswrapper[4827]: I0131 04:05:06.382347 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-scripts" Jan 31 04:05:06 crc kubenswrapper[4827]: I0131 04:05:06.382744 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-config-data" Jan 31 04:05:06 crc kubenswrapper[4827]: I0131 04:05:06.383278 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-placement-dockercfg-xgb7h" Jan 31 04:05:06 crc kubenswrapper[4827]: I0131 04:05:06.392352 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lpp94\" (UniqueName: \"kubernetes.io/projected/74d7118a-77ce-4f65-b0c3-a28c70623d2d-kube-api-access-lpp94\") pod \"dnsmasq-dns-7987f74bbc-65cfr\" (UID: \"74d7118a-77ce-4f65-b0c3-a28c70623d2d\") " pod="openstack/dnsmasq-dns-7987f74bbc-65cfr" Jan 31 04:05:06 crc kubenswrapper[4827]: I0131 04:05:06.392421 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d07cd025-0fc1-4536-978d-f7f7b0df2dbc-run-httpd\") pod \"ceilometer-0\" (UID: \"d07cd025-0fc1-4536-978d-f7f7b0df2dbc\") " pod="openstack/ceilometer-0" Jan 31 04:05:06 crc kubenswrapper[4827]: I0131 04:05:06.392445 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d07cd025-0fc1-4536-978d-f7f7b0df2dbc-config-data\") pod \"ceilometer-0\" (UID: \"d07cd025-0fc1-4536-978d-f7f7b0df2dbc\") " pod="openstack/ceilometer-0" Jan 31 04:05:06 crc kubenswrapper[4827]: I0131 04:05:06.392464 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/d07cd025-0fc1-4536-978d-f7f7b0df2dbc-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"d07cd025-0fc1-4536-978d-f7f7b0df2dbc\") 
" pod="openstack/ceilometer-0" Jan 31 04:05:06 crc kubenswrapper[4827]: I0131 04:05:06.392483 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/74d7118a-77ce-4f65-b0c3-a28c70623d2d-config\") pod \"dnsmasq-dns-7987f74bbc-65cfr\" (UID: \"74d7118a-77ce-4f65-b0c3-a28c70623d2d\") " pod="openstack/dnsmasq-dns-7987f74bbc-65cfr" Jan 31 04:05:06 crc kubenswrapper[4827]: I0131 04:05:06.392506 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d07cd025-0fc1-4536-978d-f7f7b0df2dbc-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"d07cd025-0fc1-4536-978d-f7f7b0df2dbc\") " pod="openstack/ceilometer-0" Jan 31 04:05:06 crc kubenswrapper[4827]: I0131 04:05:06.392523 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/74d7118a-77ce-4f65-b0c3-a28c70623d2d-ovsdbserver-nb\") pod \"dnsmasq-dns-7987f74bbc-65cfr\" (UID: \"74d7118a-77ce-4f65-b0c3-a28c70623d2d\") " pod="openstack/dnsmasq-dns-7987f74bbc-65cfr" Jan 31 04:05:06 crc kubenswrapper[4827]: I0131 04:05:06.392540 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d07cd025-0fc1-4536-978d-f7f7b0df2dbc-scripts\") pod \"ceilometer-0\" (UID: \"d07cd025-0fc1-4536-978d-f7f7b0df2dbc\") " pod="openstack/ceilometer-0" Jan 31 04:05:06 crc kubenswrapper[4827]: I0131 04:05:06.392556 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d07cd025-0fc1-4536-978d-f7f7b0df2dbc-log-httpd\") pod \"ceilometer-0\" (UID: \"d07cd025-0fc1-4536-978d-f7f7b0df2dbc\") " pod="openstack/ceilometer-0" Jan 31 04:05:06 crc kubenswrapper[4827]: I0131 04:05:06.392585 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" 
(UniqueName: \"kubernetes.io/configmap/74d7118a-77ce-4f65-b0c3-a28c70623d2d-dns-svc\") pod \"dnsmasq-dns-7987f74bbc-65cfr\" (UID: \"74d7118a-77ce-4f65-b0c3-a28c70623d2d\") " pod="openstack/dnsmasq-dns-7987f74bbc-65cfr" Jan 31 04:05:06 crc kubenswrapper[4827]: I0131 04:05:06.392612 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/74d7118a-77ce-4f65-b0c3-a28c70623d2d-ovsdbserver-sb\") pod \"dnsmasq-dns-7987f74bbc-65cfr\" (UID: \"74d7118a-77ce-4f65-b0c3-a28c70623d2d\") " pod="openstack/dnsmasq-dns-7987f74bbc-65cfr" Jan 31 04:05:06 crc kubenswrapper[4827]: I0131 04:05:06.392631 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cpjnd\" (UniqueName: \"kubernetes.io/projected/d07cd025-0fc1-4536-978d-f7f7b0df2dbc-kube-api-access-cpjnd\") pod \"ceilometer-0\" (UID: \"d07cd025-0fc1-4536-978d-f7f7b0df2dbc\") " pod="openstack/ceilometer-0" Jan 31 04:05:06 crc kubenswrapper[4827]: I0131 04:05:06.394781 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d07cd025-0fc1-4536-978d-f7f7b0df2dbc-log-httpd\") pod \"ceilometer-0\" (UID: \"d07cd025-0fc1-4536-978d-f7f7b0df2dbc\") " pod="openstack/ceilometer-0" Jan 31 04:05:06 crc kubenswrapper[4827]: I0131 04:05:06.395466 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/74d7118a-77ce-4f65-b0c3-a28c70623d2d-dns-svc\") pod \"dnsmasq-dns-7987f74bbc-65cfr\" (UID: \"74d7118a-77ce-4f65-b0c3-a28c70623d2d\") " pod="openstack/dnsmasq-dns-7987f74bbc-65cfr" Jan 31 04:05:06 crc kubenswrapper[4827]: I0131 04:05:06.395505 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/74d7118a-77ce-4f65-b0c3-a28c70623d2d-ovsdbserver-nb\") pod \"dnsmasq-dns-7987f74bbc-65cfr\" (UID: 
\"74d7118a-77ce-4f65-b0c3-a28c70623d2d\") " pod="openstack/dnsmasq-dns-7987f74bbc-65cfr" Jan 31 04:05:06 crc kubenswrapper[4827]: I0131 04:05:06.396118 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/74d7118a-77ce-4f65-b0c3-a28c70623d2d-ovsdbserver-sb\") pod \"dnsmasq-dns-7987f74bbc-65cfr\" (UID: \"74d7118a-77ce-4f65-b0c3-a28c70623d2d\") " pod="openstack/dnsmasq-dns-7987f74bbc-65cfr" Jan 31 04:05:06 crc kubenswrapper[4827]: I0131 04:05:06.397084 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/74d7118a-77ce-4f65-b0c3-a28c70623d2d-config\") pod \"dnsmasq-dns-7987f74bbc-65cfr\" (UID: \"74d7118a-77ce-4f65-b0c3-a28c70623d2d\") " pod="openstack/dnsmasq-dns-7987f74bbc-65cfr" Jan 31 04:05:06 crc kubenswrapper[4827]: I0131 04:05:06.397138 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-sync-jhfnf"] Jan 31 04:05:06 crc kubenswrapper[4827]: I0131 04:05:06.397456 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d07cd025-0fc1-4536-978d-f7f7b0df2dbc-run-httpd\") pod \"ceilometer-0\" (UID: \"d07cd025-0fc1-4536-978d-f7f7b0df2dbc\") " pod="openstack/ceilometer-0" Jan 31 04:05:06 crc kubenswrapper[4827]: I0131 04:05:06.403778 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d07cd025-0fc1-4536-978d-f7f7b0df2dbc-scripts\") pod \"ceilometer-0\" (UID: \"d07cd025-0fc1-4536-978d-f7f7b0df2dbc\") " pod="openstack/ceilometer-0" Jan 31 04:05:06 crc kubenswrapper[4827]: I0131 04:05:06.417689 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d07cd025-0fc1-4536-978d-f7f7b0df2dbc-config-data\") pod \"ceilometer-0\" (UID: \"d07cd025-0fc1-4536-978d-f7f7b0df2dbc\") " 
pod="openstack/ceilometer-0" Jan 31 04:05:06 crc kubenswrapper[4827]: I0131 04:05:06.419080 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d07cd025-0fc1-4536-978d-f7f7b0df2dbc-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"d07cd025-0fc1-4536-978d-f7f7b0df2dbc\") " pod="openstack/ceilometer-0" Jan 31 04:05:06 crc kubenswrapper[4827]: I0131 04:05:06.425700 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/d07cd025-0fc1-4536-978d-f7f7b0df2dbc-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"d07cd025-0fc1-4536-978d-f7f7b0df2dbc\") " pod="openstack/ceilometer-0" Jan 31 04:05:06 crc kubenswrapper[4827]: I0131 04:05:06.425818 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cpjnd\" (UniqueName: \"kubernetes.io/projected/d07cd025-0fc1-4536-978d-f7f7b0df2dbc-kube-api-access-cpjnd\") pod \"ceilometer-0\" (UID: \"d07cd025-0fc1-4536-978d-f7f7b0df2dbc\") " pod="openstack/ceilometer-0" Jan 31 04:05:06 crc kubenswrapper[4827]: I0131 04:05:06.427131 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lpp94\" (UniqueName: \"kubernetes.io/projected/74d7118a-77ce-4f65-b0c3-a28c70623d2d-kube-api-access-lpp94\") pod \"dnsmasq-dns-7987f74bbc-65cfr\" (UID: \"74d7118a-77ce-4f65-b0c3-a28c70623d2d\") " pod="openstack/dnsmasq-dns-7987f74bbc-65cfr" Jan 31 04:05:06 crc kubenswrapper[4827]: I0131 04:05:06.489590 4827 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-sync-48bfh" Jan 31 04:05:06 crc kubenswrapper[4827]: I0131 04:05:06.494583 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/db80e5df-1238-46c1-b573-55fb8797e379-scripts\") pod \"placement-db-sync-jhfnf\" (UID: \"db80e5df-1238-46c1-b573-55fb8797e379\") " pod="openstack/placement-db-sync-jhfnf" Jan 31 04:05:06 crc kubenswrapper[4827]: I0131 04:05:06.494634 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/db80e5df-1238-46c1-b573-55fb8797e379-combined-ca-bundle\") pod \"placement-db-sync-jhfnf\" (UID: \"db80e5df-1238-46c1-b573-55fb8797e379\") " pod="openstack/placement-db-sync-jhfnf" Jan 31 04:05:06 crc kubenswrapper[4827]: I0131 04:05:06.494715 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/db80e5df-1238-46c1-b573-55fb8797e379-config-data\") pod \"placement-db-sync-jhfnf\" (UID: \"db80e5df-1238-46c1-b573-55fb8797e379\") " pod="openstack/placement-db-sync-jhfnf" Jan 31 04:05:06 crc kubenswrapper[4827]: I0131 04:05:06.494748 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j65f9\" (UniqueName: \"kubernetes.io/projected/db80e5df-1238-46c1-b573-55fb8797e379-kube-api-access-j65f9\") pod \"placement-db-sync-jhfnf\" (UID: \"db80e5df-1238-46c1-b573-55fb8797e379\") " pod="openstack/placement-db-sync-jhfnf" Jan 31 04:05:06 crc kubenswrapper[4827]: I0131 04:05:06.494778 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/db80e5df-1238-46c1-b573-55fb8797e379-logs\") pod \"placement-db-sync-jhfnf\" (UID: \"db80e5df-1238-46c1-b573-55fb8797e379\") " 
pod="openstack/placement-db-sync-jhfnf" Jan 31 04:05:06 crc kubenswrapper[4827]: I0131 04:05:06.504231 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-5xrmx" Jan 31 04:05:06 crc kubenswrapper[4827]: I0131 04:05:06.517670 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 31 04:05:06 crc kubenswrapper[4827]: I0131 04:05:06.543478 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7987f74bbc-65cfr" Jan 31 04:05:06 crc kubenswrapper[4827]: I0131 04:05:06.599019 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/db80e5df-1238-46c1-b573-55fb8797e379-config-data\") pod \"placement-db-sync-jhfnf\" (UID: \"db80e5df-1238-46c1-b573-55fb8797e379\") " pod="openstack/placement-db-sync-jhfnf" Jan 31 04:05:06 crc kubenswrapper[4827]: I0131 04:05:06.599406 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j65f9\" (UniqueName: \"kubernetes.io/projected/db80e5df-1238-46c1-b573-55fb8797e379-kube-api-access-j65f9\") pod \"placement-db-sync-jhfnf\" (UID: \"db80e5df-1238-46c1-b573-55fb8797e379\") " pod="openstack/placement-db-sync-jhfnf" Jan 31 04:05:06 crc kubenswrapper[4827]: I0131 04:05:06.599478 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/db80e5df-1238-46c1-b573-55fb8797e379-logs\") pod \"placement-db-sync-jhfnf\" (UID: \"db80e5df-1238-46c1-b573-55fb8797e379\") " pod="openstack/placement-db-sync-jhfnf" Jan 31 04:05:06 crc kubenswrapper[4827]: I0131 04:05:06.599569 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/db80e5df-1238-46c1-b573-55fb8797e379-scripts\") pod \"placement-db-sync-jhfnf\" (UID: 
\"db80e5df-1238-46c1-b573-55fb8797e379\") " pod="openstack/placement-db-sync-jhfnf" Jan 31 04:05:06 crc kubenswrapper[4827]: I0131 04:05:06.599630 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/db80e5df-1238-46c1-b573-55fb8797e379-combined-ca-bundle\") pod \"placement-db-sync-jhfnf\" (UID: \"db80e5df-1238-46c1-b573-55fb8797e379\") " pod="openstack/placement-db-sync-jhfnf" Jan 31 04:05:06 crc kubenswrapper[4827]: I0131 04:05:06.600299 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/db80e5df-1238-46c1-b573-55fb8797e379-logs\") pod \"placement-db-sync-jhfnf\" (UID: \"db80e5df-1238-46c1-b573-55fb8797e379\") " pod="openstack/placement-db-sync-jhfnf" Jan 31 04:05:06 crc kubenswrapper[4827]: I0131 04:05:06.607740 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/db80e5df-1238-46c1-b573-55fb8797e379-combined-ca-bundle\") pod \"placement-db-sync-jhfnf\" (UID: \"db80e5df-1238-46c1-b573-55fb8797e379\") " pod="openstack/placement-db-sync-jhfnf" Jan 31 04:05:06 crc kubenswrapper[4827]: I0131 04:05:06.608560 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/db80e5df-1238-46c1-b573-55fb8797e379-config-data\") pod \"placement-db-sync-jhfnf\" (UID: \"db80e5df-1238-46c1-b573-55fb8797e379\") " pod="openstack/placement-db-sync-jhfnf" Jan 31 04:05:06 crc kubenswrapper[4827]: I0131 04:05:06.609013 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/db80e5df-1238-46c1-b573-55fb8797e379-scripts\") pod \"placement-db-sync-jhfnf\" (UID: \"db80e5df-1238-46c1-b573-55fb8797e379\") " pod="openstack/placement-db-sync-jhfnf" Jan 31 04:05:06 crc kubenswrapper[4827]: I0131 04:05:06.631204 4827 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j65f9\" (UniqueName: \"kubernetes.io/projected/db80e5df-1238-46c1-b573-55fb8797e379-kube-api-access-j65f9\") pod \"placement-db-sync-jhfnf\" (UID: \"db80e5df-1238-46c1-b573-55fb8797e379\") " pod="openstack/placement-db-sync-jhfnf" Jan 31 04:05:06 crc kubenswrapper[4827]: I0131 04:05:06.708582 4827 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6546db6db7-hq5pm"] Jan 31 04:05:06 crc kubenswrapper[4827]: I0131 04:05:06.723250 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-jhfnf" Jan 31 04:05:06 crc kubenswrapper[4827]: I0131 04:05:06.834294 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-5pz6x"] Jan 31 04:05:06 crc kubenswrapper[4827]: W0131 04:05:06.848441 4827 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod77bd3d45_e656_4e13_94c5_1ef4bb703af6.slice/crio-7acf9ca7c79c5c1ab4d1e42bcc919ecb671c71ae869448809525c7aaa842efd5 WatchSource:0}: Error finding container 7acf9ca7c79c5c1ab4d1e42bcc919ecb671c71ae869448809525c7aaa842efd5: Status 404 returned error can't find the container with id 7acf9ca7c79c5c1ab4d1e42bcc919ecb671c71ae869448809525c7aaa842efd5 Jan 31 04:05:06 crc kubenswrapper[4827]: I0131 04:05:06.957712 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-sync-kk8q2"] Jan 31 04:05:07 crc kubenswrapper[4827]: I0131 04:05:07.079728 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-sync-5xrmx"] Jan 31 04:05:07 crc kubenswrapper[4827]: I0131 04:05:07.427170 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 31 04:05:07 crc kubenswrapper[4827]: I0131 04:05:07.444724 4827 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openstack/dnsmasq-dns-6546db6db7-hq5pm" podUID="e20183b9-5cab-49ce-96b1-fa763a4a4d60" containerName="init" containerID="cri-o://261919767beade77141c517e60494c48ccfe6e94b2c636df3a73e7bd036d77a3" gracePeriod=10 Jan 31 04:05:07 crc kubenswrapper[4827]: I0131 04:05:07.444811 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6546db6db7-hq5pm" event={"ID":"e20183b9-5cab-49ce-96b1-fa763a4a4d60","Type":"ContainerStarted","Data":"261919767beade77141c517e60494c48ccfe6e94b2c636df3a73e7bd036d77a3"} Jan 31 04:05:07 crc kubenswrapper[4827]: I0131 04:05:07.444835 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6546db6db7-hq5pm" event={"ID":"e20183b9-5cab-49ce-96b1-fa763a4a4d60","Type":"ContainerStarted","Data":"345ddebf5da51e17ea35912a89a5be668a4eeced7d027547556258065634f629"} Jan 31 04:05:07 crc kubenswrapper[4827]: I0131 04:05:07.465829 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-5pz6x" event={"ID":"77bd3d45-e656-4e13-94c5-1ef4bb703af6","Type":"ContainerStarted","Data":"874eef342144c57c19df2425fb3922dd25e57f3cb0fe1328ddf7e019117b8980"} Jan 31 04:05:07 crc kubenswrapper[4827]: I0131 04:05:07.465906 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-5pz6x" event={"ID":"77bd3d45-e656-4e13-94c5-1ef4bb703af6","Type":"ContainerStarted","Data":"7acf9ca7c79c5c1ab4d1e42bcc919ecb671c71ae869448809525c7aaa842efd5"} Jan 31 04:05:07 crc kubenswrapper[4827]: I0131 04:05:07.471260 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-kk8q2" event={"ID":"3034593d-68df-4223-a3d5-f1cd46f49398","Type":"ContainerStarted","Data":"d390ad5a10b372f9cc082556b2f8259ab04be35df00eb5e029867f257a58ded2"} Jan 31 04:05:07 crc kubenswrapper[4827]: I0131 04:05:07.471297 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-kk8q2" 
event={"ID":"3034593d-68df-4223-a3d5-f1cd46f49398","Type":"ContainerStarted","Data":"2a920f52bc5b6ce2bb92b528e5207cb59c4a8479b02e579746db4213c4920260"} Jan 31 04:05:07 crc kubenswrapper[4827]: I0131 04:05:07.474793 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-5xrmx" event={"ID":"921fb66d-0be5-4614-9974-86da117973d1","Type":"ContainerStarted","Data":"14944a3664be1f9fc189d089569f318c910588eef0d95acf4f8ba492635c3fb9"} Jan 31 04:05:07 crc kubenswrapper[4827]: I0131 04:05:07.510499 4827 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-bootstrap-5pz6x" podStartSLOduration=2.510481455 podStartE2EDuration="2.510481455s" podCreationTimestamp="2026-01-31 04:05:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 04:05:07.484994903 +0000 UTC m=+1100.172075352" watchObservedRunningTime="2026-01-31 04:05:07.510481455 +0000 UTC m=+1100.197561904" Jan 31 04:05:07 crc kubenswrapper[4827]: I0131 04:05:07.512903 4827 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-db-sync-kk8q2" podStartSLOduration=2.512874288 podStartE2EDuration="2.512874288s" podCreationTimestamp="2026-01-31 04:05:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 04:05:07.507512794 +0000 UTC m=+1100.194593243" watchObservedRunningTime="2026-01-31 04:05:07.512874288 +0000 UTC m=+1100.199954737" Jan 31 04:05:07 crc kubenswrapper[4827]: I0131 04:05:07.538532 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-sync-48bfh"] Jan 31 04:05:07 crc kubenswrapper[4827]: I0131 04:05:07.551505 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-sync-jhfnf"] Jan 31 04:05:07 crc kubenswrapper[4827]: I0131 04:05:07.558105 4827 kubelet.go:2428] "SyncLoop 
UPDATE" source="api" pods=["openstack/dnsmasq-dns-7987f74bbc-65cfr"] Jan 31 04:05:07 crc kubenswrapper[4827]: W0131 04:05:07.562740 4827 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poddb80e5df_1238_46c1_b573_55fb8797e379.slice/crio-c2294539dce915468e15cbf810f022e8c94b9e35d8a26d276743f15d3900b9df WatchSource:0}: Error finding container c2294539dce915468e15cbf810f022e8c94b9e35d8a26d276743f15d3900b9df: Status 404 returned error can't find the container with id c2294539dce915468e15cbf810f022e8c94b9e35d8a26d276743f15d3900b9df Jan 31 04:05:07 crc kubenswrapper[4827]: W0131 04:05:07.564851 4827 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod74d7118a_77ce_4f65_b0c3_a28c70623d2d.slice/crio-e715ce476b90f5e5b4fe718f38756dccd4f8dbe738055092b4b370d17863b22a WatchSource:0}: Error finding container e715ce476b90f5e5b4fe718f38756dccd4f8dbe738055092b4b370d17863b22a: Status 404 returned error can't find the container with id e715ce476b90f5e5b4fe718f38756dccd4f8dbe738055092b4b370d17863b22a Jan 31 04:05:07 crc kubenswrapper[4827]: I0131 04:05:07.861048 4827 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6546db6db7-hq5pm" Jan 31 04:05:07 crc kubenswrapper[4827]: I0131 04:05:07.939900 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e20183b9-5cab-49ce-96b1-fa763a4a4d60-dns-svc\") pod \"e20183b9-5cab-49ce-96b1-fa763a4a4d60\" (UID: \"e20183b9-5cab-49ce-96b1-fa763a4a4d60\") " Jan 31 04:05:07 crc kubenswrapper[4827]: I0131 04:05:07.940306 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e20183b9-5cab-49ce-96b1-fa763a4a4d60-config\") pod \"e20183b9-5cab-49ce-96b1-fa763a4a4d60\" (UID: \"e20183b9-5cab-49ce-96b1-fa763a4a4d60\") " Jan 31 04:05:07 crc kubenswrapper[4827]: I0131 04:05:07.940346 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/e20183b9-5cab-49ce-96b1-fa763a4a4d60-ovsdbserver-nb\") pod \"e20183b9-5cab-49ce-96b1-fa763a4a4d60\" (UID: \"e20183b9-5cab-49ce-96b1-fa763a4a4d60\") " Jan 31 04:05:07 crc kubenswrapper[4827]: I0131 04:05:07.940371 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xxvzh\" (UniqueName: \"kubernetes.io/projected/e20183b9-5cab-49ce-96b1-fa763a4a4d60-kube-api-access-xxvzh\") pod \"e20183b9-5cab-49ce-96b1-fa763a4a4d60\" (UID: \"e20183b9-5cab-49ce-96b1-fa763a4a4d60\") " Jan 31 04:05:07 crc kubenswrapper[4827]: I0131 04:05:07.940429 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/e20183b9-5cab-49ce-96b1-fa763a4a4d60-ovsdbserver-sb\") pod \"e20183b9-5cab-49ce-96b1-fa763a4a4d60\" (UID: \"e20183b9-5cab-49ce-96b1-fa763a4a4d60\") " Jan 31 04:05:07 crc kubenswrapper[4827]: I0131 04:05:07.946339 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/projected/e20183b9-5cab-49ce-96b1-fa763a4a4d60-kube-api-access-xxvzh" (OuterVolumeSpecName: "kube-api-access-xxvzh") pod "e20183b9-5cab-49ce-96b1-fa763a4a4d60" (UID: "e20183b9-5cab-49ce-96b1-fa763a4a4d60"). InnerVolumeSpecName "kube-api-access-xxvzh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 04:05:07 crc kubenswrapper[4827]: I0131 04:05:07.983303 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e20183b9-5cab-49ce-96b1-fa763a4a4d60-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "e20183b9-5cab-49ce-96b1-fa763a4a4d60" (UID: "e20183b9-5cab-49ce-96b1-fa763a4a4d60"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 04:05:07 crc kubenswrapper[4827]: I0131 04:05:07.988121 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e20183b9-5cab-49ce-96b1-fa763a4a4d60-config" (OuterVolumeSpecName: "config") pod "e20183b9-5cab-49ce-96b1-fa763a4a4d60" (UID: "e20183b9-5cab-49ce-96b1-fa763a4a4d60"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 04:05:07 crc kubenswrapper[4827]: I0131 04:05:07.990735 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e20183b9-5cab-49ce-96b1-fa763a4a4d60-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "e20183b9-5cab-49ce-96b1-fa763a4a4d60" (UID: "e20183b9-5cab-49ce-96b1-fa763a4a4d60"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 04:05:07 crc kubenswrapper[4827]: I0131 04:05:07.991093 4827 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 31 04:05:07 crc kubenswrapper[4827]: I0131 04:05:07.991403 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e20183b9-5cab-49ce-96b1-fa763a4a4d60-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "e20183b9-5cab-49ce-96b1-fa763a4a4d60" (UID: "e20183b9-5cab-49ce-96b1-fa763a4a4d60"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 04:05:08 crc kubenswrapper[4827]: I0131 04:05:08.042498 4827 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e20183b9-5cab-49ce-96b1-fa763a4a4d60-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 31 04:05:08 crc kubenswrapper[4827]: I0131 04:05:08.042545 4827 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e20183b9-5cab-49ce-96b1-fa763a4a4d60-config\") on node \"crc\" DevicePath \"\"" Jan 31 04:05:08 crc kubenswrapper[4827]: I0131 04:05:08.042560 4827 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/e20183b9-5cab-49ce-96b1-fa763a4a4d60-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 31 04:05:08 crc kubenswrapper[4827]: I0131 04:05:08.042574 4827 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xxvzh\" (UniqueName: \"kubernetes.io/projected/e20183b9-5cab-49ce-96b1-fa763a4a4d60-kube-api-access-xxvzh\") on node \"crc\" DevicePath \"\"" Jan 31 04:05:08 crc kubenswrapper[4827]: I0131 04:05:08.042585 4827 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/e20183b9-5cab-49ce-96b1-fa763a4a4d60-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 31 04:05:08 crc 
kubenswrapper[4827]: I0131 04:05:08.492180 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d07cd025-0fc1-4536-978d-f7f7b0df2dbc","Type":"ContainerStarted","Data":"467546bd7d9d3236ff27d5484abaaa397708754379e3802460b1310ac453d4a3"} Jan 31 04:05:08 crc kubenswrapper[4827]: I0131 04:05:08.501039 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-48bfh" event={"ID":"3da5eeb9-641c-4b43-a3c9-eb4860e9995b","Type":"ContainerStarted","Data":"105cd57e15ec62017be0934c9f9f65972db769886e58c58798d5380b0e092bac"} Jan 31 04:05:08 crc kubenswrapper[4827]: I0131 04:05:08.510611 4827 generic.go:334] "Generic (PLEG): container finished" podID="e20183b9-5cab-49ce-96b1-fa763a4a4d60" containerID="261919767beade77141c517e60494c48ccfe6e94b2c636df3a73e7bd036d77a3" exitCode=0 Jan 31 04:05:08 crc kubenswrapper[4827]: I0131 04:05:08.510758 4827 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6546db6db7-hq5pm" Jan 31 04:05:08 crc kubenswrapper[4827]: I0131 04:05:08.510793 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6546db6db7-hq5pm" event={"ID":"e20183b9-5cab-49ce-96b1-fa763a4a4d60","Type":"ContainerDied","Data":"261919767beade77141c517e60494c48ccfe6e94b2c636df3a73e7bd036d77a3"} Jan 31 04:05:08 crc kubenswrapper[4827]: I0131 04:05:08.511318 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6546db6db7-hq5pm" event={"ID":"e20183b9-5cab-49ce-96b1-fa763a4a4d60","Type":"ContainerDied","Data":"345ddebf5da51e17ea35912a89a5be668a4eeced7d027547556258065634f629"} Jan 31 04:05:08 crc kubenswrapper[4827]: I0131 04:05:08.511361 4827 scope.go:117] "RemoveContainer" containerID="261919767beade77141c517e60494c48ccfe6e94b2c636df3a73e7bd036d77a3" Jan 31 04:05:08 crc kubenswrapper[4827]: I0131 04:05:08.513514 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-jhfnf" 
event={"ID":"db80e5df-1238-46c1-b573-55fb8797e379","Type":"ContainerStarted","Data":"c2294539dce915468e15cbf810f022e8c94b9e35d8a26d276743f15d3900b9df"} Jan 31 04:05:08 crc kubenswrapper[4827]: I0131 04:05:08.519001 4827 generic.go:334] "Generic (PLEG): container finished" podID="74d7118a-77ce-4f65-b0c3-a28c70623d2d" containerID="fb804edbf75f7ddc3b818870c1b27232c32bff224ffdb18124af18328e0bc8a6" exitCode=0 Jan 31 04:05:08 crc kubenswrapper[4827]: I0131 04:05:08.522181 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7987f74bbc-65cfr" event={"ID":"74d7118a-77ce-4f65-b0c3-a28c70623d2d","Type":"ContainerDied","Data":"fb804edbf75f7ddc3b818870c1b27232c32bff224ffdb18124af18328e0bc8a6"} Jan 31 04:05:08 crc kubenswrapper[4827]: I0131 04:05:08.522254 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7987f74bbc-65cfr" event={"ID":"74d7118a-77ce-4f65-b0c3-a28c70623d2d","Type":"ContainerStarted","Data":"e715ce476b90f5e5b4fe718f38756dccd4f8dbe738055092b4b370d17863b22a"} Jan 31 04:05:08 crc kubenswrapper[4827]: I0131 04:05:08.609728 4827 scope.go:117] "RemoveContainer" containerID="261919767beade77141c517e60494c48ccfe6e94b2c636df3a73e7bd036d77a3" Jan 31 04:05:08 crc kubenswrapper[4827]: I0131 04:05:08.610200 4827 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6546db6db7-hq5pm"] Jan 31 04:05:08 crc kubenswrapper[4827]: E0131 04:05:08.610908 4827 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"261919767beade77141c517e60494c48ccfe6e94b2c636df3a73e7bd036d77a3\": container with ID starting with 261919767beade77141c517e60494c48ccfe6e94b2c636df3a73e7bd036d77a3 not found: ID does not exist" containerID="261919767beade77141c517e60494c48ccfe6e94b2c636df3a73e7bd036d77a3" Jan 31 04:05:08 crc kubenswrapper[4827]: I0131 04:05:08.610936 4827 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"261919767beade77141c517e60494c48ccfe6e94b2c636df3a73e7bd036d77a3"} err="failed to get container status \"261919767beade77141c517e60494c48ccfe6e94b2c636df3a73e7bd036d77a3\": rpc error: code = NotFound desc = could not find container \"261919767beade77141c517e60494c48ccfe6e94b2c636df3a73e7bd036d77a3\": container with ID starting with 261919767beade77141c517e60494c48ccfe6e94b2c636df3a73e7bd036d77a3 not found: ID does not exist" Jan 31 04:05:08 crc kubenswrapper[4827]: I0131 04:05:08.619557 4827 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-6546db6db7-hq5pm"] Jan 31 04:05:09 crc kubenswrapper[4827]: I0131 04:05:09.538449 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7987f74bbc-65cfr" event={"ID":"74d7118a-77ce-4f65-b0c3-a28c70623d2d","Type":"ContainerStarted","Data":"a9055da5d3c5e8ea71781f6a31c48b3595dd4f88675097e31a1a7067a4751cac"} Jan 31 04:05:09 crc kubenswrapper[4827]: I0131 04:05:09.539190 4827 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-7987f74bbc-65cfr" Jan 31 04:05:09 crc kubenswrapper[4827]: I0131 04:05:09.571300 4827 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-7987f74bbc-65cfr" podStartSLOduration=3.57126655 podStartE2EDuration="3.57126655s" podCreationTimestamp="2026-01-31 04:05:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 04:05:09.564398809 +0000 UTC m=+1102.251479278" watchObservedRunningTime="2026-01-31 04:05:09.57126655 +0000 UTC m=+1102.258346999" Jan 31 04:05:10 crc kubenswrapper[4827]: I0131 04:05:10.123474 4827 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e20183b9-5cab-49ce-96b1-fa763a4a4d60" path="/var/lib/kubelet/pods/e20183b9-5cab-49ce-96b1-fa763a4a4d60/volumes" Jan 31 04:05:11 crc kubenswrapper[4827]: I0131 04:05:11.561621 
4827 generic.go:334] "Generic (PLEG): container finished" podID="77bd3d45-e656-4e13-94c5-1ef4bb703af6" containerID="874eef342144c57c19df2425fb3922dd25e57f3cb0fe1328ddf7e019117b8980" exitCode=0 Jan 31 04:05:11 crc kubenswrapper[4827]: I0131 04:05:11.561700 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-5pz6x" event={"ID":"77bd3d45-e656-4e13-94c5-1ef4bb703af6","Type":"ContainerDied","Data":"874eef342144c57c19df2425fb3922dd25e57f3cb0fe1328ddf7e019117b8980"} Jan 31 04:05:12 crc kubenswrapper[4827]: I0131 04:05:12.969681 4827 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-5pz6x" Jan 31 04:05:13 crc kubenswrapper[4827]: I0131 04:05:13.048198 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/77bd3d45-e656-4e13-94c5-1ef4bb703af6-credential-keys\") pod \"77bd3d45-e656-4e13-94c5-1ef4bb703af6\" (UID: \"77bd3d45-e656-4e13-94c5-1ef4bb703af6\") " Jan 31 04:05:13 crc kubenswrapper[4827]: I0131 04:05:13.048295 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fhgpf\" (UniqueName: \"kubernetes.io/projected/77bd3d45-e656-4e13-94c5-1ef4bb703af6-kube-api-access-fhgpf\") pod \"77bd3d45-e656-4e13-94c5-1ef4bb703af6\" (UID: \"77bd3d45-e656-4e13-94c5-1ef4bb703af6\") " Jan 31 04:05:13 crc kubenswrapper[4827]: I0131 04:05:13.048537 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/77bd3d45-e656-4e13-94c5-1ef4bb703af6-fernet-keys\") pod \"77bd3d45-e656-4e13-94c5-1ef4bb703af6\" (UID: \"77bd3d45-e656-4e13-94c5-1ef4bb703af6\") " Jan 31 04:05:13 crc kubenswrapper[4827]: I0131 04:05:13.048585 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/77bd3d45-e656-4e13-94c5-1ef4bb703af6-config-data\") pod \"77bd3d45-e656-4e13-94c5-1ef4bb703af6\" (UID: \"77bd3d45-e656-4e13-94c5-1ef4bb703af6\") " Jan 31 04:05:13 crc kubenswrapper[4827]: I0131 04:05:13.048642 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/77bd3d45-e656-4e13-94c5-1ef4bb703af6-combined-ca-bundle\") pod \"77bd3d45-e656-4e13-94c5-1ef4bb703af6\" (UID: \"77bd3d45-e656-4e13-94c5-1ef4bb703af6\") " Jan 31 04:05:13 crc kubenswrapper[4827]: I0131 04:05:13.048736 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/77bd3d45-e656-4e13-94c5-1ef4bb703af6-scripts\") pod \"77bd3d45-e656-4e13-94c5-1ef4bb703af6\" (UID: \"77bd3d45-e656-4e13-94c5-1ef4bb703af6\") " Jan 31 04:05:13 crc kubenswrapper[4827]: I0131 04:05:13.054894 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/77bd3d45-e656-4e13-94c5-1ef4bb703af6-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "77bd3d45-e656-4e13-94c5-1ef4bb703af6" (UID: "77bd3d45-e656-4e13-94c5-1ef4bb703af6"). InnerVolumeSpecName "credential-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 04:05:13 crc kubenswrapper[4827]: I0131 04:05:13.060072 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/77bd3d45-e656-4e13-94c5-1ef4bb703af6-scripts" (OuterVolumeSpecName: "scripts") pod "77bd3d45-e656-4e13-94c5-1ef4bb703af6" (UID: "77bd3d45-e656-4e13-94c5-1ef4bb703af6"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 04:05:13 crc kubenswrapper[4827]: I0131 04:05:13.060109 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/77bd3d45-e656-4e13-94c5-1ef4bb703af6-kube-api-access-fhgpf" (OuterVolumeSpecName: "kube-api-access-fhgpf") pod "77bd3d45-e656-4e13-94c5-1ef4bb703af6" (UID: "77bd3d45-e656-4e13-94c5-1ef4bb703af6"). InnerVolumeSpecName "kube-api-access-fhgpf". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 04:05:13 crc kubenswrapper[4827]: I0131 04:05:13.060921 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/77bd3d45-e656-4e13-94c5-1ef4bb703af6-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "77bd3d45-e656-4e13-94c5-1ef4bb703af6" (UID: "77bd3d45-e656-4e13-94c5-1ef4bb703af6"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 04:05:13 crc kubenswrapper[4827]: I0131 04:05:13.071630 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/77bd3d45-e656-4e13-94c5-1ef4bb703af6-config-data" (OuterVolumeSpecName: "config-data") pod "77bd3d45-e656-4e13-94c5-1ef4bb703af6" (UID: "77bd3d45-e656-4e13-94c5-1ef4bb703af6"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 04:05:13 crc kubenswrapper[4827]: I0131 04:05:13.085261 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/77bd3d45-e656-4e13-94c5-1ef4bb703af6-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "77bd3d45-e656-4e13-94c5-1ef4bb703af6" (UID: "77bd3d45-e656-4e13-94c5-1ef4bb703af6"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 04:05:13 crc kubenswrapper[4827]: I0131 04:05:13.150602 4827 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/77bd3d45-e656-4e13-94c5-1ef4bb703af6-credential-keys\") on node \"crc\" DevicePath \"\"" Jan 31 04:05:13 crc kubenswrapper[4827]: I0131 04:05:13.150921 4827 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fhgpf\" (UniqueName: \"kubernetes.io/projected/77bd3d45-e656-4e13-94c5-1ef4bb703af6-kube-api-access-fhgpf\") on node \"crc\" DevicePath \"\"" Jan 31 04:05:13 crc kubenswrapper[4827]: I0131 04:05:13.150935 4827 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/77bd3d45-e656-4e13-94c5-1ef4bb703af6-fernet-keys\") on node \"crc\" DevicePath \"\"" Jan 31 04:05:13 crc kubenswrapper[4827]: I0131 04:05:13.150942 4827 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/77bd3d45-e656-4e13-94c5-1ef4bb703af6-config-data\") on node \"crc\" DevicePath \"\"" Jan 31 04:05:13 crc kubenswrapper[4827]: I0131 04:05:13.150951 4827 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/77bd3d45-e656-4e13-94c5-1ef4bb703af6-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 31 04:05:13 crc kubenswrapper[4827]: I0131 04:05:13.150959 4827 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/77bd3d45-e656-4e13-94c5-1ef4bb703af6-scripts\") on node \"crc\" DevicePath \"\"" Jan 31 04:05:13 crc kubenswrapper[4827]: I0131 04:05:13.577312 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-5pz6x" event={"ID":"77bd3d45-e656-4e13-94c5-1ef4bb703af6","Type":"ContainerDied","Data":"7acf9ca7c79c5c1ab4d1e42bcc919ecb671c71ae869448809525c7aaa842efd5"} Jan 31 04:05:13 crc kubenswrapper[4827]: I0131 
04:05:13.577349 4827 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7acf9ca7c79c5c1ab4d1e42bcc919ecb671c71ae869448809525c7aaa842efd5" Jan 31 04:05:13 crc kubenswrapper[4827]: I0131 04:05:13.577355 4827 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-5pz6x" Jan 31 04:05:13 crc kubenswrapper[4827]: I0131 04:05:13.757374 4827 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-bootstrap-5pz6x"] Jan 31 04:05:13 crc kubenswrapper[4827]: I0131 04:05:13.764259 4827 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-bootstrap-5pz6x"] Jan 31 04:05:13 crc kubenswrapper[4827]: I0131 04:05:13.847792 4827 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-bootstrap-d86jl"] Jan 31 04:05:13 crc kubenswrapper[4827]: E0131 04:05:13.848118 4827 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="77bd3d45-e656-4e13-94c5-1ef4bb703af6" containerName="keystone-bootstrap" Jan 31 04:05:13 crc kubenswrapper[4827]: I0131 04:05:13.848130 4827 state_mem.go:107] "Deleted CPUSet assignment" podUID="77bd3d45-e656-4e13-94c5-1ef4bb703af6" containerName="keystone-bootstrap" Jan 31 04:05:13 crc kubenswrapper[4827]: E0131 04:05:13.848161 4827 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e20183b9-5cab-49ce-96b1-fa763a4a4d60" containerName="init" Jan 31 04:05:13 crc kubenswrapper[4827]: I0131 04:05:13.848167 4827 state_mem.go:107] "Deleted CPUSet assignment" podUID="e20183b9-5cab-49ce-96b1-fa763a4a4d60" containerName="init" Jan 31 04:05:13 crc kubenswrapper[4827]: I0131 04:05:13.848324 4827 memory_manager.go:354] "RemoveStaleState removing state" podUID="e20183b9-5cab-49ce-96b1-fa763a4a4d60" containerName="init" Jan 31 04:05:13 crc kubenswrapper[4827]: I0131 04:05:13.848340 4827 memory_manager.go:354] "RemoveStaleState removing state" podUID="77bd3d45-e656-4e13-94c5-1ef4bb703af6" 
containerName="keystone-bootstrap" Jan 31 04:05:13 crc kubenswrapper[4827]: I0131 04:05:13.848853 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-d86jl" Jan 31 04:05:13 crc kubenswrapper[4827]: I0131 04:05:13.852249 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Jan 31 04:05:13 crc kubenswrapper[4827]: I0131 04:05:13.852448 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Jan 31 04:05:13 crc kubenswrapper[4827]: I0131 04:05:13.852487 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-6bbzt" Jan 31 04:05:13 crc kubenswrapper[4827]: I0131 04:05:13.852585 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret" Jan 31 04:05:13 crc kubenswrapper[4827]: I0131 04:05:13.852703 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Jan 31 04:05:13 crc kubenswrapper[4827]: I0131 04:05:13.872978 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-d86jl"] Jan 31 04:05:13 crc kubenswrapper[4827]: I0131 04:05:13.962936 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2653aa61-3396-42b4-8cfe-ae977242f427-config-data\") pod \"keystone-bootstrap-d86jl\" (UID: \"2653aa61-3396-42b4-8cfe-ae977242f427\") " pod="openstack/keystone-bootstrap-d86jl" Jan 31 04:05:13 crc kubenswrapper[4827]: I0131 04:05:13.962979 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lfttb\" (UniqueName: \"kubernetes.io/projected/2653aa61-3396-42b4-8cfe-ae977242f427-kube-api-access-lfttb\") pod \"keystone-bootstrap-d86jl\" (UID: \"2653aa61-3396-42b4-8cfe-ae977242f427\") " pod="openstack/keystone-bootstrap-d86jl" Jan 31 
04:05:13 crc kubenswrapper[4827]: I0131 04:05:13.963190 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/2653aa61-3396-42b4-8cfe-ae977242f427-fernet-keys\") pod \"keystone-bootstrap-d86jl\" (UID: \"2653aa61-3396-42b4-8cfe-ae977242f427\") " pod="openstack/keystone-bootstrap-d86jl" Jan 31 04:05:13 crc kubenswrapper[4827]: I0131 04:05:13.963238 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/2653aa61-3396-42b4-8cfe-ae977242f427-credential-keys\") pod \"keystone-bootstrap-d86jl\" (UID: \"2653aa61-3396-42b4-8cfe-ae977242f427\") " pod="openstack/keystone-bootstrap-d86jl" Jan 31 04:05:13 crc kubenswrapper[4827]: I0131 04:05:13.963341 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2653aa61-3396-42b4-8cfe-ae977242f427-scripts\") pod \"keystone-bootstrap-d86jl\" (UID: \"2653aa61-3396-42b4-8cfe-ae977242f427\") " pod="openstack/keystone-bootstrap-d86jl" Jan 31 04:05:13 crc kubenswrapper[4827]: I0131 04:05:13.963539 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2653aa61-3396-42b4-8cfe-ae977242f427-combined-ca-bundle\") pod \"keystone-bootstrap-d86jl\" (UID: \"2653aa61-3396-42b4-8cfe-ae977242f427\") " pod="openstack/keystone-bootstrap-d86jl" Jan 31 04:05:14 crc kubenswrapper[4827]: I0131 04:05:14.064851 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2653aa61-3396-42b4-8cfe-ae977242f427-scripts\") pod \"keystone-bootstrap-d86jl\" (UID: \"2653aa61-3396-42b4-8cfe-ae977242f427\") " pod="openstack/keystone-bootstrap-d86jl" Jan 31 04:05:14 crc kubenswrapper[4827]: I0131 04:05:14.064963 
4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2653aa61-3396-42b4-8cfe-ae977242f427-combined-ca-bundle\") pod \"keystone-bootstrap-d86jl\" (UID: \"2653aa61-3396-42b4-8cfe-ae977242f427\") " pod="openstack/keystone-bootstrap-d86jl" Jan 31 04:05:14 crc kubenswrapper[4827]: I0131 04:05:14.064995 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2653aa61-3396-42b4-8cfe-ae977242f427-config-data\") pod \"keystone-bootstrap-d86jl\" (UID: \"2653aa61-3396-42b4-8cfe-ae977242f427\") " pod="openstack/keystone-bootstrap-d86jl" Jan 31 04:05:14 crc kubenswrapper[4827]: I0131 04:05:14.065010 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lfttb\" (UniqueName: \"kubernetes.io/projected/2653aa61-3396-42b4-8cfe-ae977242f427-kube-api-access-lfttb\") pod \"keystone-bootstrap-d86jl\" (UID: \"2653aa61-3396-42b4-8cfe-ae977242f427\") " pod="openstack/keystone-bootstrap-d86jl" Jan 31 04:05:14 crc kubenswrapper[4827]: I0131 04:05:14.065052 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/2653aa61-3396-42b4-8cfe-ae977242f427-fernet-keys\") pod \"keystone-bootstrap-d86jl\" (UID: \"2653aa61-3396-42b4-8cfe-ae977242f427\") " pod="openstack/keystone-bootstrap-d86jl" Jan 31 04:05:14 crc kubenswrapper[4827]: I0131 04:05:14.065070 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/2653aa61-3396-42b4-8cfe-ae977242f427-credential-keys\") pod \"keystone-bootstrap-d86jl\" (UID: \"2653aa61-3396-42b4-8cfe-ae977242f427\") " pod="openstack/keystone-bootstrap-d86jl" Jan 31 04:05:14 crc kubenswrapper[4827]: I0131 04:05:14.069545 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" 
(UniqueName: \"kubernetes.io/secret/2653aa61-3396-42b4-8cfe-ae977242f427-credential-keys\") pod \"keystone-bootstrap-d86jl\" (UID: \"2653aa61-3396-42b4-8cfe-ae977242f427\") " pod="openstack/keystone-bootstrap-d86jl" Jan 31 04:05:14 crc kubenswrapper[4827]: I0131 04:05:14.069770 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2653aa61-3396-42b4-8cfe-ae977242f427-scripts\") pod \"keystone-bootstrap-d86jl\" (UID: \"2653aa61-3396-42b4-8cfe-ae977242f427\") " pod="openstack/keystone-bootstrap-d86jl" Jan 31 04:05:14 crc kubenswrapper[4827]: I0131 04:05:14.076393 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2653aa61-3396-42b4-8cfe-ae977242f427-combined-ca-bundle\") pod \"keystone-bootstrap-d86jl\" (UID: \"2653aa61-3396-42b4-8cfe-ae977242f427\") " pod="openstack/keystone-bootstrap-d86jl" Jan 31 04:05:14 crc kubenswrapper[4827]: I0131 04:05:14.080763 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/2653aa61-3396-42b4-8cfe-ae977242f427-fernet-keys\") pod \"keystone-bootstrap-d86jl\" (UID: \"2653aa61-3396-42b4-8cfe-ae977242f427\") " pod="openstack/keystone-bootstrap-d86jl" Jan 31 04:05:14 crc kubenswrapper[4827]: I0131 04:05:14.083404 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lfttb\" (UniqueName: \"kubernetes.io/projected/2653aa61-3396-42b4-8cfe-ae977242f427-kube-api-access-lfttb\") pod \"keystone-bootstrap-d86jl\" (UID: \"2653aa61-3396-42b4-8cfe-ae977242f427\") " pod="openstack/keystone-bootstrap-d86jl" Jan 31 04:05:14 crc kubenswrapper[4827]: I0131 04:05:14.094717 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2653aa61-3396-42b4-8cfe-ae977242f427-config-data\") pod \"keystone-bootstrap-d86jl\" (UID: 
\"2653aa61-3396-42b4-8cfe-ae977242f427\") " pod="openstack/keystone-bootstrap-d86jl" Jan 31 04:05:14 crc kubenswrapper[4827]: I0131 04:05:14.147301 4827 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="77bd3d45-e656-4e13-94c5-1ef4bb703af6" path="/var/lib/kubelet/pods/77bd3d45-e656-4e13-94c5-1ef4bb703af6/volumes" Jan 31 04:05:14 crc kubenswrapper[4827]: I0131 04:05:14.164983 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-d86jl" Jan 31 04:05:16 crc kubenswrapper[4827]: I0131 04:05:16.546330 4827 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-7987f74bbc-65cfr" Jan 31 04:05:16 crc kubenswrapper[4827]: I0131 04:05:16.605746 4827 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-54f9b7b8d9-jmjqr"] Jan 31 04:05:16 crc kubenswrapper[4827]: I0131 04:05:16.606020 4827 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-54f9b7b8d9-jmjqr" podUID="c5b9b010-7f89-4783-8dbe-d99a70ed06dc" containerName="dnsmasq-dns" containerID="cri-o://d27d8c56ccddb6fefc46a276a6c005c968705931bad7da365a2488c039067002" gracePeriod=10 Jan 31 04:05:17 crc kubenswrapper[4827]: I0131 04:05:17.371060 4827 patch_prober.go:28] interesting pod/machine-config-daemon-jxh94 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 31 04:05:17 crc kubenswrapper[4827]: I0131 04:05:17.371136 4827 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-jxh94" podUID="e63dbb73-e1a2-4796-83c5-2a88e55566b5" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 31 04:05:17 crc 
kubenswrapper[4827]: I0131 04:05:17.621582 4827 generic.go:334] "Generic (PLEG): container finished" podID="c5b9b010-7f89-4783-8dbe-d99a70ed06dc" containerID="d27d8c56ccddb6fefc46a276a6c005c968705931bad7da365a2488c039067002" exitCode=0 Jan 31 04:05:17 crc kubenswrapper[4827]: I0131 04:05:17.621670 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-54f9b7b8d9-jmjqr" event={"ID":"c5b9b010-7f89-4783-8dbe-d99a70ed06dc","Type":"ContainerDied","Data":"d27d8c56ccddb6fefc46a276a6c005c968705931bad7da365a2488c039067002"} Jan 31 04:05:17 crc kubenswrapper[4827]: I0131 04:05:17.880491 4827 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-54f9b7b8d9-jmjqr" podUID="c5b9b010-7f89-4783-8dbe-d99a70ed06dc" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.123:5353: connect: connection refused" Jan 31 04:05:27 crc kubenswrapper[4827]: E0131 04:05:27.596279 4827 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-cinder-api:current-podified" Jan 31 04:05:27 crc kubenswrapper[4827]: E0131 04:05:27.597120 4827 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:cinder-db-sync,Image:quay.io/podified-antelope-centos9/openstack-cinder-api:current-podified,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_set_configs && 
/usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:TRUE,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:etc-machine-id,ReadOnly:true,MountPath:/etc/machine-id,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:scripts,ReadOnly:true,MountPath:/usr/local/bin/container-scripts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/config-data/merged,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/my.cnf,SubPath:my.cnf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:db-sync-config-data,ReadOnly:true,MountPath:/etc/cinder/cinder.conf.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:db-sync-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-fv7s2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin
:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cinder-db-sync-48bfh_openstack(3da5eeb9-641c-4b43-a3c9-eb4860e9995b): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 31 04:05:27 crc kubenswrapper[4827]: E0131 04:05:27.598276 4827 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/cinder-db-sync-48bfh" podUID="3da5eeb9-641c-4b43-a3c9-eb4860e9995b" Jan 31 04:05:27 crc kubenswrapper[4827]: I0131 04:05:27.709695 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-54f9b7b8d9-jmjqr" event={"ID":"c5b9b010-7f89-4783-8dbe-d99a70ed06dc","Type":"ContainerDied","Data":"5b61536ac0037eb444f91d17aef243de79df784bb1cc5bd93c45edcf77898664"} Jan 31 04:05:27 crc kubenswrapper[4827]: I0131 04:05:27.710014 4827 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5b61536ac0037eb444f91d17aef243de79df784bb1cc5bd93c45edcf77898664" Jan 31 04:05:27 crc kubenswrapper[4827]: E0131 04:05:27.710976 4827 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-cinder-api:current-podified\\\"\"" pod="openstack/cinder-db-sync-48bfh" podUID="3da5eeb9-641c-4b43-a3c9-eb4860e9995b" Jan 31 04:05:27 crc kubenswrapper[4827]: I0131 04:05:27.851547 4827 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-54f9b7b8d9-jmjqr" Jan 31 04:05:27 crc kubenswrapper[4827]: I0131 04:05:27.881506 4827 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-54f9b7b8d9-jmjqr" podUID="c5b9b010-7f89-4783-8dbe-d99a70ed06dc" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.123:5353: i/o timeout" Jan 31 04:05:27 crc kubenswrapper[4827]: I0131 04:05:27.913346 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c5b9b010-7f89-4783-8dbe-d99a70ed06dc-config\") pod \"c5b9b010-7f89-4783-8dbe-d99a70ed06dc\" (UID: \"c5b9b010-7f89-4783-8dbe-d99a70ed06dc\") " Jan 31 04:05:27 crc kubenswrapper[4827]: I0131 04:05:27.913406 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c5b9b010-7f89-4783-8dbe-d99a70ed06dc-ovsdbserver-sb\") pod \"c5b9b010-7f89-4783-8dbe-d99a70ed06dc\" (UID: \"c5b9b010-7f89-4783-8dbe-d99a70ed06dc\") " Jan 31 04:05:27 crc kubenswrapper[4827]: I0131 04:05:27.913455 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-z4tkp\" (UniqueName: \"kubernetes.io/projected/c5b9b010-7f89-4783-8dbe-d99a70ed06dc-kube-api-access-z4tkp\") pod \"c5b9b010-7f89-4783-8dbe-d99a70ed06dc\" (UID: \"c5b9b010-7f89-4783-8dbe-d99a70ed06dc\") " Jan 31 04:05:27 crc kubenswrapper[4827]: I0131 04:05:27.913485 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c5b9b010-7f89-4783-8dbe-d99a70ed06dc-ovsdbserver-nb\") pod \"c5b9b010-7f89-4783-8dbe-d99a70ed06dc\" (UID: \"c5b9b010-7f89-4783-8dbe-d99a70ed06dc\") " Jan 31 04:05:27 crc kubenswrapper[4827]: I0131 04:05:27.913619 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: 
\"kubernetes.io/configmap/c5b9b010-7f89-4783-8dbe-d99a70ed06dc-dns-svc\") pod \"c5b9b010-7f89-4783-8dbe-d99a70ed06dc\" (UID: \"c5b9b010-7f89-4783-8dbe-d99a70ed06dc\") " Jan 31 04:05:27 crc kubenswrapper[4827]: I0131 04:05:27.927297 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c5b9b010-7f89-4783-8dbe-d99a70ed06dc-kube-api-access-z4tkp" (OuterVolumeSpecName: "kube-api-access-z4tkp") pod "c5b9b010-7f89-4783-8dbe-d99a70ed06dc" (UID: "c5b9b010-7f89-4783-8dbe-d99a70ed06dc"). InnerVolumeSpecName "kube-api-access-z4tkp". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 04:05:27 crc kubenswrapper[4827]: I0131 04:05:27.973417 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-d86jl"] Jan 31 04:05:28 crc kubenswrapper[4827]: I0131 04:05:28.019990 4827 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-z4tkp\" (UniqueName: \"kubernetes.io/projected/c5b9b010-7f89-4783-8dbe-d99a70ed06dc-kube-api-access-z4tkp\") on node \"crc\" DevicePath \"\"" Jan 31 04:05:28 crc kubenswrapper[4827]: I0131 04:05:28.030199 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c5b9b010-7f89-4783-8dbe-d99a70ed06dc-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "c5b9b010-7f89-4783-8dbe-d99a70ed06dc" (UID: "c5b9b010-7f89-4783-8dbe-d99a70ed06dc"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 04:05:28 crc kubenswrapper[4827]: I0131 04:05:28.033918 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c5b9b010-7f89-4783-8dbe-d99a70ed06dc-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "c5b9b010-7f89-4783-8dbe-d99a70ed06dc" (UID: "c5b9b010-7f89-4783-8dbe-d99a70ed06dc"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 04:05:28 crc kubenswrapper[4827]: I0131 04:05:28.040922 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c5b9b010-7f89-4783-8dbe-d99a70ed06dc-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "c5b9b010-7f89-4783-8dbe-d99a70ed06dc" (UID: "c5b9b010-7f89-4783-8dbe-d99a70ed06dc"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 04:05:28 crc kubenswrapper[4827]: I0131 04:05:28.065052 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c5b9b010-7f89-4783-8dbe-d99a70ed06dc-config" (OuterVolumeSpecName: "config") pod "c5b9b010-7f89-4783-8dbe-d99a70ed06dc" (UID: "c5b9b010-7f89-4783-8dbe-d99a70ed06dc"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 04:05:28 crc kubenswrapper[4827]: I0131 04:05:28.121523 4827 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c5b9b010-7f89-4783-8dbe-d99a70ed06dc-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 31 04:05:28 crc kubenswrapper[4827]: I0131 04:05:28.121564 4827 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c5b9b010-7f89-4783-8dbe-d99a70ed06dc-config\") on node \"crc\" DevicePath \"\"" Jan 31 04:05:28 crc kubenswrapper[4827]: I0131 04:05:28.121574 4827 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c5b9b010-7f89-4783-8dbe-d99a70ed06dc-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 31 04:05:28 crc kubenswrapper[4827]: I0131 04:05:28.121584 4827 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c5b9b010-7f89-4783-8dbe-d99a70ed06dc-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 31 04:05:28 crc kubenswrapper[4827]: I0131 04:05:28.722188 
4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-jhfnf" event={"ID":"db80e5df-1238-46c1-b573-55fb8797e379","Type":"ContainerStarted","Data":"f9a3171acb33e15afe734c39f19356645ccfae017fabd868e7280dd5a9ee2e18"} Jan 31 04:05:28 crc kubenswrapper[4827]: I0131 04:05:28.726802 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d07cd025-0fc1-4536-978d-f7f7b0df2dbc","Type":"ContainerStarted","Data":"b988a2b47d57cf27a43165062e4e9bbf55bf7fd3c3576b318ac3691a779daa83"} Jan 31 04:05:28 crc kubenswrapper[4827]: I0131 04:05:28.730311 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-5xrmx" event={"ID":"921fb66d-0be5-4614-9974-86da117973d1","Type":"ContainerStarted","Data":"5d9b8f59594735a3a990baf70267109a72f74d08587b0f19647c41e1f6f64489"} Jan 31 04:05:28 crc kubenswrapper[4827]: I0131 04:05:28.732850 4827 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-54f9b7b8d9-jmjqr" Jan 31 04:05:28 crc kubenswrapper[4827]: I0131 04:05:28.733306 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-d86jl" event={"ID":"2653aa61-3396-42b4-8cfe-ae977242f427","Type":"ContainerStarted","Data":"2aabf052a649cc78a316179aff19fc9534e6b75802a1af93ea9177f7be484997"} Jan 31 04:05:28 crc kubenswrapper[4827]: I0131 04:05:28.733328 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-d86jl" event={"ID":"2653aa61-3396-42b4-8cfe-ae977242f427","Type":"ContainerStarted","Data":"e4f7830d196f1f8b026e73e61134874124ff2bc050bb4116454c2dbcfbf9b57b"} Jan 31 04:05:28 crc kubenswrapper[4827]: I0131 04:05:28.750922 4827 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-db-sync-jhfnf" podStartSLOduration=2.7751598570000002 podStartE2EDuration="22.750900614s" podCreationTimestamp="2026-01-31 04:05:06 +0000 UTC" 
firstStartedPulling="2026-01-31 04:05:07.568289078 +0000 UTC m=+1100.255369537" lastFinishedPulling="2026-01-31 04:05:27.544029845 +0000 UTC m=+1120.231110294" observedRunningTime="2026-01-31 04:05:28.739516921 +0000 UTC m=+1121.426597370" watchObservedRunningTime="2026-01-31 04:05:28.750900614 +0000 UTC m=+1121.437981083" Jan 31 04:05:28 crc kubenswrapper[4827]: I0131 04:05:28.756861 4827 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-bootstrap-d86jl" podStartSLOduration=15.756845382 podStartE2EDuration="15.756845382s" podCreationTimestamp="2026-01-31 04:05:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 04:05:28.752819959 +0000 UTC m=+1121.439900418" watchObservedRunningTime="2026-01-31 04:05:28.756845382 +0000 UTC m=+1121.443925831" Jan 31 04:05:28 crc kubenswrapper[4827]: I0131 04:05:28.772564 4827 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-db-sync-5xrmx" podStartSLOduration=2.371548999 podStartE2EDuration="22.772545839s" podCreationTimestamp="2026-01-31 04:05:06 +0000 UTC" firstStartedPulling="2026-01-31 04:05:07.13268843 +0000 UTC m=+1099.819768879" lastFinishedPulling="2026-01-31 04:05:27.53368526 +0000 UTC m=+1120.220765719" observedRunningTime="2026-01-31 04:05:28.770234584 +0000 UTC m=+1121.457315033" watchObservedRunningTime="2026-01-31 04:05:28.772545839 +0000 UTC m=+1121.459626288" Jan 31 04:05:28 crc kubenswrapper[4827]: I0131 04:05:28.789976 4827 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-54f9b7b8d9-jmjqr"] Jan 31 04:05:28 crc kubenswrapper[4827]: I0131 04:05:28.801406 4827 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-54f9b7b8d9-jmjqr"] Jan 31 04:05:29 crc kubenswrapper[4827]: I0131 04:05:29.742650 4827 generic.go:334] "Generic (PLEG): container finished" 
podID="3034593d-68df-4223-a3d5-f1cd46f49398" containerID="d390ad5a10b372f9cc082556b2f8259ab04be35df00eb5e029867f257a58ded2" exitCode=0 Jan 31 04:05:29 crc kubenswrapper[4827]: I0131 04:05:29.742777 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-kk8q2" event={"ID":"3034593d-68df-4223-a3d5-f1cd46f49398","Type":"ContainerDied","Data":"d390ad5a10b372f9cc082556b2f8259ab04be35df00eb5e029867f257a58ded2"} Jan 31 04:05:29 crc kubenswrapper[4827]: I0131 04:05:29.755272 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d07cd025-0fc1-4536-978d-f7f7b0df2dbc","Type":"ContainerStarted","Data":"047e9b3a0c8b60a604bcaa97a3f1ceb16772069ff3935941f313f5429b3499bb"} Jan 31 04:05:30 crc kubenswrapper[4827]: I0131 04:05:30.120574 4827 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c5b9b010-7f89-4783-8dbe-d99a70ed06dc" path="/var/lib/kubelet/pods/c5b9b010-7f89-4783-8dbe-d99a70ed06dc/volumes" Jan 31 04:05:30 crc kubenswrapper[4827]: I0131 04:05:30.764897 4827 generic.go:334] "Generic (PLEG): container finished" podID="db80e5df-1238-46c1-b573-55fb8797e379" containerID="f9a3171acb33e15afe734c39f19356645ccfae017fabd868e7280dd5a9ee2e18" exitCode=0 Jan 31 04:05:30 crc kubenswrapper[4827]: I0131 04:05:30.764971 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-jhfnf" event={"ID":"db80e5df-1238-46c1-b573-55fb8797e379","Type":"ContainerDied","Data":"f9a3171acb33e15afe734c39f19356645ccfae017fabd868e7280dd5a9ee2e18"} Jan 31 04:05:30 crc kubenswrapper[4827]: I0131 04:05:30.770655 4827 generic.go:334] "Generic (PLEG): container finished" podID="921fb66d-0be5-4614-9974-86da117973d1" containerID="5d9b8f59594735a3a990baf70267109a72f74d08587b0f19647c41e1f6f64489" exitCode=0 Jan 31 04:05:30 crc kubenswrapper[4827]: I0131 04:05:30.770822 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-5xrmx" 
event={"ID":"921fb66d-0be5-4614-9974-86da117973d1","Type":"ContainerDied","Data":"5d9b8f59594735a3a990baf70267109a72f74d08587b0f19647c41e1f6f64489"} Jan 31 04:05:31 crc kubenswrapper[4827]: I0131 04:05:31.068354 4827 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-kk8q2" Jan 31 04:05:31 crc kubenswrapper[4827]: I0131 04:05:31.164440 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3034593d-68df-4223-a3d5-f1cd46f49398-combined-ca-bundle\") pod \"3034593d-68df-4223-a3d5-f1cd46f49398\" (UID: \"3034593d-68df-4223-a3d5-f1cd46f49398\") " Jan 31 04:05:31 crc kubenswrapper[4827]: I0131 04:05:31.164512 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/3034593d-68df-4223-a3d5-f1cd46f49398-config\") pod \"3034593d-68df-4223-a3d5-f1cd46f49398\" (UID: \"3034593d-68df-4223-a3d5-f1cd46f49398\") " Jan 31 04:05:31 crc kubenswrapper[4827]: I0131 04:05:31.164709 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nd277\" (UniqueName: \"kubernetes.io/projected/3034593d-68df-4223-a3d5-f1cd46f49398-kube-api-access-nd277\") pod \"3034593d-68df-4223-a3d5-f1cd46f49398\" (UID: \"3034593d-68df-4223-a3d5-f1cd46f49398\") " Jan 31 04:05:31 crc kubenswrapper[4827]: I0131 04:05:31.172587 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3034593d-68df-4223-a3d5-f1cd46f49398-kube-api-access-nd277" (OuterVolumeSpecName: "kube-api-access-nd277") pod "3034593d-68df-4223-a3d5-f1cd46f49398" (UID: "3034593d-68df-4223-a3d5-f1cd46f49398"). InnerVolumeSpecName "kube-api-access-nd277". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 04:05:31 crc kubenswrapper[4827]: I0131 04:05:31.194233 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3034593d-68df-4223-a3d5-f1cd46f49398-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "3034593d-68df-4223-a3d5-f1cd46f49398" (UID: "3034593d-68df-4223-a3d5-f1cd46f49398"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 04:05:31 crc kubenswrapper[4827]: I0131 04:05:31.194280 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3034593d-68df-4223-a3d5-f1cd46f49398-config" (OuterVolumeSpecName: "config") pod "3034593d-68df-4223-a3d5-f1cd46f49398" (UID: "3034593d-68df-4223-a3d5-f1cd46f49398"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 04:05:31 crc kubenswrapper[4827]: I0131 04:05:31.268938 4827 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/3034593d-68df-4223-a3d5-f1cd46f49398-config\") on node \"crc\" DevicePath \"\"" Jan 31 04:05:31 crc kubenswrapper[4827]: I0131 04:05:31.268971 4827 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nd277\" (UniqueName: \"kubernetes.io/projected/3034593d-68df-4223-a3d5-f1cd46f49398-kube-api-access-nd277\") on node \"crc\" DevicePath \"\"" Jan 31 04:05:31 crc kubenswrapper[4827]: I0131 04:05:31.268983 4827 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3034593d-68df-4223-a3d5-f1cd46f49398-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 31 04:05:31 crc kubenswrapper[4827]: I0131 04:05:31.796004 4827 generic.go:334] "Generic (PLEG): container finished" podID="2653aa61-3396-42b4-8cfe-ae977242f427" containerID="2aabf052a649cc78a316179aff19fc9534e6b75802a1af93ea9177f7be484997" exitCode=0 Jan 31 04:05:31 crc 
kubenswrapper[4827]: I0131 04:05:31.796064 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-d86jl" event={"ID":"2653aa61-3396-42b4-8cfe-ae977242f427","Type":"ContainerDied","Data":"2aabf052a649cc78a316179aff19fc9534e6b75802a1af93ea9177f7be484997"} Jan 31 04:05:31 crc kubenswrapper[4827]: I0131 04:05:31.797736 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-kk8q2" event={"ID":"3034593d-68df-4223-a3d5-f1cd46f49398","Type":"ContainerDied","Data":"2a920f52bc5b6ce2bb92b528e5207cb59c4a8479b02e579746db4213c4920260"} Jan 31 04:05:31 crc kubenswrapper[4827]: I0131 04:05:31.797767 4827 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2a920f52bc5b6ce2bb92b528e5207cb59c4a8479b02e579746db4213c4920260" Jan 31 04:05:31 crc kubenswrapper[4827]: I0131 04:05:31.797947 4827 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-kk8q2" Jan 31 04:05:32 crc kubenswrapper[4827]: I0131 04:05:32.001058 4827 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-7b946d459c-mvkgg"] Jan 31 04:05:32 crc kubenswrapper[4827]: E0131 04:05:32.001653 4827 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c5b9b010-7f89-4783-8dbe-d99a70ed06dc" containerName="init" Jan 31 04:05:32 crc kubenswrapper[4827]: I0131 04:05:32.001664 4827 state_mem.go:107] "Deleted CPUSet assignment" podUID="c5b9b010-7f89-4783-8dbe-d99a70ed06dc" containerName="init" Jan 31 04:05:32 crc kubenswrapper[4827]: E0131 04:05:32.001676 4827 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c5b9b010-7f89-4783-8dbe-d99a70ed06dc" containerName="dnsmasq-dns" Jan 31 04:05:32 crc kubenswrapper[4827]: I0131 04:05:32.001682 4827 state_mem.go:107] "Deleted CPUSet assignment" podUID="c5b9b010-7f89-4783-8dbe-d99a70ed06dc" containerName="dnsmasq-dns" Jan 31 04:05:32 crc kubenswrapper[4827]: E0131 04:05:32.001696 4827 
cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3034593d-68df-4223-a3d5-f1cd46f49398" containerName="neutron-db-sync" Jan 31 04:05:32 crc kubenswrapper[4827]: I0131 04:05:32.001703 4827 state_mem.go:107] "Deleted CPUSet assignment" podUID="3034593d-68df-4223-a3d5-f1cd46f49398" containerName="neutron-db-sync" Jan 31 04:05:32 crc kubenswrapper[4827]: I0131 04:05:32.001849 4827 memory_manager.go:354] "RemoveStaleState removing state" podUID="c5b9b010-7f89-4783-8dbe-d99a70ed06dc" containerName="dnsmasq-dns" Jan 31 04:05:32 crc kubenswrapper[4827]: I0131 04:05:32.001865 4827 memory_manager.go:354] "RemoveStaleState removing state" podUID="3034593d-68df-4223-a3d5-f1cd46f49398" containerName="neutron-db-sync" Jan 31 04:05:32 crc kubenswrapper[4827]: I0131 04:05:32.002651 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7b946d459c-mvkgg" Jan 31 04:05:32 crc kubenswrapper[4827]: I0131 04:05:32.017284 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7b946d459c-mvkgg"] Jan 31 04:05:32 crc kubenswrapper[4827]: I0131 04:05:32.089041 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/45243f1e-2824-411d-a544-b5d2e8e099f9-config\") pod \"dnsmasq-dns-7b946d459c-mvkgg\" (UID: \"45243f1e-2824-411d-a544-b5d2e8e099f9\") " pod="openstack/dnsmasq-dns-7b946d459c-mvkgg" Jan 31 04:05:32 crc kubenswrapper[4827]: I0131 04:05:32.089090 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/45243f1e-2824-411d-a544-b5d2e8e099f9-ovsdbserver-nb\") pod \"dnsmasq-dns-7b946d459c-mvkgg\" (UID: \"45243f1e-2824-411d-a544-b5d2e8e099f9\") " pod="openstack/dnsmasq-dns-7b946d459c-mvkgg" Jan 31 04:05:32 crc kubenswrapper[4827]: I0131 04:05:32.089124 4827 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/45243f1e-2824-411d-a544-b5d2e8e099f9-ovsdbserver-sb\") pod \"dnsmasq-dns-7b946d459c-mvkgg\" (UID: \"45243f1e-2824-411d-a544-b5d2e8e099f9\") " pod="openstack/dnsmasq-dns-7b946d459c-mvkgg" Jan 31 04:05:32 crc kubenswrapper[4827]: I0131 04:05:32.089149 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zn7lv\" (UniqueName: \"kubernetes.io/projected/45243f1e-2824-411d-a544-b5d2e8e099f9-kube-api-access-zn7lv\") pod \"dnsmasq-dns-7b946d459c-mvkgg\" (UID: \"45243f1e-2824-411d-a544-b5d2e8e099f9\") " pod="openstack/dnsmasq-dns-7b946d459c-mvkgg" Jan 31 04:05:32 crc kubenswrapper[4827]: I0131 04:05:32.089170 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/45243f1e-2824-411d-a544-b5d2e8e099f9-dns-svc\") pod \"dnsmasq-dns-7b946d459c-mvkgg\" (UID: \"45243f1e-2824-411d-a544-b5d2e8e099f9\") " pod="openstack/dnsmasq-dns-7b946d459c-mvkgg" Jan 31 04:05:32 crc kubenswrapper[4827]: I0131 04:05:32.144900 4827 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-798dd656d-7874f"] Jan 31 04:05:32 crc kubenswrapper[4827]: I0131 04:05:32.147134 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-798dd656d-7874f"] Jan 31 04:05:32 crc kubenswrapper[4827]: I0131 04:05:32.147225 4827 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-798dd656d-7874f" Jan 31 04:05:32 crc kubenswrapper[4827]: I0131 04:05:32.152803 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-config" Jan 31 04:05:32 crc kubenswrapper[4827]: I0131 04:05:32.153273 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-neutron-dockercfg-fkcp5" Jan 31 04:05:32 crc kubenswrapper[4827]: I0131 04:05:32.153340 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-ovndbs" Jan 31 04:05:32 crc kubenswrapper[4827]: I0131 04:05:32.153449 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-httpd-config" Jan 31 04:05:32 crc kubenswrapper[4827]: I0131 04:05:32.190678 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/45243f1e-2824-411d-a544-b5d2e8e099f9-config\") pod \"dnsmasq-dns-7b946d459c-mvkgg\" (UID: \"45243f1e-2824-411d-a544-b5d2e8e099f9\") " pod="openstack/dnsmasq-dns-7b946d459c-mvkgg" Jan 31 04:05:32 crc kubenswrapper[4827]: I0131 04:05:32.190723 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/45243f1e-2824-411d-a544-b5d2e8e099f9-ovsdbserver-nb\") pod \"dnsmasq-dns-7b946d459c-mvkgg\" (UID: \"45243f1e-2824-411d-a544-b5d2e8e099f9\") " pod="openstack/dnsmasq-dns-7b946d459c-mvkgg" Jan 31 04:05:32 crc kubenswrapper[4827]: I0131 04:05:32.190758 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/45243f1e-2824-411d-a544-b5d2e8e099f9-ovsdbserver-sb\") pod \"dnsmasq-dns-7b946d459c-mvkgg\" (UID: \"45243f1e-2824-411d-a544-b5d2e8e099f9\") " pod="openstack/dnsmasq-dns-7b946d459c-mvkgg" Jan 31 04:05:32 crc kubenswrapper[4827]: I0131 04:05:32.190792 4827 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-zn7lv\" (UniqueName: \"kubernetes.io/projected/45243f1e-2824-411d-a544-b5d2e8e099f9-kube-api-access-zn7lv\") pod \"dnsmasq-dns-7b946d459c-mvkgg\" (UID: \"45243f1e-2824-411d-a544-b5d2e8e099f9\") " pod="openstack/dnsmasq-dns-7b946d459c-mvkgg" Jan 31 04:05:32 crc kubenswrapper[4827]: I0131 04:05:32.190813 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/45243f1e-2824-411d-a544-b5d2e8e099f9-dns-svc\") pod \"dnsmasq-dns-7b946d459c-mvkgg\" (UID: \"45243f1e-2824-411d-a544-b5d2e8e099f9\") " pod="openstack/dnsmasq-dns-7b946d459c-mvkgg" Jan 31 04:05:32 crc kubenswrapper[4827]: I0131 04:05:32.192390 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/45243f1e-2824-411d-a544-b5d2e8e099f9-ovsdbserver-sb\") pod \"dnsmasq-dns-7b946d459c-mvkgg\" (UID: \"45243f1e-2824-411d-a544-b5d2e8e099f9\") " pod="openstack/dnsmasq-dns-7b946d459c-mvkgg" Jan 31 04:05:32 crc kubenswrapper[4827]: I0131 04:05:32.192708 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/45243f1e-2824-411d-a544-b5d2e8e099f9-dns-svc\") pod \"dnsmasq-dns-7b946d459c-mvkgg\" (UID: \"45243f1e-2824-411d-a544-b5d2e8e099f9\") " pod="openstack/dnsmasq-dns-7b946d459c-mvkgg" Jan 31 04:05:32 crc kubenswrapper[4827]: I0131 04:05:32.192932 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/45243f1e-2824-411d-a544-b5d2e8e099f9-ovsdbserver-nb\") pod \"dnsmasq-dns-7b946d459c-mvkgg\" (UID: \"45243f1e-2824-411d-a544-b5d2e8e099f9\") " pod="openstack/dnsmasq-dns-7b946d459c-mvkgg" Jan 31 04:05:32 crc kubenswrapper[4827]: I0131 04:05:32.193981 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/45243f1e-2824-411d-a544-b5d2e8e099f9-config\") pod \"dnsmasq-dns-7b946d459c-mvkgg\" (UID: \"45243f1e-2824-411d-a544-b5d2e8e099f9\") " pod="openstack/dnsmasq-dns-7b946d459c-mvkgg" Jan 31 04:05:32 crc kubenswrapper[4827]: I0131 04:05:32.230667 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zn7lv\" (UniqueName: \"kubernetes.io/projected/45243f1e-2824-411d-a544-b5d2e8e099f9-kube-api-access-zn7lv\") pod \"dnsmasq-dns-7b946d459c-mvkgg\" (UID: \"45243f1e-2824-411d-a544-b5d2e8e099f9\") " pod="openstack/dnsmasq-dns-7b946d459c-mvkgg" Jan 31 04:05:32 crc kubenswrapper[4827]: I0131 04:05:32.292566 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/ba9b68b9-d7bc-4c9c-90d3-a4b88b1d30de-ovndb-tls-certs\") pod \"neutron-798dd656d-7874f\" (UID: \"ba9b68b9-d7bc-4c9c-90d3-a4b88b1d30de\") " pod="openstack/neutron-798dd656d-7874f" Jan 31 04:05:32 crc kubenswrapper[4827]: I0131 04:05:32.292624 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/ba9b68b9-d7bc-4c9c-90d3-a4b88b1d30de-httpd-config\") pod \"neutron-798dd656d-7874f\" (UID: \"ba9b68b9-d7bc-4c9c-90d3-a4b88b1d30de\") " pod="openstack/neutron-798dd656d-7874f" Jan 31 04:05:32 crc kubenswrapper[4827]: I0131 04:05:32.292676 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/ba9b68b9-d7bc-4c9c-90d3-a4b88b1d30de-config\") pod \"neutron-798dd656d-7874f\" (UID: \"ba9b68b9-d7bc-4c9c-90d3-a4b88b1d30de\") " pod="openstack/neutron-798dd656d-7874f" Jan 31 04:05:32 crc kubenswrapper[4827]: I0131 04:05:32.292721 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zd6vb\" (UniqueName: 
\"kubernetes.io/projected/ba9b68b9-d7bc-4c9c-90d3-a4b88b1d30de-kube-api-access-zd6vb\") pod \"neutron-798dd656d-7874f\" (UID: \"ba9b68b9-d7bc-4c9c-90d3-a4b88b1d30de\") " pod="openstack/neutron-798dd656d-7874f" Jan 31 04:05:32 crc kubenswrapper[4827]: I0131 04:05:32.292761 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ba9b68b9-d7bc-4c9c-90d3-a4b88b1d30de-combined-ca-bundle\") pod \"neutron-798dd656d-7874f\" (UID: \"ba9b68b9-d7bc-4c9c-90d3-a4b88b1d30de\") " pod="openstack/neutron-798dd656d-7874f" Jan 31 04:05:32 crc kubenswrapper[4827]: I0131 04:05:32.334625 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7b946d459c-mvkgg" Jan 31 04:05:32 crc kubenswrapper[4827]: I0131 04:05:32.394697 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/ba9b68b9-d7bc-4c9c-90d3-a4b88b1d30de-ovndb-tls-certs\") pod \"neutron-798dd656d-7874f\" (UID: \"ba9b68b9-d7bc-4c9c-90d3-a4b88b1d30de\") " pod="openstack/neutron-798dd656d-7874f" Jan 31 04:05:32 crc kubenswrapper[4827]: I0131 04:05:32.394763 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/ba9b68b9-d7bc-4c9c-90d3-a4b88b1d30de-httpd-config\") pod \"neutron-798dd656d-7874f\" (UID: \"ba9b68b9-d7bc-4c9c-90d3-a4b88b1d30de\") " pod="openstack/neutron-798dd656d-7874f" Jan 31 04:05:32 crc kubenswrapper[4827]: I0131 04:05:32.394808 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/ba9b68b9-d7bc-4c9c-90d3-a4b88b1d30de-config\") pod \"neutron-798dd656d-7874f\" (UID: \"ba9b68b9-d7bc-4c9c-90d3-a4b88b1d30de\") " pod="openstack/neutron-798dd656d-7874f" Jan 31 04:05:32 crc kubenswrapper[4827]: I0131 04:05:32.394843 4827 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zd6vb\" (UniqueName: \"kubernetes.io/projected/ba9b68b9-d7bc-4c9c-90d3-a4b88b1d30de-kube-api-access-zd6vb\") pod \"neutron-798dd656d-7874f\" (UID: \"ba9b68b9-d7bc-4c9c-90d3-a4b88b1d30de\") " pod="openstack/neutron-798dd656d-7874f" Jan 31 04:05:32 crc kubenswrapper[4827]: I0131 04:05:32.394892 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ba9b68b9-d7bc-4c9c-90d3-a4b88b1d30de-combined-ca-bundle\") pod \"neutron-798dd656d-7874f\" (UID: \"ba9b68b9-d7bc-4c9c-90d3-a4b88b1d30de\") " pod="openstack/neutron-798dd656d-7874f" Jan 31 04:05:32 crc kubenswrapper[4827]: I0131 04:05:32.398901 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ba9b68b9-d7bc-4c9c-90d3-a4b88b1d30de-combined-ca-bundle\") pod \"neutron-798dd656d-7874f\" (UID: \"ba9b68b9-d7bc-4c9c-90d3-a4b88b1d30de\") " pod="openstack/neutron-798dd656d-7874f" Jan 31 04:05:32 crc kubenswrapper[4827]: I0131 04:05:32.399935 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/ba9b68b9-d7bc-4c9c-90d3-a4b88b1d30de-config\") pod \"neutron-798dd656d-7874f\" (UID: \"ba9b68b9-d7bc-4c9c-90d3-a4b88b1d30de\") " pod="openstack/neutron-798dd656d-7874f" Jan 31 04:05:32 crc kubenswrapper[4827]: I0131 04:05:32.402562 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/ba9b68b9-d7bc-4c9c-90d3-a4b88b1d30de-ovndb-tls-certs\") pod \"neutron-798dd656d-7874f\" (UID: \"ba9b68b9-d7bc-4c9c-90d3-a4b88b1d30de\") " pod="openstack/neutron-798dd656d-7874f" Jan 31 04:05:32 crc kubenswrapper[4827]: I0131 04:05:32.411699 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zd6vb\" (UniqueName: 
\"kubernetes.io/projected/ba9b68b9-d7bc-4c9c-90d3-a4b88b1d30de-kube-api-access-zd6vb\") pod \"neutron-798dd656d-7874f\" (UID: \"ba9b68b9-d7bc-4c9c-90d3-a4b88b1d30de\") " pod="openstack/neutron-798dd656d-7874f" Jan 31 04:05:32 crc kubenswrapper[4827]: I0131 04:05:32.416684 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/ba9b68b9-d7bc-4c9c-90d3-a4b88b1d30de-httpd-config\") pod \"neutron-798dd656d-7874f\" (UID: \"ba9b68b9-d7bc-4c9c-90d3-a4b88b1d30de\") " pod="openstack/neutron-798dd656d-7874f" Jan 31 04:05:32 crc kubenswrapper[4827]: I0131 04:05:32.461954 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-798dd656d-7874f" Jan 31 04:05:33 crc kubenswrapper[4827]: I0131 04:05:33.344026 4827 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-jhfnf" Jan 31 04:05:33 crc kubenswrapper[4827]: I0131 04:05:33.364427 4827 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-sync-5xrmx" Jan 31 04:05:33 crc kubenswrapper[4827]: I0131 04:05:33.432565 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/db80e5df-1238-46c1-b573-55fb8797e379-logs\") pod \"db80e5df-1238-46c1-b573-55fb8797e379\" (UID: \"db80e5df-1238-46c1-b573-55fb8797e379\") " Jan 31 04:05:33 crc kubenswrapper[4827]: I0131 04:05:33.432642 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/db80e5df-1238-46c1-b573-55fb8797e379-config-data\") pod \"db80e5df-1238-46c1-b573-55fb8797e379\" (UID: \"db80e5df-1238-46c1-b573-55fb8797e379\") " Jan 31 04:05:33 crc kubenswrapper[4827]: I0131 04:05:33.432675 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/921fb66d-0be5-4614-9974-86da117973d1-db-sync-config-data\") pod \"921fb66d-0be5-4614-9974-86da117973d1\" (UID: \"921fb66d-0be5-4614-9974-86da117973d1\") " Jan 31 04:05:33 crc kubenswrapper[4827]: I0131 04:05:33.432761 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-j65f9\" (UniqueName: \"kubernetes.io/projected/db80e5df-1238-46c1-b573-55fb8797e379-kube-api-access-j65f9\") pod \"db80e5df-1238-46c1-b573-55fb8797e379\" (UID: \"db80e5df-1238-46c1-b573-55fb8797e379\") " Jan 31 04:05:33 crc kubenswrapper[4827]: I0131 04:05:33.432778 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/921fb66d-0be5-4614-9974-86da117973d1-combined-ca-bundle\") pod \"921fb66d-0be5-4614-9974-86da117973d1\" (UID: \"921fb66d-0be5-4614-9974-86da117973d1\") " Jan 31 04:05:33 crc kubenswrapper[4827]: I0131 04:05:33.432813 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" 
(UniqueName: \"kubernetes.io/secret/db80e5df-1238-46c1-b573-55fb8797e379-combined-ca-bundle\") pod \"db80e5df-1238-46c1-b573-55fb8797e379\" (UID: \"db80e5df-1238-46c1-b573-55fb8797e379\") " Jan 31 04:05:33 crc kubenswrapper[4827]: I0131 04:05:33.432832 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kdbrt\" (UniqueName: \"kubernetes.io/projected/921fb66d-0be5-4614-9974-86da117973d1-kube-api-access-kdbrt\") pod \"921fb66d-0be5-4614-9974-86da117973d1\" (UID: \"921fb66d-0be5-4614-9974-86da117973d1\") " Jan 31 04:05:33 crc kubenswrapper[4827]: I0131 04:05:33.432916 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/db80e5df-1238-46c1-b573-55fb8797e379-scripts\") pod \"db80e5df-1238-46c1-b573-55fb8797e379\" (UID: \"db80e5df-1238-46c1-b573-55fb8797e379\") " Jan 31 04:05:33 crc kubenswrapper[4827]: I0131 04:05:33.436326 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/db80e5df-1238-46c1-b573-55fb8797e379-logs" (OuterVolumeSpecName: "logs") pod "db80e5df-1238-46c1-b573-55fb8797e379" (UID: "db80e5df-1238-46c1-b573-55fb8797e379"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 04:05:33 crc kubenswrapper[4827]: I0131 04:05:33.444889 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/db80e5df-1238-46c1-b573-55fb8797e379-scripts" (OuterVolumeSpecName: "scripts") pod "db80e5df-1238-46c1-b573-55fb8797e379" (UID: "db80e5df-1238-46c1-b573-55fb8797e379"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 04:05:33 crc kubenswrapper[4827]: I0131 04:05:33.444919 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/921fb66d-0be5-4614-9974-86da117973d1-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "921fb66d-0be5-4614-9974-86da117973d1" (UID: "921fb66d-0be5-4614-9974-86da117973d1"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 04:05:33 crc kubenswrapper[4827]: I0131 04:05:33.445058 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/921fb66d-0be5-4614-9974-86da117973d1-kube-api-access-kdbrt" (OuterVolumeSpecName: "kube-api-access-kdbrt") pod "921fb66d-0be5-4614-9974-86da117973d1" (UID: "921fb66d-0be5-4614-9974-86da117973d1"). InnerVolumeSpecName "kube-api-access-kdbrt". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 04:05:33 crc kubenswrapper[4827]: I0131 04:05:33.448057 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/db80e5df-1238-46c1-b573-55fb8797e379-kube-api-access-j65f9" (OuterVolumeSpecName: "kube-api-access-j65f9") pod "db80e5df-1238-46c1-b573-55fb8797e379" (UID: "db80e5df-1238-46c1-b573-55fb8797e379"). InnerVolumeSpecName "kube-api-access-j65f9". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 04:05:33 crc kubenswrapper[4827]: I0131 04:05:33.469326 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/921fb66d-0be5-4614-9974-86da117973d1-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "921fb66d-0be5-4614-9974-86da117973d1" (UID: "921fb66d-0be5-4614-9974-86da117973d1"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 04:05:33 crc kubenswrapper[4827]: I0131 04:05:33.470930 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/db80e5df-1238-46c1-b573-55fb8797e379-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "db80e5df-1238-46c1-b573-55fb8797e379" (UID: "db80e5df-1238-46c1-b573-55fb8797e379"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 04:05:33 crc kubenswrapper[4827]: I0131 04:05:33.474055 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/db80e5df-1238-46c1-b573-55fb8797e379-config-data" (OuterVolumeSpecName: "config-data") pod "db80e5df-1238-46c1-b573-55fb8797e379" (UID: "db80e5df-1238-46c1-b573-55fb8797e379"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 04:05:33 crc kubenswrapper[4827]: I0131 04:05:33.534465 4827 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/db80e5df-1238-46c1-b573-55fb8797e379-config-data\") on node \"crc\" DevicePath \"\"" Jan 31 04:05:33 crc kubenswrapper[4827]: I0131 04:05:33.534493 4827 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/921fb66d-0be5-4614-9974-86da117973d1-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Jan 31 04:05:33 crc kubenswrapper[4827]: I0131 04:05:33.534510 4827 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-j65f9\" (UniqueName: \"kubernetes.io/projected/db80e5df-1238-46c1-b573-55fb8797e379-kube-api-access-j65f9\") on node \"crc\" DevicePath \"\"" Jan 31 04:05:33 crc kubenswrapper[4827]: I0131 04:05:33.534522 4827 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/921fb66d-0be5-4614-9974-86da117973d1-combined-ca-bundle\") on node \"crc\" 
DevicePath \"\"" Jan 31 04:05:33 crc kubenswrapper[4827]: I0131 04:05:33.534531 4827 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/db80e5df-1238-46c1-b573-55fb8797e379-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 31 04:05:33 crc kubenswrapper[4827]: I0131 04:05:33.534539 4827 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kdbrt\" (UniqueName: \"kubernetes.io/projected/921fb66d-0be5-4614-9974-86da117973d1-kube-api-access-kdbrt\") on node \"crc\" DevicePath \"\"" Jan 31 04:05:33 crc kubenswrapper[4827]: I0131 04:05:33.534546 4827 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/db80e5df-1238-46c1-b573-55fb8797e379-scripts\") on node \"crc\" DevicePath \"\"" Jan 31 04:05:33 crc kubenswrapper[4827]: I0131 04:05:33.534556 4827 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/db80e5df-1238-46c1-b573-55fb8797e379-logs\") on node \"crc\" DevicePath \"\"" Jan 31 04:05:33 crc kubenswrapper[4827]: I0131 04:05:33.816834 4827 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-sync-5xrmx" Jan 31 04:05:33 crc kubenswrapper[4827]: I0131 04:05:33.816851 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-5xrmx" event={"ID":"921fb66d-0be5-4614-9974-86da117973d1","Type":"ContainerDied","Data":"14944a3664be1f9fc189d089569f318c910588eef0d95acf4f8ba492635c3fb9"} Jan 31 04:05:33 crc kubenswrapper[4827]: I0131 04:05:33.816899 4827 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="14944a3664be1f9fc189d089569f318c910588eef0d95acf4f8ba492635c3fb9" Jan 31 04:05:33 crc kubenswrapper[4827]: I0131 04:05:33.818000 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-jhfnf" event={"ID":"db80e5df-1238-46c1-b573-55fb8797e379","Type":"ContainerDied","Data":"c2294539dce915468e15cbf810f022e8c94b9e35d8a26d276743f15d3900b9df"} Jan 31 04:05:33 crc kubenswrapper[4827]: I0131 04:05:33.818016 4827 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c2294539dce915468e15cbf810f022e8c94b9e35d8a26d276743f15d3900b9df" Jan 31 04:05:33 crc kubenswrapper[4827]: I0131 04:05:33.818038 4827 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-sync-jhfnf" Jan 31 04:05:33 crc kubenswrapper[4827]: I0131 04:05:33.978213 4827 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-68ddbc68f-gxl56"] Jan 31 04:05:33 crc kubenswrapper[4827]: E0131 04:05:33.978595 4827 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="921fb66d-0be5-4614-9974-86da117973d1" containerName="barbican-db-sync" Jan 31 04:05:33 crc kubenswrapper[4827]: I0131 04:05:33.978611 4827 state_mem.go:107] "Deleted CPUSet assignment" podUID="921fb66d-0be5-4614-9974-86da117973d1" containerName="barbican-db-sync" Jan 31 04:05:33 crc kubenswrapper[4827]: E0131 04:05:33.978639 4827 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="db80e5df-1238-46c1-b573-55fb8797e379" containerName="placement-db-sync" Jan 31 04:05:33 crc kubenswrapper[4827]: I0131 04:05:33.978647 4827 state_mem.go:107] "Deleted CPUSet assignment" podUID="db80e5df-1238-46c1-b573-55fb8797e379" containerName="placement-db-sync" Jan 31 04:05:33 crc kubenswrapper[4827]: I0131 04:05:33.978826 4827 memory_manager.go:354] "RemoveStaleState removing state" podUID="db80e5df-1238-46c1-b573-55fb8797e379" containerName="placement-db-sync" Jan 31 04:05:33 crc kubenswrapper[4827]: I0131 04:05:33.978849 4827 memory_manager.go:354] "RemoveStaleState removing state" podUID="921fb66d-0be5-4614-9974-86da117973d1" containerName="barbican-db-sync" Jan 31 04:05:33 crc kubenswrapper[4827]: I0131 04:05:33.979797 4827 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-68ddbc68f-gxl56" Jan 31 04:05:33 crc kubenswrapper[4827]: I0131 04:05:33.988516 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-internal-svc" Jan 31 04:05:33 crc kubenswrapper[4827]: I0131 04:05:33.988693 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-public-svc" Jan 31 04:05:33 crc kubenswrapper[4827]: I0131 04:05:33.994371 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-68ddbc68f-gxl56"] Jan 31 04:05:34 crc kubenswrapper[4827]: I0131 04:05:34.044889 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/89663bcc-cc29-44ed-a65e-ab5e4efa7813-ovndb-tls-certs\") pod \"neutron-68ddbc68f-gxl56\" (UID: \"89663bcc-cc29-44ed-a65e-ab5e4efa7813\") " pod="openstack/neutron-68ddbc68f-gxl56" Jan 31 04:05:34 crc kubenswrapper[4827]: I0131 04:05:34.044959 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/89663bcc-cc29-44ed-a65e-ab5e4efa7813-config\") pod \"neutron-68ddbc68f-gxl56\" (UID: \"89663bcc-cc29-44ed-a65e-ab5e4efa7813\") " pod="openstack/neutron-68ddbc68f-gxl56" Jan 31 04:05:34 crc kubenswrapper[4827]: I0131 04:05:34.044978 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/89663bcc-cc29-44ed-a65e-ab5e4efa7813-httpd-config\") pod \"neutron-68ddbc68f-gxl56\" (UID: \"89663bcc-cc29-44ed-a65e-ab5e4efa7813\") " pod="openstack/neutron-68ddbc68f-gxl56" Jan 31 04:05:34 crc kubenswrapper[4827]: I0131 04:05:34.045096 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jbcn5\" (UniqueName: 
\"kubernetes.io/projected/89663bcc-cc29-44ed-a65e-ab5e4efa7813-kube-api-access-jbcn5\") pod \"neutron-68ddbc68f-gxl56\" (UID: \"89663bcc-cc29-44ed-a65e-ab5e4efa7813\") " pod="openstack/neutron-68ddbc68f-gxl56" Jan 31 04:05:34 crc kubenswrapper[4827]: I0131 04:05:34.045150 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/89663bcc-cc29-44ed-a65e-ab5e4efa7813-public-tls-certs\") pod \"neutron-68ddbc68f-gxl56\" (UID: \"89663bcc-cc29-44ed-a65e-ab5e4efa7813\") " pod="openstack/neutron-68ddbc68f-gxl56" Jan 31 04:05:34 crc kubenswrapper[4827]: I0131 04:05:34.045301 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/89663bcc-cc29-44ed-a65e-ab5e4efa7813-combined-ca-bundle\") pod \"neutron-68ddbc68f-gxl56\" (UID: \"89663bcc-cc29-44ed-a65e-ab5e4efa7813\") " pod="openstack/neutron-68ddbc68f-gxl56" Jan 31 04:05:34 crc kubenswrapper[4827]: I0131 04:05:34.045340 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/89663bcc-cc29-44ed-a65e-ab5e4efa7813-internal-tls-certs\") pod \"neutron-68ddbc68f-gxl56\" (UID: \"89663bcc-cc29-44ed-a65e-ab5e4efa7813\") " pod="openstack/neutron-68ddbc68f-gxl56" Jan 31 04:05:34 crc kubenswrapper[4827]: I0131 04:05:34.146470 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/89663bcc-cc29-44ed-a65e-ab5e4efa7813-config\") pod \"neutron-68ddbc68f-gxl56\" (UID: \"89663bcc-cc29-44ed-a65e-ab5e4efa7813\") " pod="openstack/neutron-68ddbc68f-gxl56" Jan 31 04:05:34 crc kubenswrapper[4827]: I0131 04:05:34.146515 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: 
\"kubernetes.io/secret/89663bcc-cc29-44ed-a65e-ab5e4efa7813-httpd-config\") pod \"neutron-68ddbc68f-gxl56\" (UID: \"89663bcc-cc29-44ed-a65e-ab5e4efa7813\") " pod="openstack/neutron-68ddbc68f-gxl56" Jan 31 04:05:34 crc kubenswrapper[4827]: I0131 04:05:34.146556 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jbcn5\" (UniqueName: \"kubernetes.io/projected/89663bcc-cc29-44ed-a65e-ab5e4efa7813-kube-api-access-jbcn5\") pod \"neutron-68ddbc68f-gxl56\" (UID: \"89663bcc-cc29-44ed-a65e-ab5e4efa7813\") " pod="openstack/neutron-68ddbc68f-gxl56" Jan 31 04:05:34 crc kubenswrapper[4827]: I0131 04:05:34.146580 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/89663bcc-cc29-44ed-a65e-ab5e4efa7813-public-tls-certs\") pod \"neutron-68ddbc68f-gxl56\" (UID: \"89663bcc-cc29-44ed-a65e-ab5e4efa7813\") " pod="openstack/neutron-68ddbc68f-gxl56" Jan 31 04:05:34 crc kubenswrapper[4827]: I0131 04:05:34.146611 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/89663bcc-cc29-44ed-a65e-ab5e4efa7813-combined-ca-bundle\") pod \"neutron-68ddbc68f-gxl56\" (UID: \"89663bcc-cc29-44ed-a65e-ab5e4efa7813\") " pod="openstack/neutron-68ddbc68f-gxl56" Jan 31 04:05:34 crc kubenswrapper[4827]: I0131 04:05:34.146628 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/89663bcc-cc29-44ed-a65e-ab5e4efa7813-internal-tls-certs\") pod \"neutron-68ddbc68f-gxl56\" (UID: \"89663bcc-cc29-44ed-a65e-ab5e4efa7813\") " pod="openstack/neutron-68ddbc68f-gxl56" Jan 31 04:05:34 crc kubenswrapper[4827]: I0131 04:05:34.146703 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/89663bcc-cc29-44ed-a65e-ab5e4efa7813-ovndb-tls-certs\") pod 
\"neutron-68ddbc68f-gxl56\" (UID: \"89663bcc-cc29-44ed-a65e-ab5e4efa7813\") " pod="openstack/neutron-68ddbc68f-gxl56" Jan 31 04:05:34 crc kubenswrapper[4827]: I0131 04:05:34.150747 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/89663bcc-cc29-44ed-a65e-ab5e4efa7813-ovndb-tls-certs\") pod \"neutron-68ddbc68f-gxl56\" (UID: \"89663bcc-cc29-44ed-a65e-ab5e4efa7813\") " pod="openstack/neutron-68ddbc68f-gxl56" Jan 31 04:05:34 crc kubenswrapper[4827]: I0131 04:05:34.151543 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/89663bcc-cc29-44ed-a65e-ab5e4efa7813-internal-tls-certs\") pod \"neutron-68ddbc68f-gxl56\" (UID: \"89663bcc-cc29-44ed-a65e-ab5e4efa7813\") " pod="openstack/neutron-68ddbc68f-gxl56" Jan 31 04:05:34 crc kubenswrapper[4827]: I0131 04:05:34.154436 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/89663bcc-cc29-44ed-a65e-ab5e4efa7813-httpd-config\") pod \"neutron-68ddbc68f-gxl56\" (UID: \"89663bcc-cc29-44ed-a65e-ab5e4efa7813\") " pod="openstack/neutron-68ddbc68f-gxl56" Jan 31 04:05:34 crc kubenswrapper[4827]: I0131 04:05:34.154588 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/89663bcc-cc29-44ed-a65e-ab5e4efa7813-config\") pod \"neutron-68ddbc68f-gxl56\" (UID: \"89663bcc-cc29-44ed-a65e-ab5e4efa7813\") " pod="openstack/neutron-68ddbc68f-gxl56" Jan 31 04:05:34 crc kubenswrapper[4827]: I0131 04:05:34.166856 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/89663bcc-cc29-44ed-a65e-ab5e4efa7813-combined-ca-bundle\") pod \"neutron-68ddbc68f-gxl56\" (UID: \"89663bcc-cc29-44ed-a65e-ab5e4efa7813\") " pod="openstack/neutron-68ddbc68f-gxl56" Jan 31 04:05:34 crc kubenswrapper[4827]: 
I0131 04:05:34.167383 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jbcn5\" (UniqueName: \"kubernetes.io/projected/89663bcc-cc29-44ed-a65e-ab5e4efa7813-kube-api-access-jbcn5\") pod \"neutron-68ddbc68f-gxl56\" (UID: \"89663bcc-cc29-44ed-a65e-ab5e4efa7813\") " pod="openstack/neutron-68ddbc68f-gxl56" Jan 31 04:05:34 crc kubenswrapper[4827]: I0131 04:05:34.175421 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/89663bcc-cc29-44ed-a65e-ab5e4efa7813-public-tls-certs\") pod \"neutron-68ddbc68f-gxl56\" (UID: \"89663bcc-cc29-44ed-a65e-ab5e4efa7813\") " pod="openstack/neutron-68ddbc68f-gxl56" Jan 31 04:05:34 crc kubenswrapper[4827]: I0131 04:05:34.307544 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-68ddbc68f-gxl56" Jan 31 04:05:34 crc kubenswrapper[4827]: I0131 04:05:34.474353 4827 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-69d4f5d848-hjbmc"] Jan 31 04:05:34 crc kubenswrapper[4827]: I0131 04:05:34.477914 4827 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-69d4f5d848-hjbmc" Jan 31 04:05:34 crc kubenswrapper[4827]: I0131 04:05:34.479736 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-placement-dockercfg-xgb7h" Jan 31 04:05:34 crc kubenswrapper[4827]: I0131 04:05:34.480755 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-placement-internal-svc" Jan 31 04:05:34 crc kubenswrapper[4827]: I0131 04:05:34.480961 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-scripts" Jan 31 04:05:34 crc kubenswrapper[4827]: I0131 04:05:34.481251 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-placement-public-svc" Jan 31 04:05:34 crc kubenswrapper[4827]: I0131 04:05:34.481707 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-config-data" Jan 31 04:05:34 crc kubenswrapper[4827]: I0131 04:05:34.498043 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-69d4f5d848-hjbmc"] Jan 31 04:05:34 crc kubenswrapper[4827]: I0131 04:05:34.557494 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/c0898c62-7d0f-447a-84b0-7627b4b78457-public-tls-certs\") pod \"placement-69d4f5d848-hjbmc\" (UID: \"c0898c62-7d0f-447a-84b0-7627b4b78457\") " pod="openstack/placement-69d4f5d848-hjbmc" Jan 31 04:05:34 crc kubenswrapper[4827]: I0131 04:05:34.557556 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c0898c62-7d0f-447a-84b0-7627b4b78457-combined-ca-bundle\") pod \"placement-69d4f5d848-hjbmc\" (UID: \"c0898c62-7d0f-447a-84b0-7627b4b78457\") " pod="openstack/placement-69d4f5d848-hjbmc" Jan 31 04:05:34 crc kubenswrapper[4827]: I0131 04:05:34.557612 4827 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c0898c62-7d0f-447a-84b0-7627b4b78457-logs\") pod \"placement-69d4f5d848-hjbmc\" (UID: \"c0898c62-7d0f-447a-84b0-7627b4b78457\") " pod="openstack/placement-69d4f5d848-hjbmc" Jan 31 04:05:34 crc kubenswrapper[4827]: I0131 04:05:34.557632 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c0898c62-7d0f-447a-84b0-7627b4b78457-config-data\") pod \"placement-69d4f5d848-hjbmc\" (UID: \"c0898c62-7d0f-447a-84b0-7627b4b78457\") " pod="openstack/placement-69d4f5d848-hjbmc" Jan 31 04:05:34 crc kubenswrapper[4827]: I0131 04:05:34.557658 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/c0898c62-7d0f-447a-84b0-7627b4b78457-internal-tls-certs\") pod \"placement-69d4f5d848-hjbmc\" (UID: \"c0898c62-7d0f-447a-84b0-7627b4b78457\") " pod="openstack/placement-69d4f5d848-hjbmc" Jan 31 04:05:34 crc kubenswrapper[4827]: I0131 04:05:34.557709 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c0898c62-7d0f-447a-84b0-7627b4b78457-scripts\") pod \"placement-69d4f5d848-hjbmc\" (UID: \"c0898c62-7d0f-447a-84b0-7627b4b78457\") " pod="openstack/placement-69d4f5d848-hjbmc" Jan 31 04:05:34 crc kubenswrapper[4827]: I0131 04:05:34.557733 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mwtxv\" (UniqueName: \"kubernetes.io/projected/c0898c62-7d0f-447a-84b0-7627b4b78457-kube-api-access-mwtxv\") pod \"placement-69d4f5d848-hjbmc\" (UID: \"c0898c62-7d0f-447a-84b0-7627b4b78457\") " pod="openstack/placement-69d4f5d848-hjbmc" Jan 31 04:05:34 crc kubenswrapper[4827]: I0131 04:05:34.659253 4827 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c0898c62-7d0f-447a-84b0-7627b4b78457-logs\") pod \"placement-69d4f5d848-hjbmc\" (UID: \"c0898c62-7d0f-447a-84b0-7627b4b78457\") " pod="openstack/placement-69d4f5d848-hjbmc" Jan 31 04:05:34 crc kubenswrapper[4827]: I0131 04:05:34.659298 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c0898c62-7d0f-447a-84b0-7627b4b78457-config-data\") pod \"placement-69d4f5d848-hjbmc\" (UID: \"c0898c62-7d0f-447a-84b0-7627b4b78457\") " pod="openstack/placement-69d4f5d848-hjbmc" Jan 31 04:05:34 crc kubenswrapper[4827]: I0131 04:05:34.659332 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/c0898c62-7d0f-447a-84b0-7627b4b78457-internal-tls-certs\") pod \"placement-69d4f5d848-hjbmc\" (UID: \"c0898c62-7d0f-447a-84b0-7627b4b78457\") " pod="openstack/placement-69d4f5d848-hjbmc" Jan 31 04:05:34 crc kubenswrapper[4827]: I0131 04:05:34.659367 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c0898c62-7d0f-447a-84b0-7627b4b78457-scripts\") pod \"placement-69d4f5d848-hjbmc\" (UID: \"c0898c62-7d0f-447a-84b0-7627b4b78457\") " pod="openstack/placement-69d4f5d848-hjbmc" Jan 31 04:05:34 crc kubenswrapper[4827]: I0131 04:05:34.659391 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mwtxv\" (UniqueName: \"kubernetes.io/projected/c0898c62-7d0f-447a-84b0-7627b4b78457-kube-api-access-mwtxv\") pod \"placement-69d4f5d848-hjbmc\" (UID: \"c0898c62-7d0f-447a-84b0-7627b4b78457\") " pod="openstack/placement-69d4f5d848-hjbmc" Jan 31 04:05:34 crc kubenswrapper[4827]: I0131 04:05:34.659424 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/c0898c62-7d0f-447a-84b0-7627b4b78457-public-tls-certs\") pod \"placement-69d4f5d848-hjbmc\" (UID: \"c0898c62-7d0f-447a-84b0-7627b4b78457\") " pod="openstack/placement-69d4f5d848-hjbmc" Jan 31 04:05:34 crc kubenswrapper[4827]: I0131 04:05:34.659454 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c0898c62-7d0f-447a-84b0-7627b4b78457-combined-ca-bundle\") pod \"placement-69d4f5d848-hjbmc\" (UID: \"c0898c62-7d0f-447a-84b0-7627b4b78457\") " pod="openstack/placement-69d4f5d848-hjbmc" Jan 31 04:05:34 crc kubenswrapper[4827]: I0131 04:05:34.661831 4827 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-worker-77bbb7849c-cvr8p"] Jan 31 04:05:34 crc kubenswrapper[4827]: I0131 04:05:34.664654 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c0898c62-7d0f-447a-84b0-7627b4b78457-logs\") pod \"placement-69d4f5d848-hjbmc\" (UID: \"c0898c62-7d0f-447a-84b0-7627b4b78457\") " pod="openstack/placement-69d4f5d848-hjbmc" Jan 31 04:05:34 crc kubenswrapper[4827]: I0131 04:05:34.672725 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c0898c62-7d0f-447a-84b0-7627b4b78457-combined-ca-bundle\") pod \"placement-69d4f5d848-hjbmc\" (UID: \"c0898c62-7d0f-447a-84b0-7627b4b78457\") " pod="openstack/placement-69d4f5d848-hjbmc" Jan 31 04:05:34 crc kubenswrapper[4827]: I0131 04:05:34.673830 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c0898c62-7d0f-447a-84b0-7627b4b78457-scripts\") pod \"placement-69d4f5d848-hjbmc\" (UID: \"c0898c62-7d0f-447a-84b0-7627b4b78457\") " pod="openstack/placement-69d4f5d848-hjbmc" Jan 31 04:05:34 crc kubenswrapper[4827]: I0131 04:05:34.677964 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"config-data\" (UniqueName: \"kubernetes.io/secret/c0898c62-7d0f-447a-84b0-7627b4b78457-config-data\") pod \"placement-69d4f5d848-hjbmc\" (UID: \"c0898c62-7d0f-447a-84b0-7627b4b78457\") " pod="openstack/placement-69d4f5d848-hjbmc" Jan 31 04:05:34 crc kubenswrapper[4827]: I0131 04:05:34.682641 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/c0898c62-7d0f-447a-84b0-7627b4b78457-public-tls-certs\") pod \"placement-69d4f5d848-hjbmc\" (UID: \"c0898c62-7d0f-447a-84b0-7627b4b78457\") " pod="openstack/placement-69d4f5d848-hjbmc" Jan 31 04:05:34 crc kubenswrapper[4827]: I0131 04:05:34.684742 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/c0898c62-7d0f-447a-84b0-7627b4b78457-internal-tls-certs\") pod \"placement-69d4f5d848-hjbmc\" (UID: \"c0898c62-7d0f-447a-84b0-7627b4b78457\") " pod="openstack/placement-69d4f5d848-hjbmc" Jan 31 04:05:34 crc kubenswrapper[4827]: I0131 04:05:34.690108 4827 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-keystone-listener-8f89dd846-fztml"] Jan 31 04:05:34 crc kubenswrapper[4827]: I0131 04:05:34.690429 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-worker-77bbb7849c-cvr8p" Jan 31 04:05:34 crc kubenswrapper[4827]: I0131 04:05:34.691339 4827 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-keystone-listener-8f89dd846-fztml" Jan 31 04:05:34 crc kubenswrapper[4827]: I0131 04:05:34.692036 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mwtxv\" (UniqueName: \"kubernetes.io/projected/c0898c62-7d0f-447a-84b0-7627b4b78457-kube-api-access-mwtxv\") pod \"placement-69d4f5d848-hjbmc\" (UID: \"c0898c62-7d0f-447a-84b0-7627b4b78457\") " pod="openstack/placement-69d4f5d848-hjbmc" Jan 31 04:05:34 crc kubenswrapper[4827]: I0131 04:05:34.694217 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-config-data" Jan 31 04:05:34 crc kubenswrapper[4827]: I0131 04:05:34.694484 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-worker-config-data" Jan 31 04:05:34 crc kubenswrapper[4827]: I0131 04:05:34.694602 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-keystone-listener-config-data" Jan 31 04:05:34 crc kubenswrapper[4827]: I0131 04:05:34.694734 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-barbican-dockercfg-dbncs" Jan 31 04:05:34 crc kubenswrapper[4827]: I0131 04:05:34.694809 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-keystone-listener-8f89dd846-fztml"] Jan 31 04:05:34 crc kubenswrapper[4827]: I0131 04:05:34.710987 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-worker-77bbb7849c-cvr8p"] Jan 31 04:05:34 crc kubenswrapper[4827]: I0131 04:05:34.760987 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e1d72620-a941-44d0-b09a-401e1827f32f-combined-ca-bundle\") pod \"barbican-worker-77bbb7849c-cvr8p\" (UID: \"e1d72620-a941-44d0-b09a-401e1827f32f\") " pod="openstack/barbican-worker-77bbb7849c-cvr8p" Jan 31 04:05:34 crc kubenswrapper[4827]: I0131 
04:05:34.761021 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/e1d72620-a941-44d0-b09a-401e1827f32f-config-data-custom\") pod \"barbican-worker-77bbb7849c-cvr8p\" (UID: \"e1d72620-a941-44d0-b09a-401e1827f32f\") " pod="openstack/barbican-worker-77bbb7849c-cvr8p" Jan 31 04:05:34 crc kubenswrapper[4827]: I0131 04:05:34.761050 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/235b452b-45b7-41a8-82c0-d3ebe8b4c19f-combined-ca-bundle\") pod \"barbican-keystone-listener-8f89dd846-fztml\" (UID: \"235b452b-45b7-41a8-82c0-d3ebe8b4c19f\") " pod="openstack/barbican-keystone-listener-8f89dd846-fztml" Jan 31 04:05:34 crc kubenswrapper[4827]: I0131 04:05:34.761089 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8k7r6\" (UniqueName: \"kubernetes.io/projected/e1d72620-a941-44d0-b09a-401e1827f32f-kube-api-access-8k7r6\") pod \"barbican-worker-77bbb7849c-cvr8p\" (UID: \"e1d72620-a941-44d0-b09a-401e1827f32f\") " pod="openstack/barbican-worker-77bbb7849c-cvr8p" Jan 31 04:05:34 crc kubenswrapper[4827]: I0131 04:05:34.761110 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e1d72620-a941-44d0-b09a-401e1827f32f-logs\") pod \"barbican-worker-77bbb7849c-cvr8p\" (UID: \"e1d72620-a941-44d0-b09a-401e1827f32f\") " pod="openstack/barbican-worker-77bbb7849c-cvr8p" Jan 31 04:05:34 crc kubenswrapper[4827]: I0131 04:05:34.761127 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5zklz\" (UniqueName: \"kubernetes.io/projected/235b452b-45b7-41a8-82c0-d3ebe8b4c19f-kube-api-access-5zklz\") pod \"barbican-keystone-listener-8f89dd846-fztml\" (UID: 
\"235b452b-45b7-41a8-82c0-d3ebe8b4c19f\") " pod="openstack/barbican-keystone-listener-8f89dd846-fztml" Jan 31 04:05:34 crc kubenswrapper[4827]: I0131 04:05:34.761164 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e1d72620-a941-44d0-b09a-401e1827f32f-config-data\") pod \"barbican-worker-77bbb7849c-cvr8p\" (UID: \"e1d72620-a941-44d0-b09a-401e1827f32f\") " pod="openstack/barbican-worker-77bbb7849c-cvr8p" Jan 31 04:05:34 crc kubenswrapper[4827]: I0131 04:05:34.761180 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/235b452b-45b7-41a8-82c0-d3ebe8b4c19f-logs\") pod \"barbican-keystone-listener-8f89dd846-fztml\" (UID: \"235b452b-45b7-41a8-82c0-d3ebe8b4c19f\") " pod="openstack/barbican-keystone-listener-8f89dd846-fztml" Jan 31 04:05:34 crc kubenswrapper[4827]: I0131 04:05:34.761211 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/235b452b-45b7-41a8-82c0-d3ebe8b4c19f-config-data-custom\") pod \"barbican-keystone-listener-8f89dd846-fztml\" (UID: \"235b452b-45b7-41a8-82c0-d3ebe8b4c19f\") " pod="openstack/barbican-keystone-listener-8f89dd846-fztml" Jan 31 04:05:34 crc kubenswrapper[4827]: I0131 04:05:34.761240 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/235b452b-45b7-41a8-82c0-d3ebe8b4c19f-config-data\") pod \"barbican-keystone-listener-8f89dd846-fztml\" (UID: \"235b452b-45b7-41a8-82c0-d3ebe8b4c19f\") " pod="openstack/barbican-keystone-listener-8f89dd846-fztml" Jan 31 04:05:34 crc kubenswrapper[4827]: I0131 04:05:34.777638 4827 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7b946d459c-mvkgg"] Jan 31 04:05:34 crc kubenswrapper[4827]: 
I0131 04:05:34.803230 4827 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-6bb684768f-qc6dq"] Jan 31 04:05:34 crc kubenswrapper[4827]: I0131 04:05:34.804798 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6bb684768f-qc6dq" Jan 31 04:05:34 crc kubenswrapper[4827]: I0131 04:05:34.806045 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-69d4f5d848-hjbmc" Jan 31 04:05:34 crc kubenswrapper[4827]: I0131 04:05:34.820104 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6bb684768f-qc6dq"] Jan 31 04:05:34 crc kubenswrapper[4827]: I0131 04:05:34.844375 4827 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-api-6c6f779bd6-5qmvs"] Jan 31 04:05:34 crc kubenswrapper[4827]: I0131 04:05:34.857273 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-6c6f779bd6-5qmvs"] Jan 31 04:05:34 crc kubenswrapper[4827]: I0131 04:05:34.857387 4827 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-6c6f779bd6-5qmvs" Jan 31 04:05:34 crc kubenswrapper[4827]: I0131 04:05:34.859647 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-api-config-data" Jan 31 04:05:34 crc kubenswrapper[4827]: I0131 04:05:34.864093 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e1d72620-a941-44d0-b09a-401e1827f32f-config-data\") pod \"barbican-worker-77bbb7849c-cvr8p\" (UID: \"e1d72620-a941-44d0-b09a-401e1827f32f\") " pod="openstack/barbican-worker-77bbb7849c-cvr8p" Jan 31 04:05:34 crc kubenswrapper[4827]: I0131 04:05:34.864133 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/235b452b-45b7-41a8-82c0-d3ebe8b4c19f-logs\") pod \"barbican-keystone-listener-8f89dd846-fztml\" (UID: \"235b452b-45b7-41a8-82c0-d3ebe8b4c19f\") " pod="openstack/barbican-keystone-listener-8f89dd846-fztml" Jan 31 04:05:34 crc kubenswrapper[4827]: I0131 04:05:34.864173 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/235b452b-45b7-41a8-82c0-d3ebe8b4c19f-config-data-custom\") pod \"barbican-keystone-listener-8f89dd846-fztml\" (UID: \"235b452b-45b7-41a8-82c0-d3ebe8b4c19f\") " pod="openstack/barbican-keystone-listener-8f89dd846-fztml" Jan 31 04:05:34 crc kubenswrapper[4827]: I0131 04:05:34.864208 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/235b452b-45b7-41a8-82c0-d3ebe8b4c19f-config-data\") pod \"barbican-keystone-listener-8f89dd846-fztml\" (UID: \"235b452b-45b7-41a8-82c0-d3ebe8b4c19f\") " pod="openstack/barbican-keystone-listener-8f89dd846-fztml" Jan 31 04:05:34 crc kubenswrapper[4827]: I0131 04:05:34.864250 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e1d72620-a941-44d0-b09a-401e1827f32f-combined-ca-bundle\") pod \"barbican-worker-77bbb7849c-cvr8p\" (UID: \"e1d72620-a941-44d0-b09a-401e1827f32f\") " pod="openstack/barbican-worker-77bbb7849c-cvr8p" Jan 31 04:05:34 crc kubenswrapper[4827]: I0131 04:05:34.864268 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/e1d72620-a941-44d0-b09a-401e1827f32f-config-data-custom\") pod \"barbican-worker-77bbb7849c-cvr8p\" (UID: \"e1d72620-a941-44d0-b09a-401e1827f32f\") " pod="openstack/barbican-worker-77bbb7849c-cvr8p" Jan 31 04:05:34 crc kubenswrapper[4827]: I0131 04:05:34.864292 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/235b452b-45b7-41a8-82c0-d3ebe8b4c19f-combined-ca-bundle\") pod \"barbican-keystone-listener-8f89dd846-fztml\" (UID: \"235b452b-45b7-41a8-82c0-d3ebe8b4c19f\") " pod="openstack/barbican-keystone-listener-8f89dd846-fztml" Jan 31 04:05:34 crc kubenswrapper[4827]: I0131 04:05:34.864329 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8k7r6\" (UniqueName: \"kubernetes.io/projected/e1d72620-a941-44d0-b09a-401e1827f32f-kube-api-access-8k7r6\") pod \"barbican-worker-77bbb7849c-cvr8p\" (UID: \"e1d72620-a941-44d0-b09a-401e1827f32f\") " pod="openstack/barbican-worker-77bbb7849c-cvr8p" Jan 31 04:05:34 crc kubenswrapper[4827]: I0131 04:05:34.864350 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e1d72620-a941-44d0-b09a-401e1827f32f-logs\") pod \"barbican-worker-77bbb7849c-cvr8p\" (UID: \"e1d72620-a941-44d0-b09a-401e1827f32f\") " pod="openstack/barbican-worker-77bbb7849c-cvr8p" Jan 31 04:05:34 crc kubenswrapper[4827]: I0131 04:05:34.864366 4827 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"kube-api-access-5zklz\" (UniqueName: \"kubernetes.io/projected/235b452b-45b7-41a8-82c0-d3ebe8b4c19f-kube-api-access-5zklz\") pod \"barbican-keystone-listener-8f89dd846-fztml\" (UID: \"235b452b-45b7-41a8-82c0-d3ebe8b4c19f\") " pod="openstack/barbican-keystone-listener-8f89dd846-fztml" Jan 31 04:05:34 crc kubenswrapper[4827]: I0131 04:05:34.865135 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e1d72620-a941-44d0-b09a-401e1827f32f-logs\") pod \"barbican-worker-77bbb7849c-cvr8p\" (UID: \"e1d72620-a941-44d0-b09a-401e1827f32f\") " pod="openstack/barbican-worker-77bbb7849c-cvr8p" Jan 31 04:05:34 crc kubenswrapper[4827]: I0131 04:05:34.866167 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/235b452b-45b7-41a8-82c0-d3ebe8b4c19f-logs\") pod \"barbican-keystone-listener-8f89dd846-fztml\" (UID: \"235b452b-45b7-41a8-82c0-d3ebe8b4c19f\") " pod="openstack/barbican-keystone-listener-8f89dd846-fztml" Jan 31 04:05:34 crc kubenswrapper[4827]: I0131 04:05:34.870708 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/235b452b-45b7-41a8-82c0-d3ebe8b4c19f-config-data\") pod \"barbican-keystone-listener-8f89dd846-fztml\" (UID: \"235b452b-45b7-41a8-82c0-d3ebe8b4c19f\") " pod="openstack/barbican-keystone-listener-8f89dd846-fztml" Jan 31 04:05:34 crc kubenswrapper[4827]: I0131 04:05:34.871592 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/235b452b-45b7-41a8-82c0-d3ebe8b4c19f-config-data-custom\") pod \"barbican-keystone-listener-8f89dd846-fztml\" (UID: \"235b452b-45b7-41a8-82c0-d3ebe8b4c19f\") " pod="openstack/barbican-keystone-listener-8f89dd846-fztml" Jan 31 04:05:34 crc kubenswrapper[4827]: I0131 04:05:34.880204 4827 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-8k7r6\" (UniqueName: \"kubernetes.io/projected/e1d72620-a941-44d0-b09a-401e1827f32f-kube-api-access-8k7r6\") pod \"barbican-worker-77bbb7849c-cvr8p\" (UID: \"e1d72620-a941-44d0-b09a-401e1827f32f\") " pod="openstack/barbican-worker-77bbb7849c-cvr8p" Jan 31 04:05:34 crc kubenswrapper[4827]: I0131 04:05:34.882394 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e1d72620-a941-44d0-b09a-401e1827f32f-config-data\") pod \"barbican-worker-77bbb7849c-cvr8p\" (UID: \"e1d72620-a941-44d0-b09a-401e1827f32f\") " pod="openstack/barbican-worker-77bbb7849c-cvr8p" Jan 31 04:05:34 crc kubenswrapper[4827]: I0131 04:05:34.890635 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/235b452b-45b7-41a8-82c0-d3ebe8b4c19f-combined-ca-bundle\") pod \"barbican-keystone-listener-8f89dd846-fztml\" (UID: \"235b452b-45b7-41a8-82c0-d3ebe8b4c19f\") " pod="openstack/barbican-keystone-listener-8f89dd846-fztml" Jan 31 04:05:34 crc kubenswrapper[4827]: I0131 04:05:34.892305 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5zklz\" (UniqueName: \"kubernetes.io/projected/235b452b-45b7-41a8-82c0-d3ebe8b4c19f-kube-api-access-5zklz\") pod \"barbican-keystone-listener-8f89dd846-fztml\" (UID: \"235b452b-45b7-41a8-82c0-d3ebe8b4c19f\") " pod="openstack/barbican-keystone-listener-8f89dd846-fztml" Jan 31 04:05:34 crc kubenswrapper[4827]: I0131 04:05:34.892448 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e1d72620-a941-44d0-b09a-401e1827f32f-combined-ca-bundle\") pod \"barbican-worker-77bbb7849c-cvr8p\" (UID: \"e1d72620-a941-44d0-b09a-401e1827f32f\") " pod="openstack/barbican-worker-77bbb7849c-cvr8p" Jan 31 04:05:34 crc kubenswrapper[4827]: I0131 04:05:34.893036 4827 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/e1d72620-a941-44d0-b09a-401e1827f32f-config-data-custom\") pod \"barbican-worker-77bbb7849c-cvr8p\" (UID: \"e1d72620-a941-44d0-b09a-401e1827f32f\") " pod="openstack/barbican-worker-77bbb7849c-cvr8p" Jan 31 04:05:34 crc kubenswrapper[4827]: I0131 04:05:34.966534 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/20064771-e5a8-4227-b10d-39905587be45-config-data\") pod \"barbican-api-6c6f779bd6-5qmvs\" (UID: \"20064771-e5a8-4227-b10d-39905587be45\") " pod="openstack/barbican-api-6c6f779bd6-5qmvs" Jan 31 04:05:34 crc kubenswrapper[4827]: I0131 04:05:34.966627 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8273d85f-dd76-4bc7-a50e-54da87bb1927-config\") pod \"dnsmasq-dns-6bb684768f-qc6dq\" (UID: \"8273d85f-dd76-4bc7-a50e-54da87bb1927\") " pod="openstack/dnsmasq-dns-6bb684768f-qc6dq" Jan 31 04:05:34 crc kubenswrapper[4827]: I0131 04:05:34.966826 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/20064771-e5a8-4227-b10d-39905587be45-combined-ca-bundle\") pod \"barbican-api-6c6f779bd6-5qmvs\" (UID: \"20064771-e5a8-4227-b10d-39905587be45\") " pod="openstack/barbican-api-6c6f779bd6-5qmvs" Jan 31 04:05:34 crc kubenswrapper[4827]: I0131 04:05:34.966958 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/8273d85f-dd76-4bc7-a50e-54da87bb1927-ovsdbserver-sb\") pod \"dnsmasq-dns-6bb684768f-qc6dq\" (UID: \"8273d85f-dd76-4bc7-a50e-54da87bb1927\") " pod="openstack/dnsmasq-dns-6bb684768f-qc6dq" Jan 31 04:05:34 crc kubenswrapper[4827]: I0131 04:05:34.966999 4827 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/8273d85f-dd76-4bc7-a50e-54da87bb1927-ovsdbserver-nb\") pod \"dnsmasq-dns-6bb684768f-qc6dq\" (UID: \"8273d85f-dd76-4bc7-a50e-54da87bb1927\") " pod="openstack/dnsmasq-dns-6bb684768f-qc6dq" Jan 31 04:05:34 crc kubenswrapper[4827]: I0131 04:05:34.967030 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bnhcl\" (UniqueName: \"kubernetes.io/projected/8273d85f-dd76-4bc7-a50e-54da87bb1927-kube-api-access-bnhcl\") pod \"dnsmasq-dns-6bb684768f-qc6dq\" (UID: \"8273d85f-dd76-4bc7-a50e-54da87bb1927\") " pod="openstack/dnsmasq-dns-6bb684768f-qc6dq" Jan 31 04:05:34 crc kubenswrapper[4827]: I0131 04:05:34.967077 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/20064771-e5a8-4227-b10d-39905587be45-config-data-custom\") pod \"barbican-api-6c6f779bd6-5qmvs\" (UID: \"20064771-e5a8-4227-b10d-39905587be45\") " pod="openstack/barbican-api-6c6f779bd6-5qmvs" Jan 31 04:05:34 crc kubenswrapper[4827]: I0131 04:05:34.967173 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/20064771-e5a8-4227-b10d-39905587be45-logs\") pod \"barbican-api-6c6f779bd6-5qmvs\" (UID: \"20064771-e5a8-4227-b10d-39905587be45\") " pod="openstack/barbican-api-6c6f779bd6-5qmvs" Jan 31 04:05:34 crc kubenswrapper[4827]: I0131 04:05:34.967268 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pb4hc\" (UniqueName: \"kubernetes.io/projected/20064771-e5a8-4227-b10d-39905587be45-kube-api-access-pb4hc\") pod \"barbican-api-6c6f779bd6-5qmvs\" (UID: \"20064771-e5a8-4227-b10d-39905587be45\") " pod="openstack/barbican-api-6c6f779bd6-5qmvs" Jan 31 
04:05:34 crc kubenswrapper[4827]: I0131 04:05:34.967344 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8273d85f-dd76-4bc7-a50e-54da87bb1927-dns-svc\") pod \"dnsmasq-dns-6bb684768f-qc6dq\" (UID: \"8273d85f-dd76-4bc7-a50e-54da87bb1927\") " pod="openstack/dnsmasq-dns-6bb684768f-qc6dq" Jan 31 04:05:35 crc kubenswrapper[4827]: I0131 04:05:35.069144 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pb4hc\" (UniqueName: \"kubernetes.io/projected/20064771-e5a8-4227-b10d-39905587be45-kube-api-access-pb4hc\") pod \"barbican-api-6c6f779bd6-5qmvs\" (UID: \"20064771-e5a8-4227-b10d-39905587be45\") " pod="openstack/barbican-api-6c6f779bd6-5qmvs" Jan 31 04:05:35 crc kubenswrapper[4827]: I0131 04:05:35.069222 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8273d85f-dd76-4bc7-a50e-54da87bb1927-dns-svc\") pod \"dnsmasq-dns-6bb684768f-qc6dq\" (UID: \"8273d85f-dd76-4bc7-a50e-54da87bb1927\") " pod="openstack/dnsmasq-dns-6bb684768f-qc6dq" Jan 31 04:05:35 crc kubenswrapper[4827]: I0131 04:05:35.069455 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/20064771-e5a8-4227-b10d-39905587be45-config-data\") pod \"barbican-api-6c6f779bd6-5qmvs\" (UID: \"20064771-e5a8-4227-b10d-39905587be45\") " pod="openstack/barbican-api-6c6f779bd6-5qmvs" Jan 31 04:05:35 crc kubenswrapper[4827]: I0131 04:05:35.069563 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8273d85f-dd76-4bc7-a50e-54da87bb1927-config\") pod \"dnsmasq-dns-6bb684768f-qc6dq\" (UID: \"8273d85f-dd76-4bc7-a50e-54da87bb1927\") " pod="openstack/dnsmasq-dns-6bb684768f-qc6dq" Jan 31 04:05:35 crc kubenswrapper[4827]: I0131 04:05:35.069630 4827 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/20064771-e5a8-4227-b10d-39905587be45-combined-ca-bundle\") pod \"barbican-api-6c6f779bd6-5qmvs\" (UID: \"20064771-e5a8-4227-b10d-39905587be45\") " pod="openstack/barbican-api-6c6f779bd6-5qmvs" Jan 31 04:05:35 crc kubenswrapper[4827]: I0131 04:05:35.069667 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/8273d85f-dd76-4bc7-a50e-54da87bb1927-ovsdbserver-sb\") pod \"dnsmasq-dns-6bb684768f-qc6dq\" (UID: \"8273d85f-dd76-4bc7-a50e-54da87bb1927\") " pod="openstack/dnsmasq-dns-6bb684768f-qc6dq" Jan 31 04:05:35 crc kubenswrapper[4827]: I0131 04:05:35.069694 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/8273d85f-dd76-4bc7-a50e-54da87bb1927-ovsdbserver-nb\") pod \"dnsmasq-dns-6bb684768f-qc6dq\" (UID: \"8273d85f-dd76-4bc7-a50e-54da87bb1927\") " pod="openstack/dnsmasq-dns-6bb684768f-qc6dq" Jan 31 04:05:35 crc kubenswrapper[4827]: I0131 04:05:35.069713 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bnhcl\" (UniqueName: \"kubernetes.io/projected/8273d85f-dd76-4bc7-a50e-54da87bb1927-kube-api-access-bnhcl\") pod \"dnsmasq-dns-6bb684768f-qc6dq\" (UID: \"8273d85f-dd76-4bc7-a50e-54da87bb1927\") " pod="openstack/dnsmasq-dns-6bb684768f-qc6dq" Jan 31 04:05:35 crc kubenswrapper[4827]: I0131 04:05:35.069733 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/20064771-e5a8-4227-b10d-39905587be45-config-data-custom\") pod \"barbican-api-6c6f779bd6-5qmvs\" (UID: \"20064771-e5a8-4227-b10d-39905587be45\") " pod="openstack/barbican-api-6c6f779bd6-5qmvs" Jan 31 04:05:35 crc kubenswrapper[4827]: I0131 04:05:35.069769 4827 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/20064771-e5a8-4227-b10d-39905587be45-logs\") pod \"barbican-api-6c6f779bd6-5qmvs\" (UID: \"20064771-e5a8-4227-b10d-39905587be45\") " pod="openstack/barbican-api-6c6f779bd6-5qmvs" Jan 31 04:05:35 crc kubenswrapper[4827]: I0131 04:05:35.070226 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/20064771-e5a8-4227-b10d-39905587be45-logs\") pod \"barbican-api-6c6f779bd6-5qmvs\" (UID: \"20064771-e5a8-4227-b10d-39905587be45\") " pod="openstack/barbican-api-6c6f779bd6-5qmvs" Jan 31 04:05:35 crc kubenswrapper[4827]: I0131 04:05:35.071100 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/8273d85f-dd76-4bc7-a50e-54da87bb1927-ovsdbserver-nb\") pod \"dnsmasq-dns-6bb684768f-qc6dq\" (UID: \"8273d85f-dd76-4bc7-a50e-54da87bb1927\") " pod="openstack/dnsmasq-dns-6bb684768f-qc6dq" Jan 31 04:05:35 crc kubenswrapper[4827]: I0131 04:05:35.072334 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8273d85f-dd76-4bc7-a50e-54da87bb1927-config\") pod \"dnsmasq-dns-6bb684768f-qc6dq\" (UID: \"8273d85f-dd76-4bc7-a50e-54da87bb1927\") " pod="openstack/dnsmasq-dns-6bb684768f-qc6dq" Jan 31 04:05:35 crc kubenswrapper[4827]: I0131 04:05:35.073429 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8273d85f-dd76-4bc7-a50e-54da87bb1927-dns-svc\") pod \"dnsmasq-dns-6bb684768f-qc6dq\" (UID: \"8273d85f-dd76-4bc7-a50e-54da87bb1927\") " pod="openstack/dnsmasq-dns-6bb684768f-qc6dq" Jan 31 04:05:35 crc kubenswrapper[4827]: I0131 04:05:35.074764 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: 
\"kubernetes.io/configmap/8273d85f-dd76-4bc7-a50e-54da87bb1927-ovsdbserver-sb\") pod \"dnsmasq-dns-6bb684768f-qc6dq\" (UID: \"8273d85f-dd76-4bc7-a50e-54da87bb1927\") " pod="openstack/dnsmasq-dns-6bb684768f-qc6dq" Jan 31 04:05:35 crc kubenswrapper[4827]: I0131 04:05:35.080415 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/20064771-e5a8-4227-b10d-39905587be45-config-data-custom\") pod \"barbican-api-6c6f779bd6-5qmvs\" (UID: \"20064771-e5a8-4227-b10d-39905587be45\") " pod="openstack/barbican-api-6c6f779bd6-5qmvs" Jan 31 04:05:35 crc kubenswrapper[4827]: I0131 04:05:35.080769 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-worker-77bbb7849c-cvr8p" Jan 31 04:05:35 crc kubenswrapper[4827]: I0131 04:05:35.084543 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/20064771-e5a8-4227-b10d-39905587be45-config-data\") pod \"barbican-api-6c6f779bd6-5qmvs\" (UID: \"20064771-e5a8-4227-b10d-39905587be45\") " pod="openstack/barbican-api-6c6f779bd6-5qmvs" Jan 31 04:05:35 crc kubenswrapper[4827]: I0131 04:05:35.091727 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/20064771-e5a8-4227-b10d-39905587be45-combined-ca-bundle\") pod \"barbican-api-6c6f779bd6-5qmvs\" (UID: \"20064771-e5a8-4227-b10d-39905587be45\") " pod="openstack/barbican-api-6c6f779bd6-5qmvs" Jan 31 04:05:35 crc kubenswrapper[4827]: I0131 04:05:35.094770 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bnhcl\" (UniqueName: \"kubernetes.io/projected/8273d85f-dd76-4bc7-a50e-54da87bb1927-kube-api-access-bnhcl\") pod \"dnsmasq-dns-6bb684768f-qc6dq\" (UID: \"8273d85f-dd76-4bc7-a50e-54da87bb1927\") " pod="openstack/dnsmasq-dns-6bb684768f-qc6dq" Jan 31 04:05:35 crc kubenswrapper[4827]: I0131 
04:05:35.096604 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pb4hc\" (UniqueName: \"kubernetes.io/projected/20064771-e5a8-4227-b10d-39905587be45-kube-api-access-pb4hc\") pod \"barbican-api-6c6f779bd6-5qmvs\" (UID: \"20064771-e5a8-4227-b10d-39905587be45\") " pod="openstack/barbican-api-6c6f779bd6-5qmvs" Jan 31 04:05:35 crc kubenswrapper[4827]: I0131 04:05:35.099792 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-keystone-listener-8f89dd846-fztml" Jan 31 04:05:35 crc kubenswrapper[4827]: I0131 04:05:35.143338 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6bb684768f-qc6dq" Jan 31 04:05:35 crc kubenswrapper[4827]: I0131 04:05:35.332064 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-6c6f779bd6-5qmvs" Jan 31 04:05:37 crc kubenswrapper[4827]: I0131 04:05:37.620867 4827 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-d86jl" Jan 31 04:05:37 crc kubenswrapper[4827]: I0131 04:05:37.715299 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lfttb\" (UniqueName: \"kubernetes.io/projected/2653aa61-3396-42b4-8cfe-ae977242f427-kube-api-access-lfttb\") pod \"2653aa61-3396-42b4-8cfe-ae977242f427\" (UID: \"2653aa61-3396-42b4-8cfe-ae977242f427\") " Jan 31 04:05:37 crc kubenswrapper[4827]: I0131 04:05:37.715643 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/2653aa61-3396-42b4-8cfe-ae977242f427-fernet-keys\") pod \"2653aa61-3396-42b4-8cfe-ae977242f427\" (UID: \"2653aa61-3396-42b4-8cfe-ae977242f427\") " Jan 31 04:05:37 crc kubenswrapper[4827]: I0131 04:05:37.715715 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2653aa61-3396-42b4-8cfe-ae977242f427-scripts\") pod \"2653aa61-3396-42b4-8cfe-ae977242f427\" (UID: \"2653aa61-3396-42b4-8cfe-ae977242f427\") " Jan 31 04:05:37 crc kubenswrapper[4827]: I0131 04:05:37.715750 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2653aa61-3396-42b4-8cfe-ae977242f427-config-data\") pod \"2653aa61-3396-42b4-8cfe-ae977242f427\" (UID: \"2653aa61-3396-42b4-8cfe-ae977242f427\") " Jan 31 04:05:37 crc kubenswrapper[4827]: I0131 04:05:37.715787 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2653aa61-3396-42b4-8cfe-ae977242f427-combined-ca-bundle\") pod \"2653aa61-3396-42b4-8cfe-ae977242f427\" (UID: \"2653aa61-3396-42b4-8cfe-ae977242f427\") " Jan 31 04:05:37 crc kubenswrapper[4827]: I0131 04:05:37.715828 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" (UniqueName: 
\"kubernetes.io/secret/2653aa61-3396-42b4-8cfe-ae977242f427-credential-keys\") pod \"2653aa61-3396-42b4-8cfe-ae977242f427\" (UID: \"2653aa61-3396-42b4-8cfe-ae977242f427\") " Jan 31 04:05:37 crc kubenswrapper[4827]: I0131 04:05:37.719749 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2653aa61-3396-42b4-8cfe-ae977242f427-kube-api-access-lfttb" (OuterVolumeSpecName: "kube-api-access-lfttb") pod "2653aa61-3396-42b4-8cfe-ae977242f427" (UID: "2653aa61-3396-42b4-8cfe-ae977242f427"). InnerVolumeSpecName "kube-api-access-lfttb". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 04:05:37 crc kubenswrapper[4827]: I0131 04:05:37.719850 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2653aa61-3396-42b4-8cfe-ae977242f427-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "2653aa61-3396-42b4-8cfe-ae977242f427" (UID: "2653aa61-3396-42b4-8cfe-ae977242f427"). InnerVolumeSpecName "credential-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 04:05:37 crc kubenswrapper[4827]: I0131 04:05:37.723107 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2653aa61-3396-42b4-8cfe-ae977242f427-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "2653aa61-3396-42b4-8cfe-ae977242f427" (UID: "2653aa61-3396-42b4-8cfe-ae977242f427"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 04:05:37 crc kubenswrapper[4827]: I0131 04:05:37.726604 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2653aa61-3396-42b4-8cfe-ae977242f427-scripts" (OuterVolumeSpecName: "scripts") pod "2653aa61-3396-42b4-8cfe-ae977242f427" (UID: "2653aa61-3396-42b4-8cfe-ae977242f427"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 04:05:37 crc kubenswrapper[4827]: I0131 04:05:37.759121 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2653aa61-3396-42b4-8cfe-ae977242f427-config-data" (OuterVolumeSpecName: "config-data") pod "2653aa61-3396-42b4-8cfe-ae977242f427" (UID: "2653aa61-3396-42b4-8cfe-ae977242f427"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 04:05:37 crc kubenswrapper[4827]: I0131 04:05:37.782144 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2653aa61-3396-42b4-8cfe-ae977242f427-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "2653aa61-3396-42b4-8cfe-ae977242f427" (UID: "2653aa61-3396-42b4-8cfe-ae977242f427"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 04:05:37 crc kubenswrapper[4827]: I0131 04:05:37.817807 4827 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2653aa61-3396-42b4-8cfe-ae977242f427-config-data\") on node \"crc\" DevicePath \"\"" Jan 31 04:05:37 crc kubenswrapper[4827]: I0131 04:05:37.817838 4827 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2653aa61-3396-42b4-8cfe-ae977242f427-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 31 04:05:37 crc kubenswrapper[4827]: I0131 04:05:37.817852 4827 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/2653aa61-3396-42b4-8cfe-ae977242f427-credential-keys\") on node \"crc\" DevicePath \"\"" Jan 31 04:05:37 crc kubenswrapper[4827]: I0131 04:05:37.817864 4827 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lfttb\" (UniqueName: \"kubernetes.io/projected/2653aa61-3396-42b4-8cfe-ae977242f427-kube-api-access-lfttb\") on node \"crc\" 
DevicePath \"\"" Jan 31 04:05:37 crc kubenswrapper[4827]: I0131 04:05:37.817876 4827 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/2653aa61-3396-42b4-8cfe-ae977242f427-fernet-keys\") on node \"crc\" DevicePath \"\"" Jan 31 04:05:37 crc kubenswrapper[4827]: I0131 04:05:37.818693 4827 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2653aa61-3396-42b4-8cfe-ae977242f427-scripts\") on node \"crc\" DevicePath \"\"" Jan 31 04:05:37 crc kubenswrapper[4827]: I0131 04:05:37.863916 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-d86jl" event={"ID":"2653aa61-3396-42b4-8cfe-ae977242f427","Type":"ContainerDied","Data":"e4f7830d196f1f8b026e73e61134874124ff2bc050bb4116454c2dbcfbf9b57b"} Jan 31 04:05:37 crc kubenswrapper[4827]: I0131 04:05:37.863949 4827 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e4f7830d196f1f8b026e73e61134874124ff2bc050bb4116454c2dbcfbf9b57b" Jan 31 04:05:37 crc kubenswrapper[4827]: I0131 04:05:37.864012 4827 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-d86jl" Jan 31 04:05:38 crc kubenswrapper[4827]: I0131 04:05:38.013588 4827 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7b946d459c-mvkgg"] Jan 31 04:05:38 crc kubenswrapper[4827]: W0131 04:05:38.014141 4827 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod45243f1e_2824_411d_a544_b5d2e8e099f9.slice/crio-d3f789ba6ebc85d396094487c390295c071217d0352256f35429a6933295ace1 WatchSource:0}: Error finding container d3f789ba6ebc85d396094487c390295c071217d0352256f35429a6933295ace1: Status 404 returned error can't find the container with id d3f789ba6ebc85d396094487c390295c071217d0352256f35429a6933295ace1 Jan 31 04:05:38 crc kubenswrapper[4827]: I0131 04:05:38.380774 4827 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-api-87c57bc7d-fgwdw"] Jan 31 04:05:38 crc kubenswrapper[4827]: E0131 04:05:38.381532 4827 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2653aa61-3396-42b4-8cfe-ae977242f427" containerName="keystone-bootstrap" Jan 31 04:05:38 crc kubenswrapper[4827]: I0131 04:05:38.381557 4827 state_mem.go:107] "Deleted CPUSet assignment" podUID="2653aa61-3396-42b4-8cfe-ae977242f427" containerName="keystone-bootstrap" Jan 31 04:05:38 crc kubenswrapper[4827]: I0131 04:05:38.381817 4827 memory_manager.go:354] "RemoveStaleState removing state" podUID="2653aa61-3396-42b4-8cfe-ae977242f427" containerName="keystone-bootstrap" Jan 31 04:05:38 crc kubenswrapper[4827]: I0131 04:05:38.382974 4827 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-87c57bc7d-fgwdw" Jan 31 04:05:38 crc kubenswrapper[4827]: I0131 04:05:38.385331 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-barbican-internal-svc" Jan 31 04:05:38 crc kubenswrapper[4827]: I0131 04:05:38.385541 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-barbican-public-svc" Jan 31 04:05:38 crc kubenswrapper[4827]: I0131 04:05:38.408929 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-87c57bc7d-fgwdw"] Jan 31 04:05:38 crc kubenswrapper[4827]: I0131 04:05:38.508195 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-keystone-listener-8f89dd846-fztml"] Jan 31 04:05:38 crc kubenswrapper[4827]: I0131 04:05:38.529745 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/2379f58c-95e4-4242-93de-82813ecbf089-config-data-custom\") pod \"barbican-api-87c57bc7d-fgwdw\" (UID: \"2379f58c-95e4-4242-93de-82813ecbf089\") " pod="openstack/barbican-api-87c57bc7d-fgwdw" Jan 31 04:05:38 crc kubenswrapper[4827]: I0131 04:05:38.529808 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/2379f58c-95e4-4242-93de-82813ecbf089-public-tls-certs\") pod \"barbican-api-87c57bc7d-fgwdw\" (UID: \"2379f58c-95e4-4242-93de-82813ecbf089\") " pod="openstack/barbican-api-87c57bc7d-fgwdw" Jan 31 04:05:38 crc kubenswrapper[4827]: I0131 04:05:38.529854 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2379f58c-95e4-4242-93de-82813ecbf089-combined-ca-bundle\") pod \"barbican-api-87c57bc7d-fgwdw\" (UID: \"2379f58c-95e4-4242-93de-82813ecbf089\") " pod="openstack/barbican-api-87c57bc7d-fgwdw" Jan 31 
04:05:38 crc kubenswrapper[4827]: I0131 04:05:38.529977 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2379f58c-95e4-4242-93de-82813ecbf089-config-data\") pod \"barbican-api-87c57bc7d-fgwdw\" (UID: \"2379f58c-95e4-4242-93de-82813ecbf089\") " pod="openstack/barbican-api-87c57bc7d-fgwdw" Jan 31 04:05:38 crc kubenswrapper[4827]: I0131 04:05:38.530005 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/2379f58c-95e4-4242-93de-82813ecbf089-internal-tls-certs\") pod \"barbican-api-87c57bc7d-fgwdw\" (UID: \"2379f58c-95e4-4242-93de-82813ecbf089\") " pod="openstack/barbican-api-87c57bc7d-fgwdw" Jan 31 04:05:38 crc kubenswrapper[4827]: I0131 04:05:38.530037 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2379f58c-95e4-4242-93de-82813ecbf089-logs\") pod \"barbican-api-87c57bc7d-fgwdw\" (UID: \"2379f58c-95e4-4242-93de-82813ecbf089\") " pod="openstack/barbican-api-87c57bc7d-fgwdw" Jan 31 04:05:38 crc kubenswrapper[4827]: I0131 04:05:38.530056 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n6n6t\" (UniqueName: \"kubernetes.io/projected/2379f58c-95e4-4242-93de-82813ecbf089-kube-api-access-n6n6t\") pod \"barbican-api-87c57bc7d-fgwdw\" (UID: \"2379f58c-95e4-4242-93de-82813ecbf089\") " pod="openstack/barbican-api-87c57bc7d-fgwdw" Jan 31 04:05:38 crc kubenswrapper[4827]: I0131 04:05:38.535388 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-worker-77bbb7849c-cvr8p"] Jan 31 04:05:38 crc kubenswrapper[4827]: I0131 04:05:38.546928 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-6c6f779bd6-5qmvs"] Jan 31 04:05:38 crc kubenswrapper[4827]: W0131 
04:05:38.562154 4827 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc0898c62_7d0f_447a_84b0_7627b4b78457.slice/crio-122b62f3408d2975430c8302aaad839c1d7389fda1f4ef40f64126a9a21a487e WatchSource:0}: Error finding container 122b62f3408d2975430c8302aaad839c1d7389fda1f4ef40f64126a9a21a487e: Status 404 returned error can't find the container with id 122b62f3408d2975430c8302aaad839c1d7389fda1f4ef40f64126a9a21a487e Jan 31 04:05:38 crc kubenswrapper[4827]: I0131 04:05:38.564482 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-69d4f5d848-hjbmc"] Jan 31 04:05:38 crc kubenswrapper[4827]: I0131 04:05:38.581938 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6bb684768f-qc6dq"] Jan 31 04:05:38 crc kubenswrapper[4827]: I0131 04:05:38.590581 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-798dd656d-7874f"] Jan 31 04:05:38 crc kubenswrapper[4827]: W0131 04:05:38.618121 4827 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8273d85f_dd76_4bc7_a50e_54da87bb1927.slice/crio-f87e3866b3423b81f4b8023a7747c1f3104e4e14880faa89eeee55b62ae09be2 WatchSource:0}: Error finding container f87e3866b3423b81f4b8023a7747c1f3104e4e14880faa89eeee55b62ae09be2: Status 404 returned error can't find the container with id f87e3866b3423b81f4b8023a7747c1f3104e4e14880faa89eeee55b62ae09be2 Jan 31 04:05:38 crc kubenswrapper[4827]: I0131 04:05:38.631105 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2379f58c-95e4-4242-93de-82813ecbf089-logs\") pod \"barbican-api-87c57bc7d-fgwdw\" (UID: \"2379f58c-95e4-4242-93de-82813ecbf089\") " pod="openstack/barbican-api-87c57bc7d-fgwdw" Jan 31 04:05:38 crc kubenswrapper[4827]: I0131 04:05:38.631141 4827 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-n6n6t\" (UniqueName: \"kubernetes.io/projected/2379f58c-95e4-4242-93de-82813ecbf089-kube-api-access-n6n6t\") pod \"barbican-api-87c57bc7d-fgwdw\" (UID: \"2379f58c-95e4-4242-93de-82813ecbf089\") " pod="openstack/barbican-api-87c57bc7d-fgwdw" Jan 31 04:05:38 crc kubenswrapper[4827]: I0131 04:05:38.631211 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/2379f58c-95e4-4242-93de-82813ecbf089-config-data-custom\") pod \"barbican-api-87c57bc7d-fgwdw\" (UID: \"2379f58c-95e4-4242-93de-82813ecbf089\") " pod="openstack/barbican-api-87c57bc7d-fgwdw" Jan 31 04:05:38 crc kubenswrapper[4827]: I0131 04:05:38.631236 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/2379f58c-95e4-4242-93de-82813ecbf089-public-tls-certs\") pod \"barbican-api-87c57bc7d-fgwdw\" (UID: \"2379f58c-95e4-4242-93de-82813ecbf089\") " pod="openstack/barbican-api-87c57bc7d-fgwdw" Jan 31 04:05:38 crc kubenswrapper[4827]: I0131 04:05:38.631288 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2379f58c-95e4-4242-93de-82813ecbf089-combined-ca-bundle\") pod \"barbican-api-87c57bc7d-fgwdw\" (UID: \"2379f58c-95e4-4242-93de-82813ecbf089\") " pod="openstack/barbican-api-87c57bc7d-fgwdw" Jan 31 04:05:38 crc kubenswrapper[4827]: I0131 04:05:38.631323 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2379f58c-95e4-4242-93de-82813ecbf089-config-data\") pod \"barbican-api-87c57bc7d-fgwdw\" (UID: \"2379f58c-95e4-4242-93de-82813ecbf089\") " pod="openstack/barbican-api-87c57bc7d-fgwdw" Jan 31 04:05:38 crc kubenswrapper[4827]: I0131 04:05:38.631347 4827 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/2379f58c-95e4-4242-93de-82813ecbf089-internal-tls-certs\") pod \"barbican-api-87c57bc7d-fgwdw\" (UID: \"2379f58c-95e4-4242-93de-82813ecbf089\") " pod="openstack/barbican-api-87c57bc7d-fgwdw" Jan 31 04:05:38 crc kubenswrapper[4827]: I0131 04:05:38.632907 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2379f58c-95e4-4242-93de-82813ecbf089-logs\") pod \"barbican-api-87c57bc7d-fgwdw\" (UID: \"2379f58c-95e4-4242-93de-82813ecbf089\") " pod="openstack/barbican-api-87c57bc7d-fgwdw" Jan 31 04:05:38 crc kubenswrapper[4827]: I0131 04:05:38.634397 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/2379f58c-95e4-4242-93de-82813ecbf089-internal-tls-certs\") pod \"barbican-api-87c57bc7d-fgwdw\" (UID: \"2379f58c-95e4-4242-93de-82813ecbf089\") " pod="openstack/barbican-api-87c57bc7d-fgwdw" Jan 31 04:05:38 crc kubenswrapper[4827]: I0131 04:05:38.635038 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/2379f58c-95e4-4242-93de-82813ecbf089-public-tls-certs\") pod \"barbican-api-87c57bc7d-fgwdw\" (UID: \"2379f58c-95e4-4242-93de-82813ecbf089\") " pod="openstack/barbican-api-87c57bc7d-fgwdw" Jan 31 04:05:38 crc kubenswrapper[4827]: I0131 04:05:38.636730 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/2379f58c-95e4-4242-93de-82813ecbf089-config-data-custom\") pod \"barbican-api-87c57bc7d-fgwdw\" (UID: \"2379f58c-95e4-4242-93de-82813ecbf089\") " pod="openstack/barbican-api-87c57bc7d-fgwdw" Jan 31 04:05:38 crc kubenswrapper[4827]: I0131 04:05:38.637237 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/2379f58c-95e4-4242-93de-82813ecbf089-config-data\") pod \"barbican-api-87c57bc7d-fgwdw\" (UID: \"2379f58c-95e4-4242-93de-82813ecbf089\") " pod="openstack/barbican-api-87c57bc7d-fgwdw" Jan 31 04:05:38 crc kubenswrapper[4827]: I0131 04:05:38.637659 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2379f58c-95e4-4242-93de-82813ecbf089-combined-ca-bundle\") pod \"barbican-api-87c57bc7d-fgwdw\" (UID: \"2379f58c-95e4-4242-93de-82813ecbf089\") " pod="openstack/barbican-api-87c57bc7d-fgwdw" Jan 31 04:05:38 crc kubenswrapper[4827]: I0131 04:05:38.647282 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n6n6t\" (UniqueName: \"kubernetes.io/projected/2379f58c-95e4-4242-93de-82813ecbf089-kube-api-access-n6n6t\") pod \"barbican-api-87c57bc7d-fgwdw\" (UID: \"2379f58c-95e4-4242-93de-82813ecbf089\") " pod="openstack/barbican-api-87c57bc7d-fgwdw" Jan 31 04:05:38 crc kubenswrapper[4827]: I0131 04:05:38.705844 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-87c57bc7d-fgwdw" Jan 31 04:05:38 crc kubenswrapper[4827]: I0131 04:05:38.750913 4827 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-b7768667c-2kxv5"] Jan 31 04:05:38 crc kubenswrapper[4827]: I0131 04:05:38.751977 4827 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-b7768667c-2kxv5" Jan 31 04:05:38 crc kubenswrapper[4827]: I0131 04:05:38.756556 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-keystone-public-svc" Jan 31 04:05:38 crc kubenswrapper[4827]: I0131 04:05:38.756804 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Jan 31 04:05:38 crc kubenswrapper[4827]: I0131 04:05:38.758181 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-keystone-internal-svc" Jan 31 04:05:38 crc kubenswrapper[4827]: I0131 04:05:38.758326 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Jan 31 04:05:38 crc kubenswrapper[4827]: I0131 04:05:38.758563 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Jan 31 04:05:38 crc kubenswrapper[4827]: I0131 04:05:38.761338 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-6bbzt" Jan 31 04:05:38 crc kubenswrapper[4827]: I0131 04:05:38.773725 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-b7768667c-2kxv5"] Jan 31 04:05:38 crc kubenswrapper[4827]: I0131 04:05:38.848359 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/325f82ae-928b-44ea-bef3-e567002d4814-combined-ca-bundle\") pod \"keystone-b7768667c-2kxv5\" (UID: \"325f82ae-928b-44ea-bef3-e567002d4814\") " pod="openstack/keystone-b7768667c-2kxv5" Jan 31 04:05:38 crc kubenswrapper[4827]: I0131 04:05:38.848402 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/325f82ae-928b-44ea-bef3-e567002d4814-internal-tls-certs\") pod \"keystone-b7768667c-2kxv5\" (UID: \"325f82ae-928b-44ea-bef3-e567002d4814\") " 
pod="openstack/keystone-b7768667c-2kxv5" Jan 31 04:05:38 crc kubenswrapper[4827]: I0131 04:05:38.848430 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/325f82ae-928b-44ea-bef3-e567002d4814-public-tls-certs\") pod \"keystone-b7768667c-2kxv5\" (UID: \"325f82ae-928b-44ea-bef3-e567002d4814\") " pod="openstack/keystone-b7768667c-2kxv5" Jan 31 04:05:38 crc kubenswrapper[4827]: I0131 04:05:38.848462 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/325f82ae-928b-44ea-bef3-e567002d4814-config-data\") pod \"keystone-b7768667c-2kxv5\" (UID: \"325f82ae-928b-44ea-bef3-e567002d4814\") " pod="openstack/keystone-b7768667c-2kxv5" Jan 31 04:05:38 crc kubenswrapper[4827]: I0131 04:05:38.848481 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/325f82ae-928b-44ea-bef3-e567002d4814-fernet-keys\") pod \"keystone-b7768667c-2kxv5\" (UID: \"325f82ae-928b-44ea-bef3-e567002d4814\") " pod="openstack/keystone-b7768667c-2kxv5" Jan 31 04:05:38 crc kubenswrapper[4827]: I0131 04:05:38.848510 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/325f82ae-928b-44ea-bef3-e567002d4814-credential-keys\") pod \"keystone-b7768667c-2kxv5\" (UID: \"325f82ae-928b-44ea-bef3-e567002d4814\") " pod="openstack/keystone-b7768667c-2kxv5" Jan 31 04:05:38 crc kubenswrapper[4827]: I0131 04:05:38.848540 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/325f82ae-928b-44ea-bef3-e567002d4814-scripts\") pod \"keystone-b7768667c-2kxv5\" (UID: \"325f82ae-928b-44ea-bef3-e567002d4814\") " 
pod="openstack/keystone-b7768667c-2kxv5" Jan 31 04:05:38 crc kubenswrapper[4827]: I0131 04:05:38.848668 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pvzq7\" (UniqueName: \"kubernetes.io/projected/325f82ae-928b-44ea-bef3-e567002d4814-kube-api-access-pvzq7\") pod \"keystone-b7768667c-2kxv5\" (UID: \"325f82ae-928b-44ea-bef3-e567002d4814\") " pod="openstack/keystone-b7768667c-2kxv5" Jan 31 04:05:38 crc kubenswrapper[4827]: I0131 04:05:38.877300 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-798dd656d-7874f" event={"ID":"ba9b68b9-d7bc-4c9c-90d3-a4b88b1d30de","Type":"ContainerStarted","Data":"54957c6c68032f1e6c0c3d084a0359ddaf3175f8981e66edb0c9022270bc8b6b"} Jan 31 04:05:38 crc kubenswrapper[4827]: I0131 04:05:38.878459 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-69d4f5d848-hjbmc" event={"ID":"c0898c62-7d0f-447a-84b0-7627b4b78457","Type":"ContainerStarted","Data":"122b62f3408d2975430c8302aaad839c1d7389fda1f4ef40f64126a9a21a487e"} Jan 31 04:05:38 crc kubenswrapper[4827]: I0131 04:05:38.879359 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6bb684768f-qc6dq" event={"ID":"8273d85f-dd76-4bc7-a50e-54da87bb1927","Type":"ContainerStarted","Data":"f87e3866b3423b81f4b8023a7747c1f3104e4e14880faa89eeee55b62ae09be2"} Jan 31 04:05:38 crc kubenswrapper[4827]: I0131 04:05:38.881514 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d07cd025-0fc1-4536-978d-f7f7b0df2dbc","Type":"ContainerStarted","Data":"ead723a81fa378183110c6664cd4f76e0c922cdbb27cbfe04a57730874e602b2"} Jan 31 04:05:38 crc kubenswrapper[4827]: I0131 04:05:38.882688 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-6c6f779bd6-5qmvs" 
event={"ID":"20064771-e5a8-4227-b10d-39905587be45","Type":"ContainerStarted","Data":"13223d17e3281ee66fede020e4b7ee8d2bdb776a49450bce73de431993a8b47e"} Jan 31 04:05:38 crc kubenswrapper[4827]: I0131 04:05:38.882712 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-6c6f779bd6-5qmvs" event={"ID":"20064771-e5a8-4227-b10d-39905587be45","Type":"ContainerStarted","Data":"ed6220e1c4de9ed4dfceb02faa00b9719d68be2d92b8f1815e77dcbd0dbd7c64"} Jan 31 04:05:38 crc kubenswrapper[4827]: I0131 04:05:38.883399 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-77bbb7849c-cvr8p" event={"ID":"e1d72620-a941-44d0-b09a-401e1827f32f","Type":"ContainerStarted","Data":"cd2bc5467f9cd672dc42657e8b655c072512ca36c2f20d8dc26896e135b5ae5c"} Jan 31 04:05:38 crc kubenswrapper[4827]: I0131 04:05:38.887375 4827 generic.go:334] "Generic (PLEG): container finished" podID="45243f1e-2824-411d-a544-b5d2e8e099f9" containerID="b16b6f42a985e4e577f3d8c44d4e32fb4c872a8b44a5c5d53f43f7cbfc18a230" exitCode=0 Jan 31 04:05:38 crc kubenswrapper[4827]: I0131 04:05:38.887484 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7b946d459c-mvkgg" event={"ID":"45243f1e-2824-411d-a544-b5d2e8e099f9","Type":"ContainerDied","Data":"b16b6f42a985e4e577f3d8c44d4e32fb4c872a8b44a5c5d53f43f7cbfc18a230"} Jan 31 04:05:38 crc kubenswrapper[4827]: I0131 04:05:38.887508 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7b946d459c-mvkgg" event={"ID":"45243f1e-2824-411d-a544-b5d2e8e099f9","Type":"ContainerStarted","Data":"d3f789ba6ebc85d396094487c390295c071217d0352256f35429a6933295ace1"} Jan 31 04:05:38 crc kubenswrapper[4827]: I0131 04:05:38.894581 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-8f89dd846-fztml" event={"ID":"235b452b-45b7-41a8-82c0-d3ebe8b4c19f","Type":"ContainerStarted","Data":"08abbe263e83e1ad67b6e432f45476ce299ec7da3854db53a09b115d5ae02250"} 
Jan 31 04:05:38 crc kubenswrapper[4827]: I0131 04:05:38.952617 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/325f82ae-928b-44ea-bef3-e567002d4814-combined-ca-bundle\") pod \"keystone-b7768667c-2kxv5\" (UID: \"325f82ae-928b-44ea-bef3-e567002d4814\") " pod="openstack/keystone-b7768667c-2kxv5" Jan 31 04:05:38 crc kubenswrapper[4827]: I0131 04:05:38.952649 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/325f82ae-928b-44ea-bef3-e567002d4814-internal-tls-certs\") pod \"keystone-b7768667c-2kxv5\" (UID: \"325f82ae-928b-44ea-bef3-e567002d4814\") " pod="openstack/keystone-b7768667c-2kxv5" Jan 31 04:05:38 crc kubenswrapper[4827]: I0131 04:05:38.952677 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/325f82ae-928b-44ea-bef3-e567002d4814-public-tls-certs\") pod \"keystone-b7768667c-2kxv5\" (UID: \"325f82ae-928b-44ea-bef3-e567002d4814\") " pod="openstack/keystone-b7768667c-2kxv5" Jan 31 04:05:38 crc kubenswrapper[4827]: I0131 04:05:38.952734 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/325f82ae-928b-44ea-bef3-e567002d4814-config-data\") pod \"keystone-b7768667c-2kxv5\" (UID: \"325f82ae-928b-44ea-bef3-e567002d4814\") " pod="openstack/keystone-b7768667c-2kxv5" Jan 31 04:05:38 crc kubenswrapper[4827]: I0131 04:05:38.952751 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/325f82ae-928b-44ea-bef3-e567002d4814-fernet-keys\") pod \"keystone-b7768667c-2kxv5\" (UID: \"325f82ae-928b-44ea-bef3-e567002d4814\") " pod="openstack/keystone-b7768667c-2kxv5" Jan 31 04:05:38 crc kubenswrapper[4827]: I0131 04:05:38.952780 4827 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/325f82ae-928b-44ea-bef3-e567002d4814-credential-keys\") pod \"keystone-b7768667c-2kxv5\" (UID: \"325f82ae-928b-44ea-bef3-e567002d4814\") " pod="openstack/keystone-b7768667c-2kxv5" Jan 31 04:05:38 crc kubenswrapper[4827]: I0131 04:05:38.952810 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/325f82ae-928b-44ea-bef3-e567002d4814-scripts\") pod \"keystone-b7768667c-2kxv5\" (UID: \"325f82ae-928b-44ea-bef3-e567002d4814\") " pod="openstack/keystone-b7768667c-2kxv5" Jan 31 04:05:38 crc kubenswrapper[4827]: I0131 04:05:38.952845 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pvzq7\" (UniqueName: \"kubernetes.io/projected/325f82ae-928b-44ea-bef3-e567002d4814-kube-api-access-pvzq7\") pod \"keystone-b7768667c-2kxv5\" (UID: \"325f82ae-928b-44ea-bef3-e567002d4814\") " pod="openstack/keystone-b7768667c-2kxv5" Jan 31 04:05:38 crc kubenswrapper[4827]: I0131 04:05:38.958162 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/325f82ae-928b-44ea-bef3-e567002d4814-public-tls-certs\") pod \"keystone-b7768667c-2kxv5\" (UID: \"325f82ae-928b-44ea-bef3-e567002d4814\") " pod="openstack/keystone-b7768667c-2kxv5" Jan 31 04:05:38 crc kubenswrapper[4827]: I0131 04:05:38.958466 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/325f82ae-928b-44ea-bef3-e567002d4814-combined-ca-bundle\") pod \"keystone-b7768667c-2kxv5\" (UID: \"325f82ae-928b-44ea-bef3-e567002d4814\") " pod="openstack/keystone-b7768667c-2kxv5" Jan 31 04:05:38 crc kubenswrapper[4827]: I0131 04:05:38.958586 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/325f82ae-928b-44ea-bef3-e567002d4814-config-data\") pod \"keystone-b7768667c-2kxv5\" (UID: \"325f82ae-928b-44ea-bef3-e567002d4814\") " pod="openstack/keystone-b7768667c-2kxv5" Jan 31 04:05:38 crc kubenswrapper[4827]: I0131 04:05:38.958612 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/325f82ae-928b-44ea-bef3-e567002d4814-fernet-keys\") pod \"keystone-b7768667c-2kxv5\" (UID: \"325f82ae-928b-44ea-bef3-e567002d4814\") " pod="openstack/keystone-b7768667c-2kxv5" Jan 31 04:05:38 crc kubenswrapper[4827]: I0131 04:05:38.958747 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/325f82ae-928b-44ea-bef3-e567002d4814-credential-keys\") pod \"keystone-b7768667c-2kxv5\" (UID: \"325f82ae-928b-44ea-bef3-e567002d4814\") " pod="openstack/keystone-b7768667c-2kxv5" Jan 31 04:05:38 crc kubenswrapper[4827]: I0131 04:05:38.961736 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/325f82ae-928b-44ea-bef3-e567002d4814-scripts\") pod \"keystone-b7768667c-2kxv5\" (UID: \"325f82ae-928b-44ea-bef3-e567002d4814\") " pod="openstack/keystone-b7768667c-2kxv5" Jan 31 04:05:38 crc kubenswrapper[4827]: I0131 04:05:38.968612 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/325f82ae-928b-44ea-bef3-e567002d4814-internal-tls-certs\") pod \"keystone-b7768667c-2kxv5\" (UID: \"325f82ae-928b-44ea-bef3-e567002d4814\") " pod="openstack/keystone-b7768667c-2kxv5" Jan 31 04:05:38 crc kubenswrapper[4827]: I0131 04:05:38.976472 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pvzq7\" (UniqueName: \"kubernetes.io/projected/325f82ae-928b-44ea-bef3-e567002d4814-kube-api-access-pvzq7\") pod \"keystone-b7768667c-2kxv5\" (UID: 
\"325f82ae-928b-44ea-bef3-e567002d4814\") " pod="openstack/keystone-b7768667c-2kxv5" Jan 31 04:05:39 crc kubenswrapper[4827]: I0131 04:05:39.090802 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-b7768667c-2kxv5" Jan 31 04:05:39 crc kubenswrapper[4827]: I0131 04:05:39.281716 4827 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7b946d459c-mvkgg" Jan 31 04:05:39 crc kubenswrapper[4827]: I0131 04:05:39.318006 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-87c57bc7d-fgwdw"] Jan 31 04:05:39 crc kubenswrapper[4827]: I0131 04:05:39.436895 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-68ddbc68f-gxl56"] Jan 31 04:05:39 crc kubenswrapper[4827]: I0131 04:05:39.471469 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/45243f1e-2824-411d-a544-b5d2e8e099f9-ovsdbserver-nb\") pod \"45243f1e-2824-411d-a544-b5d2e8e099f9\" (UID: \"45243f1e-2824-411d-a544-b5d2e8e099f9\") " Jan 31 04:05:39 crc kubenswrapper[4827]: I0131 04:05:39.471523 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/45243f1e-2824-411d-a544-b5d2e8e099f9-config\") pod \"45243f1e-2824-411d-a544-b5d2e8e099f9\" (UID: \"45243f1e-2824-411d-a544-b5d2e8e099f9\") " Jan 31 04:05:39 crc kubenswrapper[4827]: I0131 04:05:39.471564 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/45243f1e-2824-411d-a544-b5d2e8e099f9-dns-svc\") pod \"45243f1e-2824-411d-a544-b5d2e8e099f9\" (UID: \"45243f1e-2824-411d-a544-b5d2e8e099f9\") " Jan 31 04:05:39 crc kubenswrapper[4827]: I0131 04:05:39.471581 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: 
\"kubernetes.io/configmap/45243f1e-2824-411d-a544-b5d2e8e099f9-ovsdbserver-sb\") pod \"45243f1e-2824-411d-a544-b5d2e8e099f9\" (UID: \"45243f1e-2824-411d-a544-b5d2e8e099f9\") " Jan 31 04:05:39 crc kubenswrapper[4827]: I0131 04:05:39.471611 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zn7lv\" (UniqueName: \"kubernetes.io/projected/45243f1e-2824-411d-a544-b5d2e8e099f9-kube-api-access-zn7lv\") pod \"45243f1e-2824-411d-a544-b5d2e8e099f9\" (UID: \"45243f1e-2824-411d-a544-b5d2e8e099f9\") " Jan 31 04:05:39 crc kubenswrapper[4827]: I0131 04:05:39.493278 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/45243f1e-2824-411d-a544-b5d2e8e099f9-kube-api-access-zn7lv" (OuterVolumeSpecName: "kube-api-access-zn7lv") pod "45243f1e-2824-411d-a544-b5d2e8e099f9" (UID: "45243f1e-2824-411d-a544-b5d2e8e099f9"). InnerVolumeSpecName "kube-api-access-zn7lv". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 04:05:39 crc kubenswrapper[4827]: I0131 04:05:39.536181 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/45243f1e-2824-411d-a544-b5d2e8e099f9-config" (OuterVolumeSpecName: "config") pod "45243f1e-2824-411d-a544-b5d2e8e099f9" (UID: "45243f1e-2824-411d-a544-b5d2e8e099f9"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 04:05:39 crc kubenswrapper[4827]: I0131 04:05:39.537103 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/45243f1e-2824-411d-a544-b5d2e8e099f9-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "45243f1e-2824-411d-a544-b5d2e8e099f9" (UID: "45243f1e-2824-411d-a544-b5d2e8e099f9"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 04:05:39 crc kubenswrapper[4827]: I0131 04:05:39.551380 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/45243f1e-2824-411d-a544-b5d2e8e099f9-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "45243f1e-2824-411d-a544-b5d2e8e099f9" (UID: "45243f1e-2824-411d-a544-b5d2e8e099f9"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 04:05:39 crc kubenswrapper[4827]: I0131 04:05:39.560179 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/45243f1e-2824-411d-a544-b5d2e8e099f9-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "45243f1e-2824-411d-a544-b5d2e8e099f9" (UID: "45243f1e-2824-411d-a544-b5d2e8e099f9"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 04:05:39 crc kubenswrapper[4827]: I0131 04:05:39.574939 4827 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/45243f1e-2824-411d-a544-b5d2e8e099f9-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 31 04:05:39 crc kubenswrapper[4827]: I0131 04:05:39.574971 4827 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/45243f1e-2824-411d-a544-b5d2e8e099f9-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 31 04:05:39 crc kubenswrapper[4827]: I0131 04:05:39.574982 4827 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zn7lv\" (UniqueName: \"kubernetes.io/projected/45243f1e-2824-411d-a544-b5d2e8e099f9-kube-api-access-zn7lv\") on node \"crc\" DevicePath \"\"" Jan 31 04:05:39 crc kubenswrapper[4827]: I0131 04:05:39.574990 4827 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/45243f1e-2824-411d-a544-b5d2e8e099f9-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 31 
04:05:39 crc kubenswrapper[4827]: I0131 04:05:39.575002 4827 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/45243f1e-2824-411d-a544-b5d2e8e099f9-config\") on node \"crc\" DevicePath \"\"" Jan 31 04:05:39 crc kubenswrapper[4827]: I0131 04:05:39.639094 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-b7768667c-2kxv5"] Jan 31 04:05:39 crc kubenswrapper[4827]: I0131 04:05:39.719188 4827 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-798dd656d-7874f"] Jan 31 04:05:39 crc kubenswrapper[4827]: I0131 04:05:39.732211 4827 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-7969d585-whgv9"] Jan 31 04:05:39 crc kubenswrapper[4827]: E0131 04:05:39.732623 4827 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="45243f1e-2824-411d-a544-b5d2e8e099f9" containerName="init" Jan 31 04:05:39 crc kubenswrapper[4827]: I0131 04:05:39.732645 4827 state_mem.go:107] "Deleted CPUSet assignment" podUID="45243f1e-2824-411d-a544-b5d2e8e099f9" containerName="init" Jan 31 04:05:39 crc kubenswrapper[4827]: I0131 04:05:39.739085 4827 memory_manager.go:354] "RemoveStaleState removing state" podUID="45243f1e-2824-411d-a544-b5d2e8e099f9" containerName="init" Jan 31 04:05:39 crc kubenswrapper[4827]: I0131 04:05:39.746463 4827 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-7969d585-whgv9" Jan 31 04:05:39 crc kubenswrapper[4827]: I0131 04:05:39.764636 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-7969d585-whgv9"] Jan 31 04:05:39 crc kubenswrapper[4827]: I0131 04:05:39.880832 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/aae071c1-75f9-40e2-aa1a-69aa0afba58d-ovndb-tls-certs\") pod \"neutron-7969d585-whgv9\" (UID: \"aae071c1-75f9-40e2-aa1a-69aa0afba58d\") " pod="openstack/neutron-7969d585-whgv9" Jan 31 04:05:39 crc kubenswrapper[4827]: I0131 04:05:39.881242 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/aae071c1-75f9-40e2-aa1a-69aa0afba58d-public-tls-certs\") pod \"neutron-7969d585-whgv9\" (UID: \"aae071c1-75f9-40e2-aa1a-69aa0afba58d\") " pod="openstack/neutron-7969d585-whgv9" Jan 31 04:05:39 crc kubenswrapper[4827]: I0131 04:05:39.881293 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/aae071c1-75f9-40e2-aa1a-69aa0afba58d-config\") pod \"neutron-7969d585-whgv9\" (UID: \"aae071c1-75f9-40e2-aa1a-69aa0afba58d\") " pod="openstack/neutron-7969d585-whgv9" Jan 31 04:05:39 crc kubenswrapper[4827]: I0131 04:05:39.881313 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/aae071c1-75f9-40e2-aa1a-69aa0afba58d-combined-ca-bundle\") pod \"neutron-7969d585-whgv9\" (UID: \"aae071c1-75f9-40e2-aa1a-69aa0afba58d\") " pod="openstack/neutron-7969d585-whgv9" Jan 31 04:05:39 crc kubenswrapper[4827]: I0131 04:05:39.881348 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/aae071c1-75f9-40e2-aa1a-69aa0afba58d-internal-tls-certs\") pod \"neutron-7969d585-whgv9\" (UID: \"aae071c1-75f9-40e2-aa1a-69aa0afba58d\") " pod="openstack/neutron-7969d585-whgv9" Jan 31 04:05:39 crc kubenswrapper[4827]: I0131 04:05:39.881482 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hk46h\" (UniqueName: \"kubernetes.io/projected/aae071c1-75f9-40e2-aa1a-69aa0afba58d-kube-api-access-hk46h\") pod \"neutron-7969d585-whgv9\" (UID: \"aae071c1-75f9-40e2-aa1a-69aa0afba58d\") " pod="openstack/neutron-7969d585-whgv9" Jan 31 04:05:39 crc kubenswrapper[4827]: I0131 04:05:39.881637 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/aae071c1-75f9-40e2-aa1a-69aa0afba58d-httpd-config\") pod \"neutron-7969d585-whgv9\" (UID: \"aae071c1-75f9-40e2-aa1a-69aa0afba58d\") " pod="openstack/neutron-7969d585-whgv9" Jan 31 04:05:39 crc kubenswrapper[4827]: I0131 04:05:39.904595 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-87c57bc7d-fgwdw" event={"ID":"2379f58c-95e4-4242-93de-82813ecbf089","Type":"ContainerStarted","Data":"0f0482c8b6b5c476f2ae4b8e3ab37635ae89223b39ce4f473e9eeaec47e30c76"} Jan 31 04:05:39 crc kubenswrapper[4827]: I0131 04:05:39.906408 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-798dd656d-7874f" event={"ID":"ba9b68b9-d7bc-4c9c-90d3-a4b88b1d30de","Type":"ContainerStarted","Data":"1715870ab3ad43f98614002eb9c0fbd66bae86854cfd63cd4e476fb8f3179f74"} Jan 31 04:05:39 crc kubenswrapper[4827]: I0131 04:05:39.906433 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-798dd656d-7874f" event={"ID":"ba9b68b9-d7bc-4c9c-90d3-a4b88b1d30de","Type":"ContainerStarted","Data":"deec999deac645424177fe16ab363dada849390ca10cd2d7eaec06dc147b6df1"} Jan 31 04:05:39 crc kubenswrapper[4827]: I0131 04:05:39.907389 
4827 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/neutron-798dd656d-7874f" Jan 31 04:05:39 crc kubenswrapper[4827]: I0131 04:05:39.909354 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-69d4f5d848-hjbmc" event={"ID":"c0898c62-7d0f-447a-84b0-7627b4b78457","Type":"ContainerStarted","Data":"b20d2f86ea05804700f4777cbe97ce191a4645edded5f4dcbcc633866e8b2536"} Jan 31 04:05:39 crc kubenswrapper[4827]: I0131 04:05:39.909377 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-69d4f5d848-hjbmc" event={"ID":"c0898c62-7d0f-447a-84b0-7627b4b78457","Type":"ContainerStarted","Data":"1361b3a6a3130a7bf2a34329ee1f22ef50673a80037a4802feebfce21bc7579e"} Jan 31 04:05:39 crc kubenswrapper[4827]: I0131 04:05:39.909893 4827 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-69d4f5d848-hjbmc" Jan 31 04:05:39 crc kubenswrapper[4827]: I0131 04:05:39.909921 4827 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-69d4f5d848-hjbmc" Jan 31 04:05:39 crc kubenswrapper[4827]: I0131 04:05:39.911956 4827 generic.go:334] "Generic (PLEG): container finished" podID="8273d85f-dd76-4bc7-a50e-54da87bb1927" containerID="3f352a0e0b9255260c8938215eff46c200a3b639dc8ce05eb34c629fbf7e37c0" exitCode=0 Jan 31 04:05:39 crc kubenswrapper[4827]: I0131 04:05:39.912003 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6bb684768f-qc6dq" event={"ID":"8273d85f-dd76-4bc7-a50e-54da87bb1927","Type":"ContainerDied","Data":"3f352a0e0b9255260c8938215eff46c200a3b639dc8ce05eb34c629fbf7e37c0"} Jan 31 04:05:39 crc kubenswrapper[4827]: I0131 04:05:39.914118 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-68ddbc68f-gxl56" event={"ID":"89663bcc-cc29-44ed-a65e-ab5e4efa7813","Type":"ContainerStarted","Data":"c63d544db6ef2f9ec5dd916553df131f5e2b01a8577e9da64548566e95faa3ea"} Jan 31 04:05:39 crc 
kubenswrapper[4827]: I0131 04:05:39.914174 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-68ddbc68f-gxl56" event={"ID":"89663bcc-cc29-44ed-a65e-ab5e4efa7813","Type":"ContainerStarted","Data":"4076dd8979cb02b9ac86e1ec74a0514a3113f4f37ed2cb8bbdd9048b54b98f9b"} Jan 31 04:05:39 crc kubenswrapper[4827]: I0131 04:05:39.917260 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-6c6f779bd6-5qmvs" event={"ID":"20064771-e5a8-4227-b10d-39905587be45","Type":"ContainerStarted","Data":"e65bd6854900acdd583a2aab128a8852e0dc5cfdb421ba57b3c6c93d5b027699"} Jan 31 04:05:39 crc kubenswrapper[4827]: I0131 04:05:39.917377 4827 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-6c6f779bd6-5qmvs" Jan 31 04:05:39 crc kubenswrapper[4827]: I0131 04:05:39.917406 4827 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-6c6f779bd6-5qmvs" Jan 31 04:05:39 crc kubenswrapper[4827]: I0131 04:05:39.924023 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7b946d459c-mvkgg" event={"ID":"45243f1e-2824-411d-a544-b5d2e8e099f9","Type":"ContainerDied","Data":"d3f789ba6ebc85d396094487c390295c071217d0352256f35429a6933295ace1"} Jan 31 04:05:39 crc kubenswrapper[4827]: I0131 04:05:39.924070 4827 scope.go:117] "RemoveContainer" containerID="b16b6f42a985e4e577f3d8c44d4e32fb4c872a8b44a5c5d53f43f7cbfc18a230" Jan 31 04:05:39 crc kubenswrapper[4827]: I0131 04:05:39.925868 4827 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7b946d459c-mvkgg" Jan 31 04:05:39 crc kubenswrapper[4827]: I0131 04:05:39.927990 4827 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-798dd656d-7874f" podStartSLOduration=7.927971148 podStartE2EDuration="7.927971148s" podCreationTimestamp="2026-01-31 04:05:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 04:05:39.923770389 +0000 UTC m=+1132.610850838" watchObservedRunningTime="2026-01-31 04:05:39.927971148 +0000 UTC m=+1132.615051597" Jan 31 04:05:39 crc kubenswrapper[4827]: I0131 04:05:39.932582 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-b7768667c-2kxv5" event={"ID":"325f82ae-928b-44ea-bef3-e567002d4814","Type":"ContainerStarted","Data":"4b579fa453bdcd3f978f44a4c7c7bd2cec62f5cdb6e9b6e43d1111b4a9dff405"} Jan 31 04:05:39 crc kubenswrapper[4827]: I0131 04:05:39.932623 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-b7768667c-2kxv5" event={"ID":"325f82ae-928b-44ea-bef3-e567002d4814","Type":"ContainerStarted","Data":"1d125b8315aa27b75be2945fb546127c40a6ff8fc2971f8e6ce92ad5e655652c"} Jan 31 04:05:39 crc kubenswrapper[4827]: I0131 04:05:39.932809 4827 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/keystone-b7768667c-2kxv5" Jan 31 04:05:39 crc kubenswrapper[4827]: I0131 04:05:39.953317 4827 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-api-6c6f779bd6-5qmvs" podStartSLOduration=5.953298128 podStartE2EDuration="5.953298128s" podCreationTimestamp="2026-01-31 04:05:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 04:05:39.946273569 +0000 UTC m=+1132.633354018" watchObservedRunningTime="2026-01-31 04:05:39.953298128 +0000 UTC 
m=+1132.640378567" Jan 31 04:05:39 crc kubenswrapper[4827]: I0131 04:05:39.970626 4827 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-69d4f5d848-hjbmc" podStartSLOduration=5.97060878 podStartE2EDuration="5.97060878s" podCreationTimestamp="2026-01-31 04:05:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 04:05:39.964824435 +0000 UTC m=+1132.651904884" watchObservedRunningTime="2026-01-31 04:05:39.97060878 +0000 UTC m=+1132.657689229" Jan 31 04:05:39 crc kubenswrapper[4827]: I0131 04:05:39.983493 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/aae071c1-75f9-40e2-aa1a-69aa0afba58d-httpd-config\") pod \"neutron-7969d585-whgv9\" (UID: \"aae071c1-75f9-40e2-aa1a-69aa0afba58d\") " pod="openstack/neutron-7969d585-whgv9" Jan 31 04:05:39 crc kubenswrapper[4827]: I0131 04:05:39.983800 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/aae071c1-75f9-40e2-aa1a-69aa0afba58d-ovndb-tls-certs\") pod \"neutron-7969d585-whgv9\" (UID: \"aae071c1-75f9-40e2-aa1a-69aa0afba58d\") " pod="openstack/neutron-7969d585-whgv9" Jan 31 04:05:39 crc kubenswrapper[4827]: I0131 04:05:39.983906 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/aae071c1-75f9-40e2-aa1a-69aa0afba58d-public-tls-certs\") pod \"neutron-7969d585-whgv9\" (UID: \"aae071c1-75f9-40e2-aa1a-69aa0afba58d\") " pod="openstack/neutron-7969d585-whgv9" Jan 31 04:05:39 crc kubenswrapper[4827]: I0131 04:05:39.983923 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/aae071c1-75f9-40e2-aa1a-69aa0afba58d-config\") pod \"neutron-7969d585-whgv9\" (UID: 
\"aae071c1-75f9-40e2-aa1a-69aa0afba58d\") " pod="openstack/neutron-7969d585-whgv9" Jan 31 04:05:39 crc kubenswrapper[4827]: I0131 04:05:39.983940 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/aae071c1-75f9-40e2-aa1a-69aa0afba58d-combined-ca-bundle\") pod \"neutron-7969d585-whgv9\" (UID: \"aae071c1-75f9-40e2-aa1a-69aa0afba58d\") " pod="openstack/neutron-7969d585-whgv9" Jan 31 04:05:39 crc kubenswrapper[4827]: I0131 04:05:39.983959 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/aae071c1-75f9-40e2-aa1a-69aa0afba58d-internal-tls-certs\") pod \"neutron-7969d585-whgv9\" (UID: \"aae071c1-75f9-40e2-aa1a-69aa0afba58d\") " pod="openstack/neutron-7969d585-whgv9" Jan 31 04:05:39 crc kubenswrapper[4827]: I0131 04:05:39.983975 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hk46h\" (UniqueName: \"kubernetes.io/projected/aae071c1-75f9-40e2-aa1a-69aa0afba58d-kube-api-access-hk46h\") pod \"neutron-7969d585-whgv9\" (UID: \"aae071c1-75f9-40e2-aa1a-69aa0afba58d\") " pod="openstack/neutron-7969d585-whgv9" Jan 31 04:05:39 crc kubenswrapper[4827]: I0131 04:05:39.992055 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/aae071c1-75f9-40e2-aa1a-69aa0afba58d-combined-ca-bundle\") pod \"neutron-7969d585-whgv9\" (UID: \"aae071c1-75f9-40e2-aa1a-69aa0afba58d\") " pod="openstack/neutron-7969d585-whgv9" Jan 31 04:05:40 crc kubenswrapper[4827]: I0131 04:05:39.998349 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/aae071c1-75f9-40e2-aa1a-69aa0afba58d-ovndb-tls-certs\") pod \"neutron-7969d585-whgv9\" (UID: \"aae071c1-75f9-40e2-aa1a-69aa0afba58d\") " pod="openstack/neutron-7969d585-whgv9" Jan 31 04:05:40 crc 
kubenswrapper[4827]: I0131 04:05:39.999072 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/aae071c1-75f9-40e2-aa1a-69aa0afba58d-config\") pod \"neutron-7969d585-whgv9\" (UID: \"aae071c1-75f9-40e2-aa1a-69aa0afba58d\") " pod="openstack/neutron-7969d585-whgv9" Jan 31 04:05:40 crc kubenswrapper[4827]: I0131 04:05:39.999370 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/aae071c1-75f9-40e2-aa1a-69aa0afba58d-httpd-config\") pod \"neutron-7969d585-whgv9\" (UID: \"aae071c1-75f9-40e2-aa1a-69aa0afba58d\") " pod="openstack/neutron-7969d585-whgv9" Jan 31 04:05:40 crc kubenswrapper[4827]: I0131 04:05:40.001724 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hk46h\" (UniqueName: \"kubernetes.io/projected/aae071c1-75f9-40e2-aa1a-69aa0afba58d-kube-api-access-hk46h\") pod \"neutron-7969d585-whgv9\" (UID: \"aae071c1-75f9-40e2-aa1a-69aa0afba58d\") " pod="openstack/neutron-7969d585-whgv9" Jan 31 04:05:40 crc kubenswrapper[4827]: I0131 04:05:40.003077 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/aae071c1-75f9-40e2-aa1a-69aa0afba58d-internal-tls-certs\") pod \"neutron-7969d585-whgv9\" (UID: \"aae071c1-75f9-40e2-aa1a-69aa0afba58d\") " pod="openstack/neutron-7969d585-whgv9" Jan 31 04:05:40 crc kubenswrapper[4827]: I0131 04:05:40.003423 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/aae071c1-75f9-40e2-aa1a-69aa0afba58d-public-tls-certs\") pod \"neutron-7969d585-whgv9\" (UID: \"aae071c1-75f9-40e2-aa1a-69aa0afba58d\") " pod="openstack/neutron-7969d585-whgv9" Jan 31 04:05:40 crc kubenswrapper[4827]: I0131 04:05:40.050653 4827 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7b946d459c-mvkgg"] Jan 31 04:05:40 
crc kubenswrapper[4827]: I0131 04:05:40.057908 4827 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-7b946d459c-mvkgg"] Jan 31 04:05:40 crc kubenswrapper[4827]: I0131 04:05:40.060790 4827 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-b7768667c-2kxv5" podStartSLOduration=2.060774502 podStartE2EDuration="2.060774502s" podCreationTimestamp="2026-01-31 04:05:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 04:05:40.03887189 +0000 UTC m=+1132.725952339" watchObservedRunningTime="2026-01-31 04:05:40.060774502 +0000 UTC m=+1132.747854951" Jan 31 04:05:40 crc kubenswrapper[4827]: I0131 04:05:40.096964 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-7969d585-whgv9" Jan 31 04:05:40 crc kubenswrapper[4827]: I0131 04:05:40.131839 4827 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="45243f1e-2824-411d-a544-b5d2e8e099f9" path="/var/lib/kubelet/pods/45243f1e-2824-411d-a544-b5d2e8e099f9/volumes" Jan 31 04:05:40 crc kubenswrapper[4827]: I0131 04:05:40.946851 4827 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-798dd656d-7874f" podUID="ba9b68b9-d7bc-4c9c-90d3-a4b88b1d30de" containerName="neutron-api" containerID="cri-o://deec999deac645424177fe16ab363dada849390ca10cd2d7eaec06dc147b6df1" gracePeriod=30 Jan 31 04:05:40 crc kubenswrapper[4827]: I0131 04:05:40.947157 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-87c57bc7d-fgwdw" event={"ID":"2379f58c-95e4-4242-93de-82813ecbf089","Type":"ContainerStarted","Data":"6f0323ef39c21137a91ef37634c50b08484a3caca7ece0deb69af6fb171b8223"} Jan 31 04:05:40 crc kubenswrapper[4827]: I0131 04:05:40.948971 4827 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-798dd656d-7874f" 
podUID="ba9b68b9-d7bc-4c9c-90d3-a4b88b1d30de" containerName="neutron-httpd" containerID="cri-o://1715870ab3ad43f98614002eb9c0fbd66bae86854cfd63cd4e476fb8f3179f74" gracePeriod=30 Jan 31 04:05:41 crc kubenswrapper[4827]: I0131 04:05:41.960745 4827 generic.go:334] "Generic (PLEG): container finished" podID="ba9b68b9-d7bc-4c9c-90d3-a4b88b1d30de" containerID="1715870ab3ad43f98614002eb9c0fbd66bae86854cfd63cd4e476fb8f3179f74" exitCode=0 Jan 31 04:05:41 crc kubenswrapper[4827]: I0131 04:05:41.961100 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-798dd656d-7874f" event={"ID":"ba9b68b9-d7bc-4c9c-90d3-a4b88b1d30de","Type":"ContainerDied","Data":"1715870ab3ad43f98614002eb9c0fbd66bae86854cfd63cd4e476fb8f3179f74"} Jan 31 04:05:42 crc kubenswrapper[4827]: I0131 04:05:42.524703 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-7969d585-whgv9"] Jan 31 04:05:42 crc kubenswrapper[4827]: W0131 04:05:42.556175 4827 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podaae071c1_75f9_40e2_aa1a_69aa0afba58d.slice/crio-8b5b78bcde814022031599307cc6f57d1923c52afb3fc2439e92a27b880b6e32 WatchSource:0}: Error finding container 8b5b78bcde814022031599307cc6f57d1923c52afb3fc2439e92a27b880b6e32: Status 404 returned error can't find the container with id 8b5b78bcde814022031599307cc6f57d1923c52afb3fc2439e92a27b880b6e32 Jan 31 04:05:42 crc kubenswrapper[4827]: I0131 04:05:42.978554 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-68ddbc68f-gxl56" event={"ID":"89663bcc-cc29-44ed-a65e-ab5e4efa7813","Type":"ContainerStarted","Data":"ada02e27206a0f8f3e8255ff494c5a081cceaa808a4124f6957c499782b5fa5b"} Jan 31 04:05:42 crc kubenswrapper[4827]: I0131 04:05:42.978834 4827 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/neutron-68ddbc68f-gxl56" Jan 31 04:05:42 crc kubenswrapper[4827]: I0131 04:05:42.983104 4827 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-77bbb7849c-cvr8p" event={"ID":"e1d72620-a941-44d0-b09a-401e1827f32f","Type":"ContainerStarted","Data":"77b72291392212ba7d5663aba4e274c6c3a6c60ccc03e61bc94f0eb194070372"} Jan 31 04:05:42 crc kubenswrapper[4827]: I0131 04:05:42.983130 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-77bbb7849c-cvr8p" event={"ID":"e1d72620-a941-44d0-b09a-401e1827f32f","Type":"ContainerStarted","Data":"7a281abc16213495565a99d8a9ab5e3e6460c7de2d03dcc68200d40b4559bbfa"} Jan 31 04:05:42 crc kubenswrapper[4827]: I0131 04:05:42.985207 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-8f89dd846-fztml" event={"ID":"235b452b-45b7-41a8-82c0-d3ebe8b4c19f","Type":"ContainerStarted","Data":"7819495fe882427ed9603a9707e883cfab983be03333d364e6537d82847eb1c9"} Jan 31 04:05:42 crc kubenswrapper[4827]: I0131 04:05:42.985249 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-8f89dd846-fztml" event={"ID":"235b452b-45b7-41a8-82c0-d3ebe8b4c19f","Type":"ContainerStarted","Data":"768d0a36af4b243fa7ff1c0f8fb930b60bb8002776d8d7705c115cd760407d8b"} Jan 31 04:05:42 crc kubenswrapper[4827]: I0131 04:05:42.986860 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-87c57bc7d-fgwdw" event={"ID":"2379f58c-95e4-4242-93de-82813ecbf089","Type":"ContainerStarted","Data":"02091ba400a30568eb018890f1d435f65daf0a762f5dd961b208271015015ce4"} Jan 31 04:05:42 crc kubenswrapper[4827]: I0131 04:05:42.987016 4827 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-87c57bc7d-fgwdw" Jan 31 04:05:42 crc kubenswrapper[4827]: I0131 04:05:42.987050 4827 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-87c57bc7d-fgwdw" Jan 31 04:05:42 crc kubenswrapper[4827]: I0131 04:05:42.988772 4827 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="openstack/neutron-7969d585-whgv9" event={"ID":"aae071c1-75f9-40e2-aa1a-69aa0afba58d","Type":"ContainerStarted","Data":"8b5b78bcde814022031599307cc6f57d1923c52afb3fc2439e92a27b880b6e32"} Jan 31 04:05:42 crc kubenswrapper[4827]: I0131 04:05:42.991761 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6bb684768f-qc6dq" event={"ID":"8273d85f-dd76-4bc7-a50e-54da87bb1927","Type":"ContainerStarted","Data":"9e3910077181168834d3cc810b118195fab67b26c9b4c356c3c6473b4142581d"} Jan 31 04:05:42 crc kubenswrapper[4827]: I0131 04:05:42.991925 4827 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-6bb684768f-qc6dq" Jan 31 04:05:43 crc kubenswrapper[4827]: I0131 04:05:43.025600 4827 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-68ddbc68f-gxl56" podStartSLOduration=10.025583208 podStartE2EDuration="10.025583208s" podCreationTimestamp="2026-01-31 04:05:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 04:05:43.014902395 +0000 UTC m=+1135.701982844" watchObservedRunningTime="2026-01-31 04:05:43.025583208 +0000 UTC m=+1135.712663657" Jan 31 04:05:43 crc kubenswrapper[4827]: I0131 04:05:43.062787 4827 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-6bb684768f-qc6dq" podStartSLOduration=9.062771474 podStartE2EDuration="9.062771474s" podCreationTimestamp="2026-01-31 04:05:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 04:05:43.058549425 +0000 UTC m=+1135.745629874" watchObservedRunningTime="2026-01-31 04:05:43.062771474 +0000 UTC m=+1135.749851923" Jan 31 04:05:43 crc kubenswrapper[4827]: I0131 04:05:43.065125 4827 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openstack/barbican-worker-77bbb7849c-cvr8p" podStartSLOduration=5.591346745 podStartE2EDuration="9.065115212s" podCreationTimestamp="2026-01-31 04:05:34 +0000 UTC" firstStartedPulling="2026-01-31 04:05:38.526765027 +0000 UTC m=+1131.213845476" lastFinishedPulling="2026-01-31 04:05:42.000533464 +0000 UTC m=+1134.687613943" observedRunningTime="2026-01-31 04:05:43.034489042 +0000 UTC m=+1135.721569491" watchObservedRunningTime="2026-01-31 04:05:43.065115212 +0000 UTC m=+1135.752195661" Jan 31 04:05:43 crc kubenswrapper[4827]: I0131 04:05:43.084785 4827 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-api-87c57bc7d-fgwdw" podStartSLOduration=5.08476609 podStartE2EDuration="5.08476609s" podCreationTimestamp="2026-01-31 04:05:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 04:05:43.080483848 +0000 UTC m=+1135.767564297" watchObservedRunningTime="2026-01-31 04:05:43.08476609 +0000 UTC m=+1135.771846539" Jan 31 04:05:43 crc kubenswrapper[4827]: I0131 04:05:43.150769 4827 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-keystone-listener-8f89dd846-fztml" podStartSLOduration=5.679509759 podStartE2EDuration="9.150743764s" podCreationTimestamp="2026-01-31 04:05:34 +0000 UTC" firstStartedPulling="2026-01-31 04:05:38.520974783 +0000 UTC m=+1131.208055232" lastFinishedPulling="2026-01-31 04:05:41.992208778 +0000 UTC m=+1134.679289237" observedRunningTime="2026-01-31 04:05:43.101339741 +0000 UTC m=+1135.788420190" watchObservedRunningTime="2026-01-31 04:05:43.150743764 +0000 UTC m=+1135.837824213" Jan 31 04:05:43 crc kubenswrapper[4827]: I0131 04:05:43.168847 4827 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-worker-7c79fbcb95-qrncz"] Jan 31 04:05:43 crc kubenswrapper[4827]: I0131 04:05:43.170580 4827 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-worker-7c79fbcb95-qrncz" Jan 31 04:05:43 crc kubenswrapper[4827]: I0131 04:05:43.177769 4827 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-keystone-listener-649fbbf9d6-vkg84"] Jan 31 04:05:43 crc kubenswrapper[4827]: I0131 04:05:43.179551 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-keystone-listener-649fbbf9d6-vkg84" Jan 31 04:05:43 crc kubenswrapper[4827]: I0131 04:05:43.196627 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-worker-7c79fbcb95-qrncz"] Jan 31 04:05:43 crc kubenswrapper[4827]: I0131 04:05:43.256415 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-keystone-listener-649fbbf9d6-vkg84"] Jan 31 04:05:43 crc kubenswrapper[4827]: I0131 04:05:43.337426 4827 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-api-6c6f779bd6-5qmvs"] Jan 31 04:05:43 crc kubenswrapper[4827]: I0131 04:05:43.339165 4827 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-api-6c6f779bd6-5qmvs" podUID="20064771-e5a8-4227-b10d-39905587be45" containerName="barbican-api-log" containerID="cri-o://13223d17e3281ee66fede020e4b7ee8d2bdb776a49450bce73de431993a8b47e" gracePeriod=30 Jan 31 04:05:43 crc kubenswrapper[4827]: I0131 04:05:43.339861 4827 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-api-6c6f779bd6-5qmvs" podUID="20064771-e5a8-4227-b10d-39905587be45" containerName="barbican-api" containerID="cri-o://e65bd6854900acdd583a2aab128a8852e0dc5cfdb421ba57b3c6c93d5b027699" gracePeriod=30 Jan 31 04:05:43 crc kubenswrapper[4827]: I0131 04:05:43.349668 4827 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-6c6f779bd6-5qmvs" podUID="20064771-e5a8-4227-b10d-39905587be45" containerName="barbican-api-log" probeResult="failure" output="Get 
\"http://10.217.0.147:9311/healthcheck\": EOF" Jan 31 04:05:43 crc kubenswrapper[4827]: I0131 04:05:43.357544 4827 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-api-74bf46887d-nb5df"] Jan 31 04:05:43 crc kubenswrapper[4827]: I0131 04:05:43.360572 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-74bf46887d-nb5df" Jan 31 04:05:43 crc kubenswrapper[4827]: I0131 04:05:43.364344 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-74bf46887d-nb5df"] Jan 31 04:05:43 crc kubenswrapper[4827]: I0131 04:05:43.393510 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ac1aadec-fcb7-428e-9020-e424c393f018-combined-ca-bundle\") pod \"barbican-keystone-listener-649fbbf9d6-vkg84\" (UID: \"ac1aadec-fcb7-428e-9020-e424c393f018\") " pod="openstack/barbican-keystone-listener-649fbbf9d6-vkg84" Jan 31 04:05:43 crc kubenswrapper[4827]: I0131 04:05:43.393547 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ac1aadec-fcb7-428e-9020-e424c393f018-config-data\") pod \"barbican-keystone-listener-649fbbf9d6-vkg84\" (UID: \"ac1aadec-fcb7-428e-9020-e424c393f018\") " pod="openstack/barbican-keystone-listener-649fbbf9d6-vkg84" Jan 31 04:05:43 crc kubenswrapper[4827]: I0131 04:05:43.393570 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e83ad5c9-edbb-4764-b932-52810f0f57ac-logs\") pod \"barbican-worker-7c79fbcb95-qrncz\" (UID: \"e83ad5c9-edbb-4764-b932-52810f0f57ac\") " pod="openstack/barbican-worker-7c79fbcb95-qrncz" Jan 31 04:05:43 crc kubenswrapper[4827]: I0131 04:05:43.393658 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e83ad5c9-edbb-4764-b932-52810f0f57ac-combined-ca-bundle\") pod \"barbican-worker-7c79fbcb95-qrncz\" (UID: \"e83ad5c9-edbb-4764-b932-52810f0f57ac\") " pod="openstack/barbican-worker-7c79fbcb95-qrncz" Jan 31 04:05:43 crc kubenswrapper[4827]: I0131 04:05:43.393681 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9rlmg\" (UniqueName: \"kubernetes.io/projected/ac1aadec-fcb7-428e-9020-e424c393f018-kube-api-access-9rlmg\") pod \"barbican-keystone-listener-649fbbf9d6-vkg84\" (UID: \"ac1aadec-fcb7-428e-9020-e424c393f018\") " pod="openstack/barbican-keystone-listener-649fbbf9d6-vkg84" Jan 31 04:05:43 crc kubenswrapper[4827]: I0131 04:05:43.393708 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4dpdz\" (UniqueName: \"kubernetes.io/projected/e83ad5c9-edbb-4764-b932-52810f0f57ac-kube-api-access-4dpdz\") pod \"barbican-worker-7c79fbcb95-qrncz\" (UID: \"e83ad5c9-edbb-4764-b932-52810f0f57ac\") " pod="openstack/barbican-worker-7c79fbcb95-qrncz" Jan 31 04:05:43 crc kubenswrapper[4827]: I0131 04:05:43.393739 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/ac1aadec-fcb7-428e-9020-e424c393f018-config-data-custom\") pod \"barbican-keystone-listener-649fbbf9d6-vkg84\" (UID: \"ac1aadec-fcb7-428e-9020-e424c393f018\") " pod="openstack/barbican-keystone-listener-649fbbf9d6-vkg84" Jan 31 04:05:43 crc kubenswrapper[4827]: I0131 04:05:43.393766 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/e83ad5c9-edbb-4764-b932-52810f0f57ac-config-data-custom\") pod \"barbican-worker-7c79fbcb95-qrncz\" (UID: \"e83ad5c9-edbb-4764-b932-52810f0f57ac\") " 
pod="openstack/barbican-worker-7c79fbcb95-qrncz" Jan 31 04:05:43 crc kubenswrapper[4827]: I0131 04:05:43.393794 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ac1aadec-fcb7-428e-9020-e424c393f018-logs\") pod \"barbican-keystone-listener-649fbbf9d6-vkg84\" (UID: \"ac1aadec-fcb7-428e-9020-e424c393f018\") " pod="openstack/barbican-keystone-listener-649fbbf9d6-vkg84" Jan 31 04:05:43 crc kubenswrapper[4827]: I0131 04:05:43.393817 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e83ad5c9-edbb-4764-b932-52810f0f57ac-config-data\") pod \"barbican-worker-7c79fbcb95-qrncz\" (UID: \"e83ad5c9-edbb-4764-b932-52810f0f57ac\") " pod="openstack/barbican-worker-7c79fbcb95-qrncz" Jan 31 04:05:43 crc kubenswrapper[4827]: I0131 04:05:43.495628 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/d5530571-a0ae-4835-809e-0dab61573e8c-config-data-custom\") pod \"barbican-api-74bf46887d-nb5df\" (UID: \"d5530571-a0ae-4835-809e-0dab61573e8c\") " pod="openstack/barbican-api-74bf46887d-nb5df" Jan 31 04:05:43 crc kubenswrapper[4827]: I0131 04:05:43.495679 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d5530571-a0ae-4835-809e-0dab61573e8c-config-data\") pod \"barbican-api-74bf46887d-nb5df\" (UID: \"d5530571-a0ae-4835-809e-0dab61573e8c\") " pod="openstack/barbican-api-74bf46887d-nb5df" Jan 31 04:05:43 crc kubenswrapper[4827]: I0131 04:05:43.495706 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e83ad5c9-edbb-4764-b932-52810f0f57ac-combined-ca-bundle\") pod \"barbican-worker-7c79fbcb95-qrncz\" (UID: 
\"e83ad5c9-edbb-4764-b932-52810f0f57ac\") " pod="openstack/barbican-worker-7c79fbcb95-qrncz" Jan 31 04:05:43 crc kubenswrapper[4827]: I0131 04:05:43.496269 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9rlmg\" (UniqueName: \"kubernetes.io/projected/ac1aadec-fcb7-428e-9020-e424c393f018-kube-api-access-9rlmg\") pod \"barbican-keystone-listener-649fbbf9d6-vkg84\" (UID: \"ac1aadec-fcb7-428e-9020-e424c393f018\") " pod="openstack/barbican-keystone-listener-649fbbf9d6-vkg84" Jan 31 04:05:43 crc kubenswrapper[4827]: I0131 04:05:43.496398 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4dpdz\" (UniqueName: \"kubernetes.io/projected/e83ad5c9-edbb-4764-b932-52810f0f57ac-kube-api-access-4dpdz\") pod \"barbican-worker-7c79fbcb95-qrncz\" (UID: \"e83ad5c9-edbb-4764-b932-52810f0f57ac\") " pod="openstack/barbican-worker-7c79fbcb95-qrncz" Jan 31 04:05:43 crc kubenswrapper[4827]: I0131 04:05:43.496506 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/ac1aadec-fcb7-428e-9020-e424c393f018-config-data-custom\") pod \"barbican-keystone-listener-649fbbf9d6-vkg84\" (UID: \"ac1aadec-fcb7-428e-9020-e424c393f018\") " pod="openstack/barbican-keystone-listener-649fbbf9d6-vkg84" Jan 31 04:05:43 crc kubenswrapper[4827]: I0131 04:05:43.496561 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/d5530571-a0ae-4835-809e-0dab61573e8c-public-tls-certs\") pod \"barbican-api-74bf46887d-nb5df\" (UID: \"d5530571-a0ae-4835-809e-0dab61573e8c\") " pod="openstack/barbican-api-74bf46887d-nb5df" Jan 31 04:05:43 crc kubenswrapper[4827]: I0131 04:05:43.496607 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: 
\"kubernetes.io/secret/e83ad5c9-edbb-4764-b932-52810f0f57ac-config-data-custom\") pod \"barbican-worker-7c79fbcb95-qrncz\" (UID: \"e83ad5c9-edbb-4764-b932-52810f0f57ac\") " pod="openstack/barbican-worker-7c79fbcb95-qrncz" Jan 31 04:05:43 crc kubenswrapper[4827]: I0131 04:05:43.496673 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ac1aadec-fcb7-428e-9020-e424c393f018-logs\") pod \"barbican-keystone-listener-649fbbf9d6-vkg84\" (UID: \"ac1aadec-fcb7-428e-9020-e424c393f018\") " pod="openstack/barbican-keystone-listener-649fbbf9d6-vkg84" Jan 31 04:05:43 crc kubenswrapper[4827]: I0131 04:05:43.496705 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/d5530571-a0ae-4835-809e-0dab61573e8c-internal-tls-certs\") pod \"barbican-api-74bf46887d-nb5df\" (UID: \"d5530571-a0ae-4835-809e-0dab61573e8c\") " pod="openstack/barbican-api-74bf46887d-nb5df" Jan 31 04:05:43 crc kubenswrapper[4827]: I0131 04:05:43.496738 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e83ad5c9-edbb-4764-b932-52810f0f57ac-config-data\") pod \"barbican-worker-7c79fbcb95-qrncz\" (UID: \"e83ad5c9-edbb-4764-b932-52810f0f57ac\") " pod="openstack/barbican-worker-7c79fbcb95-qrncz" Jan 31 04:05:43 crc kubenswrapper[4827]: I0131 04:05:43.496791 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h7tpv\" (UniqueName: \"kubernetes.io/projected/d5530571-a0ae-4835-809e-0dab61573e8c-kube-api-access-h7tpv\") pod \"barbican-api-74bf46887d-nb5df\" (UID: \"d5530571-a0ae-4835-809e-0dab61573e8c\") " pod="openstack/barbican-api-74bf46887d-nb5df" Jan 31 04:05:43 crc kubenswrapper[4827]: I0131 04:05:43.496815 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ac1aadec-fcb7-428e-9020-e424c393f018-combined-ca-bundle\") pod \"barbican-keystone-listener-649fbbf9d6-vkg84\" (UID: \"ac1aadec-fcb7-428e-9020-e424c393f018\") " pod="openstack/barbican-keystone-listener-649fbbf9d6-vkg84" Jan 31 04:05:43 crc kubenswrapper[4827]: I0131 04:05:43.496837 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ac1aadec-fcb7-428e-9020-e424c393f018-config-data\") pod \"barbican-keystone-listener-649fbbf9d6-vkg84\" (UID: \"ac1aadec-fcb7-428e-9020-e424c393f018\") " pod="openstack/barbican-keystone-listener-649fbbf9d6-vkg84" Jan 31 04:05:43 crc kubenswrapper[4827]: I0131 04:05:43.496861 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e83ad5c9-edbb-4764-b932-52810f0f57ac-logs\") pod \"barbican-worker-7c79fbcb95-qrncz\" (UID: \"e83ad5c9-edbb-4764-b932-52810f0f57ac\") " pod="openstack/barbican-worker-7c79fbcb95-qrncz" Jan 31 04:05:43 crc kubenswrapper[4827]: I0131 04:05:43.496900 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d5530571-a0ae-4835-809e-0dab61573e8c-logs\") pod \"barbican-api-74bf46887d-nb5df\" (UID: \"d5530571-a0ae-4835-809e-0dab61573e8c\") " pod="openstack/barbican-api-74bf46887d-nb5df" Jan 31 04:05:43 crc kubenswrapper[4827]: I0131 04:05:43.496941 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d5530571-a0ae-4835-809e-0dab61573e8c-combined-ca-bundle\") pod \"barbican-api-74bf46887d-nb5df\" (UID: \"d5530571-a0ae-4835-809e-0dab61573e8c\") " pod="openstack/barbican-api-74bf46887d-nb5df" Jan 31 04:05:43 crc kubenswrapper[4827]: I0131 04:05:43.497104 4827 operation_generator.go:637] "MountVolume.SetUp succeeded 
for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ac1aadec-fcb7-428e-9020-e424c393f018-logs\") pod \"barbican-keystone-listener-649fbbf9d6-vkg84\" (UID: \"ac1aadec-fcb7-428e-9020-e424c393f018\") " pod="openstack/barbican-keystone-listener-649fbbf9d6-vkg84" Jan 31 04:05:43 crc kubenswrapper[4827]: I0131 04:05:43.497396 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e83ad5c9-edbb-4764-b932-52810f0f57ac-logs\") pod \"barbican-worker-7c79fbcb95-qrncz\" (UID: \"e83ad5c9-edbb-4764-b932-52810f0f57ac\") " pod="openstack/barbican-worker-7c79fbcb95-qrncz" Jan 31 04:05:43 crc kubenswrapper[4827]: I0131 04:05:43.503393 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/ac1aadec-fcb7-428e-9020-e424c393f018-config-data-custom\") pod \"barbican-keystone-listener-649fbbf9d6-vkg84\" (UID: \"ac1aadec-fcb7-428e-9020-e424c393f018\") " pod="openstack/barbican-keystone-listener-649fbbf9d6-vkg84" Jan 31 04:05:43 crc kubenswrapper[4827]: I0131 04:05:43.503456 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e83ad5c9-edbb-4764-b932-52810f0f57ac-combined-ca-bundle\") pod \"barbican-worker-7c79fbcb95-qrncz\" (UID: \"e83ad5c9-edbb-4764-b932-52810f0f57ac\") " pod="openstack/barbican-worker-7c79fbcb95-qrncz" Jan 31 04:05:43 crc kubenswrapper[4827]: I0131 04:05:43.503726 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e83ad5c9-edbb-4764-b932-52810f0f57ac-config-data\") pod \"barbican-worker-7c79fbcb95-qrncz\" (UID: \"e83ad5c9-edbb-4764-b932-52810f0f57ac\") " pod="openstack/barbican-worker-7c79fbcb95-qrncz" Jan 31 04:05:43 crc kubenswrapper[4827]: I0131 04:05:43.505195 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/ac1aadec-fcb7-428e-9020-e424c393f018-config-data\") pod \"barbican-keystone-listener-649fbbf9d6-vkg84\" (UID: \"ac1aadec-fcb7-428e-9020-e424c393f018\") " pod="openstack/barbican-keystone-listener-649fbbf9d6-vkg84" Jan 31 04:05:43 crc kubenswrapper[4827]: I0131 04:05:43.512721 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/e83ad5c9-edbb-4764-b932-52810f0f57ac-config-data-custom\") pod \"barbican-worker-7c79fbcb95-qrncz\" (UID: \"e83ad5c9-edbb-4764-b932-52810f0f57ac\") " pod="openstack/barbican-worker-7c79fbcb95-qrncz" Jan 31 04:05:43 crc kubenswrapper[4827]: I0131 04:05:43.515428 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ac1aadec-fcb7-428e-9020-e424c393f018-combined-ca-bundle\") pod \"barbican-keystone-listener-649fbbf9d6-vkg84\" (UID: \"ac1aadec-fcb7-428e-9020-e424c393f018\") " pod="openstack/barbican-keystone-listener-649fbbf9d6-vkg84" Jan 31 04:05:43 crc kubenswrapper[4827]: I0131 04:05:43.517495 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9rlmg\" (UniqueName: \"kubernetes.io/projected/ac1aadec-fcb7-428e-9020-e424c393f018-kube-api-access-9rlmg\") pod \"barbican-keystone-listener-649fbbf9d6-vkg84\" (UID: \"ac1aadec-fcb7-428e-9020-e424c393f018\") " pod="openstack/barbican-keystone-listener-649fbbf9d6-vkg84" Jan 31 04:05:43 crc kubenswrapper[4827]: I0131 04:05:43.518409 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4dpdz\" (UniqueName: \"kubernetes.io/projected/e83ad5c9-edbb-4764-b932-52810f0f57ac-kube-api-access-4dpdz\") pod \"barbican-worker-7c79fbcb95-qrncz\" (UID: \"e83ad5c9-edbb-4764-b932-52810f0f57ac\") " pod="openstack/barbican-worker-7c79fbcb95-qrncz" Jan 31 04:05:43 crc kubenswrapper[4827]: I0131 04:05:43.598012 4827 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/d5530571-a0ae-4835-809e-0dab61573e8c-public-tls-certs\") pod \"barbican-api-74bf46887d-nb5df\" (UID: \"d5530571-a0ae-4835-809e-0dab61573e8c\") " pod="openstack/barbican-api-74bf46887d-nb5df" Jan 31 04:05:43 crc kubenswrapper[4827]: I0131 04:05:43.598382 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/d5530571-a0ae-4835-809e-0dab61573e8c-internal-tls-certs\") pod \"barbican-api-74bf46887d-nb5df\" (UID: \"d5530571-a0ae-4835-809e-0dab61573e8c\") " pod="openstack/barbican-api-74bf46887d-nb5df" Jan 31 04:05:43 crc kubenswrapper[4827]: I0131 04:05:43.598420 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h7tpv\" (UniqueName: \"kubernetes.io/projected/d5530571-a0ae-4835-809e-0dab61573e8c-kube-api-access-h7tpv\") pod \"barbican-api-74bf46887d-nb5df\" (UID: \"d5530571-a0ae-4835-809e-0dab61573e8c\") " pod="openstack/barbican-api-74bf46887d-nb5df" Jan 31 04:05:43 crc kubenswrapper[4827]: I0131 04:05:43.598445 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d5530571-a0ae-4835-809e-0dab61573e8c-logs\") pod \"barbican-api-74bf46887d-nb5df\" (UID: \"d5530571-a0ae-4835-809e-0dab61573e8c\") " pod="openstack/barbican-api-74bf46887d-nb5df" Jan 31 04:05:43 crc kubenswrapper[4827]: I0131 04:05:43.598549 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d5530571-a0ae-4835-809e-0dab61573e8c-combined-ca-bundle\") pod \"barbican-api-74bf46887d-nb5df\" (UID: \"d5530571-a0ae-4835-809e-0dab61573e8c\") " pod="openstack/barbican-api-74bf46887d-nb5df" Jan 31 04:05:43 crc kubenswrapper[4827]: I0131 04:05:43.598614 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"config-data-custom\" (UniqueName: \"kubernetes.io/secret/d5530571-a0ae-4835-809e-0dab61573e8c-config-data-custom\") pod \"barbican-api-74bf46887d-nb5df\" (UID: \"d5530571-a0ae-4835-809e-0dab61573e8c\") " pod="openstack/barbican-api-74bf46887d-nb5df" Jan 31 04:05:43 crc kubenswrapper[4827]: I0131 04:05:43.598637 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d5530571-a0ae-4835-809e-0dab61573e8c-config-data\") pod \"barbican-api-74bf46887d-nb5df\" (UID: \"d5530571-a0ae-4835-809e-0dab61573e8c\") " pod="openstack/barbican-api-74bf46887d-nb5df" Jan 31 04:05:43 crc kubenswrapper[4827]: I0131 04:05:43.599325 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d5530571-a0ae-4835-809e-0dab61573e8c-logs\") pod \"barbican-api-74bf46887d-nb5df\" (UID: \"d5530571-a0ae-4835-809e-0dab61573e8c\") " pod="openstack/barbican-api-74bf46887d-nb5df" Jan 31 04:05:43 crc kubenswrapper[4827]: I0131 04:05:43.606282 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/d5530571-a0ae-4835-809e-0dab61573e8c-public-tls-certs\") pod \"barbican-api-74bf46887d-nb5df\" (UID: \"d5530571-a0ae-4835-809e-0dab61573e8c\") " pod="openstack/barbican-api-74bf46887d-nb5df" Jan 31 04:05:43 crc kubenswrapper[4827]: I0131 04:05:43.606309 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/d5530571-a0ae-4835-809e-0dab61573e8c-config-data-custom\") pod \"barbican-api-74bf46887d-nb5df\" (UID: \"d5530571-a0ae-4835-809e-0dab61573e8c\") " pod="openstack/barbican-api-74bf46887d-nb5df" Jan 31 04:05:43 crc kubenswrapper[4827]: I0131 04:05:43.607989 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/d5530571-a0ae-4835-809e-0dab61573e8c-config-data\") pod \"barbican-api-74bf46887d-nb5df\" (UID: \"d5530571-a0ae-4835-809e-0dab61573e8c\") " pod="openstack/barbican-api-74bf46887d-nb5df" Jan 31 04:05:43 crc kubenswrapper[4827]: I0131 04:05:43.611120 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d5530571-a0ae-4835-809e-0dab61573e8c-combined-ca-bundle\") pod \"barbican-api-74bf46887d-nb5df\" (UID: \"d5530571-a0ae-4835-809e-0dab61573e8c\") " pod="openstack/barbican-api-74bf46887d-nb5df" Jan 31 04:05:43 crc kubenswrapper[4827]: I0131 04:05:43.612263 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/d5530571-a0ae-4835-809e-0dab61573e8c-internal-tls-certs\") pod \"barbican-api-74bf46887d-nb5df\" (UID: \"d5530571-a0ae-4835-809e-0dab61573e8c\") " pod="openstack/barbican-api-74bf46887d-nb5df" Jan 31 04:05:43 crc kubenswrapper[4827]: I0131 04:05:43.614662 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h7tpv\" (UniqueName: \"kubernetes.io/projected/d5530571-a0ae-4835-809e-0dab61573e8c-kube-api-access-h7tpv\") pod \"barbican-api-74bf46887d-nb5df\" (UID: \"d5530571-a0ae-4835-809e-0dab61573e8c\") " pod="openstack/barbican-api-74bf46887d-nb5df" Jan 31 04:05:43 crc kubenswrapper[4827]: I0131 04:05:43.689319 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-74bf46887d-nb5df" Jan 31 04:05:43 crc kubenswrapper[4827]: I0131 04:05:43.792252 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-worker-7c79fbcb95-qrncz" Jan 31 04:05:43 crc kubenswrapper[4827]: I0131 04:05:43.813525 4827 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-keystone-listener-649fbbf9d6-vkg84" Jan 31 04:05:44 crc kubenswrapper[4827]: I0131 04:05:44.032586 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-7969d585-whgv9" event={"ID":"aae071c1-75f9-40e2-aa1a-69aa0afba58d","Type":"ContainerStarted","Data":"f35c1257584bb2f4166f1de2856915e6d5ae191eed7e65b690880d53869faf9b"} Jan 31 04:05:44 crc kubenswrapper[4827]: I0131 04:05:44.032844 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-7969d585-whgv9" event={"ID":"aae071c1-75f9-40e2-aa1a-69aa0afba58d","Type":"ContainerStarted","Data":"7acabb955471a69be8e3cc0b2ea2d279b9a680d1caf22122e88114c2d9399ff6"} Jan 31 04:05:44 crc kubenswrapper[4827]: I0131 04:05:44.033835 4827 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/neutron-7969d585-whgv9" Jan 31 04:05:44 crc kubenswrapper[4827]: I0131 04:05:44.069309 4827 generic.go:334] "Generic (PLEG): container finished" podID="20064771-e5a8-4227-b10d-39905587be45" containerID="13223d17e3281ee66fede020e4b7ee8d2bdb776a49450bce73de431993a8b47e" exitCode=143 Jan 31 04:05:44 crc kubenswrapper[4827]: I0131 04:05:44.070133 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-6c6f779bd6-5qmvs" event={"ID":"20064771-e5a8-4227-b10d-39905587be45","Type":"ContainerDied","Data":"13223d17e3281ee66fede020e4b7ee8d2bdb776a49450bce73de431993a8b47e"} Jan 31 04:05:44 crc kubenswrapper[4827]: I0131 04:05:44.077709 4827 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-7969d585-whgv9" podStartSLOduration=5.077690371 podStartE2EDuration="5.077690371s" podCreationTimestamp="2026-01-31 04:05:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 04:05:44.072421681 +0000 UTC m=+1136.759502140" watchObservedRunningTime="2026-01-31 04:05:44.077690371 +0000 UTC 
m=+1136.764770820" Jan 31 04:05:44 crc kubenswrapper[4827]: I0131 04:05:44.298175 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-74bf46887d-nb5df"] Jan 31 04:05:44 crc kubenswrapper[4827]: I0131 04:05:44.485031 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-worker-7c79fbcb95-qrncz"] Jan 31 04:05:44 crc kubenswrapper[4827]: I0131 04:05:44.622854 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-keystone-listener-649fbbf9d6-vkg84"] Jan 31 04:05:45 crc kubenswrapper[4827]: I0131 04:05:45.081342 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-48bfh" event={"ID":"3da5eeb9-641c-4b43-a3c9-eb4860e9995b","Type":"ContainerStarted","Data":"06beff70b7f8432840557efe615f1d2992319d6afd7b0185b8a5b8fefc27ba3e"} Jan 31 04:05:45 crc kubenswrapper[4827]: I0131 04:05:45.084742 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-649fbbf9d6-vkg84" event={"ID":"ac1aadec-fcb7-428e-9020-e424c393f018","Type":"ContainerStarted","Data":"2019a3a182dc70a4ed80b87fc74a99f4db4d8cae4a05504f0d4ed93a3a857d89"} Jan 31 04:05:45 crc kubenswrapper[4827]: I0131 04:05:45.084789 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-649fbbf9d6-vkg84" event={"ID":"ac1aadec-fcb7-428e-9020-e424c393f018","Type":"ContainerStarted","Data":"189bdf5807197095137413946e1846fdfbb9ba6fe85c4423213f5257be80d999"} Jan 31 04:05:45 crc kubenswrapper[4827]: I0131 04:05:45.087732 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-7c79fbcb95-qrncz" event={"ID":"e83ad5c9-edbb-4764-b932-52810f0f57ac","Type":"ContainerStarted","Data":"2a64b9c006a883732eef1391b50139c6fe3b07cfa2708d2cc9e6d158877c4b29"} Jan 31 04:05:45 crc kubenswrapper[4827]: I0131 04:05:45.087779 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-7c79fbcb95-qrncz" 
event={"ID":"e83ad5c9-edbb-4764-b932-52810f0f57ac","Type":"ContainerStarted","Data":"9e1ed8c65ded0386ad23e13f3d68918de4921d68e78917ccff74684fcc4706b0"} Jan 31 04:05:45 crc kubenswrapper[4827]: I0131 04:05:45.087789 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-7c79fbcb95-qrncz" event={"ID":"e83ad5c9-edbb-4764-b932-52810f0f57ac","Type":"ContainerStarted","Data":"bbefab8aba8906cda2b067874440ecdf7eac7538adcef33e2aeb2fb400c14a1d"} Jan 31 04:05:45 crc kubenswrapper[4827]: I0131 04:05:45.113605 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-74bf46887d-nb5df" event={"ID":"d5530571-a0ae-4835-809e-0dab61573e8c","Type":"ContainerStarted","Data":"1aa8d1d45763c93f730ef2f73d1d6912e8e69501ff7f5a6eb25a84e05191b725"} Jan 31 04:05:45 crc kubenswrapper[4827]: I0131 04:05:45.113651 4827 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-74bf46887d-nb5df" Jan 31 04:05:45 crc kubenswrapper[4827]: I0131 04:05:45.113665 4827 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-74bf46887d-nb5df" Jan 31 04:05:45 crc kubenswrapper[4827]: I0131 04:05:45.113685 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-74bf46887d-nb5df" event={"ID":"d5530571-a0ae-4835-809e-0dab61573e8c","Type":"ContainerStarted","Data":"a7c24b330ab01a78eaa7b81462403b0d7bac0071f17335f2ebb774279bcf9324"} Jan 31 04:05:45 crc kubenswrapper[4827]: I0131 04:05:45.113697 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-74bf46887d-nb5df" event={"ID":"d5530571-a0ae-4835-809e-0dab61573e8c","Type":"ContainerStarted","Data":"7685f9c5cc0d213d8ea58d8e015ebfc1d5daf6c440a08b29a822cf41c6f4348f"} Jan 31 04:05:45 crc kubenswrapper[4827]: I0131 04:05:45.114487 4827 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-db-sync-48bfh" podStartSLOduration=4.08705158 
podStartE2EDuration="40.114466508s" podCreationTimestamp="2026-01-31 04:05:05 +0000 UTC" firstStartedPulling="2026-01-31 04:05:07.555278669 +0000 UTC m=+1100.242359118" lastFinishedPulling="2026-01-31 04:05:43.582693587 +0000 UTC m=+1136.269774046" observedRunningTime="2026-01-31 04:05:45.107196011 +0000 UTC m=+1137.794276470" watchObservedRunningTime="2026-01-31 04:05:45.114466508 +0000 UTC m=+1137.801546957" Jan 31 04:05:45 crc kubenswrapper[4827]: I0131 04:05:45.135434 4827 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-worker-7c79fbcb95-qrncz" podStartSLOduration=2.135411453 podStartE2EDuration="2.135411453s" podCreationTimestamp="2026-01-31 04:05:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 04:05:45.128413664 +0000 UTC m=+1137.815494113" watchObservedRunningTime="2026-01-31 04:05:45.135411453 +0000 UTC m=+1137.822491902" Jan 31 04:05:45 crc kubenswrapper[4827]: I0131 04:05:45.172002 4827 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-worker-77bbb7849c-cvr8p"] Jan 31 04:05:45 crc kubenswrapper[4827]: I0131 04:05:45.172204 4827 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-worker-77bbb7849c-cvr8p" podUID="e1d72620-a941-44d0-b09a-401e1827f32f" containerName="barbican-worker-log" containerID="cri-o://7a281abc16213495565a99d8a9ab5e3e6460c7de2d03dcc68200d40b4559bbfa" gracePeriod=30 Jan 31 04:05:45 crc kubenswrapper[4827]: I0131 04:05:45.172469 4827 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-worker-77bbb7849c-cvr8p" podUID="e1d72620-a941-44d0-b09a-401e1827f32f" containerName="barbican-worker" containerID="cri-o://77b72291392212ba7d5663aba4e274c6c3a6c60ccc03e61bc94f0eb194070372" gracePeriod=30 Jan 31 04:05:45 crc kubenswrapper[4827]: I0131 04:05:45.195791 4827 pod_startup_latency_tracker.go:104] 
"Observed pod startup duration" pod="openstack/barbican-api-74bf46887d-nb5df" podStartSLOduration=2.195774527 podStartE2EDuration="2.195774527s" podCreationTimestamp="2026-01-31 04:05:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 04:05:45.156522593 +0000 UTC m=+1137.843603042" watchObservedRunningTime="2026-01-31 04:05:45.195774527 +0000 UTC m=+1137.882854966" Jan 31 04:05:45 crc kubenswrapper[4827]: E0131 04:05:45.467757 4827 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode1d72620_a941_44d0_b09a_401e1827f32f.slice/crio-7a281abc16213495565a99d8a9ab5e3e6460c7de2d03dcc68200d40b4559bbfa.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode1d72620_a941_44d0_b09a_401e1827f32f.slice/crio-conmon-7a281abc16213495565a99d8a9ab5e3e6460c7de2d03dcc68200d40b4559bbfa.scope\": RecentStats: unable to find data in memory cache]" Jan 31 04:05:46 crc kubenswrapper[4827]: I0131 04:05:46.143283 4827 generic.go:334] "Generic (PLEG): container finished" podID="e1d72620-a941-44d0-b09a-401e1827f32f" containerID="77b72291392212ba7d5663aba4e274c6c3a6c60ccc03e61bc94f0eb194070372" exitCode=0 Jan 31 04:05:46 crc kubenswrapper[4827]: I0131 04:05:46.143505 4827 generic.go:334] "Generic (PLEG): container finished" podID="e1d72620-a941-44d0-b09a-401e1827f32f" containerID="7a281abc16213495565a99d8a9ab5e3e6460c7de2d03dcc68200d40b4559bbfa" exitCode=143 Jan 31 04:05:46 crc kubenswrapper[4827]: I0131 04:05:46.143351 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-77bbb7849c-cvr8p" event={"ID":"e1d72620-a941-44d0-b09a-401e1827f32f","Type":"ContainerDied","Data":"77b72291392212ba7d5663aba4e274c6c3a6c60ccc03e61bc94f0eb194070372"} Jan 31 04:05:46 crc 
kubenswrapper[4827]: I0131 04:05:46.143569 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-77bbb7849c-cvr8p" event={"ID":"e1d72620-a941-44d0-b09a-401e1827f32f","Type":"ContainerDied","Data":"7a281abc16213495565a99d8a9ab5e3e6460c7de2d03dcc68200d40b4559bbfa"} Jan 31 04:05:46 crc kubenswrapper[4827]: I0131 04:05:46.143583 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-77bbb7849c-cvr8p" event={"ID":"e1d72620-a941-44d0-b09a-401e1827f32f","Type":"ContainerDied","Data":"cd2bc5467f9cd672dc42657e8b655c072512ca36c2f20d8dc26896e135b5ae5c"} Jan 31 04:05:46 crc kubenswrapper[4827]: I0131 04:05:46.143593 4827 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="cd2bc5467f9cd672dc42657e8b655c072512ca36c2f20d8dc26896e135b5ae5c" Jan 31 04:05:46 crc kubenswrapper[4827]: I0131 04:05:46.145562 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-649fbbf9d6-vkg84" event={"ID":"ac1aadec-fcb7-428e-9020-e424c393f018","Type":"ContainerStarted","Data":"5854854d1e8c87c4fe2f3f8c9f030f1a12ad4cd5a80b1289d92193c91f54d5e7"} Jan 31 04:05:46 crc kubenswrapper[4827]: I0131 04:05:46.167522 4827 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-keystone-listener-649fbbf9d6-vkg84" podStartSLOduration=3.167505526 podStartE2EDuration="3.167505526s" podCreationTimestamp="2026-01-31 04:05:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 04:05:46.161626549 +0000 UTC m=+1138.848707018" watchObservedRunningTime="2026-01-31 04:05:46.167505526 +0000 UTC m=+1138.854585975" Jan 31 04:05:46 crc kubenswrapper[4827]: I0131 04:05:46.191678 4827 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-keystone-listener-8f89dd846-fztml"] Jan 31 04:05:46 crc kubenswrapper[4827]: I0131 04:05:46.191990 4827 
kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-keystone-listener-8f89dd846-fztml" podUID="235b452b-45b7-41a8-82c0-d3ebe8b4c19f" containerName="barbican-keystone-listener-log" containerID="cri-o://768d0a36af4b243fa7ff1c0f8fb930b60bb8002776d8d7705c115cd760407d8b" gracePeriod=30 Jan 31 04:05:46 crc kubenswrapper[4827]: I0131 04:05:46.192340 4827 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-keystone-listener-8f89dd846-fztml" podUID="235b452b-45b7-41a8-82c0-d3ebe8b4c19f" containerName="barbican-keystone-listener" containerID="cri-o://7819495fe882427ed9603a9707e883cfab983be03333d364e6537d82847eb1c9" gracePeriod=30 Jan 31 04:05:46 crc kubenswrapper[4827]: I0131 04:05:46.219167 4827 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-worker-77bbb7849c-cvr8p" Jan 31 04:05:46 crc kubenswrapper[4827]: I0131 04:05:46.284480 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e1d72620-a941-44d0-b09a-401e1827f32f-logs\") pod \"e1d72620-a941-44d0-b09a-401e1827f32f\" (UID: \"e1d72620-a941-44d0-b09a-401e1827f32f\") " Jan 31 04:05:46 crc kubenswrapper[4827]: I0131 04:05:46.284633 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e1d72620-a941-44d0-b09a-401e1827f32f-combined-ca-bundle\") pod \"e1d72620-a941-44d0-b09a-401e1827f32f\" (UID: \"e1d72620-a941-44d0-b09a-401e1827f32f\") " Jan 31 04:05:46 crc kubenswrapper[4827]: I0131 04:05:46.284670 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e1d72620-a941-44d0-b09a-401e1827f32f-config-data\") pod \"e1d72620-a941-44d0-b09a-401e1827f32f\" (UID: \"e1d72620-a941-44d0-b09a-401e1827f32f\") " Jan 31 04:05:46 crc kubenswrapper[4827]: I0131 
04:05:46.284761 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/e1d72620-a941-44d0-b09a-401e1827f32f-config-data-custom\") pod \"e1d72620-a941-44d0-b09a-401e1827f32f\" (UID: \"e1d72620-a941-44d0-b09a-401e1827f32f\") " Jan 31 04:05:46 crc kubenswrapper[4827]: I0131 04:05:46.284831 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8k7r6\" (UniqueName: \"kubernetes.io/projected/e1d72620-a941-44d0-b09a-401e1827f32f-kube-api-access-8k7r6\") pod \"e1d72620-a941-44d0-b09a-401e1827f32f\" (UID: \"e1d72620-a941-44d0-b09a-401e1827f32f\") " Jan 31 04:05:46 crc kubenswrapper[4827]: I0131 04:05:46.287415 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e1d72620-a941-44d0-b09a-401e1827f32f-logs" (OuterVolumeSpecName: "logs") pod "e1d72620-a941-44d0-b09a-401e1827f32f" (UID: "e1d72620-a941-44d0-b09a-401e1827f32f"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 04:05:46 crc kubenswrapper[4827]: I0131 04:05:46.292673 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e1d72620-a941-44d0-b09a-401e1827f32f-kube-api-access-8k7r6" (OuterVolumeSpecName: "kube-api-access-8k7r6") pod "e1d72620-a941-44d0-b09a-401e1827f32f" (UID: "e1d72620-a941-44d0-b09a-401e1827f32f"). InnerVolumeSpecName "kube-api-access-8k7r6". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 04:05:46 crc kubenswrapper[4827]: I0131 04:05:46.293073 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e1d72620-a941-44d0-b09a-401e1827f32f-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "e1d72620-a941-44d0-b09a-401e1827f32f" (UID: "e1d72620-a941-44d0-b09a-401e1827f32f"). InnerVolumeSpecName "config-data-custom". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 04:05:46 crc kubenswrapper[4827]: I0131 04:05:46.357071 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e1d72620-a941-44d0-b09a-401e1827f32f-config-data" (OuterVolumeSpecName: "config-data") pod "e1d72620-a941-44d0-b09a-401e1827f32f" (UID: "e1d72620-a941-44d0-b09a-401e1827f32f"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 04:05:46 crc kubenswrapper[4827]: I0131 04:05:46.368269 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e1d72620-a941-44d0-b09a-401e1827f32f-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "e1d72620-a941-44d0-b09a-401e1827f32f" (UID: "e1d72620-a941-44d0-b09a-401e1827f32f"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 04:05:46 crc kubenswrapper[4827]: I0131 04:05:46.386935 4827 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8k7r6\" (UniqueName: \"kubernetes.io/projected/e1d72620-a941-44d0-b09a-401e1827f32f-kube-api-access-8k7r6\") on node \"crc\" DevicePath \"\"" Jan 31 04:05:46 crc kubenswrapper[4827]: I0131 04:05:46.386970 4827 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e1d72620-a941-44d0-b09a-401e1827f32f-logs\") on node \"crc\" DevicePath \"\"" Jan 31 04:05:46 crc kubenswrapper[4827]: I0131 04:05:46.386980 4827 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e1d72620-a941-44d0-b09a-401e1827f32f-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 31 04:05:46 crc kubenswrapper[4827]: I0131 04:05:46.386988 4827 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e1d72620-a941-44d0-b09a-401e1827f32f-config-data\") on node \"crc\" DevicePath \"\"" Jan 31 
04:05:46 crc kubenswrapper[4827]: I0131 04:05:46.386997 4827 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/e1d72620-a941-44d0-b09a-401e1827f32f-config-data-custom\") on node \"crc\" DevicePath \"\"" Jan 31 04:05:46 crc kubenswrapper[4827]: I0131 04:05:46.412590 4827 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-87c57bc7d-fgwdw" Jan 31 04:05:47 crc kubenswrapper[4827]: I0131 04:05:47.155334 4827 generic.go:334] "Generic (PLEG): container finished" podID="235b452b-45b7-41a8-82c0-d3ebe8b4c19f" containerID="7819495fe882427ed9603a9707e883cfab983be03333d364e6537d82847eb1c9" exitCode=0 Jan 31 04:05:47 crc kubenswrapper[4827]: I0131 04:05:47.155374 4827 generic.go:334] "Generic (PLEG): container finished" podID="235b452b-45b7-41a8-82c0-d3ebe8b4c19f" containerID="768d0a36af4b243fa7ff1c0f8fb930b60bb8002776d8d7705c115cd760407d8b" exitCode=143 Jan 31 04:05:47 crc kubenswrapper[4827]: I0131 04:05:47.155991 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-8f89dd846-fztml" event={"ID":"235b452b-45b7-41a8-82c0-d3ebe8b4c19f","Type":"ContainerDied","Data":"7819495fe882427ed9603a9707e883cfab983be03333d364e6537d82847eb1c9"} Jan 31 04:05:47 crc kubenswrapper[4827]: I0131 04:05:47.156138 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-8f89dd846-fztml" event={"ID":"235b452b-45b7-41a8-82c0-d3ebe8b4c19f","Type":"ContainerDied","Data":"768d0a36af4b243fa7ff1c0f8fb930b60bb8002776d8d7705c115cd760407d8b"} Jan 31 04:05:47 crc kubenswrapper[4827]: I0131 04:05:47.156169 4827 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-worker-77bbb7849c-cvr8p" Jan 31 04:05:47 crc kubenswrapper[4827]: I0131 04:05:47.185480 4827 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-worker-77bbb7849c-cvr8p"] Jan 31 04:05:47 crc kubenswrapper[4827]: I0131 04:05:47.192722 4827 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-worker-77bbb7849c-cvr8p"] Jan 31 04:05:47 crc kubenswrapper[4827]: I0131 04:05:47.371853 4827 patch_prober.go:28] interesting pod/machine-config-daemon-jxh94 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 31 04:05:47 crc kubenswrapper[4827]: I0131 04:05:47.371952 4827 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-jxh94" podUID="e63dbb73-e1a2-4796-83c5-2a88e55566b5" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 31 04:05:47 crc kubenswrapper[4827]: I0131 04:05:47.372015 4827 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-jxh94" Jan 31 04:05:47 crc kubenswrapper[4827]: I0131 04:05:47.372854 4827 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"1261fe4f40f38bda861655c66e0801cf569b3be5862d7375b02489d6f6686b06"} pod="openshift-machine-config-operator/machine-config-daemon-jxh94" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 31 04:05:47 crc kubenswrapper[4827]: I0131 04:05:47.372949 4827 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openshift-machine-config-operator/machine-config-daemon-jxh94" podUID="e63dbb73-e1a2-4796-83c5-2a88e55566b5" containerName="machine-config-daemon" containerID="cri-o://1261fe4f40f38bda861655c66e0801cf569b3be5862d7375b02489d6f6686b06" gracePeriod=600 Jan 31 04:05:48 crc kubenswrapper[4827]: I0131 04:05:48.150013 4827 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e1d72620-a941-44d0-b09a-401e1827f32f" path="/var/lib/kubelet/pods/e1d72620-a941-44d0-b09a-401e1827f32f/volumes" Jan 31 04:05:48 crc kubenswrapper[4827]: I0131 04:05:48.170737 4827 generic.go:334] "Generic (PLEG): container finished" podID="e63dbb73-e1a2-4796-83c5-2a88e55566b5" containerID="1261fe4f40f38bda861655c66e0801cf569b3be5862d7375b02489d6f6686b06" exitCode=0 Jan 31 04:05:48 crc kubenswrapper[4827]: I0131 04:05:48.170793 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-jxh94" event={"ID":"e63dbb73-e1a2-4796-83c5-2a88e55566b5","Type":"ContainerDied","Data":"1261fe4f40f38bda861655c66e0801cf569b3be5862d7375b02489d6f6686b06"} Jan 31 04:05:48 crc kubenswrapper[4827]: I0131 04:05:48.170833 4827 scope.go:117] "RemoveContainer" containerID="b3f2ce1bddb590379802c11a41342b77994eb27a657cdaa9086c8e7edd46b860" Jan 31 04:05:48 crc kubenswrapper[4827]: I0131 04:05:48.740931 4827 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-6c6f779bd6-5qmvs" podUID="20064771-e5a8-4227-b10d-39905587be45" containerName="barbican-api" probeResult="failure" output="Get \"http://10.217.0.147:9311/healthcheck\": read tcp 10.217.0.2:55756->10.217.0.147:9311: read: connection reset by peer" Jan 31 04:05:48 crc kubenswrapper[4827]: I0131 04:05:48.740946 4827 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-6c6f779bd6-5qmvs" podUID="20064771-e5a8-4227-b10d-39905587be45" containerName="barbican-api-log" probeResult="failure" output="Get \"http://10.217.0.147:9311/healthcheck\": read tcp 
10.217.0.2:55758->10.217.0.147:9311: read: connection reset by peer" Jan 31 04:05:49 crc kubenswrapper[4827]: I0131 04:05:49.181341 4827 generic.go:334] "Generic (PLEG): container finished" podID="20064771-e5a8-4227-b10d-39905587be45" containerID="e65bd6854900acdd583a2aab128a8852e0dc5cfdb421ba57b3c6c93d5b027699" exitCode=0 Jan 31 04:05:49 crc kubenswrapper[4827]: I0131 04:05:49.181414 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-6c6f779bd6-5qmvs" event={"ID":"20064771-e5a8-4227-b10d-39905587be45","Type":"ContainerDied","Data":"e65bd6854900acdd583a2aab128a8852e0dc5cfdb421ba57b3c6c93d5b027699"} Jan 31 04:05:50 crc kubenswrapper[4827]: I0131 04:05:50.145105 4827 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-6bb684768f-qc6dq" Jan 31 04:05:50 crc kubenswrapper[4827]: I0131 04:05:50.216196 4827 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-87c57bc7d-fgwdw" Jan 31 04:05:50 crc kubenswrapper[4827]: I0131 04:05:50.232524 4827 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7987f74bbc-65cfr"] Jan 31 04:05:50 crc kubenswrapper[4827]: I0131 04:05:50.232761 4827 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-7987f74bbc-65cfr" podUID="74d7118a-77ce-4f65-b0c3-a28c70623d2d" containerName="dnsmasq-dns" containerID="cri-o://a9055da5d3c5e8ea71781f6a31c48b3595dd4f88675097e31a1a7067a4751cac" gracePeriod=10 Jan 31 04:05:50 crc kubenswrapper[4827]: I0131 04:05:50.249136 4827 generic.go:334] "Generic (PLEG): container finished" podID="3da5eeb9-641c-4b43-a3c9-eb4860e9995b" containerID="06beff70b7f8432840557efe615f1d2992319d6afd7b0185b8a5b8fefc27ba3e" exitCode=0 Jan 31 04:05:50 crc kubenswrapper[4827]: I0131 04:05:50.249184 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-48bfh" 
event={"ID":"3da5eeb9-641c-4b43-a3c9-eb4860e9995b","Type":"ContainerDied","Data":"06beff70b7f8432840557efe615f1d2992319d6afd7b0185b8a5b8fefc27ba3e"} Jan 31 04:05:50 crc kubenswrapper[4827]: I0131 04:05:50.334331 4827 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-6c6f779bd6-5qmvs" podUID="20064771-e5a8-4227-b10d-39905587be45" containerName="barbican-api-log" probeResult="failure" output="Get \"http://10.217.0.147:9311/healthcheck\": dial tcp 10.217.0.147:9311: connect: connection refused" Jan 31 04:05:50 crc kubenswrapper[4827]: I0131 04:05:50.334451 4827 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-6c6f779bd6-5qmvs" podUID="20064771-e5a8-4227-b10d-39905587be45" containerName="barbican-api" probeResult="failure" output="Get \"http://10.217.0.147:9311/healthcheck\": dial tcp 10.217.0.147:9311: connect: connection refused" Jan 31 04:05:51 crc kubenswrapper[4827]: I0131 04:05:51.258205 4827 generic.go:334] "Generic (PLEG): container finished" podID="74d7118a-77ce-4f65-b0c3-a28c70623d2d" containerID="a9055da5d3c5e8ea71781f6a31c48b3595dd4f88675097e31a1a7067a4751cac" exitCode=0 Jan 31 04:05:51 crc kubenswrapper[4827]: I0131 04:05:51.258278 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7987f74bbc-65cfr" event={"ID":"74d7118a-77ce-4f65-b0c3-a28c70623d2d","Type":"ContainerDied","Data":"a9055da5d3c5e8ea71781f6a31c48b3595dd4f88675097e31a1a7067a4751cac"} Jan 31 04:05:51 crc kubenswrapper[4827]: I0131 04:05:51.544900 4827 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-7987f74bbc-65cfr" podUID="74d7118a-77ce-4f65-b0c3-a28c70623d2d" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.137:5353: connect: connection refused" Jan 31 04:05:51 crc kubenswrapper[4827]: I0131 04:05:51.776024 4827 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-sync-48bfh" Jan 31 04:05:51 crc kubenswrapper[4827]: I0131 04:05:51.850024 4827 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-keystone-listener-8f89dd846-fztml" Jan 31 04:05:51 crc kubenswrapper[4827]: I0131 04:05:51.903546 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3da5eeb9-641c-4b43-a3c9-eb4860e9995b-config-data\") pod \"3da5eeb9-641c-4b43-a3c9-eb4860e9995b\" (UID: \"3da5eeb9-641c-4b43-a3c9-eb4860e9995b\") " Jan 31 04:05:51 crc kubenswrapper[4827]: I0131 04:05:51.903588 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/3da5eeb9-641c-4b43-a3c9-eb4860e9995b-db-sync-config-data\") pod \"3da5eeb9-641c-4b43-a3c9-eb4860e9995b\" (UID: \"3da5eeb9-641c-4b43-a3c9-eb4860e9995b\") " Jan 31 04:05:51 crc kubenswrapper[4827]: I0131 04:05:51.903699 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3da5eeb9-641c-4b43-a3c9-eb4860e9995b-combined-ca-bundle\") pod \"3da5eeb9-641c-4b43-a3c9-eb4860e9995b\" (UID: \"3da5eeb9-641c-4b43-a3c9-eb4860e9995b\") " Jan 31 04:05:51 crc kubenswrapper[4827]: I0131 04:05:51.904138 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/3da5eeb9-641c-4b43-a3c9-eb4860e9995b-etc-machine-id\") pod \"3da5eeb9-641c-4b43-a3c9-eb4860e9995b\" (UID: \"3da5eeb9-641c-4b43-a3c9-eb4860e9995b\") " Jan 31 04:05:51 crc kubenswrapper[4827]: I0131 04:05:51.904216 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3da5eeb9-641c-4b43-a3c9-eb4860e9995b-scripts\") pod \"3da5eeb9-641c-4b43-a3c9-eb4860e9995b\" (UID: 
\"3da5eeb9-641c-4b43-a3c9-eb4860e9995b\") " Jan 31 04:05:51 crc kubenswrapper[4827]: I0131 04:05:51.904244 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fv7s2\" (UniqueName: \"kubernetes.io/projected/3da5eeb9-641c-4b43-a3c9-eb4860e9995b-kube-api-access-fv7s2\") pod \"3da5eeb9-641c-4b43-a3c9-eb4860e9995b\" (UID: \"3da5eeb9-641c-4b43-a3c9-eb4860e9995b\") " Jan 31 04:05:51 crc kubenswrapper[4827]: I0131 04:05:51.904339 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3da5eeb9-641c-4b43-a3c9-eb4860e9995b-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "3da5eeb9-641c-4b43-a3c9-eb4860e9995b" (UID: "3da5eeb9-641c-4b43-a3c9-eb4860e9995b"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 31 04:05:51 crc kubenswrapper[4827]: I0131 04:05:51.907520 4827 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/3da5eeb9-641c-4b43-a3c9-eb4860e9995b-etc-machine-id\") on node \"crc\" DevicePath \"\"" Jan 31 04:05:51 crc kubenswrapper[4827]: I0131 04:05:51.914113 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3da5eeb9-641c-4b43-a3c9-eb4860e9995b-kube-api-access-fv7s2" (OuterVolumeSpecName: "kube-api-access-fv7s2") pod "3da5eeb9-641c-4b43-a3c9-eb4860e9995b" (UID: "3da5eeb9-641c-4b43-a3c9-eb4860e9995b"). InnerVolumeSpecName "kube-api-access-fv7s2". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 04:05:51 crc kubenswrapper[4827]: I0131 04:05:51.916886 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3da5eeb9-641c-4b43-a3c9-eb4860e9995b-scripts" (OuterVolumeSpecName: "scripts") pod "3da5eeb9-641c-4b43-a3c9-eb4860e9995b" (UID: "3da5eeb9-641c-4b43-a3c9-eb4860e9995b"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 04:05:51 crc kubenswrapper[4827]: I0131 04:05:51.927219 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3da5eeb9-641c-4b43-a3c9-eb4860e9995b-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "3da5eeb9-641c-4b43-a3c9-eb4860e9995b" (UID: "3da5eeb9-641c-4b43-a3c9-eb4860e9995b"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 04:05:51 crc kubenswrapper[4827]: I0131 04:05:51.955512 4827 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-6c6f779bd6-5qmvs" Jan 31 04:05:51 crc kubenswrapper[4827]: I0131 04:05:51.979671 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3da5eeb9-641c-4b43-a3c9-eb4860e9995b-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "3da5eeb9-641c-4b43-a3c9-eb4860e9995b" (UID: "3da5eeb9-641c-4b43-a3c9-eb4860e9995b"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 04:05:51 crc kubenswrapper[4827]: I0131 04:05:51.983238 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3da5eeb9-641c-4b43-a3c9-eb4860e9995b-config-data" (OuterVolumeSpecName: "config-data") pod "3da5eeb9-641c-4b43-a3c9-eb4860e9995b" (UID: "3da5eeb9-641c-4b43-a3c9-eb4860e9995b"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 04:05:52 crc kubenswrapper[4827]: I0131 04:05:52.008913 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/235b452b-45b7-41a8-82c0-d3ebe8b4c19f-config-data-custom\") pod \"235b452b-45b7-41a8-82c0-d3ebe8b4c19f\" (UID: \"235b452b-45b7-41a8-82c0-d3ebe8b4c19f\") " Jan 31 04:05:52 crc kubenswrapper[4827]: I0131 04:05:52.008983 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/235b452b-45b7-41a8-82c0-d3ebe8b4c19f-combined-ca-bundle\") pod \"235b452b-45b7-41a8-82c0-d3ebe8b4c19f\" (UID: \"235b452b-45b7-41a8-82c0-d3ebe8b4c19f\") " Jan 31 04:05:52 crc kubenswrapper[4827]: I0131 04:05:52.009039 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/235b452b-45b7-41a8-82c0-d3ebe8b4c19f-logs\") pod \"235b452b-45b7-41a8-82c0-d3ebe8b4c19f\" (UID: \"235b452b-45b7-41a8-82c0-d3ebe8b4c19f\") " Jan 31 04:05:52 crc kubenswrapper[4827]: I0131 04:05:52.009137 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/235b452b-45b7-41a8-82c0-d3ebe8b4c19f-config-data\") pod \"235b452b-45b7-41a8-82c0-d3ebe8b4c19f\" (UID: \"235b452b-45b7-41a8-82c0-d3ebe8b4c19f\") " Jan 31 04:05:52 crc kubenswrapper[4827]: I0131 04:05:52.009227 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5zklz\" (UniqueName: \"kubernetes.io/projected/235b452b-45b7-41a8-82c0-d3ebe8b4c19f-kube-api-access-5zklz\") pod \"235b452b-45b7-41a8-82c0-d3ebe8b4c19f\" (UID: \"235b452b-45b7-41a8-82c0-d3ebe8b4c19f\") " Jan 31 04:05:52 crc kubenswrapper[4827]: I0131 04:05:52.009663 4827 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/3da5eeb9-641c-4b43-a3c9-eb4860e9995b-scripts\") on node \"crc\" DevicePath \"\"" Jan 31 04:05:52 crc kubenswrapper[4827]: I0131 04:05:52.009687 4827 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fv7s2\" (UniqueName: \"kubernetes.io/projected/3da5eeb9-641c-4b43-a3c9-eb4860e9995b-kube-api-access-fv7s2\") on node \"crc\" DevicePath \"\"" Jan 31 04:05:52 crc kubenswrapper[4827]: I0131 04:05:52.009703 4827 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3da5eeb9-641c-4b43-a3c9-eb4860e9995b-config-data\") on node \"crc\" DevicePath \"\"" Jan 31 04:05:52 crc kubenswrapper[4827]: I0131 04:05:52.009715 4827 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/3da5eeb9-641c-4b43-a3c9-eb4860e9995b-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Jan 31 04:05:52 crc kubenswrapper[4827]: I0131 04:05:52.009725 4827 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3da5eeb9-641c-4b43-a3c9-eb4860e9995b-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 31 04:05:52 crc kubenswrapper[4827]: I0131 04:05:52.010882 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/235b452b-45b7-41a8-82c0-d3ebe8b4c19f-logs" (OuterVolumeSpecName: "logs") pod "235b452b-45b7-41a8-82c0-d3ebe8b4c19f" (UID: "235b452b-45b7-41a8-82c0-d3ebe8b4c19f"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 04:05:52 crc kubenswrapper[4827]: I0131 04:05:52.013615 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/235b452b-45b7-41a8-82c0-d3ebe8b4c19f-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "235b452b-45b7-41a8-82c0-d3ebe8b4c19f" (UID: "235b452b-45b7-41a8-82c0-d3ebe8b4c19f"). InnerVolumeSpecName "config-data-custom". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 04:05:52 crc kubenswrapper[4827]: I0131 04:05:52.014273 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/235b452b-45b7-41a8-82c0-d3ebe8b4c19f-kube-api-access-5zklz" (OuterVolumeSpecName: "kube-api-access-5zklz") pod "235b452b-45b7-41a8-82c0-d3ebe8b4c19f" (UID: "235b452b-45b7-41a8-82c0-d3ebe8b4c19f"). InnerVolumeSpecName "kube-api-access-5zklz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 04:05:52 crc kubenswrapper[4827]: I0131 04:05:52.024156 4827 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7987f74bbc-65cfr" Jan 31 04:05:52 crc kubenswrapper[4827]: I0131 04:05:52.036793 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/235b452b-45b7-41a8-82c0-d3ebe8b4c19f-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "235b452b-45b7-41a8-82c0-d3ebe8b4c19f" (UID: "235b452b-45b7-41a8-82c0-d3ebe8b4c19f"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 04:05:52 crc kubenswrapper[4827]: I0131 04:05:52.068115 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/235b452b-45b7-41a8-82c0-d3ebe8b4c19f-config-data" (OuterVolumeSpecName: "config-data") pod "235b452b-45b7-41a8-82c0-d3ebe8b4c19f" (UID: "235b452b-45b7-41a8-82c0-d3ebe8b4c19f"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 04:05:52 crc kubenswrapper[4827]: I0131 04:05:52.110311 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/20064771-e5a8-4227-b10d-39905587be45-config-data-custom\") pod \"20064771-e5a8-4227-b10d-39905587be45\" (UID: \"20064771-e5a8-4227-b10d-39905587be45\") " Jan 31 04:05:52 crc kubenswrapper[4827]: I0131 04:05:52.110629 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/20064771-e5a8-4227-b10d-39905587be45-combined-ca-bundle\") pod \"20064771-e5a8-4227-b10d-39905587be45\" (UID: \"20064771-e5a8-4227-b10d-39905587be45\") " Jan 31 04:05:52 crc kubenswrapper[4827]: I0131 04:05:52.110775 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/20064771-e5a8-4227-b10d-39905587be45-config-data\") pod \"20064771-e5a8-4227-b10d-39905587be45\" (UID: \"20064771-e5a8-4227-b10d-39905587be45\") " Jan 31 04:05:52 crc kubenswrapper[4827]: I0131 04:05:52.111212 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pb4hc\" (UniqueName: \"kubernetes.io/projected/20064771-e5a8-4227-b10d-39905587be45-kube-api-access-pb4hc\") pod \"20064771-e5a8-4227-b10d-39905587be45\" (UID: \"20064771-e5a8-4227-b10d-39905587be45\") " Jan 31 04:05:52 crc kubenswrapper[4827]: I0131 04:05:52.111395 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/20064771-e5a8-4227-b10d-39905587be45-logs\") pod \"20064771-e5a8-4227-b10d-39905587be45\" (UID: \"20064771-e5a8-4227-b10d-39905587be45\") " Jan 31 04:05:52 crc kubenswrapper[4827]: I0131 04:05:52.112007 4827 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/235b452b-45b7-41a8-82c0-d3ebe8b4c19f-config-data\") on node \"crc\" DevicePath \"\"" Jan 31 04:05:52 crc kubenswrapper[4827]: I0131 04:05:52.112182 4827 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5zklz\" (UniqueName: \"kubernetes.io/projected/235b452b-45b7-41a8-82c0-d3ebe8b4c19f-kube-api-access-5zklz\") on node \"crc\" DevicePath \"\"" Jan 31 04:05:52 crc kubenswrapper[4827]: I0131 04:05:52.112362 4827 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/235b452b-45b7-41a8-82c0-d3ebe8b4c19f-config-data-custom\") on node \"crc\" DevicePath \"\"" Jan 31 04:05:52 crc kubenswrapper[4827]: I0131 04:05:52.112511 4827 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/235b452b-45b7-41a8-82c0-d3ebe8b4c19f-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 31 04:05:52 crc kubenswrapper[4827]: I0131 04:05:52.112617 4827 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/235b452b-45b7-41a8-82c0-d3ebe8b4c19f-logs\") on node \"crc\" DevicePath \"\"" Jan 31 04:05:52 crc kubenswrapper[4827]: I0131 04:05:52.112926 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/20064771-e5a8-4227-b10d-39905587be45-logs" (OuterVolumeSpecName: "logs") pod "20064771-e5a8-4227-b10d-39905587be45" (UID: "20064771-e5a8-4227-b10d-39905587be45"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 04:05:52 crc kubenswrapper[4827]: I0131 04:05:52.113792 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/20064771-e5a8-4227-b10d-39905587be45-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "20064771-e5a8-4227-b10d-39905587be45" (UID: "20064771-e5a8-4227-b10d-39905587be45"). InnerVolumeSpecName "config-data-custom". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 04:05:52 crc kubenswrapper[4827]: I0131 04:05:52.114928 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/20064771-e5a8-4227-b10d-39905587be45-kube-api-access-pb4hc" (OuterVolumeSpecName: "kube-api-access-pb4hc") pod "20064771-e5a8-4227-b10d-39905587be45" (UID: "20064771-e5a8-4227-b10d-39905587be45"). InnerVolumeSpecName "kube-api-access-pb4hc". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 04:05:52 crc kubenswrapper[4827]: I0131 04:05:52.138032 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/20064771-e5a8-4227-b10d-39905587be45-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "20064771-e5a8-4227-b10d-39905587be45" (UID: "20064771-e5a8-4227-b10d-39905587be45"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 04:05:52 crc kubenswrapper[4827]: I0131 04:05:52.171283 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/20064771-e5a8-4227-b10d-39905587be45-config-data" (OuterVolumeSpecName: "config-data") pod "20064771-e5a8-4227-b10d-39905587be45" (UID: "20064771-e5a8-4227-b10d-39905587be45"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 04:05:52 crc kubenswrapper[4827]: I0131 04:05:52.213661 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/74d7118a-77ce-4f65-b0c3-a28c70623d2d-ovsdbserver-sb\") pod \"74d7118a-77ce-4f65-b0c3-a28c70623d2d\" (UID: \"74d7118a-77ce-4f65-b0c3-a28c70623d2d\") " Jan 31 04:05:52 crc kubenswrapper[4827]: I0131 04:05:52.213794 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/74d7118a-77ce-4f65-b0c3-a28c70623d2d-config\") pod \"74d7118a-77ce-4f65-b0c3-a28c70623d2d\" (UID: \"74d7118a-77ce-4f65-b0c3-a28c70623d2d\") " Jan 31 04:05:52 crc kubenswrapper[4827]: I0131 04:05:52.213843 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/74d7118a-77ce-4f65-b0c3-a28c70623d2d-dns-svc\") pod \"74d7118a-77ce-4f65-b0c3-a28c70623d2d\" (UID: \"74d7118a-77ce-4f65-b0c3-a28c70623d2d\") " Jan 31 04:05:52 crc kubenswrapper[4827]: I0131 04:05:52.213937 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/74d7118a-77ce-4f65-b0c3-a28c70623d2d-ovsdbserver-nb\") pod \"74d7118a-77ce-4f65-b0c3-a28c70623d2d\" (UID: \"74d7118a-77ce-4f65-b0c3-a28c70623d2d\") " Jan 31 04:05:52 crc kubenswrapper[4827]: I0131 04:05:52.213996 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lpp94\" (UniqueName: \"kubernetes.io/projected/74d7118a-77ce-4f65-b0c3-a28c70623d2d-kube-api-access-lpp94\") pod \"74d7118a-77ce-4f65-b0c3-a28c70623d2d\" (UID: \"74d7118a-77ce-4f65-b0c3-a28c70623d2d\") " Jan 31 04:05:52 crc kubenswrapper[4827]: I0131 04:05:52.214483 4827 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pb4hc\" (UniqueName: 
\"kubernetes.io/projected/20064771-e5a8-4227-b10d-39905587be45-kube-api-access-pb4hc\") on node \"crc\" DevicePath \"\"" Jan 31 04:05:52 crc kubenswrapper[4827]: I0131 04:05:52.214508 4827 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/20064771-e5a8-4227-b10d-39905587be45-logs\") on node \"crc\" DevicePath \"\"" Jan 31 04:05:52 crc kubenswrapper[4827]: I0131 04:05:52.214522 4827 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/20064771-e5a8-4227-b10d-39905587be45-config-data-custom\") on node \"crc\" DevicePath \"\"" Jan 31 04:05:52 crc kubenswrapper[4827]: I0131 04:05:52.214535 4827 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/20064771-e5a8-4227-b10d-39905587be45-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 31 04:05:52 crc kubenswrapper[4827]: I0131 04:05:52.214546 4827 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/20064771-e5a8-4227-b10d-39905587be45-config-data\") on node \"crc\" DevicePath \"\"" Jan 31 04:05:52 crc kubenswrapper[4827]: I0131 04:05:52.224559 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/74d7118a-77ce-4f65-b0c3-a28c70623d2d-kube-api-access-lpp94" (OuterVolumeSpecName: "kube-api-access-lpp94") pod "74d7118a-77ce-4f65-b0c3-a28c70623d2d" (UID: "74d7118a-77ce-4f65-b0c3-a28c70623d2d"). InnerVolumeSpecName "kube-api-access-lpp94". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 04:05:52 crc kubenswrapper[4827]: I0131 04:05:52.258632 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/74d7118a-77ce-4f65-b0c3-a28c70623d2d-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "74d7118a-77ce-4f65-b0c3-a28c70623d2d" (UID: "74d7118a-77ce-4f65-b0c3-a28c70623d2d"). 
InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 04:05:52 crc kubenswrapper[4827]: I0131 04:05:52.261077 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/74d7118a-77ce-4f65-b0c3-a28c70623d2d-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "74d7118a-77ce-4f65-b0c3-a28c70623d2d" (UID: "74d7118a-77ce-4f65-b0c3-a28c70623d2d"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 04:05:52 crc kubenswrapper[4827]: I0131 04:05:52.262659 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/74d7118a-77ce-4f65-b0c3-a28c70623d2d-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "74d7118a-77ce-4f65-b0c3-a28c70623d2d" (UID: "74d7118a-77ce-4f65-b0c3-a28c70623d2d"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 04:05:52 crc kubenswrapper[4827]: I0131 04:05:52.262841 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/74d7118a-77ce-4f65-b0c3-a28c70623d2d-config" (OuterVolumeSpecName: "config") pod "74d7118a-77ce-4f65-b0c3-a28c70623d2d" (UID: "74d7118a-77ce-4f65-b0c3-a28c70623d2d"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 04:05:52 crc kubenswrapper[4827]: I0131 04:05:52.268311 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-6c6f779bd6-5qmvs" event={"ID":"20064771-e5a8-4227-b10d-39905587be45","Type":"ContainerDied","Data":"ed6220e1c4de9ed4dfceb02faa00b9719d68be2d92b8f1815e77dcbd0dbd7c64"} Jan 31 04:05:52 crc kubenswrapper[4827]: I0131 04:05:52.268370 4827 scope.go:117] "RemoveContainer" containerID="e65bd6854900acdd583a2aab128a8852e0dc5cfdb421ba57b3c6c93d5b027699" Jan 31 04:05:52 crc kubenswrapper[4827]: I0131 04:05:52.268475 4827 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-6c6f779bd6-5qmvs" Jan 31 04:05:52 crc kubenswrapper[4827]: I0131 04:05:52.275787 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-48bfh" event={"ID":"3da5eeb9-641c-4b43-a3c9-eb4860e9995b","Type":"ContainerDied","Data":"105cd57e15ec62017be0934c9f9f65972db769886e58c58798d5380b0e092bac"} Jan 31 04:05:52 crc kubenswrapper[4827]: I0131 04:05:52.275850 4827 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="105cd57e15ec62017be0934c9f9f65972db769886e58c58798d5380b0e092bac" Jan 31 04:05:52 crc kubenswrapper[4827]: I0131 04:05:52.275993 4827 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-48bfh" Jan 31 04:05:52 crc kubenswrapper[4827]: I0131 04:05:52.278292 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-8f89dd846-fztml" event={"ID":"235b452b-45b7-41a8-82c0-d3ebe8b4c19f","Type":"ContainerDied","Data":"08abbe263e83e1ad67b6e432f45476ce299ec7da3854db53a09b115d5ae02250"} Jan 31 04:05:52 crc kubenswrapper[4827]: I0131 04:05:52.278353 4827 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-keystone-listener-8f89dd846-fztml" Jan 31 04:05:52 crc kubenswrapper[4827]: I0131 04:05:52.280114 4827 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7987f74bbc-65cfr" Jan 31 04:05:52 crc kubenswrapper[4827]: I0131 04:05:52.280110 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7987f74bbc-65cfr" event={"ID":"74d7118a-77ce-4f65-b0c3-a28c70623d2d","Type":"ContainerDied","Data":"e715ce476b90f5e5b4fe718f38756dccd4f8dbe738055092b4b370d17863b22a"} Jan 31 04:05:52 crc kubenswrapper[4827]: I0131 04:05:52.308017 4827 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="d07cd025-0fc1-4536-978d-f7f7b0df2dbc" containerName="ceilometer-central-agent" containerID="cri-o://b988a2b47d57cf27a43165062e4e9bbf55bf7fd3c3576b318ac3691a779daa83" gracePeriod=30 Jan 31 04:05:52 crc kubenswrapper[4827]: I0131 04:05:52.310274 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d07cd025-0fc1-4536-978d-f7f7b0df2dbc","Type":"ContainerStarted","Data":"aa7e117a3607dcf4797ded76871902888ee6defaf76c278d80c76286f73764e4"} Jan 31 04:05:52 crc kubenswrapper[4827]: I0131 04:05:52.310339 4827 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-keystone-listener-8f89dd846-fztml"] Jan 31 04:05:52 crc kubenswrapper[4827]: I0131 04:05:52.311344 4827 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Jan 31 04:05:52 crc kubenswrapper[4827]: I0131 04:05:52.311416 4827 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="d07cd025-0fc1-4536-978d-f7f7b0df2dbc" containerName="proxy-httpd" containerID="cri-o://aa7e117a3607dcf4797ded76871902888ee6defaf76c278d80c76286f73764e4" gracePeriod=30 Jan 31 04:05:52 crc kubenswrapper[4827]: I0131 04:05:52.311499 4827 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="d07cd025-0fc1-4536-978d-f7f7b0df2dbc" containerName="sg-core" 
containerID="cri-o://ead723a81fa378183110c6664cd4f76e0c922cdbb27cbfe04a57730874e602b2" gracePeriod=30 Jan 31 04:05:52 crc kubenswrapper[4827]: I0131 04:05:52.311540 4827 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="d07cd025-0fc1-4536-978d-f7f7b0df2dbc" containerName="ceilometer-notification-agent" containerID="cri-o://047e9b3a0c8b60a604bcaa97a3f1ceb16772069ff3935941f313f5429b3499bb" gracePeriod=30 Jan 31 04:05:52 crc kubenswrapper[4827]: I0131 04:05:52.314230 4827 scope.go:117] "RemoveContainer" containerID="13223d17e3281ee66fede020e4b7ee8d2bdb776a49450bce73de431993a8b47e" Jan 31 04:05:52 crc kubenswrapper[4827]: I0131 04:05:52.318021 4827 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-keystone-listener-8f89dd846-fztml"] Jan 31 04:05:52 crc kubenswrapper[4827]: I0131 04:05:52.318735 4827 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/74d7118a-77ce-4f65-b0c3-a28c70623d2d-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 31 04:05:52 crc kubenswrapper[4827]: I0131 04:05:52.318764 4827 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/74d7118a-77ce-4f65-b0c3-a28c70623d2d-config\") on node \"crc\" DevicePath \"\"" Jan 31 04:05:52 crc kubenswrapper[4827]: I0131 04:05:52.318775 4827 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/74d7118a-77ce-4f65-b0c3-a28c70623d2d-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 31 04:05:52 crc kubenswrapper[4827]: I0131 04:05:52.318786 4827 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/74d7118a-77ce-4f65-b0c3-a28c70623d2d-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 31 04:05:52 crc kubenswrapper[4827]: I0131 04:05:52.318797 4827 reconciler_common.go:293] "Volume detached for volume 
\"kube-api-access-lpp94\" (UniqueName: \"kubernetes.io/projected/74d7118a-77ce-4f65-b0c3-a28c70623d2d-kube-api-access-lpp94\") on node \"crc\" DevicePath \"\"" Jan 31 04:05:52 crc kubenswrapper[4827]: I0131 04:05:52.327674 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-jxh94" event={"ID":"e63dbb73-e1a2-4796-83c5-2a88e55566b5","Type":"ContainerStarted","Data":"047ff0edcff47ab439ecf6139d8ba1839619a9b3e0c1bd807d83661af77614a9"} Jan 31 04:05:52 crc kubenswrapper[4827]: I0131 04:05:52.332695 4827 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-api-6c6f779bd6-5qmvs"] Jan 31 04:05:52 crc kubenswrapper[4827]: I0131 04:05:52.338093 4827 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-api-6c6f779bd6-5qmvs"] Jan 31 04:05:52 crc kubenswrapper[4827]: I0131 04:05:52.346773 4827 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7987f74bbc-65cfr"] Jan 31 04:05:52 crc kubenswrapper[4827]: I0131 04:05:52.352278 4827 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-7987f74bbc-65cfr"] Jan 31 04:05:52 crc kubenswrapper[4827]: I0131 04:05:52.361886 4827 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.152233397 podStartE2EDuration="46.361869574s" podCreationTimestamp="2026-01-31 04:05:06 +0000 UTC" firstStartedPulling="2026-01-31 04:05:07.442787059 +0000 UTC m=+1100.129867508" lastFinishedPulling="2026-01-31 04:05:51.652423236 +0000 UTC m=+1144.339503685" observedRunningTime="2026-01-31 04:05:52.357630103 +0000 UTC m=+1145.044710552" watchObservedRunningTime="2026-01-31 04:05:52.361869574 +0000 UTC m=+1145.048950023" Jan 31 04:05:52 crc kubenswrapper[4827]: I0131 04:05:52.364052 4827 scope.go:117] "RemoveContainer" containerID="7819495fe882427ed9603a9707e883cfab983be03333d364e6537d82847eb1c9" Jan 31 04:05:52 crc kubenswrapper[4827]: I0131 
04:05:52.386030 4827 scope.go:117] "RemoveContainer" containerID="768d0a36af4b243fa7ff1c0f8fb930b60bb8002776d8d7705c115cd760407d8b" Jan 31 04:05:52 crc kubenswrapper[4827]: I0131 04:05:52.408681 4827 scope.go:117] "RemoveContainer" containerID="a9055da5d3c5e8ea71781f6a31c48b3595dd4f88675097e31a1a7067a4751cac" Jan 31 04:05:52 crc kubenswrapper[4827]: I0131 04:05:52.430619 4827 scope.go:117] "RemoveContainer" containerID="fb804edbf75f7ddc3b818870c1b27232c32bff224ffdb18124af18328e0bc8a6" Jan 31 04:05:52 crc kubenswrapper[4827]: I0131 04:05:52.601861 4827 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-scheduler-0"] Jan 31 04:05:52 crc kubenswrapper[4827]: E0131 04:05:52.602584 4827 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e1d72620-a941-44d0-b09a-401e1827f32f" containerName="barbican-worker-log" Jan 31 04:05:52 crc kubenswrapper[4827]: I0131 04:05:52.602597 4827 state_mem.go:107] "Deleted CPUSet assignment" podUID="e1d72620-a941-44d0-b09a-401e1827f32f" containerName="barbican-worker-log" Jan 31 04:05:52 crc kubenswrapper[4827]: E0131 04:05:52.602610 4827 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="235b452b-45b7-41a8-82c0-d3ebe8b4c19f" containerName="barbican-keystone-listener" Jan 31 04:05:52 crc kubenswrapper[4827]: I0131 04:05:52.602616 4827 state_mem.go:107] "Deleted CPUSet assignment" podUID="235b452b-45b7-41a8-82c0-d3ebe8b4c19f" containerName="barbican-keystone-listener" Jan 31 04:05:52 crc kubenswrapper[4827]: E0131 04:05:52.602640 4827 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e1d72620-a941-44d0-b09a-401e1827f32f" containerName="barbican-worker" Jan 31 04:05:52 crc kubenswrapper[4827]: I0131 04:05:52.602646 4827 state_mem.go:107] "Deleted CPUSet assignment" podUID="e1d72620-a941-44d0-b09a-401e1827f32f" containerName="barbican-worker" Jan 31 04:05:52 crc kubenswrapper[4827]: E0131 04:05:52.602658 4827 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="74d7118a-77ce-4f65-b0c3-a28c70623d2d" containerName="init" Jan 31 04:05:52 crc kubenswrapper[4827]: I0131 04:05:52.602664 4827 state_mem.go:107] "Deleted CPUSet assignment" podUID="74d7118a-77ce-4f65-b0c3-a28c70623d2d" containerName="init" Jan 31 04:05:52 crc kubenswrapper[4827]: E0131 04:05:52.602676 4827 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="20064771-e5a8-4227-b10d-39905587be45" containerName="barbican-api-log" Jan 31 04:05:52 crc kubenswrapper[4827]: I0131 04:05:52.602682 4827 state_mem.go:107] "Deleted CPUSet assignment" podUID="20064771-e5a8-4227-b10d-39905587be45" containerName="barbican-api-log" Jan 31 04:05:52 crc kubenswrapper[4827]: E0131 04:05:52.602696 4827 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3da5eeb9-641c-4b43-a3c9-eb4860e9995b" containerName="cinder-db-sync" Jan 31 04:05:52 crc kubenswrapper[4827]: I0131 04:05:52.602701 4827 state_mem.go:107] "Deleted CPUSet assignment" podUID="3da5eeb9-641c-4b43-a3c9-eb4860e9995b" containerName="cinder-db-sync" Jan 31 04:05:52 crc kubenswrapper[4827]: E0131 04:05:52.602712 4827 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="74d7118a-77ce-4f65-b0c3-a28c70623d2d" containerName="dnsmasq-dns" Jan 31 04:05:52 crc kubenswrapper[4827]: I0131 04:05:52.602718 4827 state_mem.go:107] "Deleted CPUSet assignment" podUID="74d7118a-77ce-4f65-b0c3-a28c70623d2d" containerName="dnsmasq-dns" Jan 31 04:05:52 crc kubenswrapper[4827]: E0131 04:05:52.602731 4827 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="235b452b-45b7-41a8-82c0-d3ebe8b4c19f" containerName="barbican-keystone-listener-log" Jan 31 04:05:52 crc kubenswrapper[4827]: I0131 04:05:52.602739 4827 state_mem.go:107] "Deleted CPUSet assignment" podUID="235b452b-45b7-41a8-82c0-d3ebe8b4c19f" containerName="barbican-keystone-listener-log" Jan 31 04:05:52 crc kubenswrapper[4827]: E0131 04:05:52.602754 4827 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="20064771-e5a8-4227-b10d-39905587be45" containerName="barbican-api" Jan 31 04:05:52 crc kubenswrapper[4827]: I0131 04:05:52.602762 4827 state_mem.go:107] "Deleted CPUSet assignment" podUID="20064771-e5a8-4227-b10d-39905587be45" containerName="barbican-api" Jan 31 04:05:52 crc kubenswrapper[4827]: I0131 04:05:52.602941 4827 memory_manager.go:354] "RemoveStaleState removing state" podUID="74d7118a-77ce-4f65-b0c3-a28c70623d2d" containerName="dnsmasq-dns" Jan 31 04:05:52 crc kubenswrapper[4827]: I0131 04:05:52.602950 4827 memory_manager.go:354] "RemoveStaleState removing state" podUID="20064771-e5a8-4227-b10d-39905587be45" containerName="barbican-api-log" Jan 31 04:05:52 crc kubenswrapper[4827]: I0131 04:05:52.602959 4827 memory_manager.go:354] "RemoveStaleState removing state" podUID="e1d72620-a941-44d0-b09a-401e1827f32f" containerName="barbican-worker" Jan 31 04:05:52 crc kubenswrapper[4827]: I0131 04:05:52.602970 4827 memory_manager.go:354] "RemoveStaleState removing state" podUID="3da5eeb9-641c-4b43-a3c9-eb4860e9995b" containerName="cinder-db-sync" Jan 31 04:05:52 crc kubenswrapper[4827]: I0131 04:05:52.602976 4827 memory_manager.go:354] "RemoveStaleState removing state" podUID="20064771-e5a8-4227-b10d-39905587be45" containerName="barbican-api" Jan 31 04:05:52 crc kubenswrapper[4827]: I0131 04:05:52.602984 4827 memory_manager.go:354] "RemoveStaleState removing state" podUID="235b452b-45b7-41a8-82c0-d3ebe8b4c19f" containerName="barbican-keystone-listener-log" Jan 31 04:05:52 crc kubenswrapper[4827]: I0131 04:05:52.602992 4827 memory_manager.go:354] "RemoveStaleState removing state" podUID="e1d72620-a941-44d0-b09a-401e1827f32f" containerName="barbican-worker-log" Jan 31 04:05:52 crc kubenswrapper[4827]: I0131 04:05:52.603002 4827 memory_manager.go:354] "RemoveStaleState removing state" podUID="235b452b-45b7-41a8-82c0-d3ebe8b4c19f" containerName="barbican-keystone-listener" Jan 31 04:05:52 crc kubenswrapper[4827]: I0131 04:05:52.603870 4827 util.go:30] "No 
sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Jan 31 04:05:52 crc kubenswrapper[4827]: I0131 04:05:52.609082 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-cinder-dockercfg-j7tvl" Jan 31 04:05:52 crc kubenswrapper[4827]: I0131 04:05:52.609403 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scripts" Jan 31 04:05:52 crc kubenswrapper[4827]: I0131 04:05:52.609685 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scheduler-config-data" Jan 31 04:05:52 crc kubenswrapper[4827]: I0131 04:05:52.609870 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-config-data" Jan 31 04:05:52 crc kubenswrapper[4827]: I0131 04:05:52.619357 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Jan 31 04:05:52 crc kubenswrapper[4827]: I0131 04:05:52.626309 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/cd3c35ef-997d-4511-9dec-08ee13ff1591-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"cd3c35ef-997d-4511-9dec-08ee13ff1591\") " pod="openstack/cinder-scheduler-0" Jan 31 04:05:52 crc kubenswrapper[4827]: I0131 04:05:52.626396 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cd3c35ef-997d-4511-9dec-08ee13ff1591-config-data\") pod \"cinder-scheduler-0\" (UID: \"cd3c35ef-997d-4511-9dec-08ee13ff1591\") " pod="openstack/cinder-scheduler-0" Jan 31 04:05:52 crc kubenswrapper[4827]: I0131 04:05:52.626460 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cd3c35ef-997d-4511-9dec-08ee13ff1591-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: 
\"cd3c35ef-997d-4511-9dec-08ee13ff1591\") " pod="openstack/cinder-scheduler-0" Jan 31 04:05:52 crc kubenswrapper[4827]: I0131 04:05:52.626484 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pq7nq\" (UniqueName: \"kubernetes.io/projected/cd3c35ef-997d-4511-9dec-08ee13ff1591-kube-api-access-pq7nq\") pod \"cinder-scheduler-0\" (UID: \"cd3c35ef-997d-4511-9dec-08ee13ff1591\") " pod="openstack/cinder-scheduler-0" Jan 31 04:05:52 crc kubenswrapper[4827]: I0131 04:05:52.626529 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/cd3c35ef-997d-4511-9dec-08ee13ff1591-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"cd3c35ef-997d-4511-9dec-08ee13ff1591\") " pod="openstack/cinder-scheduler-0" Jan 31 04:05:52 crc kubenswrapper[4827]: I0131 04:05:52.626575 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/cd3c35ef-997d-4511-9dec-08ee13ff1591-scripts\") pod \"cinder-scheduler-0\" (UID: \"cd3c35ef-997d-4511-9dec-08ee13ff1591\") " pod="openstack/cinder-scheduler-0" Jan 31 04:05:52 crc kubenswrapper[4827]: I0131 04:05:52.656853 4827 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-6d97fcdd8f-ht8c2"] Jan 31 04:05:52 crc kubenswrapper[4827]: I0131 04:05:52.658609 4827 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6d97fcdd8f-ht8c2" Jan 31 04:05:52 crc kubenswrapper[4827]: I0131 04:05:52.673189 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6d97fcdd8f-ht8c2"] Jan 31 04:05:52 crc kubenswrapper[4827]: I0131 04:05:52.728325 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cd3c35ef-997d-4511-9dec-08ee13ff1591-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"cd3c35ef-997d-4511-9dec-08ee13ff1591\") " pod="openstack/cinder-scheduler-0" Jan 31 04:05:52 crc kubenswrapper[4827]: I0131 04:05:52.728380 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pq7nq\" (UniqueName: \"kubernetes.io/projected/cd3c35ef-997d-4511-9dec-08ee13ff1591-kube-api-access-pq7nq\") pod \"cinder-scheduler-0\" (UID: \"cd3c35ef-997d-4511-9dec-08ee13ff1591\") " pod="openstack/cinder-scheduler-0" Jan 31 04:05:52 crc kubenswrapper[4827]: I0131 04:05:52.728428 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/cd3c35ef-997d-4511-9dec-08ee13ff1591-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"cd3c35ef-997d-4511-9dec-08ee13ff1591\") " pod="openstack/cinder-scheduler-0" Jan 31 04:05:52 crc kubenswrapper[4827]: I0131 04:05:52.728474 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/cd3c35ef-997d-4511-9dec-08ee13ff1591-scripts\") pod \"cinder-scheduler-0\" (UID: \"cd3c35ef-997d-4511-9dec-08ee13ff1591\") " pod="openstack/cinder-scheduler-0" Jan 31 04:05:52 crc kubenswrapper[4827]: I0131 04:05:52.728511 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/cd3c35ef-997d-4511-9dec-08ee13ff1591-config-data-custom\") pod 
\"cinder-scheduler-0\" (UID: \"cd3c35ef-997d-4511-9dec-08ee13ff1591\") " pod="openstack/cinder-scheduler-0" Jan 31 04:05:52 crc kubenswrapper[4827]: I0131 04:05:52.728552 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cd3c35ef-997d-4511-9dec-08ee13ff1591-config-data\") pod \"cinder-scheduler-0\" (UID: \"cd3c35ef-997d-4511-9dec-08ee13ff1591\") " pod="openstack/cinder-scheduler-0" Jan 31 04:05:52 crc kubenswrapper[4827]: I0131 04:05:52.741053 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/cd3c35ef-997d-4511-9dec-08ee13ff1591-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"cd3c35ef-997d-4511-9dec-08ee13ff1591\") " pod="openstack/cinder-scheduler-0" Jan 31 04:05:52 crc kubenswrapper[4827]: I0131 04:05:52.741496 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/cd3c35ef-997d-4511-9dec-08ee13ff1591-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"cd3c35ef-997d-4511-9dec-08ee13ff1591\") " pod="openstack/cinder-scheduler-0" Jan 31 04:05:52 crc kubenswrapper[4827]: I0131 04:05:52.742347 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cd3c35ef-997d-4511-9dec-08ee13ff1591-config-data\") pod \"cinder-scheduler-0\" (UID: \"cd3c35ef-997d-4511-9dec-08ee13ff1591\") " pod="openstack/cinder-scheduler-0" Jan 31 04:05:52 crc kubenswrapper[4827]: I0131 04:05:52.745444 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/cd3c35ef-997d-4511-9dec-08ee13ff1591-scripts\") pod \"cinder-scheduler-0\" (UID: \"cd3c35ef-997d-4511-9dec-08ee13ff1591\") " pod="openstack/cinder-scheduler-0" Jan 31 04:05:52 crc kubenswrapper[4827]: I0131 04:05:52.759492 4827 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cd3c35ef-997d-4511-9dec-08ee13ff1591-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"cd3c35ef-997d-4511-9dec-08ee13ff1591\") " pod="openstack/cinder-scheduler-0" Jan 31 04:05:52 crc kubenswrapper[4827]: I0131 04:05:52.763423 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pq7nq\" (UniqueName: \"kubernetes.io/projected/cd3c35ef-997d-4511-9dec-08ee13ff1591-kube-api-access-pq7nq\") pod \"cinder-scheduler-0\" (UID: \"cd3c35ef-997d-4511-9dec-08ee13ff1591\") " pod="openstack/cinder-scheduler-0" Jan 31 04:05:52 crc kubenswrapper[4827]: I0131 04:05:52.803422 4827 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-api-0"] Jan 31 04:05:52 crc kubenswrapper[4827]: I0131 04:05:52.806085 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Jan 31 04:05:52 crc kubenswrapper[4827]: I0131 04:05:52.815686 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-api-config-data" Jan 31 04:05:52 crc kubenswrapper[4827]: I0131 04:05:52.837893 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pdsqp\" (UniqueName: \"kubernetes.io/projected/5f38aa0f-0fc6-4caf-a17c-d4d1f034e6f8-kube-api-access-pdsqp\") pod \"dnsmasq-dns-6d97fcdd8f-ht8c2\" (UID: \"5f38aa0f-0fc6-4caf-a17c-d4d1f034e6f8\") " pod="openstack/dnsmasq-dns-6d97fcdd8f-ht8c2" Jan 31 04:05:52 crc kubenswrapper[4827]: I0131 04:05:52.837966 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5f38aa0f-0fc6-4caf-a17c-d4d1f034e6f8-dns-svc\") pod \"dnsmasq-dns-6d97fcdd8f-ht8c2\" (UID: \"5f38aa0f-0fc6-4caf-a17c-d4d1f034e6f8\") " pod="openstack/dnsmasq-dns-6d97fcdd8f-ht8c2" Jan 31 04:05:52 crc kubenswrapper[4827]: I0131 
04:05:52.838019 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/5f38aa0f-0fc6-4caf-a17c-d4d1f034e6f8-ovsdbserver-nb\") pod \"dnsmasq-dns-6d97fcdd8f-ht8c2\" (UID: \"5f38aa0f-0fc6-4caf-a17c-d4d1f034e6f8\") " pod="openstack/dnsmasq-dns-6d97fcdd8f-ht8c2" Jan 31 04:05:52 crc kubenswrapper[4827]: I0131 04:05:52.838060 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/5f38aa0f-0fc6-4caf-a17c-d4d1f034e6f8-ovsdbserver-sb\") pod \"dnsmasq-dns-6d97fcdd8f-ht8c2\" (UID: \"5f38aa0f-0fc6-4caf-a17c-d4d1f034e6f8\") " pod="openstack/dnsmasq-dns-6d97fcdd8f-ht8c2" Jan 31 04:05:52 crc kubenswrapper[4827]: I0131 04:05:52.838136 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5f38aa0f-0fc6-4caf-a17c-d4d1f034e6f8-config\") pod \"dnsmasq-dns-6d97fcdd8f-ht8c2\" (UID: \"5f38aa0f-0fc6-4caf-a17c-d4d1f034e6f8\") " pod="openstack/dnsmasq-dns-6d97fcdd8f-ht8c2" Jan 31 04:05:52 crc kubenswrapper[4827]: I0131 04:05:52.860981 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Jan 31 04:05:52 crc kubenswrapper[4827]: I0131 04:05:52.939800 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/5f38aa0f-0fc6-4caf-a17c-d4d1f034e6f8-ovsdbserver-sb\") pod \"dnsmasq-dns-6d97fcdd8f-ht8c2\" (UID: \"5f38aa0f-0fc6-4caf-a17c-d4d1f034e6f8\") " pod="openstack/dnsmasq-dns-6d97fcdd8f-ht8c2" Jan 31 04:05:52 crc kubenswrapper[4827]: I0131 04:05:52.939862 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/e9e0dca6-8ec6-4124-82e4-69eaac1da0af-config-data-custom\") pod \"cinder-api-0\" 
(UID: \"e9e0dca6-8ec6-4124-82e4-69eaac1da0af\") " pod="openstack/cinder-api-0" Jan 31 04:05:52 crc kubenswrapper[4827]: I0131 04:05:52.939967 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e9e0dca6-8ec6-4124-82e4-69eaac1da0af-scripts\") pod \"cinder-api-0\" (UID: \"e9e0dca6-8ec6-4124-82e4-69eaac1da0af\") " pod="openstack/cinder-api-0" Jan 31 04:05:52 crc kubenswrapper[4827]: I0131 04:05:52.940002 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e9e0dca6-8ec6-4124-82e4-69eaac1da0af-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"e9e0dca6-8ec6-4124-82e4-69eaac1da0af\") " pod="openstack/cinder-api-0" Jan 31 04:05:52 crc kubenswrapper[4827]: I0131 04:05:52.940022 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ddqz2\" (UniqueName: \"kubernetes.io/projected/e9e0dca6-8ec6-4124-82e4-69eaac1da0af-kube-api-access-ddqz2\") pod \"cinder-api-0\" (UID: \"e9e0dca6-8ec6-4124-82e4-69eaac1da0af\") " pod="openstack/cinder-api-0" Jan 31 04:05:52 crc kubenswrapper[4827]: I0131 04:05:52.940058 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5f38aa0f-0fc6-4caf-a17c-d4d1f034e6f8-config\") pod \"dnsmasq-dns-6d97fcdd8f-ht8c2\" (UID: \"5f38aa0f-0fc6-4caf-a17c-d4d1f034e6f8\") " pod="openstack/dnsmasq-dns-6d97fcdd8f-ht8c2" Jan 31 04:05:52 crc kubenswrapper[4827]: I0131 04:05:52.940076 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pdsqp\" (UniqueName: \"kubernetes.io/projected/5f38aa0f-0fc6-4caf-a17c-d4d1f034e6f8-kube-api-access-pdsqp\") pod \"dnsmasq-dns-6d97fcdd8f-ht8c2\" (UID: \"5f38aa0f-0fc6-4caf-a17c-d4d1f034e6f8\") " pod="openstack/dnsmasq-dns-6d97fcdd8f-ht8c2" Jan 31 
04:05:52 crc kubenswrapper[4827]: I0131 04:05:52.940097 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/e9e0dca6-8ec6-4124-82e4-69eaac1da0af-etc-machine-id\") pod \"cinder-api-0\" (UID: \"e9e0dca6-8ec6-4124-82e4-69eaac1da0af\") " pod="openstack/cinder-api-0" Jan 31 04:05:52 crc kubenswrapper[4827]: I0131 04:05:52.940131 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5f38aa0f-0fc6-4caf-a17c-d4d1f034e6f8-dns-svc\") pod \"dnsmasq-dns-6d97fcdd8f-ht8c2\" (UID: \"5f38aa0f-0fc6-4caf-a17c-d4d1f034e6f8\") " pod="openstack/dnsmasq-dns-6d97fcdd8f-ht8c2" Jan 31 04:05:52 crc kubenswrapper[4827]: I0131 04:05:52.940151 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e9e0dca6-8ec6-4124-82e4-69eaac1da0af-config-data\") pod \"cinder-api-0\" (UID: \"e9e0dca6-8ec6-4124-82e4-69eaac1da0af\") " pod="openstack/cinder-api-0" Jan 31 04:05:52 crc kubenswrapper[4827]: I0131 04:05:52.940188 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e9e0dca6-8ec6-4124-82e4-69eaac1da0af-logs\") pod \"cinder-api-0\" (UID: \"e9e0dca6-8ec6-4124-82e4-69eaac1da0af\") " pod="openstack/cinder-api-0" Jan 31 04:05:52 crc kubenswrapper[4827]: I0131 04:05:52.940207 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/5f38aa0f-0fc6-4caf-a17c-d4d1f034e6f8-ovsdbserver-nb\") pod \"dnsmasq-dns-6d97fcdd8f-ht8c2\" (UID: \"5f38aa0f-0fc6-4caf-a17c-d4d1f034e6f8\") " pod="openstack/dnsmasq-dns-6d97fcdd8f-ht8c2" Jan 31 04:05:52 crc kubenswrapper[4827]: I0131 04:05:52.941129 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/5f38aa0f-0fc6-4caf-a17c-d4d1f034e6f8-ovsdbserver-nb\") pod \"dnsmasq-dns-6d97fcdd8f-ht8c2\" (UID: \"5f38aa0f-0fc6-4caf-a17c-d4d1f034e6f8\") " pod="openstack/dnsmasq-dns-6d97fcdd8f-ht8c2" Jan 31 04:05:52 crc kubenswrapper[4827]: I0131 04:05:52.941216 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5f38aa0f-0fc6-4caf-a17c-d4d1f034e6f8-config\") pod \"dnsmasq-dns-6d97fcdd8f-ht8c2\" (UID: \"5f38aa0f-0fc6-4caf-a17c-d4d1f034e6f8\") " pod="openstack/dnsmasq-dns-6d97fcdd8f-ht8c2" Jan 31 04:05:52 crc kubenswrapper[4827]: I0131 04:05:52.941838 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/5f38aa0f-0fc6-4caf-a17c-d4d1f034e6f8-ovsdbserver-sb\") pod \"dnsmasq-dns-6d97fcdd8f-ht8c2\" (UID: \"5f38aa0f-0fc6-4caf-a17c-d4d1f034e6f8\") " pod="openstack/dnsmasq-dns-6d97fcdd8f-ht8c2" Jan 31 04:05:52 crc kubenswrapper[4827]: I0131 04:05:52.942062 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5f38aa0f-0fc6-4caf-a17c-d4d1f034e6f8-dns-svc\") pod \"dnsmasq-dns-6d97fcdd8f-ht8c2\" (UID: \"5f38aa0f-0fc6-4caf-a17c-d4d1f034e6f8\") " pod="openstack/dnsmasq-dns-6d97fcdd8f-ht8c2" Jan 31 04:05:52 crc kubenswrapper[4827]: I0131 04:05:52.943537 4827 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Jan 31 04:05:52 crc kubenswrapper[4827]: I0131 04:05:52.965338 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pdsqp\" (UniqueName: \"kubernetes.io/projected/5f38aa0f-0fc6-4caf-a17c-d4d1f034e6f8-kube-api-access-pdsqp\") pod \"dnsmasq-dns-6d97fcdd8f-ht8c2\" (UID: \"5f38aa0f-0fc6-4caf-a17c-d4d1f034e6f8\") " pod="openstack/dnsmasq-dns-6d97fcdd8f-ht8c2" Jan 31 04:05:52 crc kubenswrapper[4827]: I0131 04:05:52.995331 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6d97fcdd8f-ht8c2" Jan 31 04:05:53 crc kubenswrapper[4827]: I0131 04:05:53.041806 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/e9e0dca6-8ec6-4124-82e4-69eaac1da0af-etc-machine-id\") pod \"cinder-api-0\" (UID: \"e9e0dca6-8ec6-4124-82e4-69eaac1da0af\") " pod="openstack/cinder-api-0" Jan 31 04:05:53 crc kubenswrapper[4827]: I0131 04:05:53.042088 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e9e0dca6-8ec6-4124-82e4-69eaac1da0af-config-data\") pod \"cinder-api-0\" (UID: \"e9e0dca6-8ec6-4124-82e4-69eaac1da0af\") " pod="openstack/cinder-api-0" Jan 31 04:05:53 crc kubenswrapper[4827]: I0131 04:05:53.042131 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e9e0dca6-8ec6-4124-82e4-69eaac1da0af-logs\") pod \"cinder-api-0\" (UID: \"e9e0dca6-8ec6-4124-82e4-69eaac1da0af\") " pod="openstack/cinder-api-0" Jan 31 04:05:53 crc kubenswrapper[4827]: I0131 04:05:53.042180 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/e9e0dca6-8ec6-4124-82e4-69eaac1da0af-config-data-custom\") pod \"cinder-api-0\" (UID: 
\"e9e0dca6-8ec6-4124-82e4-69eaac1da0af\") " pod="openstack/cinder-api-0" Jan 31 04:05:53 crc kubenswrapper[4827]: I0131 04:05:53.042197 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/e9e0dca6-8ec6-4124-82e4-69eaac1da0af-etc-machine-id\") pod \"cinder-api-0\" (UID: \"e9e0dca6-8ec6-4124-82e4-69eaac1da0af\") " pod="openstack/cinder-api-0" Jan 31 04:05:53 crc kubenswrapper[4827]: I0131 04:05:53.042205 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e9e0dca6-8ec6-4124-82e4-69eaac1da0af-scripts\") pod \"cinder-api-0\" (UID: \"e9e0dca6-8ec6-4124-82e4-69eaac1da0af\") " pod="openstack/cinder-api-0" Jan 31 04:05:53 crc kubenswrapper[4827]: I0131 04:05:53.042337 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e9e0dca6-8ec6-4124-82e4-69eaac1da0af-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"e9e0dca6-8ec6-4124-82e4-69eaac1da0af\") " pod="openstack/cinder-api-0" Jan 31 04:05:53 crc kubenswrapper[4827]: I0131 04:05:53.042371 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ddqz2\" (UniqueName: \"kubernetes.io/projected/e9e0dca6-8ec6-4124-82e4-69eaac1da0af-kube-api-access-ddqz2\") pod \"cinder-api-0\" (UID: \"e9e0dca6-8ec6-4124-82e4-69eaac1da0af\") " pod="openstack/cinder-api-0" Jan 31 04:05:53 crc kubenswrapper[4827]: I0131 04:05:53.043266 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e9e0dca6-8ec6-4124-82e4-69eaac1da0af-logs\") pod \"cinder-api-0\" (UID: \"e9e0dca6-8ec6-4124-82e4-69eaac1da0af\") " pod="openstack/cinder-api-0" Jan 31 04:05:53 crc kubenswrapper[4827]: I0131 04:05:53.047391 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/e9e0dca6-8ec6-4124-82e4-69eaac1da0af-scripts\") pod \"cinder-api-0\" (UID: \"e9e0dca6-8ec6-4124-82e4-69eaac1da0af\") " pod="openstack/cinder-api-0" Jan 31 04:05:53 crc kubenswrapper[4827]: I0131 04:05:53.049737 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e9e0dca6-8ec6-4124-82e4-69eaac1da0af-config-data\") pod \"cinder-api-0\" (UID: \"e9e0dca6-8ec6-4124-82e4-69eaac1da0af\") " pod="openstack/cinder-api-0" Jan 31 04:05:53 crc kubenswrapper[4827]: I0131 04:05:53.053607 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/e9e0dca6-8ec6-4124-82e4-69eaac1da0af-config-data-custom\") pod \"cinder-api-0\" (UID: \"e9e0dca6-8ec6-4124-82e4-69eaac1da0af\") " pod="openstack/cinder-api-0" Jan 31 04:05:53 crc kubenswrapper[4827]: I0131 04:05:53.056508 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e9e0dca6-8ec6-4124-82e4-69eaac1da0af-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"e9e0dca6-8ec6-4124-82e4-69eaac1da0af\") " pod="openstack/cinder-api-0" Jan 31 04:05:53 crc kubenswrapper[4827]: I0131 04:05:53.072381 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ddqz2\" (UniqueName: \"kubernetes.io/projected/e9e0dca6-8ec6-4124-82e4-69eaac1da0af-kube-api-access-ddqz2\") pod \"cinder-api-0\" (UID: \"e9e0dca6-8ec6-4124-82e4-69eaac1da0af\") " pod="openstack/cinder-api-0" Jan 31 04:05:53 crc kubenswrapper[4827]: I0131 04:05:53.181276 4827 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Jan 31 04:05:53 crc kubenswrapper[4827]: I0131 04:05:53.339378 4827 generic.go:334] "Generic (PLEG): container finished" podID="d07cd025-0fc1-4536-978d-f7f7b0df2dbc" containerID="aa7e117a3607dcf4797ded76871902888ee6defaf76c278d80c76286f73764e4" exitCode=0 Jan 31 04:05:53 crc kubenswrapper[4827]: I0131 04:05:53.339423 4827 generic.go:334] "Generic (PLEG): container finished" podID="d07cd025-0fc1-4536-978d-f7f7b0df2dbc" containerID="ead723a81fa378183110c6664cd4f76e0c922cdbb27cbfe04a57730874e602b2" exitCode=2 Jan 31 04:05:53 crc kubenswrapper[4827]: I0131 04:05:53.339433 4827 generic.go:334] "Generic (PLEG): container finished" podID="d07cd025-0fc1-4536-978d-f7f7b0df2dbc" containerID="b988a2b47d57cf27a43165062e4e9bbf55bf7fd3c3576b318ac3691a779daa83" exitCode=0 Jan 31 04:05:53 crc kubenswrapper[4827]: I0131 04:05:53.339451 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d07cd025-0fc1-4536-978d-f7f7b0df2dbc","Type":"ContainerDied","Data":"aa7e117a3607dcf4797ded76871902888ee6defaf76c278d80c76286f73764e4"} Jan 31 04:05:53 crc kubenswrapper[4827]: I0131 04:05:53.339505 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d07cd025-0fc1-4536-978d-f7f7b0df2dbc","Type":"ContainerDied","Data":"ead723a81fa378183110c6664cd4f76e0c922cdbb27cbfe04a57730874e602b2"} Jan 31 04:05:53 crc kubenswrapper[4827]: I0131 04:05:53.339516 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d07cd025-0fc1-4536-978d-f7f7b0df2dbc","Type":"ContainerDied","Data":"b988a2b47d57cf27a43165062e4e9bbf55bf7fd3c3576b318ac3691a779daa83"} Jan 31 04:05:53 crc kubenswrapper[4827]: I0131 04:05:53.440014 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Jan 31 04:05:53 crc kubenswrapper[4827]: W0131 04:05:53.533508 4827 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5f38aa0f_0fc6_4caf_a17c_d4d1f034e6f8.slice/crio-3460906c008a42e6c4478cdc19bdcf87ac27655df1d9455ff2dfbda649a13e1a WatchSource:0}: Error finding container 3460906c008a42e6c4478cdc19bdcf87ac27655df1d9455ff2dfbda649a13e1a: Status 404 returned error can't find the container with id 3460906c008a42e6c4478cdc19bdcf87ac27655df1d9455ff2dfbda649a13e1a Jan 31 04:05:53 crc kubenswrapper[4827]: I0131 04:05:53.534478 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6d97fcdd8f-ht8c2"] Jan 31 04:05:53 crc kubenswrapper[4827]: I0131 04:05:53.629055 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Jan 31 04:05:54 crc kubenswrapper[4827]: I0131 04:05:54.142651 4827 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="20064771-e5a8-4227-b10d-39905587be45" path="/var/lib/kubelet/pods/20064771-e5a8-4227-b10d-39905587be45/volumes" Jan 31 04:05:54 crc kubenswrapper[4827]: I0131 04:05:54.143480 4827 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="235b452b-45b7-41a8-82c0-d3ebe8b4c19f" path="/var/lib/kubelet/pods/235b452b-45b7-41a8-82c0-d3ebe8b4c19f/volumes" Jan 31 04:05:54 crc kubenswrapper[4827]: I0131 04:05:54.144034 4827 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="74d7118a-77ce-4f65-b0c3-a28c70623d2d" path="/var/lib/kubelet/pods/74d7118a-77ce-4f65-b0c3-a28c70623d2d/volumes" Jan 31 04:05:54 crc kubenswrapper[4827]: I0131 04:05:54.381782 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"cd3c35ef-997d-4511-9dec-08ee13ff1591","Type":"ContainerStarted","Data":"18889c5e63fb8babcd37fad0ab7549c303765fad7c311714c14549be9601159c"} Jan 31 04:05:54 crc kubenswrapper[4827]: I0131 04:05:54.389646 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" 
event={"ID":"e9e0dca6-8ec6-4124-82e4-69eaac1da0af","Type":"ContainerStarted","Data":"d55f1f29ab725ea1d019d73110a36e4bec01bdaa2c3287a3fe00b0f1be517751"} Jan 31 04:05:54 crc kubenswrapper[4827]: I0131 04:05:54.392019 4827 generic.go:334] "Generic (PLEG): container finished" podID="5f38aa0f-0fc6-4caf-a17c-d4d1f034e6f8" containerID="3dd58151dfee712c62f3ca80baa25f2d49f0a28073b8c21a23d5ff2e895c2244" exitCode=0 Jan 31 04:05:54 crc kubenswrapper[4827]: I0131 04:05:54.393666 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6d97fcdd8f-ht8c2" event={"ID":"5f38aa0f-0fc6-4caf-a17c-d4d1f034e6f8","Type":"ContainerDied","Data":"3dd58151dfee712c62f3ca80baa25f2d49f0a28073b8c21a23d5ff2e895c2244"} Jan 31 04:05:54 crc kubenswrapper[4827]: I0131 04:05:54.393697 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6d97fcdd8f-ht8c2" event={"ID":"5f38aa0f-0fc6-4caf-a17c-d4d1f034e6f8","Type":"ContainerStarted","Data":"3460906c008a42e6c4478cdc19bdcf87ac27655df1d9455ff2dfbda649a13e1a"} Jan 31 04:05:54 crc kubenswrapper[4827]: I0131 04:05:54.958580 4827 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"] Jan 31 04:05:55 crc kubenswrapper[4827]: I0131 04:05:55.213630 4827 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-74bf46887d-nb5df" Jan 31 04:05:55 crc kubenswrapper[4827]: I0131 04:05:55.369387 4827 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-74bf46887d-nb5df" Jan 31 04:05:55 crc kubenswrapper[4827]: I0131 04:05:55.406564 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"e9e0dca6-8ec6-4124-82e4-69eaac1da0af","Type":"ContainerStarted","Data":"b423735e4c610cc32e9de46840e30dd87ff2b9bda0c3f3a6f66bc3c347c79d8a"} Jan 31 04:05:55 crc kubenswrapper[4827]: I0131 04:05:55.406945 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" 
event={"ID":"e9e0dca6-8ec6-4124-82e4-69eaac1da0af","Type":"ContainerStarted","Data":"97995bd0ff94c6e7db6f3bdbafdf9a64eda66d05a75b00ce18a7c338d12b95c5"} Jan 31 04:05:55 crc kubenswrapper[4827]: I0131 04:05:55.406609 4827 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-api-0" podUID="e9e0dca6-8ec6-4124-82e4-69eaac1da0af" containerName="cinder-api-log" containerID="cri-o://97995bd0ff94c6e7db6f3bdbafdf9a64eda66d05a75b00ce18a7c338d12b95c5" gracePeriod=30 Jan 31 04:05:55 crc kubenswrapper[4827]: I0131 04:05:55.406964 4827 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cinder-api-0" Jan 31 04:05:55 crc kubenswrapper[4827]: I0131 04:05:55.406654 4827 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-api-0" podUID="e9e0dca6-8ec6-4124-82e4-69eaac1da0af" containerName="cinder-api" containerID="cri-o://b423735e4c610cc32e9de46840e30dd87ff2b9bda0c3f3a6f66bc3c347c79d8a" gracePeriod=30 Jan 31 04:05:55 crc kubenswrapper[4827]: I0131 04:05:55.429785 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6d97fcdd8f-ht8c2" event={"ID":"5f38aa0f-0fc6-4caf-a17c-d4d1f034e6f8","Type":"ContainerStarted","Data":"06d7363ec90e1783a9f34fabc72fa9b099a6766da31bf2cd8daf11fcfbb67584"} Jan 31 04:05:55 crc kubenswrapper[4827]: I0131 04:05:55.430907 4827 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-6d97fcdd8f-ht8c2" Jan 31 04:05:55 crc kubenswrapper[4827]: I0131 04:05:55.436433 4827 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-api-87c57bc7d-fgwdw"] Jan 31 04:05:55 crc kubenswrapper[4827]: I0131 04:05:55.436674 4827 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-api-87c57bc7d-fgwdw" podUID="2379f58c-95e4-4242-93de-82813ecbf089" containerName="barbican-api-log" 
containerID="cri-o://6f0323ef39c21137a91ef37634c50b08484a3caca7ece0deb69af6fb171b8223" gracePeriod=30 Jan 31 04:05:55 crc kubenswrapper[4827]: I0131 04:05:55.436761 4827 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-api-87c57bc7d-fgwdw" podUID="2379f58c-95e4-4242-93de-82813ecbf089" containerName="barbican-api" containerID="cri-o://02091ba400a30568eb018890f1d435f65daf0a762f5dd961b208271015015ce4" gracePeriod=30 Jan 31 04:05:55 crc kubenswrapper[4827]: I0131 04:05:55.443012 4827 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-api-0" podStartSLOduration=3.442987406 podStartE2EDuration="3.442987406s" podCreationTimestamp="2026-01-31 04:05:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 04:05:55.429256656 +0000 UTC m=+1148.116337105" watchObservedRunningTime="2026-01-31 04:05:55.442987406 +0000 UTC m=+1148.130067855" Jan 31 04:05:55 crc kubenswrapper[4827]: I0131 04:05:55.444052 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"cd3c35ef-997d-4511-9dec-08ee13ff1591","Type":"ContainerStarted","Data":"57b0296fecd24f3d20e970e14b2a5214000c1a1e001276e80a96b95ed8edf890"} Jan 31 04:05:55 crc kubenswrapper[4827]: I0131 04:05:55.479265 4827 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-6d97fcdd8f-ht8c2" podStartSLOduration=3.479245996 podStartE2EDuration="3.479245996s" podCreationTimestamp="2026-01-31 04:05:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 04:05:55.473325617 +0000 UTC m=+1148.160406076" watchObservedRunningTime="2026-01-31 04:05:55.479245996 +0000 UTC m=+1148.166326445" Jan 31 04:05:56 crc kubenswrapper[4827]: I0131 04:05:56.454629 4827 generic.go:334] "Generic (PLEG): 
container finished" podID="2379f58c-95e4-4242-93de-82813ecbf089" containerID="6f0323ef39c21137a91ef37634c50b08484a3caca7ece0deb69af6fb171b8223" exitCode=143 Jan 31 04:05:56 crc kubenswrapper[4827]: I0131 04:05:56.454922 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-87c57bc7d-fgwdw" event={"ID":"2379f58c-95e4-4242-93de-82813ecbf089","Type":"ContainerDied","Data":"6f0323ef39c21137a91ef37634c50b08484a3caca7ece0deb69af6fb171b8223"} Jan 31 04:05:56 crc kubenswrapper[4827]: I0131 04:05:56.457370 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"cd3c35ef-997d-4511-9dec-08ee13ff1591","Type":"ContainerStarted","Data":"ce20c721c959c5920bae0b4b2db8e3297bad03b6bcf5b471f1c59a0c70840d79"} Jan 31 04:05:56 crc kubenswrapper[4827]: I0131 04:05:56.459965 4827 generic.go:334] "Generic (PLEG): container finished" podID="e9e0dca6-8ec6-4124-82e4-69eaac1da0af" containerID="97995bd0ff94c6e7db6f3bdbafdf9a64eda66d05a75b00ce18a7c338d12b95c5" exitCode=143 Jan 31 04:05:56 crc kubenswrapper[4827]: I0131 04:05:56.460062 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"e9e0dca6-8ec6-4124-82e4-69eaac1da0af","Type":"ContainerDied","Data":"97995bd0ff94c6e7db6f3bdbafdf9a64eda66d05a75b00ce18a7c338d12b95c5"} Jan 31 04:05:56 crc kubenswrapper[4827]: I0131 04:05:56.486821 4827 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-scheduler-0" podStartSLOduration=3.8735749889999997 podStartE2EDuration="4.486804592s" podCreationTimestamp="2026-01-31 04:05:52 +0000 UTC" firstStartedPulling="2026-01-31 04:05:53.451604736 +0000 UTC m=+1146.138685185" lastFinishedPulling="2026-01-31 04:05:54.064834339 +0000 UTC m=+1146.751914788" observedRunningTime="2026-01-31 04:05:56.485020492 +0000 UTC m=+1149.172100941" watchObservedRunningTime="2026-01-31 04:05:56.486804592 +0000 UTC m=+1149.173885041" Jan 31 04:05:57 crc kubenswrapper[4827]: I0131 
04:05:57.129282 4827 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 31 04:05:57 crc kubenswrapper[4827]: I0131 04:05:57.223804 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cpjnd\" (UniqueName: \"kubernetes.io/projected/d07cd025-0fc1-4536-978d-f7f7b0df2dbc-kube-api-access-cpjnd\") pod \"d07cd025-0fc1-4536-978d-f7f7b0df2dbc\" (UID: \"d07cd025-0fc1-4536-978d-f7f7b0df2dbc\") " Jan 31 04:05:57 crc kubenswrapper[4827]: I0131 04:05:57.224211 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d07cd025-0fc1-4536-978d-f7f7b0df2dbc-config-data\") pod \"d07cd025-0fc1-4536-978d-f7f7b0df2dbc\" (UID: \"d07cd025-0fc1-4536-978d-f7f7b0df2dbc\") " Jan 31 04:05:57 crc kubenswrapper[4827]: I0131 04:05:57.224340 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d07cd025-0fc1-4536-978d-f7f7b0df2dbc-combined-ca-bundle\") pod \"d07cd025-0fc1-4536-978d-f7f7b0df2dbc\" (UID: \"d07cd025-0fc1-4536-978d-f7f7b0df2dbc\") " Jan 31 04:05:57 crc kubenswrapper[4827]: I0131 04:05:57.224481 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d07cd025-0fc1-4536-978d-f7f7b0df2dbc-scripts\") pod \"d07cd025-0fc1-4536-978d-f7f7b0df2dbc\" (UID: \"d07cd025-0fc1-4536-978d-f7f7b0df2dbc\") " Jan 31 04:05:57 crc kubenswrapper[4827]: I0131 04:05:57.224584 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d07cd025-0fc1-4536-978d-f7f7b0df2dbc-log-httpd\") pod \"d07cd025-0fc1-4536-978d-f7f7b0df2dbc\" (UID: \"d07cd025-0fc1-4536-978d-f7f7b0df2dbc\") " Jan 31 04:05:57 crc kubenswrapper[4827]: I0131 04:05:57.224686 4827 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/d07cd025-0fc1-4536-978d-f7f7b0df2dbc-sg-core-conf-yaml\") pod \"d07cd025-0fc1-4536-978d-f7f7b0df2dbc\" (UID: \"d07cd025-0fc1-4536-978d-f7f7b0df2dbc\") " Jan 31 04:05:57 crc kubenswrapper[4827]: I0131 04:05:57.224763 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d07cd025-0fc1-4536-978d-f7f7b0df2dbc-run-httpd\") pod \"d07cd025-0fc1-4536-978d-f7f7b0df2dbc\" (UID: \"d07cd025-0fc1-4536-978d-f7f7b0df2dbc\") " Jan 31 04:05:57 crc kubenswrapper[4827]: I0131 04:05:57.224970 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d07cd025-0fc1-4536-978d-f7f7b0df2dbc-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "d07cd025-0fc1-4536-978d-f7f7b0df2dbc" (UID: "d07cd025-0fc1-4536-978d-f7f7b0df2dbc"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 04:05:57 crc kubenswrapper[4827]: I0131 04:05:57.225071 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d07cd025-0fc1-4536-978d-f7f7b0df2dbc-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "d07cd025-0fc1-4536-978d-f7f7b0df2dbc" (UID: "d07cd025-0fc1-4536-978d-f7f7b0df2dbc"). InnerVolumeSpecName "log-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 04:05:57 crc kubenswrapper[4827]: I0131 04:05:57.225321 4827 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d07cd025-0fc1-4536-978d-f7f7b0df2dbc-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 31 04:05:57 crc kubenswrapper[4827]: I0131 04:05:57.225384 4827 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d07cd025-0fc1-4536-978d-f7f7b0df2dbc-run-httpd\") on node \"crc\" DevicePath \"\"" Jan 31 04:05:57 crc kubenswrapper[4827]: I0131 04:05:57.230283 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d07cd025-0fc1-4536-978d-f7f7b0df2dbc-kube-api-access-cpjnd" (OuterVolumeSpecName: "kube-api-access-cpjnd") pod "d07cd025-0fc1-4536-978d-f7f7b0df2dbc" (UID: "d07cd025-0fc1-4536-978d-f7f7b0df2dbc"). InnerVolumeSpecName "kube-api-access-cpjnd". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 04:05:57 crc kubenswrapper[4827]: I0131 04:05:57.231090 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d07cd025-0fc1-4536-978d-f7f7b0df2dbc-scripts" (OuterVolumeSpecName: "scripts") pod "d07cd025-0fc1-4536-978d-f7f7b0df2dbc" (UID: "d07cd025-0fc1-4536-978d-f7f7b0df2dbc"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 04:05:57 crc kubenswrapper[4827]: I0131 04:05:57.255831 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d07cd025-0fc1-4536-978d-f7f7b0df2dbc-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "d07cd025-0fc1-4536-978d-f7f7b0df2dbc" (UID: "d07cd025-0fc1-4536-978d-f7f7b0df2dbc"). InnerVolumeSpecName "sg-core-conf-yaml". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 04:05:57 crc kubenswrapper[4827]: I0131 04:05:57.317404 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d07cd025-0fc1-4536-978d-f7f7b0df2dbc-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "d07cd025-0fc1-4536-978d-f7f7b0df2dbc" (UID: "d07cd025-0fc1-4536-978d-f7f7b0df2dbc"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 04:05:57 crc kubenswrapper[4827]: I0131 04:05:57.326499 4827 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cpjnd\" (UniqueName: \"kubernetes.io/projected/d07cd025-0fc1-4536-978d-f7f7b0df2dbc-kube-api-access-cpjnd\") on node \"crc\" DevicePath \"\"" Jan 31 04:05:57 crc kubenswrapper[4827]: I0131 04:05:57.326537 4827 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d07cd025-0fc1-4536-978d-f7f7b0df2dbc-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 31 04:05:57 crc kubenswrapper[4827]: I0131 04:05:57.326547 4827 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d07cd025-0fc1-4536-978d-f7f7b0df2dbc-scripts\") on node \"crc\" DevicePath \"\"" Jan 31 04:05:57 crc kubenswrapper[4827]: I0131 04:05:57.326554 4827 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/d07cd025-0fc1-4536-978d-f7f7b0df2dbc-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Jan 31 04:05:57 crc kubenswrapper[4827]: I0131 04:05:57.326618 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d07cd025-0fc1-4536-978d-f7f7b0df2dbc-config-data" (OuterVolumeSpecName: "config-data") pod "d07cd025-0fc1-4536-978d-f7f7b0df2dbc" (UID: "d07cd025-0fc1-4536-978d-f7f7b0df2dbc"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 04:05:57 crc kubenswrapper[4827]: I0131 04:05:57.428614 4827 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d07cd025-0fc1-4536-978d-f7f7b0df2dbc-config-data\") on node \"crc\" DevicePath \"\"" Jan 31 04:05:57 crc kubenswrapper[4827]: I0131 04:05:57.478159 4827 generic.go:334] "Generic (PLEG): container finished" podID="d07cd025-0fc1-4536-978d-f7f7b0df2dbc" containerID="047e9b3a0c8b60a604bcaa97a3f1ceb16772069ff3935941f313f5429b3499bb" exitCode=0 Jan 31 04:05:57 crc kubenswrapper[4827]: I0131 04:05:57.478224 4827 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 31 04:05:57 crc kubenswrapper[4827]: I0131 04:05:57.478291 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d07cd025-0fc1-4536-978d-f7f7b0df2dbc","Type":"ContainerDied","Data":"047e9b3a0c8b60a604bcaa97a3f1ceb16772069ff3935941f313f5429b3499bb"} Jan 31 04:05:57 crc kubenswrapper[4827]: I0131 04:05:57.478353 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d07cd025-0fc1-4536-978d-f7f7b0df2dbc","Type":"ContainerDied","Data":"467546bd7d9d3236ff27d5484abaaa397708754379e3802460b1310ac453d4a3"} Jan 31 04:05:57 crc kubenswrapper[4827]: I0131 04:05:57.478387 4827 scope.go:117] "RemoveContainer" containerID="aa7e117a3607dcf4797ded76871902888ee6defaf76c278d80c76286f73764e4" Jan 31 04:05:57 crc kubenswrapper[4827]: I0131 04:05:57.510315 4827 scope.go:117] "RemoveContainer" containerID="ead723a81fa378183110c6664cd4f76e0c922cdbb27cbfe04a57730874e602b2" Jan 31 04:05:57 crc kubenswrapper[4827]: I0131 04:05:57.525958 4827 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 31 04:05:57 crc kubenswrapper[4827]: I0131 04:05:57.528114 4827 scope.go:117] "RemoveContainer" 
containerID="047e9b3a0c8b60a604bcaa97a3f1ceb16772069ff3935941f313f5429b3499bb" Jan 31 04:05:57 crc kubenswrapper[4827]: I0131 04:05:57.557471 4827 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Jan 31 04:05:57 crc kubenswrapper[4827]: I0131 04:05:57.566972 4827 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Jan 31 04:05:57 crc kubenswrapper[4827]: E0131 04:05:57.567536 4827 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d07cd025-0fc1-4536-978d-f7f7b0df2dbc" containerName="sg-core" Jan 31 04:05:57 crc kubenswrapper[4827]: I0131 04:05:57.567567 4827 state_mem.go:107] "Deleted CPUSet assignment" podUID="d07cd025-0fc1-4536-978d-f7f7b0df2dbc" containerName="sg-core" Jan 31 04:05:57 crc kubenswrapper[4827]: E0131 04:05:57.567591 4827 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d07cd025-0fc1-4536-978d-f7f7b0df2dbc" containerName="proxy-httpd" Jan 31 04:05:57 crc kubenswrapper[4827]: I0131 04:05:57.567601 4827 state_mem.go:107] "Deleted CPUSet assignment" podUID="d07cd025-0fc1-4536-978d-f7f7b0df2dbc" containerName="proxy-httpd" Jan 31 04:05:57 crc kubenswrapper[4827]: E0131 04:05:57.567625 4827 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d07cd025-0fc1-4536-978d-f7f7b0df2dbc" containerName="ceilometer-notification-agent" Jan 31 04:05:57 crc kubenswrapper[4827]: I0131 04:05:57.567636 4827 state_mem.go:107] "Deleted CPUSet assignment" podUID="d07cd025-0fc1-4536-978d-f7f7b0df2dbc" containerName="ceilometer-notification-agent" Jan 31 04:05:57 crc kubenswrapper[4827]: E0131 04:05:57.567676 4827 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d07cd025-0fc1-4536-978d-f7f7b0df2dbc" containerName="ceilometer-central-agent" Jan 31 04:05:57 crc kubenswrapper[4827]: I0131 04:05:57.567690 4827 state_mem.go:107] "Deleted CPUSet assignment" podUID="d07cd025-0fc1-4536-978d-f7f7b0df2dbc" containerName="ceilometer-central-agent" Jan 31 04:05:57 
crc kubenswrapper[4827]: I0131 04:05:57.567952 4827 memory_manager.go:354] "RemoveStaleState removing state" podUID="d07cd025-0fc1-4536-978d-f7f7b0df2dbc" containerName="proxy-httpd" Jan 31 04:05:57 crc kubenswrapper[4827]: I0131 04:05:57.567977 4827 memory_manager.go:354] "RemoveStaleState removing state" podUID="d07cd025-0fc1-4536-978d-f7f7b0df2dbc" containerName="sg-core" Jan 31 04:05:57 crc kubenswrapper[4827]: I0131 04:05:57.568003 4827 memory_manager.go:354] "RemoveStaleState removing state" podUID="d07cd025-0fc1-4536-978d-f7f7b0df2dbc" containerName="ceilometer-central-agent" Jan 31 04:05:57 crc kubenswrapper[4827]: I0131 04:05:57.568021 4827 memory_manager.go:354] "RemoveStaleState removing state" podUID="d07cd025-0fc1-4536-978d-f7f7b0df2dbc" containerName="ceilometer-notification-agent" Jan 31 04:05:57 crc kubenswrapper[4827]: I0131 04:05:57.570647 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 31 04:05:57 crc kubenswrapper[4827]: I0131 04:05:57.573289 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Jan 31 04:05:57 crc kubenswrapper[4827]: I0131 04:05:57.573428 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Jan 31 04:05:57 crc kubenswrapper[4827]: I0131 04:05:57.574036 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 31 04:05:57 crc kubenswrapper[4827]: I0131 04:05:57.577554 4827 scope.go:117] "RemoveContainer" containerID="b988a2b47d57cf27a43165062e4e9bbf55bf7fd3c3576b318ac3691a779daa83" Jan 31 04:05:57 crc kubenswrapper[4827]: I0131 04:05:57.609853 4827 scope.go:117] "RemoveContainer" containerID="aa7e117a3607dcf4797ded76871902888ee6defaf76c278d80c76286f73764e4" Jan 31 04:05:57 crc kubenswrapper[4827]: E0131 04:05:57.610359 4827 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"aa7e117a3607dcf4797ded76871902888ee6defaf76c278d80c76286f73764e4\": container with ID starting with aa7e117a3607dcf4797ded76871902888ee6defaf76c278d80c76286f73764e4 not found: ID does not exist" containerID="aa7e117a3607dcf4797ded76871902888ee6defaf76c278d80c76286f73764e4" Jan 31 04:05:57 crc kubenswrapper[4827]: I0131 04:05:57.610388 4827 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"aa7e117a3607dcf4797ded76871902888ee6defaf76c278d80c76286f73764e4"} err="failed to get container status \"aa7e117a3607dcf4797ded76871902888ee6defaf76c278d80c76286f73764e4\": rpc error: code = NotFound desc = could not find container \"aa7e117a3607dcf4797ded76871902888ee6defaf76c278d80c76286f73764e4\": container with ID starting with aa7e117a3607dcf4797ded76871902888ee6defaf76c278d80c76286f73764e4 not found: ID does not exist" Jan 31 04:05:57 crc kubenswrapper[4827]: I0131 04:05:57.610416 4827 scope.go:117] "RemoveContainer" containerID="ead723a81fa378183110c6664cd4f76e0c922cdbb27cbfe04a57730874e602b2" Jan 31 04:05:57 crc kubenswrapper[4827]: E0131 04:05:57.610920 4827 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ead723a81fa378183110c6664cd4f76e0c922cdbb27cbfe04a57730874e602b2\": container with ID starting with ead723a81fa378183110c6664cd4f76e0c922cdbb27cbfe04a57730874e602b2 not found: ID does not exist" containerID="ead723a81fa378183110c6664cd4f76e0c922cdbb27cbfe04a57730874e602b2" Jan 31 04:05:57 crc kubenswrapper[4827]: I0131 04:05:57.610969 4827 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ead723a81fa378183110c6664cd4f76e0c922cdbb27cbfe04a57730874e602b2"} err="failed to get container status \"ead723a81fa378183110c6664cd4f76e0c922cdbb27cbfe04a57730874e602b2\": rpc error: code = NotFound desc = could not find container \"ead723a81fa378183110c6664cd4f76e0c922cdbb27cbfe04a57730874e602b2\": container with ID 
starting with ead723a81fa378183110c6664cd4f76e0c922cdbb27cbfe04a57730874e602b2 not found: ID does not exist" Jan 31 04:05:57 crc kubenswrapper[4827]: I0131 04:05:57.610999 4827 scope.go:117] "RemoveContainer" containerID="047e9b3a0c8b60a604bcaa97a3f1ceb16772069ff3935941f313f5429b3499bb" Jan 31 04:05:57 crc kubenswrapper[4827]: E0131 04:05:57.611271 4827 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"047e9b3a0c8b60a604bcaa97a3f1ceb16772069ff3935941f313f5429b3499bb\": container with ID starting with 047e9b3a0c8b60a604bcaa97a3f1ceb16772069ff3935941f313f5429b3499bb not found: ID does not exist" containerID="047e9b3a0c8b60a604bcaa97a3f1ceb16772069ff3935941f313f5429b3499bb" Jan 31 04:05:57 crc kubenswrapper[4827]: I0131 04:05:57.611295 4827 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"047e9b3a0c8b60a604bcaa97a3f1ceb16772069ff3935941f313f5429b3499bb"} err="failed to get container status \"047e9b3a0c8b60a604bcaa97a3f1ceb16772069ff3935941f313f5429b3499bb\": rpc error: code = NotFound desc = could not find container \"047e9b3a0c8b60a604bcaa97a3f1ceb16772069ff3935941f313f5429b3499bb\": container with ID starting with 047e9b3a0c8b60a604bcaa97a3f1ceb16772069ff3935941f313f5429b3499bb not found: ID does not exist" Jan 31 04:05:57 crc kubenswrapper[4827]: I0131 04:05:57.611309 4827 scope.go:117] "RemoveContainer" containerID="b988a2b47d57cf27a43165062e4e9bbf55bf7fd3c3576b318ac3691a779daa83" Jan 31 04:05:57 crc kubenswrapper[4827]: E0131 04:05:57.611645 4827 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b988a2b47d57cf27a43165062e4e9bbf55bf7fd3c3576b318ac3691a779daa83\": container with ID starting with b988a2b47d57cf27a43165062e4e9bbf55bf7fd3c3576b318ac3691a779daa83 not found: ID does not exist" containerID="b988a2b47d57cf27a43165062e4e9bbf55bf7fd3c3576b318ac3691a779daa83" Jan 31 
04:05:57 crc kubenswrapper[4827]: I0131 04:05:57.611670 4827 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b988a2b47d57cf27a43165062e4e9bbf55bf7fd3c3576b318ac3691a779daa83"} err="failed to get container status \"b988a2b47d57cf27a43165062e4e9bbf55bf7fd3c3576b318ac3691a779daa83\": rpc error: code = NotFound desc = could not find container \"b988a2b47d57cf27a43165062e4e9bbf55bf7fd3c3576b318ac3691a779daa83\": container with ID starting with b988a2b47d57cf27a43165062e4e9bbf55bf7fd3c3576b318ac3691a779daa83 not found: ID does not exist" Jan 31 04:05:57 crc kubenswrapper[4827]: I0131 04:05:57.735580 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9e0967c2-b905-47da-a4ce-f632cad7714f-log-httpd\") pod \"ceilometer-0\" (UID: \"9e0967c2-b905-47da-a4ce-f632cad7714f\") " pod="openstack/ceilometer-0" Jan 31 04:05:57 crc kubenswrapper[4827]: I0131 04:05:57.735960 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9e0967c2-b905-47da-a4ce-f632cad7714f-config-data\") pod \"ceilometer-0\" (UID: \"9e0967c2-b905-47da-a4ce-f632cad7714f\") " pod="openstack/ceilometer-0" Jan 31 04:05:57 crc kubenswrapper[4827]: I0131 04:05:57.736082 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rlpr6\" (UniqueName: \"kubernetes.io/projected/9e0967c2-b905-47da-a4ce-f632cad7714f-kube-api-access-rlpr6\") pod \"ceilometer-0\" (UID: \"9e0967c2-b905-47da-a4ce-f632cad7714f\") " pod="openstack/ceilometer-0" Jan 31 04:05:57 crc kubenswrapper[4827]: I0131 04:05:57.736194 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9e0967c2-b905-47da-a4ce-f632cad7714f-combined-ca-bundle\") pod 
\"ceilometer-0\" (UID: \"9e0967c2-b905-47da-a4ce-f632cad7714f\") " pod="openstack/ceilometer-0" Jan 31 04:05:57 crc kubenswrapper[4827]: I0131 04:05:57.736266 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9e0967c2-b905-47da-a4ce-f632cad7714f-scripts\") pod \"ceilometer-0\" (UID: \"9e0967c2-b905-47da-a4ce-f632cad7714f\") " pod="openstack/ceilometer-0" Jan 31 04:05:57 crc kubenswrapper[4827]: I0131 04:05:57.736348 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/9e0967c2-b905-47da-a4ce-f632cad7714f-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"9e0967c2-b905-47da-a4ce-f632cad7714f\") " pod="openstack/ceilometer-0" Jan 31 04:05:57 crc kubenswrapper[4827]: I0131 04:05:57.736435 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9e0967c2-b905-47da-a4ce-f632cad7714f-run-httpd\") pod \"ceilometer-0\" (UID: \"9e0967c2-b905-47da-a4ce-f632cad7714f\") " pod="openstack/ceilometer-0" Jan 31 04:05:57 crc kubenswrapper[4827]: I0131 04:05:57.838160 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9e0967c2-b905-47da-a4ce-f632cad7714f-log-httpd\") pod \"ceilometer-0\" (UID: \"9e0967c2-b905-47da-a4ce-f632cad7714f\") " pod="openstack/ceilometer-0" Jan 31 04:05:57 crc kubenswrapper[4827]: I0131 04:05:57.838238 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9e0967c2-b905-47da-a4ce-f632cad7714f-config-data\") pod \"ceilometer-0\" (UID: \"9e0967c2-b905-47da-a4ce-f632cad7714f\") " pod="openstack/ceilometer-0" Jan 31 04:05:57 crc kubenswrapper[4827]: I0131 04:05:57.838277 4827 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-rlpr6\" (UniqueName: \"kubernetes.io/projected/9e0967c2-b905-47da-a4ce-f632cad7714f-kube-api-access-rlpr6\") pod \"ceilometer-0\" (UID: \"9e0967c2-b905-47da-a4ce-f632cad7714f\") " pod="openstack/ceilometer-0" Jan 31 04:05:57 crc kubenswrapper[4827]: I0131 04:05:57.838313 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9e0967c2-b905-47da-a4ce-f632cad7714f-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"9e0967c2-b905-47da-a4ce-f632cad7714f\") " pod="openstack/ceilometer-0" Jan 31 04:05:57 crc kubenswrapper[4827]: I0131 04:05:57.838336 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9e0967c2-b905-47da-a4ce-f632cad7714f-scripts\") pod \"ceilometer-0\" (UID: \"9e0967c2-b905-47da-a4ce-f632cad7714f\") " pod="openstack/ceilometer-0" Jan 31 04:05:57 crc kubenswrapper[4827]: I0131 04:05:57.838362 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/9e0967c2-b905-47da-a4ce-f632cad7714f-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"9e0967c2-b905-47da-a4ce-f632cad7714f\") " pod="openstack/ceilometer-0" Jan 31 04:05:57 crc kubenswrapper[4827]: I0131 04:05:57.838383 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9e0967c2-b905-47da-a4ce-f632cad7714f-run-httpd\") pod \"ceilometer-0\" (UID: \"9e0967c2-b905-47da-a4ce-f632cad7714f\") " pod="openstack/ceilometer-0" Jan 31 04:05:57 crc kubenswrapper[4827]: I0131 04:05:57.838799 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9e0967c2-b905-47da-a4ce-f632cad7714f-log-httpd\") pod \"ceilometer-0\" (UID: \"9e0967c2-b905-47da-a4ce-f632cad7714f\") " 
pod="openstack/ceilometer-0" Jan 31 04:05:57 crc kubenswrapper[4827]: I0131 04:05:57.838909 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9e0967c2-b905-47da-a4ce-f632cad7714f-run-httpd\") pod \"ceilometer-0\" (UID: \"9e0967c2-b905-47da-a4ce-f632cad7714f\") " pod="openstack/ceilometer-0" Jan 31 04:05:57 crc kubenswrapper[4827]: I0131 04:05:57.842538 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/9e0967c2-b905-47da-a4ce-f632cad7714f-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"9e0967c2-b905-47da-a4ce-f632cad7714f\") " pod="openstack/ceilometer-0" Jan 31 04:05:57 crc kubenswrapper[4827]: I0131 04:05:57.843192 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9e0967c2-b905-47da-a4ce-f632cad7714f-config-data\") pod \"ceilometer-0\" (UID: \"9e0967c2-b905-47da-a4ce-f632cad7714f\") " pod="openstack/ceilometer-0" Jan 31 04:05:57 crc kubenswrapper[4827]: I0131 04:05:57.844308 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9e0967c2-b905-47da-a4ce-f632cad7714f-scripts\") pod \"ceilometer-0\" (UID: \"9e0967c2-b905-47da-a4ce-f632cad7714f\") " pod="openstack/ceilometer-0" Jan 31 04:05:57 crc kubenswrapper[4827]: I0131 04:05:57.846788 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9e0967c2-b905-47da-a4ce-f632cad7714f-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"9e0967c2-b905-47da-a4ce-f632cad7714f\") " pod="openstack/ceilometer-0" Jan 31 04:05:57 crc kubenswrapper[4827]: I0131 04:05:57.861870 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rlpr6\" (UniqueName: 
\"kubernetes.io/projected/9e0967c2-b905-47da-a4ce-f632cad7714f-kube-api-access-rlpr6\") pod \"ceilometer-0\" (UID: \"9e0967c2-b905-47da-a4ce-f632cad7714f\") " pod="openstack/ceilometer-0" Jan 31 04:05:57 crc kubenswrapper[4827]: I0131 04:05:57.899248 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 31 04:05:57 crc kubenswrapper[4827]: I0131 04:05:57.943667 4827 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-scheduler-0" Jan 31 04:05:58 crc kubenswrapper[4827]: I0131 04:05:58.124155 4827 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d07cd025-0fc1-4536-978d-f7f7b0df2dbc" path="/var/lib/kubelet/pods/d07cd025-0fc1-4536-978d-f7f7b0df2dbc/volumes" Jan 31 04:05:58 crc kubenswrapper[4827]: I0131 04:05:58.389417 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 31 04:05:58 crc kubenswrapper[4827]: W0131 04:05:58.398908 4827 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9e0967c2_b905_47da_a4ce_f632cad7714f.slice/crio-d5b3390b4beed5007bc2436fa7fa1eaa65b58a4754d6c58d619fa8ac2e0b6699 WatchSource:0}: Error finding container d5b3390b4beed5007bc2436fa7fa1eaa65b58a4754d6c58d619fa8ac2e0b6699: Status 404 returned error can't find the container with id d5b3390b4beed5007bc2436fa7fa1eaa65b58a4754d6c58d619fa8ac2e0b6699 Jan 31 04:05:58 crc kubenswrapper[4827]: I0131 04:05:58.488199 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"9e0967c2-b905-47da-a4ce-f632cad7714f","Type":"ContainerStarted","Data":"d5b3390b4beed5007bc2436fa7fa1eaa65b58a4754d6c58d619fa8ac2e0b6699"} Jan 31 04:05:58 crc kubenswrapper[4827]: I0131 04:05:58.706763 4827 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-87c57bc7d-fgwdw" podUID="2379f58c-95e4-4242-93de-82813ecbf089" 
containerName="barbican-api-log" probeResult="failure" output="Get \"https://10.217.0.148:9311/healthcheck\": dial tcp 10.217.0.148:9311: connect: connection refused" Jan 31 04:05:58 crc kubenswrapper[4827]: I0131 04:05:58.706767 4827 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-87c57bc7d-fgwdw" podUID="2379f58c-95e4-4242-93de-82813ecbf089" containerName="barbican-api" probeResult="failure" output="Get \"https://10.217.0.148:9311/healthcheck\": dial tcp 10.217.0.148:9311: connect: connection refused" Jan 31 04:05:59 crc kubenswrapper[4827]: I0131 04:05:59.044357 4827 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-87c57bc7d-fgwdw" Jan 31 04:05:59 crc kubenswrapper[4827]: I0131 04:05:59.169397 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2379f58c-95e4-4242-93de-82813ecbf089-logs\") pod \"2379f58c-95e4-4242-93de-82813ecbf089\" (UID: \"2379f58c-95e4-4242-93de-82813ecbf089\") " Jan 31 04:05:59 crc kubenswrapper[4827]: I0131 04:05:59.169491 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/2379f58c-95e4-4242-93de-82813ecbf089-config-data-custom\") pod \"2379f58c-95e4-4242-93de-82813ecbf089\" (UID: \"2379f58c-95e4-4242-93de-82813ecbf089\") " Jan 31 04:05:59 crc kubenswrapper[4827]: I0131 04:05:59.169510 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2379f58c-95e4-4242-93de-82813ecbf089-combined-ca-bundle\") pod \"2379f58c-95e4-4242-93de-82813ecbf089\" (UID: \"2379f58c-95e4-4242-93de-82813ecbf089\") " Jan 31 04:05:59 crc kubenswrapper[4827]: I0131 04:05:59.169536 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/2379f58c-95e4-4242-93de-82813ecbf089-public-tls-certs\") pod \"2379f58c-95e4-4242-93de-82813ecbf089\" (UID: \"2379f58c-95e4-4242-93de-82813ecbf089\") " Jan 31 04:05:59 crc kubenswrapper[4827]: I0131 04:05:59.169646 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2379f58c-95e4-4242-93de-82813ecbf089-config-data\") pod \"2379f58c-95e4-4242-93de-82813ecbf089\" (UID: \"2379f58c-95e4-4242-93de-82813ecbf089\") " Jan 31 04:05:59 crc kubenswrapper[4827]: I0131 04:05:59.169699 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/2379f58c-95e4-4242-93de-82813ecbf089-internal-tls-certs\") pod \"2379f58c-95e4-4242-93de-82813ecbf089\" (UID: \"2379f58c-95e4-4242-93de-82813ecbf089\") " Jan 31 04:05:59 crc kubenswrapper[4827]: I0131 04:05:59.169736 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-n6n6t\" (UniqueName: \"kubernetes.io/projected/2379f58c-95e4-4242-93de-82813ecbf089-kube-api-access-n6n6t\") pod \"2379f58c-95e4-4242-93de-82813ecbf089\" (UID: \"2379f58c-95e4-4242-93de-82813ecbf089\") " Jan 31 04:05:59 crc kubenswrapper[4827]: I0131 04:05:59.171208 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2379f58c-95e4-4242-93de-82813ecbf089-logs" (OuterVolumeSpecName: "logs") pod "2379f58c-95e4-4242-93de-82813ecbf089" (UID: "2379f58c-95e4-4242-93de-82813ecbf089"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 04:05:59 crc kubenswrapper[4827]: I0131 04:05:59.175391 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2379f58c-95e4-4242-93de-82813ecbf089-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "2379f58c-95e4-4242-93de-82813ecbf089" (UID: "2379f58c-95e4-4242-93de-82813ecbf089"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 04:05:59 crc kubenswrapper[4827]: I0131 04:05:59.176557 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2379f58c-95e4-4242-93de-82813ecbf089-kube-api-access-n6n6t" (OuterVolumeSpecName: "kube-api-access-n6n6t") pod "2379f58c-95e4-4242-93de-82813ecbf089" (UID: "2379f58c-95e4-4242-93de-82813ecbf089"). InnerVolumeSpecName "kube-api-access-n6n6t". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 04:05:59 crc kubenswrapper[4827]: I0131 04:05:59.195181 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2379f58c-95e4-4242-93de-82813ecbf089-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "2379f58c-95e4-4242-93de-82813ecbf089" (UID: "2379f58c-95e4-4242-93de-82813ecbf089"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 04:05:59 crc kubenswrapper[4827]: I0131 04:05:59.212789 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2379f58c-95e4-4242-93de-82813ecbf089-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "2379f58c-95e4-4242-93de-82813ecbf089" (UID: "2379f58c-95e4-4242-93de-82813ecbf089"). InnerVolumeSpecName "public-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 04:05:59 crc kubenswrapper[4827]: I0131 04:05:59.219732 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2379f58c-95e4-4242-93de-82813ecbf089-config-data" (OuterVolumeSpecName: "config-data") pod "2379f58c-95e4-4242-93de-82813ecbf089" (UID: "2379f58c-95e4-4242-93de-82813ecbf089"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 04:05:59 crc kubenswrapper[4827]: I0131 04:05:59.220086 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2379f58c-95e4-4242-93de-82813ecbf089-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "2379f58c-95e4-4242-93de-82813ecbf089" (UID: "2379f58c-95e4-4242-93de-82813ecbf089"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 04:05:59 crc kubenswrapper[4827]: I0131 04:05:59.271362 4827 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/2379f58c-95e4-4242-93de-82813ecbf089-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 31 04:05:59 crc kubenswrapper[4827]: I0131 04:05:59.271390 4827 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-n6n6t\" (UniqueName: \"kubernetes.io/projected/2379f58c-95e4-4242-93de-82813ecbf089-kube-api-access-n6n6t\") on node \"crc\" DevicePath \"\"" Jan 31 04:05:59 crc kubenswrapper[4827]: I0131 04:05:59.271401 4827 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2379f58c-95e4-4242-93de-82813ecbf089-logs\") on node \"crc\" DevicePath \"\"" Jan 31 04:05:59 crc kubenswrapper[4827]: I0131 04:05:59.271410 4827 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/2379f58c-95e4-4242-93de-82813ecbf089-config-data-custom\") on node \"crc\" DevicePath 
\"\"" Jan 31 04:05:59 crc kubenswrapper[4827]: I0131 04:05:59.271419 4827 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2379f58c-95e4-4242-93de-82813ecbf089-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 31 04:05:59 crc kubenswrapper[4827]: I0131 04:05:59.271429 4827 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/2379f58c-95e4-4242-93de-82813ecbf089-public-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 31 04:05:59 crc kubenswrapper[4827]: I0131 04:05:59.271437 4827 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2379f58c-95e4-4242-93de-82813ecbf089-config-data\") on node \"crc\" DevicePath \"\"" Jan 31 04:05:59 crc kubenswrapper[4827]: I0131 04:05:59.500826 4827 generic.go:334] "Generic (PLEG): container finished" podID="2379f58c-95e4-4242-93de-82813ecbf089" containerID="02091ba400a30568eb018890f1d435f65daf0a762f5dd961b208271015015ce4" exitCode=0 Jan 31 04:05:59 crc kubenswrapper[4827]: I0131 04:05:59.500930 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-87c57bc7d-fgwdw" event={"ID":"2379f58c-95e4-4242-93de-82813ecbf089","Type":"ContainerDied","Data":"02091ba400a30568eb018890f1d435f65daf0a762f5dd961b208271015015ce4"} Jan 31 04:05:59 crc kubenswrapper[4827]: I0131 04:05:59.500993 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-87c57bc7d-fgwdw" event={"ID":"2379f58c-95e4-4242-93de-82813ecbf089","Type":"ContainerDied","Data":"0f0482c8b6b5c476f2ae4b8e3ab37635ae89223b39ce4f473e9eeaec47e30c76"} Jan 31 04:05:59 crc kubenswrapper[4827]: I0131 04:05:59.501022 4827 scope.go:117] "RemoveContainer" containerID="02091ba400a30568eb018890f1d435f65daf0a762f5dd961b208271015015ce4" Jan 31 04:05:59 crc kubenswrapper[4827]: I0131 04:05:59.501293 4827 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-87c57bc7d-fgwdw"
Jan 31 04:05:59 crc kubenswrapper[4827]: I0131 04:05:59.503937 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"9e0967c2-b905-47da-a4ce-f632cad7714f","Type":"ContainerStarted","Data":"816d71c6bc62b63f11dfd241fba24b42ef9312284feb160a67a4456c1092e7d0"}
Jan 31 04:05:59 crc kubenswrapper[4827]: I0131 04:05:59.523574 4827 scope.go:117] "RemoveContainer" containerID="6f0323ef39c21137a91ef37634c50b08484a3caca7ece0deb69af6fb171b8223"
Jan 31 04:05:59 crc kubenswrapper[4827]: I0131 04:05:59.550134 4827 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-api-87c57bc7d-fgwdw"]
Jan 31 04:05:59 crc kubenswrapper[4827]: I0131 04:05:59.558531 4827 scope.go:117] "RemoveContainer" containerID="02091ba400a30568eb018890f1d435f65daf0a762f5dd961b208271015015ce4"
Jan 31 04:05:59 crc kubenswrapper[4827]: E0131 04:05:59.558974 4827 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"02091ba400a30568eb018890f1d435f65daf0a762f5dd961b208271015015ce4\": container with ID starting with 02091ba400a30568eb018890f1d435f65daf0a762f5dd961b208271015015ce4 not found: ID does not exist" containerID="02091ba400a30568eb018890f1d435f65daf0a762f5dd961b208271015015ce4"
Jan 31 04:05:59 crc kubenswrapper[4827]: I0131 04:05:59.559004 4827 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"02091ba400a30568eb018890f1d435f65daf0a762f5dd961b208271015015ce4"} err="failed to get container status \"02091ba400a30568eb018890f1d435f65daf0a762f5dd961b208271015015ce4\": rpc error: code = NotFound desc = could not find container \"02091ba400a30568eb018890f1d435f65daf0a762f5dd961b208271015015ce4\": container with ID starting with 02091ba400a30568eb018890f1d435f65daf0a762f5dd961b208271015015ce4 not found: ID does not exist"
Jan 31 04:05:59 crc kubenswrapper[4827]: I0131 04:05:59.559026 4827 scope.go:117] "RemoveContainer" containerID="6f0323ef39c21137a91ef37634c50b08484a3caca7ece0deb69af6fb171b8223"
Jan 31 04:05:59 crc kubenswrapper[4827]: I0131 04:05:59.559028 4827 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-api-87c57bc7d-fgwdw"]
Jan 31 04:05:59 crc kubenswrapper[4827]: E0131 04:05:59.559337 4827 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6f0323ef39c21137a91ef37634c50b08484a3caca7ece0deb69af6fb171b8223\": container with ID starting with 6f0323ef39c21137a91ef37634c50b08484a3caca7ece0deb69af6fb171b8223 not found: ID does not exist" containerID="6f0323ef39c21137a91ef37634c50b08484a3caca7ece0deb69af6fb171b8223"
Jan 31 04:05:59 crc kubenswrapper[4827]: I0131 04:05:59.559366 4827 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6f0323ef39c21137a91ef37634c50b08484a3caca7ece0deb69af6fb171b8223"} err="failed to get container status \"6f0323ef39c21137a91ef37634c50b08484a3caca7ece0deb69af6fb171b8223\": rpc error: code = NotFound desc = could not find container \"6f0323ef39c21137a91ef37634c50b08484a3caca7ece0deb69af6fb171b8223\": container with ID starting with 6f0323ef39c21137a91ef37634c50b08484a3caca7ece0deb69af6fb171b8223 not found: ID does not exist"
Jan 31 04:06:00 crc kubenswrapper[4827]: I0131 04:06:00.124150 4827 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2379f58c-95e4-4242-93de-82813ecbf089" path="/var/lib/kubelet/pods/2379f58c-95e4-4242-93de-82813ecbf089/volumes"
Jan 31 04:06:00 crc kubenswrapper[4827]: I0131 04:06:00.516288 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"9e0967c2-b905-47da-a4ce-f632cad7714f","Type":"ContainerStarted","Data":"1fa65c35c8dab7f806c337af95e837bfc8b79c571793317c36c879667224f014"}
Jan 31 04:06:00 crc kubenswrapper[4827]: I0131 04:06:00.516330 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"9e0967c2-b905-47da-a4ce-f632cad7714f","Type":"ContainerStarted","Data":"f4315b5d8dd768f05ef444a627bd1b6fc8f5a58e18f578dc8b1bfa967a848ea8"}
Jan 31 04:06:02 crc kubenswrapper[4827]: I0131 04:06:02.464724 4827 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/neutron-798dd656d-7874f" podUID="ba9b68b9-d7bc-4c9c-90d3-a4b88b1d30de" containerName="neutron-httpd" probeResult="failure" output="Get \"http://10.217.0.141:9696/\": dial tcp 10.217.0.141:9696: connect: connection refused"
Jan 31 04:06:02 crc kubenswrapper[4827]: I0131 04:06:02.997853 4827 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-6d97fcdd8f-ht8c2"
Jan 31 04:06:03 crc kubenswrapper[4827]: I0131 04:06:03.120238 4827 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6bb684768f-qc6dq"]
Jan 31 04:06:03 crc kubenswrapper[4827]: I0131 04:06:03.121639 4827 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-6bb684768f-qc6dq" podUID="8273d85f-dd76-4bc7-a50e-54da87bb1927" containerName="dnsmasq-dns" containerID="cri-o://9e3910077181168834d3cc810b118195fab67b26c9b4c356c3c6473b4142581d" gracePeriod=10
Jan 31 04:06:03 crc kubenswrapper[4827]: I0131 04:06:03.312397 4827 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-scheduler-0"
Jan 31 04:06:03 crc kubenswrapper[4827]: I0131 04:06:03.362840 4827 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-scheduler-0"]
Jan 31 04:06:03 crc kubenswrapper[4827]: I0131 04:06:03.557703 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"9e0967c2-b905-47da-a4ce-f632cad7714f","Type":"ContainerStarted","Data":"f3ab2d09bb1e4d10b4e7cee1973c607fe01dea946a84dcaf9e5f9754253fced9"}
Jan 31 04:06:03 crc kubenswrapper[4827]: I0131 04:06:03.558064 4827 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0"
Jan 31 04:06:03 crc kubenswrapper[4827]: I0131 04:06:03.559297 4827 generic.go:334] "Generic (PLEG): container finished" podID="8273d85f-dd76-4bc7-a50e-54da87bb1927" containerID="9e3910077181168834d3cc810b118195fab67b26c9b4c356c3c6473b4142581d" exitCode=0
Jan 31 04:06:03 crc kubenswrapper[4827]: I0131 04:06:03.559329 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6bb684768f-qc6dq" event={"ID":"8273d85f-dd76-4bc7-a50e-54da87bb1927","Type":"ContainerDied","Data":"9e3910077181168834d3cc810b118195fab67b26c9b4c356c3c6473b4142581d"}
Jan 31 04:06:03 crc kubenswrapper[4827]: I0131 04:06:03.559505 4827 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-scheduler-0" podUID="cd3c35ef-997d-4511-9dec-08ee13ff1591" containerName="cinder-scheduler" containerID="cri-o://57b0296fecd24f3d20e970e14b2a5214000c1a1e001276e80a96b95ed8edf890" gracePeriod=30
Jan 31 04:06:03 crc kubenswrapper[4827]: I0131 04:06:03.559605 4827 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-scheduler-0" podUID="cd3c35ef-997d-4511-9dec-08ee13ff1591" containerName="probe" containerID="cri-o://ce20c721c959c5920bae0b4b2db8e3297bad03b6bcf5b471f1c59a0c70840d79" gracePeriod=30
Jan 31 04:06:03 crc kubenswrapper[4827]: I0131 04:06:03.581655 4827 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=1.942251387 podStartE2EDuration="6.581638662s" podCreationTimestamp="2026-01-31 04:05:57 +0000 UTC" firstStartedPulling="2026-01-31 04:05:58.402814171 +0000 UTC m=+1151.089894630" lastFinishedPulling="2026-01-31 04:06:03.042201446 +0000 UTC m=+1155.729281905" observedRunningTime="2026-01-31 04:06:03.578711469 +0000 UTC m=+1156.265791918" watchObservedRunningTime="2026-01-31 04:06:03.581638662 +0000 UTC m=+1156.268719111"
Jan 31 04:06:03 crc kubenswrapper[4827]: I0131 04:06:03.612361 4827 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6bb684768f-qc6dq"
Jan 31 04:06:03 crc kubenswrapper[4827]: I0131 04:06:03.762363 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bnhcl\" (UniqueName: \"kubernetes.io/projected/8273d85f-dd76-4bc7-a50e-54da87bb1927-kube-api-access-bnhcl\") pod \"8273d85f-dd76-4bc7-a50e-54da87bb1927\" (UID: \"8273d85f-dd76-4bc7-a50e-54da87bb1927\") "
Jan 31 04:06:03 crc kubenswrapper[4827]: I0131 04:06:03.762511 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8273d85f-dd76-4bc7-a50e-54da87bb1927-config\") pod \"8273d85f-dd76-4bc7-a50e-54da87bb1927\" (UID: \"8273d85f-dd76-4bc7-a50e-54da87bb1927\") "
Jan 31 04:06:03 crc kubenswrapper[4827]: I0131 04:06:03.762562 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8273d85f-dd76-4bc7-a50e-54da87bb1927-dns-svc\") pod \"8273d85f-dd76-4bc7-a50e-54da87bb1927\" (UID: \"8273d85f-dd76-4bc7-a50e-54da87bb1927\") "
Jan 31 04:06:03 crc kubenswrapper[4827]: I0131 04:06:03.762623 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/8273d85f-dd76-4bc7-a50e-54da87bb1927-ovsdbserver-sb\") pod \"8273d85f-dd76-4bc7-a50e-54da87bb1927\" (UID: \"8273d85f-dd76-4bc7-a50e-54da87bb1927\") "
Jan 31 04:06:03 crc kubenswrapper[4827]: I0131 04:06:03.762703 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/8273d85f-dd76-4bc7-a50e-54da87bb1927-ovsdbserver-nb\") pod \"8273d85f-dd76-4bc7-a50e-54da87bb1927\" (UID: \"8273d85f-dd76-4bc7-a50e-54da87bb1927\") "
Jan 31 04:06:03 crc kubenswrapper[4827]: I0131 04:06:03.766930 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8273d85f-dd76-4bc7-a50e-54da87bb1927-kube-api-access-bnhcl" (OuterVolumeSpecName: "kube-api-access-bnhcl") pod "8273d85f-dd76-4bc7-a50e-54da87bb1927" (UID: "8273d85f-dd76-4bc7-a50e-54da87bb1927"). InnerVolumeSpecName "kube-api-access-bnhcl". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 31 04:06:03 crc kubenswrapper[4827]: I0131 04:06:03.806425 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8273d85f-dd76-4bc7-a50e-54da87bb1927-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "8273d85f-dd76-4bc7-a50e-54da87bb1927" (UID: "8273d85f-dd76-4bc7-a50e-54da87bb1927"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 31 04:06:03 crc kubenswrapper[4827]: I0131 04:06:03.810387 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8273d85f-dd76-4bc7-a50e-54da87bb1927-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "8273d85f-dd76-4bc7-a50e-54da87bb1927" (UID: "8273d85f-dd76-4bc7-a50e-54da87bb1927"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 31 04:06:03 crc kubenswrapper[4827]: I0131 04:06:03.814513 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8273d85f-dd76-4bc7-a50e-54da87bb1927-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "8273d85f-dd76-4bc7-a50e-54da87bb1927" (UID: "8273d85f-dd76-4bc7-a50e-54da87bb1927"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 31 04:06:03 crc kubenswrapper[4827]: I0131 04:06:03.816001 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8273d85f-dd76-4bc7-a50e-54da87bb1927-config" (OuterVolumeSpecName: "config") pod "8273d85f-dd76-4bc7-a50e-54da87bb1927" (UID: "8273d85f-dd76-4bc7-a50e-54da87bb1927"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 31 04:06:03 crc kubenswrapper[4827]: I0131 04:06:03.865005 4827 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/8273d85f-dd76-4bc7-a50e-54da87bb1927-ovsdbserver-nb\") on node \"crc\" DevicePath \"\""
Jan 31 04:06:03 crc kubenswrapper[4827]: I0131 04:06:03.865042 4827 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bnhcl\" (UniqueName: \"kubernetes.io/projected/8273d85f-dd76-4bc7-a50e-54da87bb1927-kube-api-access-bnhcl\") on node \"crc\" DevicePath \"\""
Jan 31 04:06:03 crc kubenswrapper[4827]: I0131 04:06:03.865053 4827 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8273d85f-dd76-4bc7-a50e-54da87bb1927-config\") on node \"crc\" DevicePath \"\""
Jan 31 04:06:03 crc kubenswrapper[4827]: I0131 04:06:03.865063 4827 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8273d85f-dd76-4bc7-a50e-54da87bb1927-dns-svc\") on node \"crc\" DevicePath \"\""
Jan 31 04:06:03 crc kubenswrapper[4827]: I0131 04:06:03.865071 4827 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/8273d85f-dd76-4bc7-a50e-54da87bb1927-ovsdbserver-sb\") on node \"crc\" DevicePath \"\""
Jan 31 04:06:04 crc kubenswrapper[4827]: I0131 04:06:04.321258 4827 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/neutron-68ddbc68f-gxl56"
Jan 31 04:06:04 crc kubenswrapper[4827]: I0131 04:06:04.568794 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6bb684768f-qc6dq" event={"ID":"8273d85f-dd76-4bc7-a50e-54da87bb1927","Type":"ContainerDied","Data":"f87e3866b3423b81f4b8023a7747c1f3104e4e14880faa89eeee55b62ae09be2"}
Jan 31 04:06:04 crc kubenswrapper[4827]: I0131 04:06:04.568831 4827 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6bb684768f-qc6dq"
Jan 31 04:06:04 crc kubenswrapper[4827]: I0131 04:06:04.568848 4827 scope.go:117] "RemoveContainer" containerID="9e3910077181168834d3cc810b118195fab67b26c9b4c356c3c6473b4142581d"
Jan 31 04:06:04 crc kubenswrapper[4827]: I0131 04:06:04.580330 4827 generic.go:334] "Generic (PLEG): container finished" podID="cd3c35ef-997d-4511-9dec-08ee13ff1591" containerID="ce20c721c959c5920bae0b4b2db8e3297bad03b6bcf5b471f1c59a0c70840d79" exitCode=0
Jan 31 04:06:04 crc kubenswrapper[4827]: I0131 04:06:04.580413 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"cd3c35ef-997d-4511-9dec-08ee13ff1591","Type":"ContainerDied","Data":"ce20c721c959c5920bae0b4b2db8e3297bad03b6bcf5b471f1c59a0c70840d79"}
Jan 31 04:06:04 crc kubenswrapper[4827]: I0131 04:06:04.591487 4827 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6bb684768f-qc6dq"]
Jan 31 04:06:04 crc kubenswrapper[4827]: I0131 04:06:04.593053 4827 scope.go:117] "RemoveContainer" containerID="3f352a0e0b9255260c8938215eff46c200a3b639dc8ce05eb34c629fbf7e37c0"
Jan 31 04:06:04 crc kubenswrapper[4827]: I0131 04:06:04.619236 4827 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-6bb684768f-qc6dq"]
Jan 31 04:06:05 crc kubenswrapper[4827]: I0131 04:06:05.144665 4827 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/cinder-api-0"
Jan 31 04:06:05 crc kubenswrapper[4827]: I0131 04:06:05.868472 4827 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-69d4f5d848-hjbmc"
Jan 31 04:06:05 crc kubenswrapper[4827]: I0131 04:06:05.869211 4827 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-69d4f5d848-hjbmc"
Jan 31 04:06:06 crc kubenswrapper[4827]: I0131 04:06:06.127210 4827 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8273d85f-dd76-4bc7-a50e-54da87bb1927" path="/var/lib/kubelet/pods/8273d85f-dd76-4bc7-a50e-54da87bb1927/volumes"
Jan 31 04:06:06 crc kubenswrapper[4827]: I0131 04:06:06.161236 4827 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-77b487f776-cjb8n"]
Jan 31 04:06:06 crc kubenswrapper[4827]: E0131 04:06:06.161646 4827 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2379f58c-95e4-4242-93de-82813ecbf089" containerName="barbican-api-log"
Jan 31 04:06:06 crc kubenswrapper[4827]: I0131 04:06:06.161704 4827 state_mem.go:107] "Deleted CPUSet assignment" podUID="2379f58c-95e4-4242-93de-82813ecbf089" containerName="barbican-api-log"
Jan 31 04:06:06 crc kubenswrapper[4827]: E0131 04:06:06.161753 4827 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2379f58c-95e4-4242-93de-82813ecbf089" containerName="barbican-api"
Jan 31 04:06:06 crc kubenswrapper[4827]: I0131 04:06:06.161798 4827 state_mem.go:107] "Deleted CPUSet assignment" podUID="2379f58c-95e4-4242-93de-82813ecbf089" containerName="barbican-api"
Jan 31 04:06:06 crc kubenswrapper[4827]: E0131 04:06:06.161902 4827 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8273d85f-dd76-4bc7-a50e-54da87bb1927" containerName="dnsmasq-dns"
Jan 31 04:06:06 crc kubenswrapper[4827]: I0131 04:06:06.161961 4827 state_mem.go:107] "Deleted CPUSet assignment" podUID="8273d85f-dd76-4bc7-a50e-54da87bb1927" containerName="dnsmasq-dns"
Jan 31 04:06:06 crc kubenswrapper[4827]: E0131 04:06:06.163627 4827 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8273d85f-dd76-4bc7-a50e-54da87bb1927" containerName="init"
Jan 31 04:06:06 crc kubenswrapper[4827]: I0131 04:06:06.163722 4827 state_mem.go:107] "Deleted CPUSet assignment" podUID="8273d85f-dd76-4bc7-a50e-54da87bb1927" containerName="init"
Jan 31 04:06:06 crc kubenswrapper[4827]: I0131 04:06:06.164022 4827 memory_manager.go:354] "RemoveStaleState removing state" podUID="8273d85f-dd76-4bc7-a50e-54da87bb1927" containerName="dnsmasq-dns"
Jan 31 04:06:06 crc kubenswrapper[4827]: I0131 04:06:06.164093 4827 memory_manager.go:354] "RemoveStaleState removing state" podUID="2379f58c-95e4-4242-93de-82813ecbf089" containerName="barbican-api-log"
Jan 31 04:06:06 crc kubenswrapper[4827]: I0131 04:06:06.164142 4827 memory_manager.go:354] "RemoveStaleState removing state" podUID="2379f58c-95e4-4242-93de-82813ecbf089" containerName="barbican-api"
Jan 31 04:06:06 crc kubenswrapper[4827]: I0131 04:06:06.165225 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-77b487f776-cjb8n"
Jan 31 04:06:06 crc kubenswrapper[4827]: I0131 04:06:06.174244 4827 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0"
Jan 31 04:06:06 crc kubenswrapper[4827]: I0131 04:06:06.181057 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-77b487f776-cjb8n"]
Jan 31 04:06:06 crc kubenswrapper[4827]: I0131 04:06:06.308903 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cd3c35ef-997d-4511-9dec-08ee13ff1591-config-data\") pod \"cd3c35ef-997d-4511-9dec-08ee13ff1591\" (UID: \"cd3c35ef-997d-4511-9dec-08ee13ff1591\") "
Jan 31 04:06:06 crc kubenswrapper[4827]: I0131 04:06:06.308986 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pq7nq\" (UniqueName: \"kubernetes.io/projected/cd3c35ef-997d-4511-9dec-08ee13ff1591-kube-api-access-pq7nq\") pod \"cd3c35ef-997d-4511-9dec-08ee13ff1591\" (UID: \"cd3c35ef-997d-4511-9dec-08ee13ff1591\") "
Jan 31 04:06:06 crc kubenswrapper[4827]: I0131 04:06:06.309156 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/cd3c35ef-997d-4511-9dec-08ee13ff1591-scripts\") pod \"cd3c35ef-997d-4511-9dec-08ee13ff1591\" (UID: \"cd3c35ef-997d-4511-9dec-08ee13ff1591\") "
Jan 31 04:06:06 crc kubenswrapper[4827]: I0131 04:06:06.309182 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cd3c35ef-997d-4511-9dec-08ee13ff1591-combined-ca-bundle\") pod \"cd3c35ef-997d-4511-9dec-08ee13ff1591\" (UID: \"cd3c35ef-997d-4511-9dec-08ee13ff1591\") "
Jan 31 04:06:06 crc kubenswrapper[4827]: I0131 04:06:06.309226 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/cd3c35ef-997d-4511-9dec-08ee13ff1591-etc-machine-id\") pod \"cd3c35ef-997d-4511-9dec-08ee13ff1591\" (UID: \"cd3c35ef-997d-4511-9dec-08ee13ff1591\") "
Jan 31 04:06:06 crc kubenswrapper[4827]: I0131 04:06:06.309323 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/cd3c35ef-997d-4511-9dec-08ee13ff1591-config-data-custom\") pod \"cd3c35ef-997d-4511-9dec-08ee13ff1591\" (UID: \"cd3c35ef-997d-4511-9dec-08ee13ff1591\") "
Jan 31 04:06:06 crc kubenswrapper[4827]: I0131 04:06:06.309511 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nb6gr\" (UniqueName: \"kubernetes.io/projected/c95d0b83-4630-47c3-ae7b-dae07d072e38-kube-api-access-nb6gr\") pod \"placement-77b487f776-cjb8n\" (UID: \"c95d0b83-4630-47c3-ae7b-dae07d072e38\") " pod="openstack/placement-77b487f776-cjb8n"
Jan 31 04:06:06 crc kubenswrapper[4827]: I0131 04:06:06.309555 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c95d0b83-4630-47c3-ae7b-dae07d072e38-config-data\") pod \"placement-77b487f776-cjb8n\" (UID: \"c95d0b83-4630-47c3-ae7b-dae07d072e38\") " pod="openstack/placement-77b487f776-cjb8n"
Jan 31 04:06:06 crc kubenswrapper[4827]: I0131 04:06:06.309580 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c95d0b83-4630-47c3-ae7b-dae07d072e38-logs\") pod \"placement-77b487f776-cjb8n\" (UID: \"c95d0b83-4630-47c3-ae7b-dae07d072e38\") " pod="openstack/placement-77b487f776-cjb8n"
Jan 31 04:06:06 crc kubenswrapper[4827]: I0131 04:06:06.309597 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c95d0b83-4630-47c3-ae7b-dae07d072e38-combined-ca-bundle\") pod \"placement-77b487f776-cjb8n\" (UID: \"c95d0b83-4630-47c3-ae7b-dae07d072e38\") " pod="openstack/placement-77b487f776-cjb8n"
Jan 31 04:06:06 crc kubenswrapper[4827]: I0131 04:06:06.309637 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/c95d0b83-4630-47c3-ae7b-dae07d072e38-internal-tls-certs\") pod \"placement-77b487f776-cjb8n\" (UID: \"c95d0b83-4630-47c3-ae7b-dae07d072e38\") " pod="openstack/placement-77b487f776-cjb8n"
Jan 31 04:06:06 crc kubenswrapper[4827]: I0131 04:06:06.309658 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c95d0b83-4630-47c3-ae7b-dae07d072e38-scripts\") pod \"placement-77b487f776-cjb8n\" (UID: \"c95d0b83-4630-47c3-ae7b-dae07d072e38\") " pod="openstack/placement-77b487f776-cjb8n"
Jan 31 04:06:06 crc kubenswrapper[4827]: I0131 04:06:06.309687 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/c95d0b83-4630-47c3-ae7b-dae07d072e38-public-tls-certs\") pod \"placement-77b487f776-cjb8n\" (UID: \"c95d0b83-4630-47c3-ae7b-dae07d072e38\") " pod="openstack/placement-77b487f776-cjb8n"
Jan 31 04:06:06 crc kubenswrapper[4827]: I0131 04:06:06.309964 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/cd3c35ef-997d-4511-9dec-08ee13ff1591-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "cd3c35ef-997d-4511-9dec-08ee13ff1591" (UID: "cd3c35ef-997d-4511-9dec-08ee13ff1591"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 31 04:06:06 crc kubenswrapper[4827]: I0131 04:06:06.314936 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cd3c35ef-997d-4511-9dec-08ee13ff1591-scripts" (OuterVolumeSpecName: "scripts") pod "cd3c35ef-997d-4511-9dec-08ee13ff1591" (UID: "cd3c35ef-997d-4511-9dec-08ee13ff1591"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 31 04:06:06 crc kubenswrapper[4827]: I0131 04:06:06.318126 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cd3c35ef-997d-4511-9dec-08ee13ff1591-kube-api-access-pq7nq" (OuterVolumeSpecName: "kube-api-access-pq7nq") pod "cd3c35ef-997d-4511-9dec-08ee13ff1591" (UID: "cd3c35ef-997d-4511-9dec-08ee13ff1591"). InnerVolumeSpecName "kube-api-access-pq7nq". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 31 04:06:06 crc kubenswrapper[4827]: I0131 04:06:06.328823 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cd3c35ef-997d-4511-9dec-08ee13ff1591-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "cd3c35ef-997d-4511-9dec-08ee13ff1591" (UID: "cd3c35ef-997d-4511-9dec-08ee13ff1591"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 31 04:06:06 crc kubenswrapper[4827]: I0131 04:06:06.378263 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cd3c35ef-997d-4511-9dec-08ee13ff1591-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "cd3c35ef-997d-4511-9dec-08ee13ff1591" (UID: "cd3c35ef-997d-4511-9dec-08ee13ff1591"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 31 04:06:06 crc kubenswrapper[4827]: I0131 04:06:06.409780 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cd3c35ef-997d-4511-9dec-08ee13ff1591-config-data" (OuterVolumeSpecName: "config-data") pod "cd3c35ef-997d-4511-9dec-08ee13ff1591" (UID: "cd3c35ef-997d-4511-9dec-08ee13ff1591"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 31 04:06:06 crc kubenswrapper[4827]: I0131 04:06:06.410736 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nb6gr\" (UniqueName: \"kubernetes.io/projected/c95d0b83-4630-47c3-ae7b-dae07d072e38-kube-api-access-nb6gr\") pod \"placement-77b487f776-cjb8n\" (UID: \"c95d0b83-4630-47c3-ae7b-dae07d072e38\") " pod="openstack/placement-77b487f776-cjb8n"
Jan 31 04:06:06 crc kubenswrapper[4827]: I0131 04:06:06.410796 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c95d0b83-4630-47c3-ae7b-dae07d072e38-config-data\") pod \"placement-77b487f776-cjb8n\" (UID: \"c95d0b83-4630-47c3-ae7b-dae07d072e38\") " pod="openstack/placement-77b487f776-cjb8n"
Jan 31 04:06:06 crc kubenswrapper[4827]: I0131 04:06:06.410844 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c95d0b83-4630-47c3-ae7b-dae07d072e38-logs\") pod \"placement-77b487f776-cjb8n\" (UID: \"c95d0b83-4630-47c3-ae7b-dae07d072e38\") " pod="openstack/placement-77b487f776-cjb8n"
Jan 31 04:06:06 crc kubenswrapper[4827]: I0131 04:06:06.410864 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c95d0b83-4630-47c3-ae7b-dae07d072e38-combined-ca-bundle\") pod \"placement-77b487f776-cjb8n\" (UID: \"c95d0b83-4630-47c3-ae7b-dae07d072e38\") " pod="openstack/placement-77b487f776-cjb8n"
Jan 31 04:06:06 crc kubenswrapper[4827]: I0131 04:06:06.410956 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/c95d0b83-4630-47c3-ae7b-dae07d072e38-internal-tls-certs\") pod \"placement-77b487f776-cjb8n\" (UID: \"c95d0b83-4630-47c3-ae7b-dae07d072e38\") " pod="openstack/placement-77b487f776-cjb8n"
Jan 31 04:06:06 crc kubenswrapper[4827]: I0131 04:06:06.410975 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c95d0b83-4630-47c3-ae7b-dae07d072e38-scripts\") pod \"placement-77b487f776-cjb8n\" (UID: \"c95d0b83-4630-47c3-ae7b-dae07d072e38\") " pod="openstack/placement-77b487f776-cjb8n"
Jan 31 04:06:06 crc kubenswrapper[4827]: I0131 04:06:06.411006 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/c95d0b83-4630-47c3-ae7b-dae07d072e38-public-tls-certs\") pod \"placement-77b487f776-cjb8n\" (UID: \"c95d0b83-4630-47c3-ae7b-dae07d072e38\") " pod="openstack/placement-77b487f776-cjb8n"
Jan 31 04:06:06 crc kubenswrapper[4827]: I0131 04:06:06.411080 4827 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pq7nq\" (UniqueName: \"kubernetes.io/projected/cd3c35ef-997d-4511-9dec-08ee13ff1591-kube-api-access-pq7nq\") on node \"crc\" DevicePath \"\""
Jan 31 04:06:06 crc kubenswrapper[4827]: I0131 04:06:06.411093 4827 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/cd3c35ef-997d-4511-9dec-08ee13ff1591-scripts\") on node \"crc\" DevicePath \"\""
Jan 31 04:06:06 crc kubenswrapper[4827]: I0131 04:06:06.411106 4827 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cd3c35ef-997d-4511-9dec-08ee13ff1591-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 31 04:06:06 crc kubenswrapper[4827]: I0131 04:06:06.411119 4827 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/cd3c35ef-997d-4511-9dec-08ee13ff1591-etc-machine-id\") on node \"crc\" DevicePath \"\""
Jan 31 04:06:06 crc kubenswrapper[4827]: I0131 04:06:06.411132 4827 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/cd3c35ef-997d-4511-9dec-08ee13ff1591-config-data-custom\") on node \"crc\" DevicePath \"\""
Jan 31 04:06:06 crc kubenswrapper[4827]: I0131 04:06:06.411144 4827 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cd3c35ef-997d-4511-9dec-08ee13ff1591-config-data\") on node \"crc\" DevicePath \"\""
Jan 31 04:06:06 crc kubenswrapper[4827]: I0131 04:06:06.411726 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c95d0b83-4630-47c3-ae7b-dae07d072e38-logs\") pod \"placement-77b487f776-cjb8n\" (UID: \"c95d0b83-4630-47c3-ae7b-dae07d072e38\") " pod="openstack/placement-77b487f776-cjb8n"
Jan 31 04:06:06 crc kubenswrapper[4827]: I0131 04:06:06.419989 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c95d0b83-4630-47c3-ae7b-dae07d072e38-scripts\") pod \"placement-77b487f776-cjb8n\" (UID: \"c95d0b83-4630-47c3-ae7b-dae07d072e38\") " pod="openstack/placement-77b487f776-cjb8n"
Jan 31 04:06:06 crc kubenswrapper[4827]: I0131 04:06:06.420051 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c95d0b83-4630-47c3-ae7b-dae07d072e38-combined-ca-bundle\") pod \"placement-77b487f776-cjb8n\" (UID: \"c95d0b83-4630-47c3-ae7b-dae07d072e38\") " pod="openstack/placement-77b487f776-cjb8n"
Jan 31 04:06:06 crc kubenswrapper[4827]: I0131 04:06:06.420065 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/c95d0b83-4630-47c3-ae7b-dae07d072e38-internal-tls-certs\") pod \"placement-77b487f776-cjb8n\" (UID: \"c95d0b83-4630-47c3-ae7b-dae07d072e38\") " pod="openstack/placement-77b487f776-cjb8n"
Jan 31 04:06:06 crc kubenswrapper[4827]: I0131 04:06:06.420055 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c95d0b83-4630-47c3-ae7b-dae07d072e38-config-data\") pod \"placement-77b487f776-cjb8n\" (UID: \"c95d0b83-4630-47c3-ae7b-dae07d072e38\") " pod="openstack/placement-77b487f776-cjb8n"
Jan 31 04:06:06 crc kubenswrapper[4827]: I0131 04:06:06.421220 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/c95d0b83-4630-47c3-ae7b-dae07d072e38-public-tls-certs\") pod \"placement-77b487f776-cjb8n\" (UID: \"c95d0b83-4630-47c3-ae7b-dae07d072e38\") " pod="openstack/placement-77b487f776-cjb8n"
Jan 31 04:06:06 crc kubenswrapper[4827]: I0131 04:06:06.427592 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nb6gr\" (UniqueName: \"kubernetes.io/projected/c95d0b83-4630-47c3-ae7b-dae07d072e38-kube-api-access-nb6gr\") pod \"placement-77b487f776-cjb8n\" (UID: \"c95d0b83-4630-47c3-ae7b-dae07d072e38\") " pod="openstack/placement-77b487f776-cjb8n"
Jan 31 04:06:06 crc kubenswrapper[4827]: I0131 04:06:06.489460 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-77b487f776-cjb8n"
Jan 31 04:06:06 crc kubenswrapper[4827]: I0131 04:06:06.624771 4827 generic.go:334] "Generic (PLEG): container finished" podID="cd3c35ef-997d-4511-9dec-08ee13ff1591" containerID="57b0296fecd24f3d20e970e14b2a5214000c1a1e001276e80a96b95ed8edf890" exitCode=0
Jan 31 04:06:06 crc kubenswrapper[4827]: I0131 04:06:06.624816 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"cd3c35ef-997d-4511-9dec-08ee13ff1591","Type":"ContainerDied","Data":"57b0296fecd24f3d20e970e14b2a5214000c1a1e001276e80a96b95ed8edf890"}
Jan 31 04:06:06 crc kubenswrapper[4827]: I0131 04:06:06.624864 4827 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0"
Jan 31 04:06:06 crc kubenswrapper[4827]: I0131 04:06:06.624891 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"cd3c35ef-997d-4511-9dec-08ee13ff1591","Type":"ContainerDied","Data":"18889c5e63fb8babcd37fad0ab7549c303765fad7c311714c14549be9601159c"}
Jan 31 04:06:06 crc kubenswrapper[4827]: I0131 04:06:06.624912 4827 scope.go:117] "RemoveContainer" containerID="ce20c721c959c5920bae0b4b2db8e3297bad03b6bcf5b471f1c59a0c70840d79"
Jan 31 04:06:06 crc kubenswrapper[4827]: I0131 04:06:06.657554 4827 scope.go:117] "RemoveContainer" containerID="57b0296fecd24f3d20e970e14b2a5214000c1a1e001276e80a96b95ed8edf890"
Jan 31 04:06:06 crc kubenswrapper[4827]: I0131 04:06:06.668475 4827 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-scheduler-0"]
Jan 31 04:06:06 crc kubenswrapper[4827]: I0131 04:06:06.681438 4827 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-scheduler-0"]
Jan 31 04:06:06 crc kubenswrapper[4827]: I0131 04:06:06.693164 4827 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-scheduler-0"]
Jan 31 04:06:06 crc kubenswrapper[4827]: E0131 04:06:06.693578 4827 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cd3c35ef-997d-4511-9dec-08ee13ff1591" containerName="cinder-scheduler"
Jan 31 04:06:06 crc kubenswrapper[4827]: I0131 04:06:06.693638 4827 state_mem.go:107] "Deleted CPUSet assignment" podUID="cd3c35ef-997d-4511-9dec-08ee13ff1591" containerName="cinder-scheduler"
Jan 31 04:06:06 crc kubenswrapper[4827]: E0131 04:06:06.693697 4827 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cd3c35ef-997d-4511-9dec-08ee13ff1591" containerName="probe"
Jan 31 04:06:06 crc kubenswrapper[4827]: I0131 04:06:06.693708 4827 state_mem.go:107] "Deleted CPUSet assignment" podUID="cd3c35ef-997d-4511-9dec-08ee13ff1591" containerName="probe"
Jan 31 04:06:06 crc kubenswrapper[4827]: I0131 04:06:06.693958 4827 memory_manager.go:354] "RemoveStaleState removing state" podUID="cd3c35ef-997d-4511-9dec-08ee13ff1591" containerName="probe"
Jan 31 04:06:06 crc kubenswrapper[4827]: I0131 04:06:06.694012 4827 memory_manager.go:354] "RemoveStaleState removing state" podUID="cd3c35ef-997d-4511-9dec-08ee13ff1591" containerName="cinder-scheduler"
Jan 31 04:06:06 crc kubenswrapper[4827]: I0131 04:06:06.695012 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0"
Jan 31 04:06:06 crc kubenswrapper[4827]: I0131 04:06:06.696963 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scheduler-config-data"
Jan 31 04:06:06 crc kubenswrapper[4827]: I0131 04:06:06.708517 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"]
Jan 31 04:06:06 crc kubenswrapper[4827]: I0131 04:06:06.728947 4827 scope.go:117] "RemoveContainer" containerID="ce20c721c959c5920bae0b4b2db8e3297bad03b6bcf5b471f1c59a0c70840d79"
Jan 31 04:06:06 crc kubenswrapper[4827]: E0131 04:06:06.732557 4827 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ce20c721c959c5920bae0b4b2db8e3297bad03b6bcf5b471f1c59a0c70840d79\": container with ID starting with ce20c721c959c5920bae0b4b2db8e3297bad03b6bcf5b471f1c59a0c70840d79 not found: ID does not exist" containerID="ce20c721c959c5920bae0b4b2db8e3297bad03b6bcf5b471f1c59a0c70840d79"
Jan 31 04:06:06 crc kubenswrapper[4827]: I0131 04:06:06.732614 4827 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ce20c721c959c5920bae0b4b2db8e3297bad03b6bcf5b471f1c59a0c70840d79"} err="failed to get container status \"ce20c721c959c5920bae0b4b2db8e3297bad03b6bcf5b471f1c59a0c70840d79\": rpc error: code = NotFound desc = could not find container \"ce20c721c959c5920bae0b4b2db8e3297bad03b6bcf5b471f1c59a0c70840d79\": container with ID starting with ce20c721c959c5920bae0b4b2db8e3297bad03b6bcf5b471f1c59a0c70840d79 not found: ID does not exist"
Jan 31 04:06:06 crc kubenswrapper[4827]: I0131 04:06:06.732641 4827 scope.go:117] "RemoveContainer" containerID="57b0296fecd24f3d20e970e14b2a5214000c1a1e001276e80a96b95ed8edf890"
Jan 31 04:06:06 crc kubenswrapper[4827]: E0131 04:06:06.733474 4827 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"57b0296fecd24f3d20e970e14b2a5214000c1a1e001276e80a96b95ed8edf890\": container with ID starting with 57b0296fecd24f3d20e970e14b2a5214000c1a1e001276e80a96b95ed8edf890 not found: ID does not exist" containerID="57b0296fecd24f3d20e970e14b2a5214000c1a1e001276e80a96b95ed8edf890"
Jan 31 04:06:06 crc kubenswrapper[4827]: I0131 04:06:06.733616 4827 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"57b0296fecd24f3d20e970e14b2a5214000c1a1e001276e80a96b95ed8edf890"} err="failed to get container status \"57b0296fecd24f3d20e970e14b2a5214000c1a1e001276e80a96b95ed8edf890\": rpc error: code = NotFound desc = could not find container \"57b0296fecd24f3d20e970e14b2a5214000c1a1e001276e80a96b95ed8edf890\": container with ID starting with 57b0296fecd24f3d20e970e14b2a5214000c1a1e001276e80a96b95ed8edf890 not found: ID does not exist"
Jan 31 04:06:06 crc kubenswrapper[4827]: I0131 04:06:06.818934 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d3a82636-a800-49e2-b3f7-f253d069722c-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"d3a82636-a800-49e2-b3f7-f253d069722c\") " pod="openstack/cinder-scheduler-0"
Jan 31 04:06:06 crc kubenswrapper[4827]: I0131 04:06:06.819020 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d3a82636-a800-49e2-b3f7-f253d069722c-scripts\") pod \"cinder-scheduler-0\" (UID: \"d3a82636-a800-49e2-b3f7-f253d069722c\") " pod="openstack/cinder-scheduler-0"
Jan 31 04:06:06 crc kubenswrapper[4827]: I0131 04:06:06.819040 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/d3a82636-a800-49e2-b3f7-f253d069722c-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"d3a82636-a800-49e2-b3f7-f253d069722c\") "
pod="openstack/cinder-scheduler-0" Jan 31 04:06:06 crc kubenswrapper[4827]: I0131 04:06:06.819062 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d3a82636-a800-49e2-b3f7-f253d069722c-config-data\") pod \"cinder-scheduler-0\" (UID: \"d3a82636-a800-49e2-b3f7-f253d069722c\") " pod="openstack/cinder-scheduler-0" Jan 31 04:06:06 crc kubenswrapper[4827]: I0131 04:06:06.819142 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/d3a82636-a800-49e2-b3f7-f253d069722c-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"d3a82636-a800-49e2-b3f7-f253d069722c\") " pod="openstack/cinder-scheduler-0" Jan 31 04:06:06 crc kubenswrapper[4827]: I0131 04:06:06.819163 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-szkm7\" (UniqueName: \"kubernetes.io/projected/d3a82636-a800-49e2-b3f7-f253d069722c-kube-api-access-szkm7\") pod \"cinder-scheduler-0\" (UID: \"d3a82636-a800-49e2-b3f7-f253d069722c\") " pod="openstack/cinder-scheduler-0" Jan 31 04:06:06 crc kubenswrapper[4827]: I0131 04:06:06.920380 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/d3a82636-a800-49e2-b3f7-f253d069722c-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"d3a82636-a800-49e2-b3f7-f253d069722c\") " pod="openstack/cinder-scheduler-0" Jan 31 04:06:06 crc kubenswrapper[4827]: I0131 04:06:06.920429 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-szkm7\" (UniqueName: \"kubernetes.io/projected/d3a82636-a800-49e2-b3f7-f253d069722c-kube-api-access-szkm7\") pod \"cinder-scheduler-0\" (UID: \"d3a82636-a800-49e2-b3f7-f253d069722c\") " pod="openstack/cinder-scheduler-0" Jan 31 04:06:06 crc kubenswrapper[4827]: 
I0131 04:06:06.920497 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d3a82636-a800-49e2-b3f7-f253d069722c-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"d3a82636-a800-49e2-b3f7-f253d069722c\") " pod="openstack/cinder-scheduler-0" Jan 31 04:06:06 crc kubenswrapper[4827]: I0131 04:06:06.920526 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/d3a82636-a800-49e2-b3f7-f253d069722c-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"d3a82636-a800-49e2-b3f7-f253d069722c\") " pod="openstack/cinder-scheduler-0" Jan 31 04:06:06 crc kubenswrapper[4827]: I0131 04:06:06.920551 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d3a82636-a800-49e2-b3f7-f253d069722c-scripts\") pod \"cinder-scheduler-0\" (UID: \"d3a82636-a800-49e2-b3f7-f253d069722c\") " pod="openstack/cinder-scheduler-0" Jan 31 04:06:06 crc kubenswrapper[4827]: I0131 04:06:06.920627 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/d3a82636-a800-49e2-b3f7-f253d069722c-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"d3a82636-a800-49e2-b3f7-f253d069722c\") " pod="openstack/cinder-scheduler-0" Jan 31 04:06:06 crc kubenswrapper[4827]: I0131 04:06:06.920680 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d3a82636-a800-49e2-b3f7-f253d069722c-config-data\") pod \"cinder-scheduler-0\" (UID: \"d3a82636-a800-49e2-b3f7-f253d069722c\") " pod="openstack/cinder-scheduler-0" Jan 31 04:06:06 crc kubenswrapper[4827]: I0131 04:06:06.925297 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/d3a82636-a800-49e2-b3f7-f253d069722c-scripts\") pod \"cinder-scheduler-0\" (UID: \"d3a82636-a800-49e2-b3f7-f253d069722c\") " pod="openstack/cinder-scheduler-0" Jan 31 04:06:06 crc kubenswrapper[4827]: I0131 04:06:06.925434 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/d3a82636-a800-49e2-b3f7-f253d069722c-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"d3a82636-a800-49e2-b3f7-f253d069722c\") " pod="openstack/cinder-scheduler-0" Jan 31 04:06:06 crc kubenswrapper[4827]: I0131 04:06:06.925710 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d3a82636-a800-49e2-b3f7-f253d069722c-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"d3a82636-a800-49e2-b3f7-f253d069722c\") " pod="openstack/cinder-scheduler-0" Jan 31 04:06:06 crc kubenswrapper[4827]: I0131 04:06:06.926141 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d3a82636-a800-49e2-b3f7-f253d069722c-config-data\") pod \"cinder-scheduler-0\" (UID: \"d3a82636-a800-49e2-b3f7-f253d069722c\") " pod="openstack/cinder-scheduler-0" Jan 31 04:06:06 crc kubenswrapper[4827]: I0131 04:06:06.936874 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-szkm7\" (UniqueName: \"kubernetes.io/projected/d3a82636-a800-49e2-b3f7-f253d069722c-kube-api-access-szkm7\") pod \"cinder-scheduler-0\" (UID: \"d3a82636-a800-49e2-b3f7-f253d069722c\") " pod="openstack/cinder-scheduler-0" Jan 31 04:06:07 crc kubenswrapper[4827]: I0131 04:06:07.036210 4827 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Jan 31 04:06:07 crc kubenswrapper[4827]: I0131 04:06:07.057917 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-77b487f776-cjb8n"] Jan 31 04:06:07 crc kubenswrapper[4827]: W0131 04:06:07.066079 4827 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc95d0b83_4630_47c3_ae7b_dae07d072e38.slice/crio-98dbec3f22e71abcb7df90473bf6a3a0c88e6cff6bf2781bb79e0c8f291d0515 WatchSource:0}: Error finding container 98dbec3f22e71abcb7df90473bf6a3a0c88e6cff6bf2781bb79e0c8f291d0515: Status 404 returned error can't find the container with id 98dbec3f22e71abcb7df90473bf6a3a0c88e6cff6bf2781bb79e0c8f291d0515 Jan 31 04:06:07 crc kubenswrapper[4827]: W0131 04:06:07.483647 4827 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd3a82636_a800_49e2_b3f7_f253d069722c.slice/crio-fd648463510b7e28a852e66e51b3c8d5ec8e60a0a6790cfa82cb10f179d37714 WatchSource:0}: Error finding container fd648463510b7e28a852e66e51b3c8d5ec8e60a0a6790cfa82cb10f179d37714: Status 404 returned error can't find the container with id fd648463510b7e28a852e66e51b3c8d5ec8e60a0a6790cfa82cb10f179d37714 Jan 31 04:06:07 crc kubenswrapper[4827]: I0131 04:06:07.484062 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Jan 31 04:06:07 crc kubenswrapper[4827]: I0131 04:06:07.644346 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"d3a82636-a800-49e2-b3f7-f253d069722c","Type":"ContainerStarted","Data":"fd648463510b7e28a852e66e51b3c8d5ec8e60a0a6790cfa82cb10f179d37714"} Jan 31 04:06:07 crc kubenswrapper[4827]: I0131 04:06:07.646204 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-77b487f776-cjb8n" 
event={"ID":"c95d0b83-4630-47c3-ae7b-dae07d072e38","Type":"ContainerStarted","Data":"04f7b06121eb11aa70437828ce9210a089d72018b844b67bd26ba9079ee058d6"} Jan 31 04:06:07 crc kubenswrapper[4827]: I0131 04:06:07.646249 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-77b487f776-cjb8n" event={"ID":"c95d0b83-4630-47c3-ae7b-dae07d072e38","Type":"ContainerStarted","Data":"2b4310c94deb9fec27b3eb195883a446c64321024be57ac69957e80b5967d22e"} Jan 31 04:06:07 crc kubenswrapper[4827]: I0131 04:06:07.646268 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-77b487f776-cjb8n" event={"ID":"c95d0b83-4630-47c3-ae7b-dae07d072e38","Type":"ContainerStarted","Data":"98dbec3f22e71abcb7df90473bf6a3a0c88e6cff6bf2781bb79e0c8f291d0515"} Jan 31 04:06:07 crc kubenswrapper[4827]: I0131 04:06:07.646510 4827 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-77b487f776-cjb8n" Jan 31 04:06:07 crc kubenswrapper[4827]: I0131 04:06:07.646740 4827 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-77b487f776-cjb8n" Jan 31 04:06:07 crc kubenswrapper[4827]: I0131 04:06:07.682077 4827 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-77b487f776-cjb8n" podStartSLOduration=1.682052833 podStartE2EDuration="1.682052833s" podCreationTimestamp="2026-01-31 04:06:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 04:06:07.681256031 +0000 UTC m=+1160.368336500" watchObservedRunningTime="2026-01-31 04:06:07.682052833 +0000 UTC m=+1160.369133292" Jan 31 04:06:08 crc kubenswrapper[4827]: I0131 04:06:08.121535 4827 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cd3c35ef-997d-4511-9dec-08ee13ff1591" path="/var/lib/kubelet/pods/cd3c35ef-997d-4511-9dec-08ee13ff1591/volumes" Jan 31 04:06:08 crc kubenswrapper[4827]: I0131 
04:06:08.658615 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"d3a82636-a800-49e2-b3f7-f253d069722c","Type":"ContainerStarted","Data":"2d5d49d9f83379fe0fd96da0d010367036fd6e481a0f04575e78841705c02149"} Jan 31 04:06:08 crc kubenswrapper[4827]: I0131 04:06:08.658979 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"d3a82636-a800-49e2-b3f7-f253d069722c","Type":"ContainerStarted","Data":"4e70b4c6b02302451f02bef41bd95ca5b1d170793e237188dc287cf89f7886ad"} Jan 31 04:06:10 crc kubenswrapper[4827]: I0131 04:06:10.126145 4827 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/neutron-7969d585-whgv9" Jan 31 04:06:10 crc kubenswrapper[4827]: I0131 04:06:10.168312 4827 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-scheduler-0" podStartSLOduration=4.168292373 podStartE2EDuration="4.168292373s" podCreationTimestamp="2026-01-31 04:06:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 04:06:08.676735585 +0000 UTC m=+1161.363816044" watchObservedRunningTime="2026-01-31 04:06:10.168292373 +0000 UTC m=+1162.855372822" Jan 31 04:06:10 crc kubenswrapper[4827]: I0131 04:06:10.201669 4827 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-68ddbc68f-gxl56"] Jan 31 04:06:10 crc kubenswrapper[4827]: I0131 04:06:10.202140 4827 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-68ddbc68f-gxl56" podUID="89663bcc-cc29-44ed-a65e-ab5e4efa7813" containerName="neutron-api" containerID="cri-o://c63d544db6ef2f9ec5dd916553df131f5e2b01a8577e9da64548566e95faa3ea" gracePeriod=30 Jan 31 04:06:10 crc kubenswrapper[4827]: I0131 04:06:10.202376 4827 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-68ddbc68f-gxl56" 
podUID="89663bcc-cc29-44ed-a65e-ab5e4efa7813" containerName="neutron-httpd" containerID="cri-o://ada02e27206a0f8f3e8255ff494c5a081cceaa808a4124f6957c499782b5fa5b" gracePeriod=30 Jan 31 04:06:10 crc kubenswrapper[4827]: I0131 04:06:10.678778 4827 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/keystone-b7768667c-2kxv5" Jan 31 04:06:10 crc kubenswrapper[4827]: I0131 04:06:10.686034 4827 generic.go:334] "Generic (PLEG): container finished" podID="89663bcc-cc29-44ed-a65e-ab5e4efa7813" containerID="ada02e27206a0f8f3e8255ff494c5a081cceaa808a4124f6957c499782b5fa5b" exitCode=0 Jan 31 04:06:10 crc kubenswrapper[4827]: I0131 04:06:10.686078 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-68ddbc68f-gxl56" event={"ID":"89663bcc-cc29-44ed-a65e-ab5e4efa7813","Type":"ContainerDied","Data":"ada02e27206a0f8f3e8255ff494c5a081cceaa808a4124f6957c499782b5fa5b"} Jan 31 04:06:11 crc kubenswrapper[4827]: I0131 04:06:11.595779 4827 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-798dd656d-7874f_ba9b68b9-d7bc-4c9c-90d3-a4b88b1d30de/neutron-api/0.log" Jan 31 04:06:11 crc kubenswrapper[4827]: I0131 04:06:11.596418 4827 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-798dd656d-7874f" Jan 31 04:06:11 crc kubenswrapper[4827]: I0131 04:06:11.694538 4827 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-798dd656d-7874f_ba9b68b9-d7bc-4c9c-90d3-a4b88b1d30de/neutron-api/0.log" Jan 31 04:06:11 crc kubenswrapper[4827]: I0131 04:06:11.694581 4827 generic.go:334] "Generic (PLEG): container finished" podID="ba9b68b9-d7bc-4c9c-90d3-a4b88b1d30de" containerID="deec999deac645424177fe16ab363dada849390ca10cd2d7eaec06dc147b6df1" exitCode=137 Jan 31 04:06:11 crc kubenswrapper[4827]: I0131 04:06:11.694609 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-798dd656d-7874f" event={"ID":"ba9b68b9-d7bc-4c9c-90d3-a4b88b1d30de","Type":"ContainerDied","Data":"deec999deac645424177fe16ab363dada849390ca10cd2d7eaec06dc147b6df1"} Jan 31 04:06:11 crc kubenswrapper[4827]: I0131 04:06:11.694634 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-798dd656d-7874f" event={"ID":"ba9b68b9-d7bc-4c9c-90d3-a4b88b1d30de","Type":"ContainerDied","Data":"54957c6c68032f1e6c0c3d084a0359ddaf3175f8981e66edb0c9022270bc8b6b"} Jan 31 04:06:11 crc kubenswrapper[4827]: I0131 04:06:11.694649 4827 scope.go:117] "RemoveContainer" containerID="1715870ab3ad43f98614002eb9c0fbd66bae86854cfd63cd4e476fb8f3179f74" Jan 31 04:06:11 crc kubenswrapper[4827]: I0131 04:06:11.694756 4827 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-798dd656d-7874f" Jan 31 04:06:11 crc kubenswrapper[4827]: I0131 04:06:11.718092 4827 scope.go:117] "RemoveContainer" containerID="deec999deac645424177fe16ab363dada849390ca10cd2d7eaec06dc147b6df1" Jan 31 04:06:11 crc kubenswrapper[4827]: I0131 04:06:11.726535 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/ba9b68b9-d7bc-4c9c-90d3-a4b88b1d30de-httpd-config\") pod \"ba9b68b9-d7bc-4c9c-90d3-a4b88b1d30de\" (UID: \"ba9b68b9-d7bc-4c9c-90d3-a4b88b1d30de\") " Jan 31 04:06:11 crc kubenswrapper[4827]: I0131 04:06:11.726648 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/ba9b68b9-d7bc-4c9c-90d3-a4b88b1d30de-ovndb-tls-certs\") pod \"ba9b68b9-d7bc-4c9c-90d3-a4b88b1d30de\" (UID: \"ba9b68b9-d7bc-4c9c-90d3-a4b88b1d30de\") " Jan 31 04:06:11 crc kubenswrapper[4827]: I0131 04:06:11.726698 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ba9b68b9-d7bc-4c9c-90d3-a4b88b1d30de-combined-ca-bundle\") pod \"ba9b68b9-d7bc-4c9c-90d3-a4b88b1d30de\" (UID: \"ba9b68b9-d7bc-4c9c-90d3-a4b88b1d30de\") " Jan 31 04:06:11 crc kubenswrapper[4827]: I0131 04:06:11.726740 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/ba9b68b9-d7bc-4c9c-90d3-a4b88b1d30de-config\") pod \"ba9b68b9-d7bc-4c9c-90d3-a4b88b1d30de\" (UID: \"ba9b68b9-d7bc-4c9c-90d3-a4b88b1d30de\") " Jan 31 04:06:11 crc kubenswrapper[4827]: I0131 04:06:11.726819 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zd6vb\" (UniqueName: \"kubernetes.io/projected/ba9b68b9-d7bc-4c9c-90d3-a4b88b1d30de-kube-api-access-zd6vb\") pod \"ba9b68b9-d7bc-4c9c-90d3-a4b88b1d30de\" (UID: 
\"ba9b68b9-d7bc-4c9c-90d3-a4b88b1d30de\") " Jan 31 04:06:11 crc kubenswrapper[4827]: I0131 04:06:11.732177 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ba9b68b9-d7bc-4c9c-90d3-a4b88b1d30de-kube-api-access-zd6vb" (OuterVolumeSpecName: "kube-api-access-zd6vb") pod "ba9b68b9-d7bc-4c9c-90d3-a4b88b1d30de" (UID: "ba9b68b9-d7bc-4c9c-90d3-a4b88b1d30de"). InnerVolumeSpecName "kube-api-access-zd6vb". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 04:06:11 crc kubenswrapper[4827]: I0131 04:06:11.734517 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ba9b68b9-d7bc-4c9c-90d3-a4b88b1d30de-httpd-config" (OuterVolumeSpecName: "httpd-config") pod "ba9b68b9-d7bc-4c9c-90d3-a4b88b1d30de" (UID: "ba9b68b9-d7bc-4c9c-90d3-a4b88b1d30de"). InnerVolumeSpecName "httpd-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 04:06:11 crc kubenswrapper[4827]: I0131 04:06:11.747380 4827 scope.go:117] "RemoveContainer" containerID="1715870ab3ad43f98614002eb9c0fbd66bae86854cfd63cd4e476fb8f3179f74" Jan 31 04:06:11 crc kubenswrapper[4827]: E0131 04:06:11.750215 4827 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1715870ab3ad43f98614002eb9c0fbd66bae86854cfd63cd4e476fb8f3179f74\": container with ID starting with 1715870ab3ad43f98614002eb9c0fbd66bae86854cfd63cd4e476fb8f3179f74 not found: ID does not exist" containerID="1715870ab3ad43f98614002eb9c0fbd66bae86854cfd63cd4e476fb8f3179f74" Jan 31 04:06:11 crc kubenswrapper[4827]: I0131 04:06:11.750246 4827 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1715870ab3ad43f98614002eb9c0fbd66bae86854cfd63cd4e476fb8f3179f74"} err="failed to get container status \"1715870ab3ad43f98614002eb9c0fbd66bae86854cfd63cd4e476fb8f3179f74\": rpc error: code = NotFound desc = could not find container 
\"1715870ab3ad43f98614002eb9c0fbd66bae86854cfd63cd4e476fb8f3179f74\": container with ID starting with 1715870ab3ad43f98614002eb9c0fbd66bae86854cfd63cd4e476fb8f3179f74 not found: ID does not exist" Jan 31 04:06:11 crc kubenswrapper[4827]: I0131 04:06:11.750266 4827 scope.go:117] "RemoveContainer" containerID="deec999deac645424177fe16ab363dada849390ca10cd2d7eaec06dc147b6df1" Jan 31 04:06:11 crc kubenswrapper[4827]: E0131 04:06:11.750555 4827 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"deec999deac645424177fe16ab363dada849390ca10cd2d7eaec06dc147b6df1\": container with ID starting with deec999deac645424177fe16ab363dada849390ca10cd2d7eaec06dc147b6df1 not found: ID does not exist" containerID="deec999deac645424177fe16ab363dada849390ca10cd2d7eaec06dc147b6df1" Jan 31 04:06:11 crc kubenswrapper[4827]: I0131 04:06:11.750579 4827 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"deec999deac645424177fe16ab363dada849390ca10cd2d7eaec06dc147b6df1"} err="failed to get container status \"deec999deac645424177fe16ab363dada849390ca10cd2d7eaec06dc147b6df1\": rpc error: code = NotFound desc = could not find container \"deec999deac645424177fe16ab363dada849390ca10cd2d7eaec06dc147b6df1\": container with ID starting with deec999deac645424177fe16ab363dada849390ca10cd2d7eaec06dc147b6df1 not found: ID does not exist" Jan 31 04:06:11 crc kubenswrapper[4827]: I0131 04:06:11.784005 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ba9b68b9-d7bc-4c9c-90d3-a4b88b1d30de-config" (OuterVolumeSpecName: "config") pod "ba9b68b9-d7bc-4c9c-90d3-a4b88b1d30de" (UID: "ba9b68b9-d7bc-4c9c-90d3-a4b88b1d30de"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 04:06:11 crc kubenswrapper[4827]: I0131 04:06:11.786035 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ba9b68b9-d7bc-4c9c-90d3-a4b88b1d30de-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "ba9b68b9-d7bc-4c9c-90d3-a4b88b1d30de" (UID: "ba9b68b9-d7bc-4c9c-90d3-a4b88b1d30de"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 04:06:11 crc kubenswrapper[4827]: I0131 04:06:11.807162 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ba9b68b9-d7bc-4c9c-90d3-a4b88b1d30de-ovndb-tls-certs" (OuterVolumeSpecName: "ovndb-tls-certs") pod "ba9b68b9-d7bc-4c9c-90d3-a4b88b1d30de" (UID: "ba9b68b9-d7bc-4c9c-90d3-a4b88b1d30de"). InnerVolumeSpecName "ovndb-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 04:06:11 crc kubenswrapper[4827]: I0131 04:06:11.828897 4827 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zd6vb\" (UniqueName: \"kubernetes.io/projected/ba9b68b9-d7bc-4c9c-90d3-a4b88b1d30de-kube-api-access-zd6vb\") on node \"crc\" DevicePath \"\"" Jan 31 04:06:11 crc kubenswrapper[4827]: I0131 04:06:11.828922 4827 reconciler_common.go:293] "Volume detached for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/ba9b68b9-d7bc-4c9c-90d3-a4b88b1d30de-httpd-config\") on node \"crc\" DevicePath \"\"" Jan 31 04:06:11 crc kubenswrapper[4827]: I0131 04:06:11.828932 4827 reconciler_common.go:293] "Volume detached for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/ba9b68b9-d7bc-4c9c-90d3-a4b88b1d30de-ovndb-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 31 04:06:11 crc kubenswrapper[4827]: I0131 04:06:11.828942 4827 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ba9b68b9-d7bc-4c9c-90d3-a4b88b1d30de-combined-ca-bundle\") on node 
\"crc\" DevicePath \"\"" Jan 31 04:06:11 crc kubenswrapper[4827]: I0131 04:06:11.828949 4827 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/ba9b68b9-d7bc-4c9c-90d3-a4b88b1d30de-config\") on node \"crc\" DevicePath \"\"" Jan 31 04:06:12 crc kubenswrapper[4827]: I0131 04:06:12.029144 4827 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-798dd656d-7874f"] Jan 31 04:06:12 crc kubenswrapper[4827]: I0131 04:06:12.036578 4827 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-scheduler-0" Jan 31 04:06:12 crc kubenswrapper[4827]: I0131 04:06:12.037673 4827 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-798dd656d-7874f"] Jan 31 04:06:12 crc kubenswrapper[4827]: I0131 04:06:12.120357 4827 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ba9b68b9-d7bc-4c9c-90d3-a4b88b1d30de" path="/var/lib/kubelet/pods/ba9b68b9-d7bc-4c9c-90d3-a4b88b1d30de/volumes" Jan 31 04:06:12 crc kubenswrapper[4827]: I0131 04:06:12.708071 4827 generic.go:334] "Generic (PLEG): container finished" podID="89663bcc-cc29-44ed-a65e-ab5e4efa7813" containerID="c63d544db6ef2f9ec5dd916553df131f5e2b01a8577e9da64548566e95faa3ea" exitCode=0 Jan 31 04:06:12 crc kubenswrapper[4827]: I0131 04:06:12.708276 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-68ddbc68f-gxl56" event={"ID":"89663bcc-cc29-44ed-a65e-ab5e4efa7813","Type":"ContainerDied","Data":"c63d544db6ef2f9ec5dd916553df131f5e2b01a8577e9da64548566e95faa3ea"} Jan 31 04:06:12 crc kubenswrapper[4827]: I0131 04:06:12.808228 4827 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-68ddbc68f-gxl56" Jan 31 04:06:12 crc kubenswrapper[4827]: I0131 04:06:12.947547 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/89663bcc-cc29-44ed-a65e-ab5e4efa7813-httpd-config\") pod \"89663bcc-cc29-44ed-a65e-ab5e4efa7813\" (UID: \"89663bcc-cc29-44ed-a65e-ab5e4efa7813\") " Jan 31 04:06:12 crc kubenswrapper[4827]: I0131 04:06:12.947607 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/89663bcc-cc29-44ed-a65e-ab5e4efa7813-public-tls-certs\") pod \"89663bcc-cc29-44ed-a65e-ab5e4efa7813\" (UID: \"89663bcc-cc29-44ed-a65e-ab5e4efa7813\") " Jan 31 04:06:12 crc kubenswrapper[4827]: I0131 04:06:12.947675 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/89663bcc-cc29-44ed-a65e-ab5e4efa7813-combined-ca-bundle\") pod \"89663bcc-cc29-44ed-a65e-ab5e4efa7813\" (UID: \"89663bcc-cc29-44ed-a65e-ab5e4efa7813\") " Jan 31 04:06:12 crc kubenswrapper[4827]: I0131 04:06:12.947723 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/89663bcc-cc29-44ed-a65e-ab5e4efa7813-config\") pod \"89663bcc-cc29-44ed-a65e-ab5e4efa7813\" (UID: \"89663bcc-cc29-44ed-a65e-ab5e4efa7813\") " Jan 31 04:06:12 crc kubenswrapper[4827]: I0131 04:06:12.947812 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jbcn5\" (UniqueName: \"kubernetes.io/projected/89663bcc-cc29-44ed-a65e-ab5e4efa7813-kube-api-access-jbcn5\") pod \"89663bcc-cc29-44ed-a65e-ab5e4efa7813\" (UID: \"89663bcc-cc29-44ed-a65e-ab5e4efa7813\") " Jan 31 04:06:12 crc kubenswrapper[4827]: I0131 04:06:12.947857 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovndb-tls-certs\" 
(UniqueName: \"kubernetes.io/secret/89663bcc-cc29-44ed-a65e-ab5e4efa7813-ovndb-tls-certs\") pod \"89663bcc-cc29-44ed-a65e-ab5e4efa7813\" (UID: \"89663bcc-cc29-44ed-a65e-ab5e4efa7813\") " Jan 31 04:06:12 crc kubenswrapper[4827]: I0131 04:06:12.947928 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/89663bcc-cc29-44ed-a65e-ab5e4efa7813-internal-tls-certs\") pod \"89663bcc-cc29-44ed-a65e-ab5e4efa7813\" (UID: \"89663bcc-cc29-44ed-a65e-ab5e4efa7813\") " Jan 31 04:06:12 crc kubenswrapper[4827]: I0131 04:06:12.953482 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/89663bcc-cc29-44ed-a65e-ab5e4efa7813-kube-api-access-jbcn5" (OuterVolumeSpecName: "kube-api-access-jbcn5") pod "89663bcc-cc29-44ed-a65e-ab5e4efa7813" (UID: "89663bcc-cc29-44ed-a65e-ab5e4efa7813"). InnerVolumeSpecName "kube-api-access-jbcn5". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 04:06:12 crc kubenswrapper[4827]: I0131 04:06:12.953557 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/89663bcc-cc29-44ed-a65e-ab5e4efa7813-httpd-config" (OuterVolumeSpecName: "httpd-config") pod "89663bcc-cc29-44ed-a65e-ab5e4efa7813" (UID: "89663bcc-cc29-44ed-a65e-ab5e4efa7813"). InnerVolumeSpecName "httpd-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 04:06:12 crc kubenswrapper[4827]: I0131 04:06:12.992132 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/89663bcc-cc29-44ed-a65e-ab5e4efa7813-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "89663bcc-cc29-44ed-a65e-ab5e4efa7813" (UID: "89663bcc-cc29-44ed-a65e-ab5e4efa7813"). InnerVolumeSpecName "internal-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 04:06:13 crc kubenswrapper[4827]: I0131 04:06:13.003549 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/89663bcc-cc29-44ed-a65e-ab5e4efa7813-config" (OuterVolumeSpecName: "config") pod "89663bcc-cc29-44ed-a65e-ab5e4efa7813" (UID: "89663bcc-cc29-44ed-a65e-ab5e4efa7813"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 04:06:13 crc kubenswrapper[4827]: I0131 04:06:13.008949 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/89663bcc-cc29-44ed-a65e-ab5e4efa7813-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "89663bcc-cc29-44ed-a65e-ab5e4efa7813" (UID: "89663bcc-cc29-44ed-a65e-ab5e4efa7813"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 04:06:13 crc kubenswrapper[4827]: I0131 04:06:13.010137 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/89663bcc-cc29-44ed-a65e-ab5e4efa7813-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "89663bcc-cc29-44ed-a65e-ab5e4efa7813" (UID: "89663bcc-cc29-44ed-a65e-ab5e4efa7813"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 04:06:13 crc kubenswrapper[4827]: I0131 04:06:13.033648 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/89663bcc-cc29-44ed-a65e-ab5e4efa7813-ovndb-tls-certs" (OuterVolumeSpecName: "ovndb-tls-certs") pod "89663bcc-cc29-44ed-a65e-ab5e4efa7813" (UID: "89663bcc-cc29-44ed-a65e-ab5e4efa7813"). InnerVolumeSpecName "ovndb-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 04:06:13 crc kubenswrapper[4827]: I0131 04:06:13.049459 4827 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/89663bcc-cc29-44ed-a65e-ab5e4efa7813-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 31 04:06:13 crc kubenswrapper[4827]: I0131 04:06:13.049488 4827 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/89663bcc-cc29-44ed-a65e-ab5e4efa7813-config\") on node \"crc\" DevicePath \"\"" Jan 31 04:06:13 crc kubenswrapper[4827]: I0131 04:06:13.049498 4827 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jbcn5\" (UniqueName: \"kubernetes.io/projected/89663bcc-cc29-44ed-a65e-ab5e4efa7813-kube-api-access-jbcn5\") on node \"crc\" DevicePath \"\"" Jan 31 04:06:13 crc kubenswrapper[4827]: I0131 04:06:13.049508 4827 reconciler_common.go:293] "Volume detached for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/89663bcc-cc29-44ed-a65e-ab5e4efa7813-ovndb-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 31 04:06:13 crc kubenswrapper[4827]: I0131 04:06:13.049517 4827 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/89663bcc-cc29-44ed-a65e-ab5e4efa7813-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 31 04:06:13 crc kubenswrapper[4827]: I0131 04:06:13.049525 4827 reconciler_common.go:293] "Volume detached for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/89663bcc-cc29-44ed-a65e-ab5e4efa7813-httpd-config\") on node \"crc\" DevicePath \"\"" Jan 31 04:06:13 crc kubenswrapper[4827]: I0131 04:06:13.049533 4827 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/89663bcc-cc29-44ed-a65e-ab5e4efa7813-public-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 31 04:06:13 crc kubenswrapper[4827]: I0131 04:06:13.717222 4827 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-68ddbc68f-gxl56" event={"ID":"89663bcc-cc29-44ed-a65e-ab5e4efa7813","Type":"ContainerDied","Data":"4076dd8979cb02b9ac86e1ec74a0514a3113f4f37ed2cb8bbdd9048b54b98f9b"} Jan 31 04:06:13 crc kubenswrapper[4827]: I0131 04:06:13.717273 4827 scope.go:117] "RemoveContainer" containerID="ada02e27206a0f8f3e8255ff494c5a081cceaa808a4124f6957c499782b5fa5b" Jan 31 04:06:13 crc kubenswrapper[4827]: I0131 04:06:13.717378 4827 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-68ddbc68f-gxl56" Jan 31 04:06:13 crc kubenswrapper[4827]: I0131 04:06:13.738590 4827 scope.go:117] "RemoveContainer" containerID="c63d544db6ef2f9ec5dd916553df131f5e2b01a8577e9da64548566e95faa3ea" Jan 31 04:06:13 crc kubenswrapper[4827]: I0131 04:06:13.758508 4827 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-68ddbc68f-gxl56"] Jan 31 04:06:13 crc kubenswrapper[4827]: I0131 04:06:13.769283 4827 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-68ddbc68f-gxl56"] Jan 31 04:06:14 crc kubenswrapper[4827]: I0131 04:06:14.121512 4827 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="89663bcc-cc29-44ed-a65e-ab5e4efa7813" path="/var/lib/kubelet/pods/89663bcc-cc29-44ed-a65e-ab5e4efa7813/volumes" Jan 31 04:06:15 crc kubenswrapper[4827]: I0131 04:06:15.620612 4827 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstackclient"] Jan 31 04:06:15 crc kubenswrapper[4827]: E0131 04:06:15.621175 4827 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ba9b68b9-d7bc-4c9c-90d3-a4b88b1d30de" containerName="neutron-api" Jan 31 04:06:15 crc kubenswrapper[4827]: I0131 04:06:15.621190 4827 state_mem.go:107] "Deleted CPUSet assignment" podUID="ba9b68b9-d7bc-4c9c-90d3-a4b88b1d30de" containerName="neutron-api" Jan 31 04:06:15 crc kubenswrapper[4827]: E0131 04:06:15.621203 4827 cpu_manager.go:410] "RemoveStaleState: 
removing container" podUID="89663bcc-cc29-44ed-a65e-ab5e4efa7813" containerName="neutron-httpd" Jan 31 04:06:15 crc kubenswrapper[4827]: I0131 04:06:15.621209 4827 state_mem.go:107] "Deleted CPUSet assignment" podUID="89663bcc-cc29-44ed-a65e-ab5e4efa7813" containerName="neutron-httpd" Jan 31 04:06:15 crc kubenswrapper[4827]: E0131 04:06:15.621230 4827 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ba9b68b9-d7bc-4c9c-90d3-a4b88b1d30de" containerName="neutron-httpd" Jan 31 04:06:15 crc kubenswrapper[4827]: I0131 04:06:15.621236 4827 state_mem.go:107] "Deleted CPUSet assignment" podUID="ba9b68b9-d7bc-4c9c-90d3-a4b88b1d30de" containerName="neutron-httpd" Jan 31 04:06:15 crc kubenswrapper[4827]: E0131 04:06:15.621250 4827 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="89663bcc-cc29-44ed-a65e-ab5e4efa7813" containerName="neutron-api" Jan 31 04:06:15 crc kubenswrapper[4827]: I0131 04:06:15.621255 4827 state_mem.go:107] "Deleted CPUSet assignment" podUID="89663bcc-cc29-44ed-a65e-ab5e4efa7813" containerName="neutron-api" Jan 31 04:06:15 crc kubenswrapper[4827]: I0131 04:06:15.621431 4827 memory_manager.go:354] "RemoveStaleState removing state" podUID="ba9b68b9-d7bc-4c9c-90d3-a4b88b1d30de" containerName="neutron-api" Jan 31 04:06:15 crc kubenswrapper[4827]: I0131 04:06:15.621447 4827 memory_manager.go:354] "RemoveStaleState removing state" podUID="ba9b68b9-d7bc-4c9c-90d3-a4b88b1d30de" containerName="neutron-httpd" Jan 31 04:06:15 crc kubenswrapper[4827]: I0131 04:06:15.621477 4827 memory_manager.go:354] "RemoveStaleState removing state" podUID="89663bcc-cc29-44ed-a65e-ab5e4efa7813" containerName="neutron-httpd" Jan 31 04:06:15 crc kubenswrapper[4827]: I0131 04:06:15.621493 4827 memory_manager.go:354] "RemoveStaleState removing state" podUID="89663bcc-cc29-44ed-a65e-ab5e4efa7813" containerName="neutron-api" Jan 31 04:06:15 crc kubenswrapper[4827]: I0131 04:06:15.622206 4827 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstackclient" Jan 31 04:06:15 crc kubenswrapper[4827]: I0131 04:06:15.626954 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstackclient-openstackclient-dockercfg-pzrlt" Jan 31 04:06:15 crc kubenswrapper[4827]: I0131 04:06:15.627218 4827 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-config" Jan 31 04:06:15 crc kubenswrapper[4827]: I0131 04:06:15.627364 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-config-secret" Jan 31 04:06:15 crc kubenswrapper[4827]: I0131 04:06:15.632730 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"] Jan 31 04:06:15 crc kubenswrapper[4827]: I0131 04:06:15.691847 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/88a411c7-cbd7-4eda-a5c3-54eb86b4785b-openstack-config-secret\") pod \"openstackclient\" (UID: \"88a411c7-cbd7-4eda-a5c3-54eb86b4785b\") " pod="openstack/openstackclient" Jan 31 04:06:15 crc kubenswrapper[4827]: I0131 04:06:15.691954 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/88a411c7-cbd7-4eda-a5c3-54eb86b4785b-combined-ca-bundle\") pod \"openstackclient\" (UID: \"88a411c7-cbd7-4eda-a5c3-54eb86b4785b\") " pod="openstack/openstackclient" Jan 31 04:06:15 crc kubenswrapper[4827]: I0131 04:06:15.692026 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/88a411c7-cbd7-4eda-a5c3-54eb86b4785b-openstack-config\") pod \"openstackclient\" (UID: \"88a411c7-cbd7-4eda-a5c3-54eb86b4785b\") " pod="openstack/openstackclient" Jan 31 04:06:15 crc kubenswrapper[4827]: I0131 04:06:15.692073 4827 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gbp9r\" (UniqueName: \"kubernetes.io/projected/88a411c7-cbd7-4eda-a5c3-54eb86b4785b-kube-api-access-gbp9r\") pod \"openstackclient\" (UID: \"88a411c7-cbd7-4eda-a5c3-54eb86b4785b\") " pod="openstack/openstackclient" Jan 31 04:06:15 crc kubenswrapper[4827]: I0131 04:06:15.793715 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gbp9r\" (UniqueName: \"kubernetes.io/projected/88a411c7-cbd7-4eda-a5c3-54eb86b4785b-kube-api-access-gbp9r\") pod \"openstackclient\" (UID: \"88a411c7-cbd7-4eda-a5c3-54eb86b4785b\") " pod="openstack/openstackclient" Jan 31 04:06:15 crc kubenswrapper[4827]: I0131 04:06:15.794162 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/88a411c7-cbd7-4eda-a5c3-54eb86b4785b-openstack-config-secret\") pod \"openstackclient\" (UID: \"88a411c7-cbd7-4eda-a5c3-54eb86b4785b\") " pod="openstack/openstackclient" Jan 31 04:06:15 crc kubenswrapper[4827]: I0131 04:06:15.794213 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/88a411c7-cbd7-4eda-a5c3-54eb86b4785b-combined-ca-bundle\") pod \"openstackclient\" (UID: \"88a411c7-cbd7-4eda-a5c3-54eb86b4785b\") " pod="openstack/openstackclient" Jan 31 04:06:15 crc kubenswrapper[4827]: I0131 04:06:15.794270 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/88a411c7-cbd7-4eda-a5c3-54eb86b4785b-openstack-config\") pod \"openstackclient\" (UID: \"88a411c7-cbd7-4eda-a5c3-54eb86b4785b\") " pod="openstack/openstackclient" Jan 31 04:06:15 crc kubenswrapper[4827]: I0131 04:06:15.795137 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config\" (UniqueName: 
\"kubernetes.io/configmap/88a411c7-cbd7-4eda-a5c3-54eb86b4785b-openstack-config\") pod \"openstackclient\" (UID: \"88a411c7-cbd7-4eda-a5c3-54eb86b4785b\") " pod="openstack/openstackclient" Jan 31 04:06:15 crc kubenswrapper[4827]: I0131 04:06:15.799744 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/88a411c7-cbd7-4eda-a5c3-54eb86b4785b-openstack-config-secret\") pod \"openstackclient\" (UID: \"88a411c7-cbd7-4eda-a5c3-54eb86b4785b\") " pod="openstack/openstackclient" Jan 31 04:06:15 crc kubenswrapper[4827]: I0131 04:06:15.800070 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/88a411c7-cbd7-4eda-a5c3-54eb86b4785b-combined-ca-bundle\") pod \"openstackclient\" (UID: \"88a411c7-cbd7-4eda-a5c3-54eb86b4785b\") " pod="openstack/openstackclient" Jan 31 04:06:15 crc kubenswrapper[4827]: I0131 04:06:15.811432 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gbp9r\" (UniqueName: \"kubernetes.io/projected/88a411c7-cbd7-4eda-a5c3-54eb86b4785b-kube-api-access-gbp9r\") pod \"openstackclient\" (UID: \"88a411c7-cbd7-4eda-a5c3-54eb86b4785b\") " pod="openstack/openstackclient" Jan 31 04:06:15 crc kubenswrapper[4827]: I0131 04:06:15.918907 4827 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/openstackclient"] Jan 31 04:06:15 crc kubenswrapper[4827]: I0131 04:06:15.919581 4827 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstackclient" Jan 31 04:06:15 crc kubenswrapper[4827]: I0131 04:06:15.931156 4827 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/openstackclient"] Jan 31 04:06:16 crc kubenswrapper[4827]: I0131 04:06:16.018931 4827 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstackclient"] Jan 31 04:06:16 crc kubenswrapper[4827]: I0131 04:06:16.020437 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient" Jan 31 04:06:16 crc kubenswrapper[4827]: I0131 04:06:16.039108 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"] Jan 31 04:06:16 crc kubenswrapper[4827]: I0131 04:06:16.099701 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1e87305f-99c7-4ee4-9813-973bb0a259af-combined-ca-bundle\") pod \"openstackclient\" (UID: \"1e87305f-99c7-4ee4-9813-973bb0a259af\") " pod="openstack/openstackclient" Jan 31 04:06:16 crc kubenswrapper[4827]: I0131 04:06:16.099766 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/1e87305f-99c7-4ee4-9813-973bb0a259af-openstack-config\") pod \"openstackclient\" (UID: \"1e87305f-99c7-4ee4-9813-973bb0a259af\") " pod="openstack/openstackclient" Jan 31 04:06:16 crc kubenswrapper[4827]: I0131 04:06:16.099864 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g5xd9\" (UniqueName: \"kubernetes.io/projected/1e87305f-99c7-4ee4-9813-973bb0a259af-kube-api-access-g5xd9\") pod \"openstackclient\" (UID: \"1e87305f-99c7-4ee4-9813-973bb0a259af\") " pod="openstack/openstackclient" Jan 31 04:06:16 crc kubenswrapper[4827]: I0131 04:06:16.100769 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/1e87305f-99c7-4ee4-9813-973bb0a259af-openstack-config-secret\") pod \"openstackclient\" (UID: \"1e87305f-99c7-4ee4-9813-973bb0a259af\") " pod="openstack/openstackclient" Jan 31 04:06:16 crc kubenswrapper[4827]: E0131 04:06:16.106660 4827 log.go:32] "RunPodSandbox from runtime service failed" err=< Jan 31 04:06:16 crc kubenswrapper[4827]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_openstackclient_openstack_88a411c7-cbd7-4eda-a5c3-54eb86b4785b_0(55114068f15aefcc2919d1d73e3e53f032036f29aeeeeb12918de14174c5ecba): error adding pod openstack_openstackclient to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"55114068f15aefcc2919d1d73e3e53f032036f29aeeeeb12918de14174c5ecba" Netns:"/var/run/netns/84f403d1-4101-47bd-9e53-7118903f4b4a" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openstack;K8S_POD_NAME=openstackclient;K8S_POD_INFRA_CONTAINER_ID=55114068f15aefcc2919d1d73e3e53f032036f29aeeeeb12918de14174c5ecba;K8S_POD_UID=88a411c7-cbd7-4eda-a5c3-54eb86b4785b" Path:"" ERRORED: error configuring pod [openstack/openstackclient] networking: Multus: [openstack/openstackclient/88a411c7-cbd7-4eda-a5c3-54eb86b4785b]: expected pod UID "88a411c7-cbd7-4eda-a5c3-54eb86b4785b" but got "1e87305f-99c7-4ee4-9813-973bb0a259af" from Kube API Jan 31 04:06:16 crc kubenswrapper[4827]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Jan 31 04:06:16 crc kubenswrapper[4827]: > Jan 31 04:06:16 crc kubenswrapper[4827]: E0131 04:06:16.106720 4827 
kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err=< Jan 31 04:06:16 crc kubenswrapper[4827]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_openstackclient_openstack_88a411c7-cbd7-4eda-a5c3-54eb86b4785b_0(55114068f15aefcc2919d1d73e3e53f032036f29aeeeeb12918de14174c5ecba): error adding pod openstack_openstackclient to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"55114068f15aefcc2919d1d73e3e53f032036f29aeeeeb12918de14174c5ecba" Netns:"/var/run/netns/84f403d1-4101-47bd-9e53-7118903f4b4a" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openstack;K8S_POD_NAME=openstackclient;K8S_POD_INFRA_CONTAINER_ID=55114068f15aefcc2919d1d73e3e53f032036f29aeeeeb12918de14174c5ecba;K8S_POD_UID=88a411c7-cbd7-4eda-a5c3-54eb86b4785b" Path:"" ERRORED: error configuring pod [openstack/openstackclient] networking: Multus: [openstack/openstackclient/88a411c7-cbd7-4eda-a5c3-54eb86b4785b]: expected pod UID "88a411c7-cbd7-4eda-a5c3-54eb86b4785b" but got "1e87305f-99c7-4ee4-9813-973bb0a259af" from Kube API Jan 31 04:06:16 crc kubenswrapper[4827]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Jan 31 04:06:16 crc kubenswrapper[4827]: > pod="openstack/openstackclient" Jan 31 04:06:16 crc kubenswrapper[4827]: I0131 04:06:16.201993 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1e87305f-99c7-4ee4-9813-973bb0a259af-combined-ca-bundle\") pod \"openstackclient\" (UID: \"1e87305f-99c7-4ee4-9813-973bb0a259af\") " 
pod="openstack/openstackclient" Jan 31 04:06:16 crc kubenswrapper[4827]: I0131 04:06:16.202224 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/1e87305f-99c7-4ee4-9813-973bb0a259af-openstack-config\") pod \"openstackclient\" (UID: \"1e87305f-99c7-4ee4-9813-973bb0a259af\") " pod="openstack/openstackclient" Jan 31 04:06:16 crc kubenswrapper[4827]: I0131 04:06:16.202414 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g5xd9\" (UniqueName: \"kubernetes.io/projected/1e87305f-99c7-4ee4-9813-973bb0a259af-kube-api-access-g5xd9\") pod \"openstackclient\" (UID: \"1e87305f-99c7-4ee4-9813-973bb0a259af\") " pod="openstack/openstackclient" Jan 31 04:06:16 crc kubenswrapper[4827]: I0131 04:06:16.202446 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/1e87305f-99c7-4ee4-9813-973bb0a259af-openstack-config-secret\") pod \"openstackclient\" (UID: \"1e87305f-99c7-4ee4-9813-973bb0a259af\") " pod="openstack/openstackclient" Jan 31 04:06:16 crc kubenswrapper[4827]: I0131 04:06:16.203204 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/1e87305f-99c7-4ee4-9813-973bb0a259af-openstack-config\") pod \"openstackclient\" (UID: \"1e87305f-99c7-4ee4-9813-973bb0a259af\") " pod="openstack/openstackclient" Jan 31 04:06:16 crc kubenswrapper[4827]: I0131 04:06:16.208483 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1e87305f-99c7-4ee4-9813-973bb0a259af-combined-ca-bundle\") pod \"openstackclient\" (UID: \"1e87305f-99c7-4ee4-9813-973bb0a259af\") " pod="openstack/openstackclient" Jan 31 04:06:16 crc kubenswrapper[4827]: I0131 04:06:16.208748 4827 operation_generator.go:637] "MountVolume.SetUp succeeded 
for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/1e87305f-99c7-4ee4-9813-973bb0a259af-openstack-config-secret\") pod \"openstackclient\" (UID: \"1e87305f-99c7-4ee4-9813-973bb0a259af\") " pod="openstack/openstackclient" Jan 31 04:06:16 crc kubenswrapper[4827]: I0131 04:06:16.221400 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g5xd9\" (UniqueName: \"kubernetes.io/projected/1e87305f-99c7-4ee4-9813-973bb0a259af-kube-api-access-g5xd9\") pod \"openstackclient\" (UID: \"1e87305f-99c7-4ee4-9813-973bb0a259af\") " pod="openstack/openstackclient" Jan 31 04:06:16 crc kubenswrapper[4827]: I0131 04:06:16.374666 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient" Jan 31 04:06:16 crc kubenswrapper[4827]: I0131 04:06:16.740851 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient" Jan 31 04:06:16 crc kubenswrapper[4827]: I0131 04:06:16.750287 4827 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstackclient" Jan 31 04:06:16 crc kubenswrapper[4827]: I0131 04:06:16.756493 4827 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openstack/openstackclient" oldPodUID="88a411c7-cbd7-4eda-a5c3-54eb86b4785b" podUID="1e87305f-99c7-4ee4-9813-973bb0a259af" Jan 31 04:06:16 crc kubenswrapper[4827]: I0131 04:06:16.814562 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gbp9r\" (UniqueName: \"kubernetes.io/projected/88a411c7-cbd7-4eda-a5c3-54eb86b4785b-kube-api-access-gbp9r\") pod \"88a411c7-cbd7-4eda-a5c3-54eb86b4785b\" (UID: \"88a411c7-cbd7-4eda-a5c3-54eb86b4785b\") " Jan 31 04:06:16 crc kubenswrapper[4827]: I0131 04:06:16.814650 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/88a411c7-cbd7-4eda-a5c3-54eb86b4785b-combined-ca-bundle\") pod \"88a411c7-cbd7-4eda-a5c3-54eb86b4785b\" (UID: \"88a411c7-cbd7-4eda-a5c3-54eb86b4785b\") " Jan 31 04:06:16 crc kubenswrapper[4827]: I0131 04:06:16.814762 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/88a411c7-cbd7-4eda-a5c3-54eb86b4785b-openstack-config\") pod \"88a411c7-cbd7-4eda-a5c3-54eb86b4785b\" (UID: \"88a411c7-cbd7-4eda-a5c3-54eb86b4785b\") " Jan 31 04:06:16 crc kubenswrapper[4827]: I0131 04:06:16.814867 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/88a411c7-cbd7-4eda-a5c3-54eb86b4785b-openstack-config-secret\") pod \"88a411c7-cbd7-4eda-a5c3-54eb86b4785b\" (UID: \"88a411c7-cbd7-4eda-a5c3-54eb86b4785b\") " Jan 31 04:06:16 crc kubenswrapper[4827]: I0131 04:06:16.815704 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/configmap/88a411c7-cbd7-4eda-a5c3-54eb86b4785b-openstack-config" (OuterVolumeSpecName: "openstack-config") pod "88a411c7-cbd7-4eda-a5c3-54eb86b4785b" (UID: "88a411c7-cbd7-4eda-a5c3-54eb86b4785b"). InnerVolumeSpecName "openstack-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 04:06:16 crc kubenswrapper[4827]: I0131 04:06:16.823161 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/88a411c7-cbd7-4eda-a5c3-54eb86b4785b-openstack-config-secret" (OuterVolumeSpecName: "openstack-config-secret") pod "88a411c7-cbd7-4eda-a5c3-54eb86b4785b" (UID: "88a411c7-cbd7-4eda-a5c3-54eb86b4785b"). InnerVolumeSpecName "openstack-config-secret". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 04:06:16 crc kubenswrapper[4827]: I0131 04:06:16.823197 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/88a411c7-cbd7-4eda-a5c3-54eb86b4785b-kube-api-access-gbp9r" (OuterVolumeSpecName: "kube-api-access-gbp9r") pod "88a411c7-cbd7-4eda-a5c3-54eb86b4785b" (UID: "88a411c7-cbd7-4eda-a5c3-54eb86b4785b"). InnerVolumeSpecName "kube-api-access-gbp9r". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 04:06:16 crc kubenswrapper[4827]: I0131 04:06:16.823200 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/88a411c7-cbd7-4eda-a5c3-54eb86b4785b-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "88a411c7-cbd7-4eda-a5c3-54eb86b4785b" (UID: "88a411c7-cbd7-4eda-a5c3-54eb86b4785b"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 04:06:16 crc kubenswrapper[4827]: W0131 04:06:16.897409 4827 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1e87305f_99c7_4ee4_9813_973bb0a259af.slice/crio-9dfeea6c07e328b6386e19005e06bd5995df1b59907d4609856d775d2cfb7282 WatchSource:0}: Error finding container 9dfeea6c07e328b6386e19005e06bd5995df1b59907d4609856d775d2cfb7282: Status 404 returned error can't find the container with id 9dfeea6c07e328b6386e19005e06bd5995df1b59907d4609856d775d2cfb7282 Jan 31 04:06:16 crc kubenswrapper[4827]: I0131 04:06:16.899165 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"] Jan 31 04:06:16 crc kubenswrapper[4827]: I0131 04:06:16.917129 4827 reconciler_common.go:293] "Volume detached for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/88a411c7-cbd7-4eda-a5c3-54eb86b4785b-openstack-config\") on node \"crc\" DevicePath \"\"" Jan 31 04:06:16 crc kubenswrapper[4827]: I0131 04:06:16.917173 4827 reconciler_common.go:293] "Volume detached for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/88a411c7-cbd7-4eda-a5c3-54eb86b4785b-openstack-config-secret\") on node \"crc\" DevicePath \"\"" Jan 31 04:06:16 crc kubenswrapper[4827]: I0131 04:06:16.917190 4827 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gbp9r\" (UniqueName: \"kubernetes.io/projected/88a411c7-cbd7-4eda-a5c3-54eb86b4785b-kube-api-access-gbp9r\") on node \"crc\" DevicePath \"\"" Jan 31 04:06:16 crc kubenswrapper[4827]: I0131 04:06:16.917202 4827 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/88a411c7-cbd7-4eda-a5c3-54eb86b4785b-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 31 04:06:17 crc kubenswrapper[4827]: I0131 04:06:17.277656 4827 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" 
pod="openstack/cinder-scheduler-0" Jan 31 04:06:17 crc kubenswrapper[4827]: I0131 04:06:17.749122 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstackclient" event={"ID":"1e87305f-99c7-4ee4-9813-973bb0a259af","Type":"ContainerStarted","Data":"9dfeea6c07e328b6386e19005e06bd5995df1b59907d4609856d775d2cfb7282"} Jan 31 04:06:17 crc kubenswrapper[4827]: I0131 04:06:17.749162 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient" Jan 31 04:06:17 crc kubenswrapper[4827]: I0131 04:06:17.764591 4827 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openstack/openstackclient" oldPodUID="88a411c7-cbd7-4eda-a5c3-54eb86b4785b" podUID="1e87305f-99c7-4ee4-9813-973bb0a259af" Jan 31 04:06:18 crc kubenswrapper[4827]: I0131 04:06:18.127744 4827 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="88a411c7-cbd7-4eda-a5c3-54eb86b4785b" path="/var/lib/kubelet/pods/88a411c7-cbd7-4eda-a5c3-54eb86b4785b/volumes" Jan 31 04:06:24 crc kubenswrapper[4827]: I0131 04:06:24.601275 4827 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 31 04:06:24 crc kubenswrapper[4827]: I0131 04:06:24.602076 4827 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="9e0967c2-b905-47da-a4ce-f632cad7714f" containerName="ceilometer-central-agent" containerID="cri-o://816d71c6bc62b63f11dfd241fba24b42ef9312284feb160a67a4456c1092e7d0" gracePeriod=30 Jan 31 04:06:24 crc kubenswrapper[4827]: I0131 04:06:24.602202 4827 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="9e0967c2-b905-47da-a4ce-f632cad7714f" containerName="proxy-httpd" containerID="cri-o://f3ab2d09bb1e4d10b4e7cee1973c607fe01dea946a84dcaf9e5f9754253fced9" gracePeriod=30 Jan 31 04:06:24 crc kubenswrapper[4827]: I0131 04:06:24.602351 4827 kuberuntime_container.go:808] "Killing 
container with a grace period" pod="openstack/ceilometer-0" podUID="9e0967c2-b905-47da-a4ce-f632cad7714f" containerName="ceilometer-notification-agent" containerID="cri-o://f4315b5d8dd768f05ef444a627bd1b6fc8f5a58e18f578dc8b1bfa967a848ea8" gracePeriod=30 Jan 31 04:06:24 crc kubenswrapper[4827]: I0131 04:06:24.602588 4827 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="9e0967c2-b905-47da-a4ce-f632cad7714f" containerName="sg-core" containerID="cri-o://1fa65c35c8dab7f806c337af95e837bfc8b79c571793317c36c879667224f014" gracePeriod=30 Jan 31 04:06:24 crc kubenswrapper[4827]: I0131 04:06:24.618169 4827 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ceilometer-0" podUID="9e0967c2-b905-47da-a4ce-f632cad7714f" containerName="proxy-httpd" probeResult="failure" output="Get \"http://10.217.0.157:3000/\": EOF" Jan 31 04:06:24 crc kubenswrapper[4827]: I0131 04:06:24.831952 4827 generic.go:334] "Generic (PLEG): container finished" podID="9e0967c2-b905-47da-a4ce-f632cad7714f" containerID="f3ab2d09bb1e4d10b4e7cee1973c607fe01dea946a84dcaf9e5f9754253fced9" exitCode=0 Jan 31 04:06:24 crc kubenswrapper[4827]: I0131 04:06:24.831991 4827 generic.go:334] "Generic (PLEG): container finished" podID="9e0967c2-b905-47da-a4ce-f632cad7714f" containerID="1fa65c35c8dab7f806c337af95e837bfc8b79c571793317c36c879667224f014" exitCode=2 Jan 31 04:06:24 crc kubenswrapper[4827]: I0131 04:06:24.832045 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"9e0967c2-b905-47da-a4ce-f632cad7714f","Type":"ContainerDied","Data":"f3ab2d09bb1e4d10b4e7cee1973c607fe01dea946a84dcaf9e5f9754253fced9"} Jan 31 04:06:24 crc kubenswrapper[4827]: I0131 04:06:24.832131 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"9e0967c2-b905-47da-a4ce-f632cad7714f","Type":"ContainerDied","Data":"1fa65c35c8dab7f806c337af95e837bfc8b79c571793317c36c879667224f014"} Jan 31 
04:06:25 crc kubenswrapper[4827]: I0131 04:06:25.841377 4827 generic.go:334] "Generic (PLEG): container finished" podID="9e0967c2-b905-47da-a4ce-f632cad7714f" containerID="f4315b5d8dd768f05ef444a627bd1b6fc8f5a58e18f578dc8b1bfa967a848ea8" exitCode=0 Jan 31 04:06:25 crc kubenswrapper[4827]: I0131 04:06:25.841641 4827 generic.go:334] "Generic (PLEG): container finished" podID="9e0967c2-b905-47da-a4ce-f632cad7714f" containerID="816d71c6bc62b63f11dfd241fba24b42ef9312284feb160a67a4456c1092e7d0" exitCode=0 Jan 31 04:06:25 crc kubenswrapper[4827]: I0131 04:06:25.841440 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"9e0967c2-b905-47da-a4ce-f632cad7714f","Type":"ContainerDied","Data":"f4315b5d8dd768f05ef444a627bd1b6fc8f5a58e18f578dc8b1bfa967a848ea8"} Jan 31 04:06:25 crc kubenswrapper[4827]: I0131 04:06:25.841679 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"9e0967c2-b905-47da-a4ce-f632cad7714f","Type":"ContainerDied","Data":"816d71c6bc62b63f11dfd241fba24b42ef9312284feb160a67a4456c1092e7d0"} Jan 31 04:06:26 crc kubenswrapper[4827]: I0131 04:06:26.851866 4827 generic.go:334] "Generic (PLEG): container finished" podID="e9e0dca6-8ec6-4124-82e4-69eaac1da0af" containerID="b423735e4c610cc32e9de46840e30dd87ff2b9bda0c3f3a6f66bc3c347c79d8a" exitCode=137 Jan 31 04:06:26 crc kubenswrapper[4827]: I0131 04:06:26.852293 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"e9e0dca6-8ec6-4124-82e4-69eaac1da0af","Type":"ContainerDied","Data":"b423735e4c610cc32e9de46840e30dd87ff2b9bda0c3f3a6f66bc3c347c79d8a"} Jan 31 04:06:26 crc kubenswrapper[4827]: I0131 04:06:26.871397 4827 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Jan 31 04:06:26 crc kubenswrapper[4827]: I0131 04:06:26.998245 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e9e0dca6-8ec6-4124-82e4-69eaac1da0af-config-data\") pod \"e9e0dca6-8ec6-4124-82e4-69eaac1da0af\" (UID: \"e9e0dca6-8ec6-4124-82e4-69eaac1da0af\") " Jan 31 04:06:26 crc kubenswrapper[4827]: I0131 04:06:26.998367 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e9e0dca6-8ec6-4124-82e4-69eaac1da0af-logs\") pod \"e9e0dca6-8ec6-4124-82e4-69eaac1da0af\" (UID: \"e9e0dca6-8ec6-4124-82e4-69eaac1da0af\") " Jan 31 04:06:26 crc kubenswrapper[4827]: I0131 04:06:26.998418 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ddqz2\" (UniqueName: \"kubernetes.io/projected/e9e0dca6-8ec6-4124-82e4-69eaac1da0af-kube-api-access-ddqz2\") pod \"e9e0dca6-8ec6-4124-82e4-69eaac1da0af\" (UID: \"e9e0dca6-8ec6-4124-82e4-69eaac1da0af\") " Jan 31 04:06:26 crc kubenswrapper[4827]: I0131 04:06:26.998450 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e9e0dca6-8ec6-4124-82e4-69eaac1da0af-combined-ca-bundle\") pod \"e9e0dca6-8ec6-4124-82e4-69eaac1da0af\" (UID: \"e9e0dca6-8ec6-4124-82e4-69eaac1da0af\") " Jan 31 04:06:26 crc kubenswrapper[4827]: I0131 04:06:26.998512 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/e9e0dca6-8ec6-4124-82e4-69eaac1da0af-config-data-custom\") pod \"e9e0dca6-8ec6-4124-82e4-69eaac1da0af\" (UID: \"e9e0dca6-8ec6-4124-82e4-69eaac1da0af\") " Jan 31 04:06:26 crc kubenswrapper[4827]: I0131 04:06:26.998563 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/e9e0dca6-8ec6-4124-82e4-69eaac1da0af-scripts\") pod \"e9e0dca6-8ec6-4124-82e4-69eaac1da0af\" (UID: \"e9e0dca6-8ec6-4124-82e4-69eaac1da0af\") " Jan 31 04:06:26 crc kubenswrapper[4827]: I0131 04:06:26.998598 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/e9e0dca6-8ec6-4124-82e4-69eaac1da0af-etc-machine-id\") pod \"e9e0dca6-8ec6-4124-82e4-69eaac1da0af\" (UID: \"e9e0dca6-8ec6-4124-82e4-69eaac1da0af\") " Jan 31 04:06:26 crc kubenswrapper[4827]: I0131 04:06:26.998987 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e9e0dca6-8ec6-4124-82e4-69eaac1da0af-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "e9e0dca6-8ec6-4124-82e4-69eaac1da0af" (UID: "e9e0dca6-8ec6-4124-82e4-69eaac1da0af"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 31 04:06:27 crc kubenswrapper[4827]: I0131 04:06:26.999813 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e9e0dca6-8ec6-4124-82e4-69eaac1da0af-logs" (OuterVolumeSpecName: "logs") pod "e9e0dca6-8ec6-4124-82e4-69eaac1da0af" (UID: "e9e0dca6-8ec6-4124-82e4-69eaac1da0af"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 04:06:27 crc kubenswrapper[4827]: I0131 04:06:27.004523 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e9e0dca6-8ec6-4124-82e4-69eaac1da0af-scripts" (OuterVolumeSpecName: "scripts") pod "e9e0dca6-8ec6-4124-82e4-69eaac1da0af" (UID: "e9e0dca6-8ec6-4124-82e4-69eaac1da0af"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 04:06:27 crc kubenswrapper[4827]: I0131 04:06:27.011108 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e9e0dca6-8ec6-4124-82e4-69eaac1da0af-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "e9e0dca6-8ec6-4124-82e4-69eaac1da0af" (UID: "e9e0dca6-8ec6-4124-82e4-69eaac1da0af"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 04:06:27 crc kubenswrapper[4827]: I0131 04:06:27.011186 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e9e0dca6-8ec6-4124-82e4-69eaac1da0af-kube-api-access-ddqz2" (OuterVolumeSpecName: "kube-api-access-ddqz2") pod "e9e0dca6-8ec6-4124-82e4-69eaac1da0af" (UID: "e9e0dca6-8ec6-4124-82e4-69eaac1da0af"). InnerVolumeSpecName "kube-api-access-ddqz2". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 04:06:27 crc kubenswrapper[4827]: I0131 04:06:27.016181 4827 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 31 04:06:27 crc kubenswrapper[4827]: I0131 04:06:27.043486 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e9e0dca6-8ec6-4124-82e4-69eaac1da0af-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "e9e0dca6-8ec6-4124-82e4-69eaac1da0af" (UID: "e9e0dca6-8ec6-4124-82e4-69eaac1da0af"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 04:06:27 crc kubenswrapper[4827]: I0131 04:06:27.056637 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e9e0dca6-8ec6-4124-82e4-69eaac1da0af-config-data" (OuterVolumeSpecName: "config-data") pod "e9e0dca6-8ec6-4124-82e4-69eaac1da0af" (UID: "e9e0dca6-8ec6-4124-82e4-69eaac1da0af"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 04:06:27 crc kubenswrapper[4827]: I0131 04:06:27.100276 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9e0967c2-b905-47da-a4ce-f632cad7714f-combined-ca-bundle\") pod \"9e0967c2-b905-47da-a4ce-f632cad7714f\" (UID: \"9e0967c2-b905-47da-a4ce-f632cad7714f\") " Jan 31 04:06:27 crc kubenswrapper[4827]: I0131 04:06:27.100342 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9e0967c2-b905-47da-a4ce-f632cad7714f-log-httpd\") pod \"9e0967c2-b905-47da-a4ce-f632cad7714f\" (UID: \"9e0967c2-b905-47da-a4ce-f632cad7714f\") " Jan 31 04:06:27 crc kubenswrapper[4827]: I0131 04:06:27.100386 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9e0967c2-b905-47da-a4ce-f632cad7714f-config-data\") pod \"9e0967c2-b905-47da-a4ce-f632cad7714f\" (UID: \"9e0967c2-b905-47da-a4ce-f632cad7714f\") " Jan 31 04:06:27 crc kubenswrapper[4827]: I0131 04:06:27.100452 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/9e0967c2-b905-47da-a4ce-f632cad7714f-sg-core-conf-yaml\") pod \"9e0967c2-b905-47da-a4ce-f632cad7714f\" (UID: \"9e0967c2-b905-47da-a4ce-f632cad7714f\") " Jan 31 04:06:27 crc kubenswrapper[4827]: I0131 04:06:27.100478 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9e0967c2-b905-47da-a4ce-f632cad7714f-run-httpd\") pod \"9e0967c2-b905-47da-a4ce-f632cad7714f\" (UID: \"9e0967c2-b905-47da-a4ce-f632cad7714f\") " Jan 31 04:06:27 crc kubenswrapper[4827]: I0131 04:06:27.100518 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rlpr6\" (UniqueName: 
\"kubernetes.io/projected/9e0967c2-b905-47da-a4ce-f632cad7714f-kube-api-access-rlpr6\") pod \"9e0967c2-b905-47da-a4ce-f632cad7714f\" (UID: \"9e0967c2-b905-47da-a4ce-f632cad7714f\") " Jan 31 04:06:27 crc kubenswrapper[4827]: I0131 04:06:27.100576 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9e0967c2-b905-47da-a4ce-f632cad7714f-scripts\") pod \"9e0967c2-b905-47da-a4ce-f632cad7714f\" (UID: \"9e0967c2-b905-47da-a4ce-f632cad7714f\") " Jan 31 04:06:27 crc kubenswrapper[4827]: I0131 04:06:27.100938 4827 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ddqz2\" (UniqueName: \"kubernetes.io/projected/e9e0dca6-8ec6-4124-82e4-69eaac1da0af-kube-api-access-ddqz2\") on node \"crc\" DevicePath \"\"" Jan 31 04:06:27 crc kubenswrapper[4827]: I0131 04:06:27.100954 4827 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e9e0dca6-8ec6-4124-82e4-69eaac1da0af-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 31 04:06:27 crc kubenswrapper[4827]: I0131 04:06:27.100963 4827 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/e9e0dca6-8ec6-4124-82e4-69eaac1da0af-config-data-custom\") on node \"crc\" DevicePath \"\"" Jan 31 04:06:27 crc kubenswrapper[4827]: I0131 04:06:27.100972 4827 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e9e0dca6-8ec6-4124-82e4-69eaac1da0af-scripts\") on node \"crc\" DevicePath \"\"" Jan 31 04:06:27 crc kubenswrapper[4827]: I0131 04:06:27.100980 4827 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/e9e0dca6-8ec6-4124-82e4-69eaac1da0af-etc-machine-id\") on node \"crc\" DevicePath \"\"" Jan 31 04:06:27 crc kubenswrapper[4827]: I0131 04:06:27.100990 4827 reconciler_common.go:293] "Volume detached for volume 
\"config-data\" (UniqueName: \"kubernetes.io/secret/e9e0dca6-8ec6-4124-82e4-69eaac1da0af-config-data\") on node \"crc\" DevicePath \"\"" Jan 31 04:06:27 crc kubenswrapper[4827]: I0131 04:06:27.100998 4827 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e9e0dca6-8ec6-4124-82e4-69eaac1da0af-logs\") on node \"crc\" DevicePath \"\"" Jan 31 04:06:27 crc kubenswrapper[4827]: I0131 04:06:27.102001 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9e0967c2-b905-47da-a4ce-f632cad7714f-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "9e0967c2-b905-47da-a4ce-f632cad7714f" (UID: "9e0967c2-b905-47da-a4ce-f632cad7714f"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 04:06:27 crc kubenswrapper[4827]: I0131 04:06:27.102036 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9e0967c2-b905-47da-a4ce-f632cad7714f-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "9e0967c2-b905-47da-a4ce-f632cad7714f" (UID: "9e0967c2-b905-47da-a4ce-f632cad7714f"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 04:06:27 crc kubenswrapper[4827]: I0131 04:06:27.107001 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9e0967c2-b905-47da-a4ce-f632cad7714f-scripts" (OuterVolumeSpecName: "scripts") pod "9e0967c2-b905-47da-a4ce-f632cad7714f" (UID: "9e0967c2-b905-47da-a4ce-f632cad7714f"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 04:06:27 crc kubenswrapper[4827]: I0131 04:06:27.107023 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9e0967c2-b905-47da-a4ce-f632cad7714f-kube-api-access-rlpr6" (OuterVolumeSpecName: "kube-api-access-rlpr6") pod "9e0967c2-b905-47da-a4ce-f632cad7714f" (UID: "9e0967c2-b905-47da-a4ce-f632cad7714f"). InnerVolumeSpecName "kube-api-access-rlpr6". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 04:06:27 crc kubenswrapper[4827]: I0131 04:06:27.125482 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9e0967c2-b905-47da-a4ce-f632cad7714f-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "9e0967c2-b905-47da-a4ce-f632cad7714f" (UID: "9e0967c2-b905-47da-a4ce-f632cad7714f"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 04:06:27 crc kubenswrapper[4827]: I0131 04:06:27.176082 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9e0967c2-b905-47da-a4ce-f632cad7714f-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "9e0967c2-b905-47da-a4ce-f632cad7714f" (UID: "9e0967c2-b905-47da-a4ce-f632cad7714f"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 04:06:27 crc kubenswrapper[4827]: I0131 04:06:27.184572 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9e0967c2-b905-47da-a4ce-f632cad7714f-config-data" (OuterVolumeSpecName: "config-data") pod "9e0967c2-b905-47da-a4ce-f632cad7714f" (UID: "9e0967c2-b905-47da-a4ce-f632cad7714f"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 04:06:27 crc kubenswrapper[4827]: I0131 04:06:27.202345 4827 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9e0967c2-b905-47da-a4ce-f632cad7714f-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 31 04:06:27 crc kubenswrapper[4827]: I0131 04:06:27.202374 4827 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9e0967c2-b905-47da-a4ce-f632cad7714f-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 31 04:06:27 crc kubenswrapper[4827]: I0131 04:06:27.202385 4827 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9e0967c2-b905-47da-a4ce-f632cad7714f-config-data\") on node \"crc\" DevicePath \"\"" Jan 31 04:06:27 crc kubenswrapper[4827]: I0131 04:06:27.202393 4827 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/9e0967c2-b905-47da-a4ce-f632cad7714f-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Jan 31 04:06:27 crc kubenswrapper[4827]: I0131 04:06:27.202401 4827 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9e0967c2-b905-47da-a4ce-f632cad7714f-run-httpd\") on node \"crc\" DevicePath \"\"" Jan 31 04:06:27 crc kubenswrapper[4827]: I0131 04:06:27.202410 4827 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rlpr6\" (UniqueName: \"kubernetes.io/projected/9e0967c2-b905-47da-a4ce-f632cad7714f-kube-api-access-rlpr6\") on node \"crc\" DevicePath \"\"" Jan 31 04:06:27 crc kubenswrapper[4827]: I0131 04:06:27.202420 4827 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9e0967c2-b905-47da-a4ce-f632cad7714f-scripts\") on node \"crc\" DevicePath \"\"" Jan 31 04:06:27 crc kubenswrapper[4827]: I0131 04:06:27.865702 4827 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"e9e0dca6-8ec6-4124-82e4-69eaac1da0af","Type":"ContainerDied","Data":"d55f1f29ab725ea1d019d73110a36e4bec01bdaa2c3287a3fe00b0f1be517751"} Jan 31 04:06:27 crc kubenswrapper[4827]: I0131 04:06:27.866055 4827 scope.go:117] "RemoveContainer" containerID="b423735e4c610cc32e9de46840e30dd87ff2b9bda0c3f3a6f66bc3c347c79d8a" Jan 31 04:06:27 crc kubenswrapper[4827]: I0131 04:06:27.865749 4827 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Jan 31 04:06:27 crc kubenswrapper[4827]: I0131 04:06:27.873793 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"9e0967c2-b905-47da-a4ce-f632cad7714f","Type":"ContainerDied","Data":"d5b3390b4beed5007bc2436fa7fa1eaa65b58a4754d6c58d619fa8ac2e0b6699"} Jan 31 04:06:27 crc kubenswrapper[4827]: I0131 04:06:27.873925 4827 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 31 04:06:27 crc kubenswrapper[4827]: I0131 04:06:27.876401 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstackclient" event={"ID":"1e87305f-99c7-4ee4-9813-973bb0a259af","Type":"ContainerStarted","Data":"f9d8b7240dd1dcd8bf86c224ed14641c44af376069bc06359f40540f74f2befc"} Jan 31 04:06:27 crc kubenswrapper[4827]: I0131 04:06:27.908926 4827 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstackclient" podStartSLOduration=3.187452205 podStartE2EDuration="12.908891032s" podCreationTimestamp="2026-01-31 04:06:15 +0000 UTC" firstStartedPulling="2026-01-31 04:06:16.899528252 +0000 UTC m=+1169.586608701" lastFinishedPulling="2026-01-31 04:06:26.620967079 +0000 UTC m=+1179.308047528" observedRunningTime="2026-01-31 04:06:27.90000654 +0000 UTC m=+1180.587087009" watchObservedRunningTime="2026-01-31 04:06:27.908891032 +0000 UTC m=+1180.595971481" Jan 31 04:06:27 crc kubenswrapper[4827]: I0131 
04:06:27.915139 4827 scope.go:117] "RemoveContainer" containerID="97995bd0ff94c6e7db6f3bdbafdf9a64eda66d05a75b00ce18a7c338d12b95c5" Jan 31 04:06:27 crc kubenswrapper[4827]: I0131 04:06:27.929840 4827 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"] Jan 31 04:06:27 crc kubenswrapper[4827]: I0131 04:06:27.951267 4827 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-api-0"] Jan 31 04:06:27 crc kubenswrapper[4827]: I0131 04:06:27.973405 4827 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 31 04:06:27 crc kubenswrapper[4827]: I0131 04:06:27.998265 4827 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Jan 31 04:06:28 crc kubenswrapper[4827]: I0131 04:06:28.006347 4827 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-api-0"] Jan 31 04:06:28 crc kubenswrapper[4827]: E0131 04:06:28.006720 4827 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9e0967c2-b905-47da-a4ce-f632cad7714f" containerName="proxy-httpd" Jan 31 04:06:28 crc kubenswrapper[4827]: I0131 04:06:28.006742 4827 state_mem.go:107] "Deleted CPUSet assignment" podUID="9e0967c2-b905-47da-a4ce-f632cad7714f" containerName="proxy-httpd" Jan 31 04:06:28 crc kubenswrapper[4827]: E0131 04:06:28.006760 4827 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9e0967c2-b905-47da-a4ce-f632cad7714f" containerName="ceilometer-central-agent" Jan 31 04:06:28 crc kubenswrapper[4827]: I0131 04:06:28.006769 4827 state_mem.go:107] "Deleted CPUSet assignment" podUID="9e0967c2-b905-47da-a4ce-f632cad7714f" containerName="ceilometer-central-agent" Jan 31 04:06:28 crc kubenswrapper[4827]: E0131 04:06:28.006784 4827 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e9e0dca6-8ec6-4124-82e4-69eaac1da0af" containerName="cinder-api" Jan 31 04:06:28 crc kubenswrapper[4827]: I0131 04:06:28.006792 4827 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="e9e0dca6-8ec6-4124-82e4-69eaac1da0af" containerName="cinder-api" Jan 31 04:06:28 crc kubenswrapper[4827]: E0131 04:06:28.006812 4827 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e9e0dca6-8ec6-4124-82e4-69eaac1da0af" containerName="cinder-api-log" Jan 31 04:06:28 crc kubenswrapper[4827]: I0131 04:06:28.006818 4827 state_mem.go:107] "Deleted CPUSet assignment" podUID="e9e0dca6-8ec6-4124-82e4-69eaac1da0af" containerName="cinder-api-log" Jan 31 04:06:28 crc kubenswrapper[4827]: E0131 04:06:28.006836 4827 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9e0967c2-b905-47da-a4ce-f632cad7714f" containerName="ceilometer-notification-agent" Jan 31 04:06:28 crc kubenswrapper[4827]: I0131 04:06:28.006843 4827 state_mem.go:107] "Deleted CPUSet assignment" podUID="9e0967c2-b905-47da-a4ce-f632cad7714f" containerName="ceilometer-notification-agent" Jan 31 04:06:28 crc kubenswrapper[4827]: E0131 04:06:28.006852 4827 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9e0967c2-b905-47da-a4ce-f632cad7714f" containerName="sg-core" Jan 31 04:06:28 crc kubenswrapper[4827]: I0131 04:06:28.006859 4827 state_mem.go:107] "Deleted CPUSet assignment" podUID="9e0967c2-b905-47da-a4ce-f632cad7714f" containerName="sg-core" Jan 31 04:06:28 crc kubenswrapper[4827]: I0131 04:06:28.007058 4827 memory_manager.go:354] "RemoveStaleState removing state" podUID="9e0967c2-b905-47da-a4ce-f632cad7714f" containerName="ceilometer-notification-agent" Jan 31 04:06:28 crc kubenswrapper[4827]: I0131 04:06:28.007114 4827 memory_manager.go:354] "RemoveStaleState removing state" podUID="e9e0dca6-8ec6-4124-82e4-69eaac1da0af" containerName="cinder-api" Jan 31 04:06:28 crc kubenswrapper[4827]: I0131 04:06:28.007150 4827 memory_manager.go:354] "RemoveStaleState removing state" podUID="e9e0dca6-8ec6-4124-82e4-69eaac1da0af" containerName="cinder-api-log" Jan 31 04:06:28 crc kubenswrapper[4827]: I0131 04:06:28.007161 4827 memory_manager.go:354] "RemoveStaleState 
removing state" podUID="9e0967c2-b905-47da-a4ce-f632cad7714f" containerName="sg-core" Jan 31 04:06:28 crc kubenswrapper[4827]: I0131 04:06:28.007171 4827 memory_manager.go:354] "RemoveStaleState removing state" podUID="9e0967c2-b905-47da-a4ce-f632cad7714f" containerName="proxy-httpd" Jan 31 04:06:28 crc kubenswrapper[4827]: I0131 04:06:28.007178 4827 memory_manager.go:354] "RemoveStaleState removing state" podUID="9e0967c2-b905-47da-a4ce-f632cad7714f" containerName="ceilometer-central-agent" Jan 31 04:06:28 crc kubenswrapper[4827]: I0131 04:06:28.008741 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Jan 31 04:06:28 crc kubenswrapper[4827]: I0131 04:06:28.010378 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-cinder-internal-svc" Jan 31 04:06:28 crc kubenswrapper[4827]: I0131 04:06:28.011561 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-api-config-data" Jan 31 04:06:28 crc kubenswrapper[4827]: I0131 04:06:28.011825 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-cinder-public-svc" Jan 31 04:06:28 crc kubenswrapper[4827]: I0131 04:06:28.014064 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Jan 31 04:06:28 crc kubenswrapper[4827]: I0131 04:06:28.018205 4827 scope.go:117] "RemoveContainer" containerID="f3ab2d09bb1e4d10b4e7cee1973c607fe01dea946a84dcaf9e5f9754253fced9" Jan 31 04:06:28 crc kubenswrapper[4827]: I0131 04:06:28.020509 4827 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Jan 31 04:06:28 crc kubenswrapper[4827]: I0131 04:06:28.023150 4827 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 31 04:06:28 crc kubenswrapper[4827]: I0131 04:06:28.027053 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Jan 31 04:06:28 crc kubenswrapper[4827]: I0131 04:06:28.027561 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Jan 31 04:06:28 crc kubenswrapper[4827]: I0131 04:06:28.029027 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 31 04:06:28 crc kubenswrapper[4827]: I0131 04:06:28.043643 4827 scope.go:117] "RemoveContainer" containerID="1fa65c35c8dab7f806c337af95e837bfc8b79c571793317c36c879667224f014" Jan 31 04:06:28 crc kubenswrapper[4827]: I0131 04:06:28.064769 4827 scope.go:117] "RemoveContainer" containerID="f4315b5d8dd768f05ef444a627bd1b6fc8f5a58e18f578dc8b1bfa967a848ea8" Jan 31 04:06:28 crc kubenswrapper[4827]: I0131 04:06:28.093749 4827 scope.go:117] "RemoveContainer" containerID="816d71c6bc62b63f11dfd241fba24b42ef9312284feb160a67a4456c1092e7d0" Jan 31 04:06:28 crc kubenswrapper[4827]: I0131 04:06:28.127186 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/5dc884e0-8eda-432c-a19f-2f1f4202ed2f-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"5dc884e0-8eda-432c-a19f-2f1f4202ed2f\") " pod="openstack/cinder-api-0" Jan 31 04:06:28 crc kubenswrapper[4827]: I0131 04:06:28.127246 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2a0f43cf-149a-40c5-a67b-c34d251cb738-scripts\") pod \"ceilometer-0\" (UID: \"2a0f43cf-149a-40c5-a67b-c34d251cb738\") " pod="openstack/ceilometer-0" Jan 31 04:06:28 crc kubenswrapper[4827]: I0131 04:06:28.127279 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" 
(UniqueName: \"kubernetes.io/secret/5dc884e0-8eda-432c-a19f-2f1f4202ed2f-config-data\") pod \"cinder-api-0\" (UID: \"5dc884e0-8eda-432c-a19f-2f1f4202ed2f\") " pod="openstack/cinder-api-0" Jan 31 04:06:28 crc kubenswrapper[4827]: I0131 04:06:28.127310 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/5dc884e0-8eda-432c-a19f-2f1f4202ed2f-etc-machine-id\") pod \"cinder-api-0\" (UID: \"5dc884e0-8eda-432c-a19f-2f1f4202ed2f\") " pod="openstack/cinder-api-0" Jan 31 04:06:28 crc kubenswrapper[4827]: I0131 04:06:28.127341 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/2a0f43cf-149a-40c5-a67b-c34d251cb738-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"2a0f43cf-149a-40c5-a67b-c34d251cb738\") " pod="openstack/ceilometer-0" Jan 31 04:06:28 crc kubenswrapper[4827]: I0131 04:06:28.127369 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2a0f43cf-149a-40c5-a67b-c34d251cb738-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"2a0f43cf-149a-40c5-a67b-c34d251cb738\") " pod="openstack/ceilometer-0" Jan 31 04:06:28 crc kubenswrapper[4827]: I0131 04:06:28.127407 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5ctjj\" (UniqueName: \"kubernetes.io/projected/2a0f43cf-149a-40c5-a67b-c34d251cb738-kube-api-access-5ctjj\") pod \"ceilometer-0\" (UID: \"2a0f43cf-149a-40c5-a67b-c34d251cb738\") " pod="openstack/ceilometer-0" Jan 31 04:06:28 crc kubenswrapper[4827]: I0131 04:06:28.127445 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5dc884e0-8eda-432c-a19f-2f1f4202ed2f-scripts\") pod \"cinder-api-0\" (UID: 
\"5dc884e0-8eda-432c-a19f-2f1f4202ed2f\") " pod="openstack/cinder-api-0" Jan 31 04:06:28 crc kubenswrapper[4827]: I0131 04:06:28.127478 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/5dc884e0-8eda-432c-a19f-2f1f4202ed2f-public-tls-certs\") pod \"cinder-api-0\" (UID: \"5dc884e0-8eda-432c-a19f-2f1f4202ed2f\") " pod="openstack/cinder-api-0" Jan 31 04:06:28 crc kubenswrapper[4827]: I0131 04:06:28.127515 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5dc884e0-8eda-432c-a19f-2f1f4202ed2f-logs\") pod \"cinder-api-0\" (UID: \"5dc884e0-8eda-432c-a19f-2f1f4202ed2f\") " pod="openstack/cinder-api-0" Jan 31 04:06:28 crc kubenswrapper[4827]: I0131 04:06:28.127539 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2a0f43cf-149a-40c5-a67b-c34d251cb738-log-httpd\") pod \"ceilometer-0\" (UID: \"2a0f43cf-149a-40c5-a67b-c34d251cb738\") " pod="openstack/ceilometer-0" Jan 31 04:06:28 crc kubenswrapper[4827]: I0131 04:06:28.127562 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2a0f43cf-149a-40c5-a67b-c34d251cb738-run-httpd\") pod \"ceilometer-0\" (UID: \"2a0f43cf-149a-40c5-a67b-c34d251cb738\") " pod="openstack/ceilometer-0" Jan 31 04:06:28 crc kubenswrapper[4827]: I0131 04:06:28.127592 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/5dc884e0-8eda-432c-a19f-2f1f4202ed2f-config-data-custom\") pod \"cinder-api-0\" (UID: \"5dc884e0-8eda-432c-a19f-2f1f4202ed2f\") " pod="openstack/cinder-api-0" Jan 31 04:06:28 crc kubenswrapper[4827]: I0131 04:06:28.127628 4827 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5dc884e0-8eda-432c-a19f-2f1f4202ed2f-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"5dc884e0-8eda-432c-a19f-2f1f4202ed2f\") " pod="openstack/cinder-api-0" Jan 31 04:06:28 crc kubenswrapper[4827]: I0131 04:06:28.127649 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2a0f43cf-149a-40c5-a67b-c34d251cb738-config-data\") pod \"ceilometer-0\" (UID: \"2a0f43cf-149a-40c5-a67b-c34d251cb738\") " pod="openstack/ceilometer-0" Jan 31 04:06:28 crc kubenswrapper[4827]: I0131 04:06:28.132419 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4gjn8\" (UniqueName: \"kubernetes.io/projected/5dc884e0-8eda-432c-a19f-2f1f4202ed2f-kube-api-access-4gjn8\") pod \"cinder-api-0\" (UID: \"5dc884e0-8eda-432c-a19f-2f1f4202ed2f\") " pod="openstack/cinder-api-0" Jan 31 04:06:28 crc kubenswrapper[4827]: I0131 04:06:28.137219 4827 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9e0967c2-b905-47da-a4ce-f632cad7714f" path="/var/lib/kubelet/pods/9e0967c2-b905-47da-a4ce-f632cad7714f/volumes" Jan 31 04:06:28 crc kubenswrapper[4827]: I0131 04:06:28.138651 4827 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e9e0dca6-8ec6-4124-82e4-69eaac1da0af" path="/var/lib/kubelet/pods/e9e0dca6-8ec6-4124-82e4-69eaac1da0af/volumes" Jan 31 04:06:28 crc kubenswrapper[4827]: I0131 04:06:28.203366 4827 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 31 04:06:28 crc kubenswrapper[4827]: E0131 04:06:28.204687 4827 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[combined-ca-bundle config-data kube-api-access-5ctjj log-httpd run-httpd scripts sg-core-conf-yaml], unattached volumes=[], failed to 
process volumes=[]: context canceled" pod="openstack/ceilometer-0" podUID="2a0f43cf-149a-40c5-a67b-c34d251cb738" Jan 31 04:06:28 crc kubenswrapper[4827]: I0131 04:06:28.234917 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/5dc884e0-8eda-432c-a19f-2f1f4202ed2f-config-data-custom\") pod \"cinder-api-0\" (UID: \"5dc884e0-8eda-432c-a19f-2f1f4202ed2f\") " pod="openstack/cinder-api-0" Jan 31 04:06:28 crc kubenswrapper[4827]: I0131 04:06:28.235143 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5dc884e0-8eda-432c-a19f-2f1f4202ed2f-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"5dc884e0-8eda-432c-a19f-2f1f4202ed2f\") " pod="openstack/cinder-api-0" Jan 31 04:06:28 crc kubenswrapper[4827]: I0131 04:06:28.235802 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2a0f43cf-149a-40c5-a67b-c34d251cb738-config-data\") pod \"ceilometer-0\" (UID: \"2a0f43cf-149a-40c5-a67b-c34d251cb738\") " pod="openstack/ceilometer-0" Jan 31 04:06:28 crc kubenswrapper[4827]: I0131 04:06:28.235955 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4gjn8\" (UniqueName: \"kubernetes.io/projected/5dc884e0-8eda-432c-a19f-2f1f4202ed2f-kube-api-access-4gjn8\") pod \"cinder-api-0\" (UID: \"5dc884e0-8eda-432c-a19f-2f1f4202ed2f\") " pod="openstack/cinder-api-0" Jan 31 04:06:28 crc kubenswrapper[4827]: I0131 04:06:28.236685 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/5dc884e0-8eda-432c-a19f-2f1f4202ed2f-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"5dc884e0-8eda-432c-a19f-2f1f4202ed2f\") " pod="openstack/cinder-api-0" Jan 31 04:06:28 crc kubenswrapper[4827]: I0131 04:06:28.236776 4827 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2a0f43cf-149a-40c5-a67b-c34d251cb738-scripts\") pod \"ceilometer-0\" (UID: \"2a0f43cf-149a-40c5-a67b-c34d251cb738\") " pod="openstack/ceilometer-0" Jan 31 04:06:28 crc kubenswrapper[4827]: I0131 04:06:28.236858 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5dc884e0-8eda-432c-a19f-2f1f4202ed2f-config-data\") pod \"cinder-api-0\" (UID: \"5dc884e0-8eda-432c-a19f-2f1f4202ed2f\") " pod="openstack/cinder-api-0" Jan 31 04:06:28 crc kubenswrapper[4827]: I0131 04:06:28.236971 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/5dc884e0-8eda-432c-a19f-2f1f4202ed2f-etc-machine-id\") pod \"cinder-api-0\" (UID: \"5dc884e0-8eda-432c-a19f-2f1f4202ed2f\") " pod="openstack/cinder-api-0" Jan 31 04:06:28 crc kubenswrapper[4827]: I0131 04:06:28.242388 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/2a0f43cf-149a-40c5-a67b-c34d251cb738-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"2a0f43cf-149a-40c5-a67b-c34d251cb738\") " pod="openstack/ceilometer-0" Jan 31 04:06:28 crc kubenswrapper[4827]: I0131 04:06:28.239600 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2a0f43cf-149a-40c5-a67b-c34d251cb738-config-data\") pod \"ceilometer-0\" (UID: \"2a0f43cf-149a-40c5-a67b-c34d251cb738\") " pod="openstack/ceilometer-0" Jan 31 04:06:28 crc kubenswrapper[4827]: I0131 04:06:28.239905 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/5dc884e0-8eda-432c-a19f-2f1f4202ed2f-etc-machine-id\") pod \"cinder-api-0\" (UID: \"5dc884e0-8eda-432c-a19f-2f1f4202ed2f\") " 
pod="openstack/cinder-api-0" Jan 31 04:06:28 crc kubenswrapper[4827]: I0131 04:06:28.241966 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5dc884e0-8eda-432c-a19f-2f1f4202ed2f-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"5dc884e0-8eda-432c-a19f-2f1f4202ed2f\") " pod="openstack/cinder-api-0" Jan 31 04:06:28 crc kubenswrapper[4827]: I0131 04:06:28.241988 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/5dc884e0-8eda-432c-a19f-2f1f4202ed2f-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"5dc884e0-8eda-432c-a19f-2f1f4202ed2f\") " pod="openstack/cinder-api-0" Jan 31 04:06:28 crc kubenswrapper[4827]: I0131 04:06:28.242603 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2a0f43cf-149a-40c5-a67b-c34d251cb738-scripts\") pod \"ceilometer-0\" (UID: \"2a0f43cf-149a-40c5-a67b-c34d251cb738\") " pod="openstack/ceilometer-0" Jan 31 04:06:28 crc kubenswrapper[4827]: I0131 04:06:28.239681 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/5dc884e0-8eda-432c-a19f-2f1f4202ed2f-config-data-custom\") pod \"cinder-api-0\" (UID: \"5dc884e0-8eda-432c-a19f-2f1f4202ed2f\") " pod="openstack/cinder-api-0" Jan 31 04:06:28 crc kubenswrapper[4827]: I0131 04:06:28.242487 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2a0f43cf-149a-40c5-a67b-c34d251cb738-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"2a0f43cf-149a-40c5-a67b-c34d251cb738\") " pod="openstack/ceilometer-0" Jan 31 04:06:28 crc kubenswrapper[4827]: I0131 04:06:28.243185 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5ctjj\" (UniqueName: 
\"kubernetes.io/projected/2a0f43cf-149a-40c5-a67b-c34d251cb738-kube-api-access-5ctjj\") pod \"ceilometer-0\" (UID: \"2a0f43cf-149a-40c5-a67b-c34d251cb738\") " pod="openstack/ceilometer-0" Jan 31 04:06:28 crc kubenswrapper[4827]: I0131 04:06:28.244205 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5dc884e0-8eda-432c-a19f-2f1f4202ed2f-scripts\") pod \"cinder-api-0\" (UID: \"5dc884e0-8eda-432c-a19f-2f1f4202ed2f\") " pod="openstack/cinder-api-0" Jan 31 04:06:28 crc kubenswrapper[4827]: I0131 04:06:28.244421 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/5dc884e0-8eda-432c-a19f-2f1f4202ed2f-public-tls-certs\") pod \"cinder-api-0\" (UID: \"5dc884e0-8eda-432c-a19f-2f1f4202ed2f\") " pod="openstack/cinder-api-0" Jan 31 04:06:28 crc kubenswrapper[4827]: I0131 04:06:28.244578 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5dc884e0-8eda-432c-a19f-2f1f4202ed2f-logs\") pod \"cinder-api-0\" (UID: \"5dc884e0-8eda-432c-a19f-2f1f4202ed2f\") " pod="openstack/cinder-api-0" Jan 31 04:06:28 crc kubenswrapper[4827]: I0131 04:06:28.244656 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2a0f43cf-149a-40c5-a67b-c34d251cb738-log-httpd\") pod \"ceilometer-0\" (UID: \"2a0f43cf-149a-40c5-a67b-c34d251cb738\") " pod="openstack/ceilometer-0" Jan 31 04:06:28 crc kubenswrapper[4827]: I0131 04:06:28.244749 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2a0f43cf-149a-40c5-a67b-c34d251cb738-run-httpd\") pod \"ceilometer-0\" (UID: \"2a0f43cf-149a-40c5-a67b-c34d251cb738\") " pod="openstack/ceilometer-0" Jan 31 04:06:28 crc kubenswrapper[4827]: I0131 04:06:28.245307 4827 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2a0f43cf-149a-40c5-a67b-c34d251cb738-run-httpd\") pod \"ceilometer-0\" (UID: \"2a0f43cf-149a-40c5-a67b-c34d251cb738\") " pod="openstack/ceilometer-0" Jan 31 04:06:28 crc kubenswrapper[4827]: I0131 04:06:28.245575 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5dc884e0-8eda-432c-a19f-2f1f4202ed2f-logs\") pod \"cinder-api-0\" (UID: \"5dc884e0-8eda-432c-a19f-2f1f4202ed2f\") " pod="openstack/cinder-api-0" Jan 31 04:06:28 crc kubenswrapper[4827]: I0131 04:06:28.245626 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5dc884e0-8eda-432c-a19f-2f1f4202ed2f-config-data\") pod \"cinder-api-0\" (UID: \"5dc884e0-8eda-432c-a19f-2f1f4202ed2f\") " pod="openstack/cinder-api-0" Jan 31 04:06:28 crc kubenswrapper[4827]: I0131 04:06:28.245776 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2a0f43cf-149a-40c5-a67b-c34d251cb738-log-httpd\") pod \"ceilometer-0\" (UID: \"2a0f43cf-149a-40c5-a67b-c34d251cb738\") " pod="openstack/ceilometer-0" Jan 31 04:06:28 crc kubenswrapper[4827]: I0131 04:06:28.246927 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5dc884e0-8eda-432c-a19f-2f1f4202ed2f-scripts\") pod \"cinder-api-0\" (UID: \"5dc884e0-8eda-432c-a19f-2f1f4202ed2f\") " pod="openstack/cinder-api-0" Jan 31 04:06:28 crc kubenswrapper[4827]: I0131 04:06:28.247497 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/5dc884e0-8eda-432c-a19f-2f1f4202ed2f-public-tls-certs\") pod \"cinder-api-0\" (UID: \"5dc884e0-8eda-432c-a19f-2f1f4202ed2f\") " pod="openstack/cinder-api-0" Jan 31 04:06:28 crc 
kubenswrapper[4827]: I0131 04:06:28.248473 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2a0f43cf-149a-40c5-a67b-c34d251cb738-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"2a0f43cf-149a-40c5-a67b-c34d251cb738\") " pod="openstack/ceilometer-0" Jan 31 04:06:28 crc kubenswrapper[4827]: I0131 04:06:28.250047 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/2a0f43cf-149a-40c5-a67b-c34d251cb738-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"2a0f43cf-149a-40c5-a67b-c34d251cb738\") " pod="openstack/ceilometer-0" Jan 31 04:06:28 crc kubenswrapper[4827]: I0131 04:06:28.251723 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4gjn8\" (UniqueName: \"kubernetes.io/projected/5dc884e0-8eda-432c-a19f-2f1f4202ed2f-kube-api-access-4gjn8\") pod \"cinder-api-0\" (UID: \"5dc884e0-8eda-432c-a19f-2f1f4202ed2f\") " pod="openstack/cinder-api-0" Jan 31 04:06:28 crc kubenswrapper[4827]: I0131 04:06:28.256335 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5ctjj\" (UniqueName: \"kubernetes.io/projected/2a0f43cf-149a-40c5-a67b-c34d251cb738-kube-api-access-5ctjj\") pod \"ceilometer-0\" (UID: \"2a0f43cf-149a-40c5-a67b-c34d251cb738\") " pod="openstack/ceilometer-0" Jan 31 04:06:28 crc kubenswrapper[4827]: I0131 04:06:28.345282 4827 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Jan 31 04:06:28 crc kubenswrapper[4827]: I0131 04:06:28.772753 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Jan 31 04:06:28 crc kubenswrapper[4827]: I0131 04:06:28.889022 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"5dc884e0-8eda-432c-a19f-2f1f4202ed2f","Type":"ContainerStarted","Data":"5de30b416a685b14f840dcc76107370ca8113a1ebd8c0321b2e34f74108e225b"} Jan 31 04:06:28 crc kubenswrapper[4827]: I0131 04:06:28.892537 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 31 04:06:28 crc kubenswrapper[4827]: I0131 04:06:28.901838 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 31 04:06:28 crc kubenswrapper[4827]: I0131 04:06:28.957630 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5ctjj\" (UniqueName: \"kubernetes.io/projected/2a0f43cf-149a-40c5-a67b-c34d251cb738-kube-api-access-5ctjj\") pod \"2a0f43cf-149a-40c5-a67b-c34d251cb738\" (UID: \"2a0f43cf-149a-40c5-a67b-c34d251cb738\") " Jan 31 04:06:28 crc kubenswrapper[4827]: I0131 04:06:28.957984 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2a0f43cf-149a-40c5-a67b-c34d251cb738-log-httpd\") pod \"2a0f43cf-149a-40c5-a67b-c34d251cb738\" (UID: \"2a0f43cf-149a-40c5-a67b-c34d251cb738\") " Jan 31 04:06:28 crc kubenswrapper[4827]: I0131 04:06:28.958067 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2a0f43cf-149a-40c5-a67b-c34d251cb738-config-data\") pod \"2a0f43cf-149a-40c5-a67b-c34d251cb738\" (UID: \"2a0f43cf-149a-40c5-a67b-c34d251cb738\") " Jan 31 04:06:28 crc kubenswrapper[4827]: I0131 04:06:28.958142 4827 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/2a0f43cf-149a-40c5-a67b-c34d251cb738-sg-core-conf-yaml\") pod \"2a0f43cf-149a-40c5-a67b-c34d251cb738\" (UID: \"2a0f43cf-149a-40c5-a67b-c34d251cb738\") " Jan 31 04:06:28 crc kubenswrapper[4827]: I0131 04:06:28.958196 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2a0f43cf-149a-40c5-a67b-c34d251cb738-scripts\") pod \"2a0f43cf-149a-40c5-a67b-c34d251cb738\" (UID: \"2a0f43cf-149a-40c5-a67b-c34d251cb738\") " Jan 31 04:06:28 crc kubenswrapper[4827]: I0131 04:06:28.958246 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2a0f43cf-149a-40c5-a67b-c34d251cb738-run-httpd\") pod \"2a0f43cf-149a-40c5-a67b-c34d251cb738\" (UID: \"2a0f43cf-149a-40c5-a67b-c34d251cb738\") " Jan 31 04:06:28 crc kubenswrapper[4827]: I0131 04:06:28.958521 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2a0f43cf-149a-40c5-a67b-c34d251cb738-combined-ca-bundle\") pod \"2a0f43cf-149a-40c5-a67b-c34d251cb738\" (UID: \"2a0f43cf-149a-40c5-a67b-c34d251cb738\") " Jan 31 04:06:28 crc kubenswrapper[4827]: I0131 04:06:28.958727 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2a0f43cf-149a-40c5-a67b-c34d251cb738-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "2a0f43cf-149a-40c5-a67b-c34d251cb738" (UID: "2a0f43cf-149a-40c5-a67b-c34d251cb738"). InnerVolumeSpecName "run-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 04:06:28 crc kubenswrapper[4827]: I0131 04:06:28.958871 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2a0f43cf-149a-40c5-a67b-c34d251cb738-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "2a0f43cf-149a-40c5-a67b-c34d251cb738" (UID: "2a0f43cf-149a-40c5-a67b-c34d251cb738"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 04:06:28 crc kubenswrapper[4827]: I0131 04:06:28.959646 4827 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2a0f43cf-149a-40c5-a67b-c34d251cb738-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 31 04:06:28 crc kubenswrapper[4827]: I0131 04:06:28.959674 4827 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2a0f43cf-149a-40c5-a67b-c34d251cb738-run-httpd\") on node \"crc\" DevicePath \"\"" Jan 31 04:06:28 crc kubenswrapper[4827]: I0131 04:06:28.977257 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2a0f43cf-149a-40c5-a67b-c34d251cb738-scripts" (OuterVolumeSpecName: "scripts") pod "2a0f43cf-149a-40c5-a67b-c34d251cb738" (UID: "2a0f43cf-149a-40c5-a67b-c34d251cb738"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 04:06:28 crc kubenswrapper[4827]: I0131 04:06:28.977294 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2a0f43cf-149a-40c5-a67b-c34d251cb738-kube-api-access-5ctjj" (OuterVolumeSpecName: "kube-api-access-5ctjj") pod "2a0f43cf-149a-40c5-a67b-c34d251cb738" (UID: "2a0f43cf-149a-40c5-a67b-c34d251cb738"). InnerVolumeSpecName "kube-api-access-5ctjj". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 04:06:28 crc kubenswrapper[4827]: I0131 04:06:28.977449 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2a0f43cf-149a-40c5-a67b-c34d251cb738-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "2a0f43cf-149a-40c5-a67b-c34d251cb738" (UID: "2a0f43cf-149a-40c5-a67b-c34d251cb738"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 04:06:28 crc kubenswrapper[4827]: I0131 04:06:28.977969 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2a0f43cf-149a-40c5-a67b-c34d251cb738-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "2a0f43cf-149a-40c5-a67b-c34d251cb738" (UID: "2a0f43cf-149a-40c5-a67b-c34d251cb738"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 04:06:28 crc kubenswrapper[4827]: I0131 04:06:28.978012 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2a0f43cf-149a-40c5-a67b-c34d251cb738-config-data" (OuterVolumeSpecName: "config-data") pod "2a0f43cf-149a-40c5-a67b-c34d251cb738" (UID: "2a0f43cf-149a-40c5-a67b-c34d251cb738"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 04:06:29 crc kubenswrapper[4827]: I0131 04:06:29.060970 4827 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2a0f43cf-149a-40c5-a67b-c34d251cb738-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 31 04:06:29 crc kubenswrapper[4827]: I0131 04:06:29.061002 4827 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5ctjj\" (UniqueName: \"kubernetes.io/projected/2a0f43cf-149a-40c5-a67b-c34d251cb738-kube-api-access-5ctjj\") on node \"crc\" DevicePath \"\"" Jan 31 04:06:29 crc kubenswrapper[4827]: I0131 04:06:29.061013 4827 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2a0f43cf-149a-40c5-a67b-c34d251cb738-config-data\") on node \"crc\" DevicePath \"\"" Jan 31 04:06:29 crc kubenswrapper[4827]: I0131 04:06:29.061021 4827 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/2a0f43cf-149a-40c5-a67b-c34d251cb738-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Jan 31 04:06:29 crc kubenswrapper[4827]: I0131 04:06:29.061029 4827 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2a0f43cf-149a-40c5-a67b-c34d251cb738-scripts\") on node \"crc\" DevicePath \"\"" Jan 31 04:06:29 crc kubenswrapper[4827]: I0131 04:06:29.919276 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"5dc884e0-8eda-432c-a19f-2f1f4202ed2f","Type":"ContainerStarted","Data":"864469188aef207834975e3e7003a71f9b2349c997bc1d166d83283df6b603b8"} Jan 31 04:06:29 crc kubenswrapper[4827]: I0131 04:06:29.919300 4827 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 31 04:06:29 crc kubenswrapper[4827]: I0131 04:06:29.988595 4827 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 31 04:06:30 crc kubenswrapper[4827]: I0131 04:06:30.006920 4827 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Jan 31 04:06:30 crc kubenswrapper[4827]: I0131 04:06:30.015627 4827 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Jan 31 04:06:30 crc kubenswrapper[4827]: I0131 04:06:30.017750 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 31 04:06:30 crc kubenswrapper[4827]: I0131 04:06:30.020160 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Jan 31 04:06:30 crc kubenswrapper[4827]: I0131 04:06:30.020387 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Jan 31 04:06:30 crc kubenswrapper[4827]: I0131 04:06:30.022312 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 31 04:06:30 crc kubenswrapper[4827]: I0131 04:06:30.078051 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dhppt\" (UniqueName: \"kubernetes.io/projected/069a86e2-2b0b-492f-af82-1b055b22fde2-kube-api-access-dhppt\") pod \"ceilometer-0\" (UID: \"069a86e2-2b0b-492f-af82-1b055b22fde2\") " pod="openstack/ceilometer-0" Jan 31 04:06:30 crc kubenswrapper[4827]: I0131 04:06:30.078105 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/069a86e2-2b0b-492f-af82-1b055b22fde2-scripts\") pod \"ceilometer-0\" (UID: \"069a86e2-2b0b-492f-af82-1b055b22fde2\") " pod="openstack/ceilometer-0" Jan 31 04:06:30 crc kubenswrapper[4827]: I0131 04:06:30.078135 4827 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/069a86e2-2b0b-492f-af82-1b055b22fde2-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"069a86e2-2b0b-492f-af82-1b055b22fde2\") " pod="openstack/ceilometer-0" Jan 31 04:06:30 crc kubenswrapper[4827]: I0131 04:06:30.078372 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/069a86e2-2b0b-492f-af82-1b055b22fde2-config-data\") pod \"ceilometer-0\" (UID: \"069a86e2-2b0b-492f-af82-1b055b22fde2\") " pod="openstack/ceilometer-0" Jan 31 04:06:30 crc kubenswrapper[4827]: I0131 04:06:30.079220 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/069a86e2-2b0b-492f-af82-1b055b22fde2-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"069a86e2-2b0b-492f-af82-1b055b22fde2\") " pod="openstack/ceilometer-0" Jan 31 04:06:30 crc kubenswrapper[4827]: I0131 04:06:30.079283 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/069a86e2-2b0b-492f-af82-1b055b22fde2-log-httpd\") pod \"ceilometer-0\" (UID: \"069a86e2-2b0b-492f-af82-1b055b22fde2\") " pod="openstack/ceilometer-0" Jan 31 04:06:30 crc kubenswrapper[4827]: I0131 04:06:30.079584 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/069a86e2-2b0b-492f-af82-1b055b22fde2-run-httpd\") pod \"ceilometer-0\" (UID: \"069a86e2-2b0b-492f-af82-1b055b22fde2\") " pod="openstack/ceilometer-0" Jan 31 04:06:30 crc kubenswrapper[4827]: I0131 04:06:30.131405 4827 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2a0f43cf-149a-40c5-a67b-c34d251cb738" 
path="/var/lib/kubelet/pods/2a0f43cf-149a-40c5-a67b-c34d251cb738/volumes" Jan 31 04:06:30 crc kubenswrapper[4827]: I0131 04:06:30.131770 4827 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 31 04:06:30 crc kubenswrapper[4827]: E0131 04:06:30.132205 4827 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[combined-ca-bundle config-data kube-api-access-dhppt log-httpd run-httpd scripts sg-core-conf-yaml], unattached volumes=[], failed to process volumes=[]: context canceled" pod="openstack/ceilometer-0" podUID="069a86e2-2b0b-492f-af82-1b055b22fde2" Jan 31 04:06:30 crc kubenswrapper[4827]: I0131 04:06:30.181307 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/069a86e2-2b0b-492f-af82-1b055b22fde2-scripts\") pod \"ceilometer-0\" (UID: \"069a86e2-2b0b-492f-af82-1b055b22fde2\") " pod="openstack/ceilometer-0" Jan 31 04:06:30 crc kubenswrapper[4827]: I0131 04:06:30.181396 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/069a86e2-2b0b-492f-af82-1b055b22fde2-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"069a86e2-2b0b-492f-af82-1b055b22fde2\") " pod="openstack/ceilometer-0" Jan 31 04:06:30 crc kubenswrapper[4827]: I0131 04:06:30.181506 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/069a86e2-2b0b-492f-af82-1b055b22fde2-config-data\") pod \"ceilometer-0\" (UID: \"069a86e2-2b0b-492f-af82-1b055b22fde2\") " pod="openstack/ceilometer-0" Jan 31 04:06:30 crc kubenswrapper[4827]: I0131 04:06:30.181576 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/069a86e2-2b0b-492f-af82-1b055b22fde2-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"069a86e2-2b0b-492f-af82-1b055b22fde2\") " 
pod="openstack/ceilometer-0" Jan 31 04:06:30 crc kubenswrapper[4827]: I0131 04:06:30.182414 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/069a86e2-2b0b-492f-af82-1b055b22fde2-log-httpd\") pod \"ceilometer-0\" (UID: \"069a86e2-2b0b-492f-af82-1b055b22fde2\") " pod="openstack/ceilometer-0" Jan 31 04:06:30 crc kubenswrapper[4827]: I0131 04:06:30.182485 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/069a86e2-2b0b-492f-af82-1b055b22fde2-run-httpd\") pod \"ceilometer-0\" (UID: \"069a86e2-2b0b-492f-af82-1b055b22fde2\") " pod="openstack/ceilometer-0" Jan 31 04:06:30 crc kubenswrapper[4827]: I0131 04:06:30.182516 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/069a86e2-2b0b-492f-af82-1b055b22fde2-log-httpd\") pod \"ceilometer-0\" (UID: \"069a86e2-2b0b-492f-af82-1b055b22fde2\") " pod="openstack/ceilometer-0" Jan 31 04:06:30 crc kubenswrapper[4827]: I0131 04:06:30.182617 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dhppt\" (UniqueName: \"kubernetes.io/projected/069a86e2-2b0b-492f-af82-1b055b22fde2-kube-api-access-dhppt\") pod \"ceilometer-0\" (UID: \"069a86e2-2b0b-492f-af82-1b055b22fde2\") " pod="openstack/ceilometer-0" Jan 31 04:06:30 crc kubenswrapper[4827]: I0131 04:06:30.183131 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/069a86e2-2b0b-492f-af82-1b055b22fde2-run-httpd\") pod \"ceilometer-0\" (UID: \"069a86e2-2b0b-492f-af82-1b055b22fde2\") " pod="openstack/ceilometer-0" Jan 31 04:06:30 crc kubenswrapper[4827]: I0131 04:06:30.186518 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/069a86e2-2b0b-492f-af82-1b055b22fde2-config-data\") pod \"ceilometer-0\" (UID: \"069a86e2-2b0b-492f-af82-1b055b22fde2\") " pod="openstack/ceilometer-0" Jan 31 04:06:30 crc kubenswrapper[4827]: I0131 04:06:30.190374 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/069a86e2-2b0b-492f-af82-1b055b22fde2-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"069a86e2-2b0b-492f-af82-1b055b22fde2\") " pod="openstack/ceilometer-0" Jan 31 04:06:30 crc kubenswrapper[4827]: I0131 04:06:30.191892 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/069a86e2-2b0b-492f-af82-1b055b22fde2-scripts\") pod \"ceilometer-0\" (UID: \"069a86e2-2b0b-492f-af82-1b055b22fde2\") " pod="openstack/ceilometer-0" Jan 31 04:06:30 crc kubenswrapper[4827]: I0131 04:06:30.192935 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/069a86e2-2b0b-492f-af82-1b055b22fde2-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"069a86e2-2b0b-492f-af82-1b055b22fde2\") " pod="openstack/ceilometer-0" Jan 31 04:06:30 crc kubenswrapper[4827]: I0131 04:06:30.213655 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dhppt\" (UniqueName: \"kubernetes.io/projected/069a86e2-2b0b-492f-af82-1b055b22fde2-kube-api-access-dhppt\") pod \"ceilometer-0\" (UID: \"069a86e2-2b0b-492f-af82-1b055b22fde2\") " pod="openstack/ceilometer-0" Jan 31 04:06:30 crc kubenswrapper[4827]: I0131 04:06:30.949954 4827 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 31 04:06:30 crc kubenswrapper[4827]: I0131 04:06:30.949947 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"5dc884e0-8eda-432c-a19f-2f1f4202ed2f","Type":"ContainerStarted","Data":"ed3c3310f69fd39ba3030cde37440e5258bba124e3096f2e9ac45f59841d01d7"} Jan 31 04:06:30 crc kubenswrapper[4827]: I0131 04:06:30.950559 4827 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cinder-api-0" Jan 31 04:06:30 crc kubenswrapper[4827]: I0131 04:06:30.977096 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 31 04:06:30 crc kubenswrapper[4827]: I0131 04:06:30.978415 4827 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-api-0" podStartSLOduration=3.978401003 podStartE2EDuration="3.978401003s" podCreationTimestamp="2026-01-31 04:06:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 04:06:30.97443102 +0000 UTC m=+1183.661511479" watchObservedRunningTime="2026-01-31 04:06:30.978401003 +0000 UTC m=+1183.665481452" Jan 31 04:06:31 crc kubenswrapper[4827]: I0131 04:06:31.097766 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/069a86e2-2b0b-492f-af82-1b055b22fde2-run-httpd\") pod \"069a86e2-2b0b-492f-af82-1b055b22fde2\" (UID: \"069a86e2-2b0b-492f-af82-1b055b22fde2\") " Jan 31 04:06:31 crc kubenswrapper[4827]: I0131 04:06:31.097842 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dhppt\" (UniqueName: \"kubernetes.io/projected/069a86e2-2b0b-492f-af82-1b055b22fde2-kube-api-access-dhppt\") pod \"069a86e2-2b0b-492f-af82-1b055b22fde2\" (UID: \"069a86e2-2b0b-492f-af82-1b055b22fde2\") " Jan 31 04:06:31 crc kubenswrapper[4827]: I0131 
04:06:31.097892 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/069a86e2-2b0b-492f-af82-1b055b22fde2-log-httpd\") pod \"069a86e2-2b0b-492f-af82-1b055b22fde2\" (UID: \"069a86e2-2b0b-492f-af82-1b055b22fde2\") " Jan 31 04:06:31 crc kubenswrapper[4827]: I0131 04:06:31.097930 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/069a86e2-2b0b-492f-af82-1b055b22fde2-sg-core-conf-yaml\") pod \"069a86e2-2b0b-492f-af82-1b055b22fde2\" (UID: \"069a86e2-2b0b-492f-af82-1b055b22fde2\") " Jan 31 04:06:31 crc kubenswrapper[4827]: I0131 04:06:31.098017 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/069a86e2-2b0b-492f-af82-1b055b22fde2-combined-ca-bundle\") pod \"069a86e2-2b0b-492f-af82-1b055b22fde2\" (UID: \"069a86e2-2b0b-492f-af82-1b055b22fde2\") " Jan 31 04:06:31 crc kubenswrapper[4827]: I0131 04:06:31.098042 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/069a86e2-2b0b-492f-af82-1b055b22fde2-config-data\") pod \"069a86e2-2b0b-492f-af82-1b055b22fde2\" (UID: \"069a86e2-2b0b-492f-af82-1b055b22fde2\") " Jan 31 04:06:31 crc kubenswrapper[4827]: I0131 04:06:31.098169 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/069a86e2-2b0b-492f-af82-1b055b22fde2-scripts\") pod \"069a86e2-2b0b-492f-af82-1b055b22fde2\" (UID: \"069a86e2-2b0b-492f-af82-1b055b22fde2\") " Jan 31 04:06:31 crc kubenswrapper[4827]: I0131 04:06:31.098199 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/069a86e2-2b0b-492f-af82-1b055b22fde2-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "069a86e2-2b0b-492f-af82-1b055b22fde2" (UID: 
"069a86e2-2b0b-492f-af82-1b055b22fde2"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 04:06:31 crc kubenswrapper[4827]: I0131 04:06:31.098374 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/069a86e2-2b0b-492f-af82-1b055b22fde2-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "069a86e2-2b0b-492f-af82-1b055b22fde2" (UID: "069a86e2-2b0b-492f-af82-1b055b22fde2"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 04:06:31 crc kubenswrapper[4827]: I0131 04:06:31.098708 4827 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/069a86e2-2b0b-492f-af82-1b055b22fde2-run-httpd\") on node \"crc\" DevicePath \"\"" Jan 31 04:06:31 crc kubenswrapper[4827]: I0131 04:06:31.098732 4827 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/069a86e2-2b0b-492f-af82-1b055b22fde2-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 31 04:06:31 crc kubenswrapper[4827]: I0131 04:06:31.101714 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/069a86e2-2b0b-492f-af82-1b055b22fde2-kube-api-access-dhppt" (OuterVolumeSpecName: "kube-api-access-dhppt") pod "069a86e2-2b0b-492f-af82-1b055b22fde2" (UID: "069a86e2-2b0b-492f-af82-1b055b22fde2"). InnerVolumeSpecName "kube-api-access-dhppt". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 04:06:31 crc kubenswrapper[4827]: I0131 04:06:31.102316 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/069a86e2-2b0b-492f-af82-1b055b22fde2-scripts" (OuterVolumeSpecName: "scripts") pod "069a86e2-2b0b-492f-af82-1b055b22fde2" (UID: "069a86e2-2b0b-492f-af82-1b055b22fde2"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 04:06:31 crc kubenswrapper[4827]: I0131 04:06:31.107193 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/069a86e2-2b0b-492f-af82-1b055b22fde2-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "069a86e2-2b0b-492f-af82-1b055b22fde2" (UID: "069a86e2-2b0b-492f-af82-1b055b22fde2"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 04:06:31 crc kubenswrapper[4827]: I0131 04:06:31.107209 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/069a86e2-2b0b-492f-af82-1b055b22fde2-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "069a86e2-2b0b-492f-af82-1b055b22fde2" (UID: "069a86e2-2b0b-492f-af82-1b055b22fde2"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 04:06:31 crc kubenswrapper[4827]: I0131 04:06:31.112516 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/069a86e2-2b0b-492f-af82-1b055b22fde2-config-data" (OuterVolumeSpecName: "config-data") pod "069a86e2-2b0b-492f-af82-1b055b22fde2" (UID: "069a86e2-2b0b-492f-af82-1b055b22fde2"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 04:06:31 crc kubenswrapper[4827]: I0131 04:06:31.200923 4827 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dhppt\" (UniqueName: \"kubernetes.io/projected/069a86e2-2b0b-492f-af82-1b055b22fde2-kube-api-access-dhppt\") on node \"crc\" DevicePath \"\"" Jan 31 04:06:31 crc kubenswrapper[4827]: I0131 04:06:31.200969 4827 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/069a86e2-2b0b-492f-af82-1b055b22fde2-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Jan 31 04:06:31 crc kubenswrapper[4827]: I0131 04:06:31.200982 4827 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/069a86e2-2b0b-492f-af82-1b055b22fde2-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 31 04:06:31 crc kubenswrapper[4827]: I0131 04:06:31.200995 4827 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/069a86e2-2b0b-492f-af82-1b055b22fde2-config-data\") on node \"crc\" DevicePath \"\"" Jan 31 04:06:31 crc kubenswrapper[4827]: I0131 04:06:31.201008 4827 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/069a86e2-2b0b-492f-af82-1b055b22fde2-scripts\") on node \"crc\" DevicePath \"\"" Jan 31 04:06:31 crc kubenswrapper[4827]: I0131 04:06:31.955748 4827 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 31 04:06:32 crc kubenswrapper[4827]: I0131 04:06:32.005605 4827 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 31 04:06:32 crc kubenswrapper[4827]: I0131 04:06:32.016897 4827 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Jan 31 04:06:32 crc kubenswrapper[4827]: I0131 04:06:32.036516 4827 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Jan 31 04:06:32 crc kubenswrapper[4827]: I0131 04:06:32.038701 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 31 04:06:32 crc kubenswrapper[4827]: I0131 04:06:32.041339 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Jan 31 04:06:32 crc kubenswrapper[4827]: I0131 04:06:32.041635 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Jan 31 04:06:32 crc kubenswrapper[4827]: I0131 04:06:32.063077 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 31 04:06:32 crc kubenswrapper[4827]: I0131 04:06:32.116285 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0b0acb78-5ef0-41e1-8d1a-1f4329c6e07a-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"0b0acb78-5ef0-41e1-8d1a-1f4329c6e07a\") " pod="openstack/ceilometer-0" Jan 31 04:06:32 crc kubenswrapper[4827]: I0131 04:06:32.116340 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0b0acb78-5ef0-41e1-8d1a-1f4329c6e07a-config-data\") pod \"ceilometer-0\" (UID: \"0b0acb78-5ef0-41e1-8d1a-1f4329c6e07a\") " pod="openstack/ceilometer-0" Jan 31 04:06:32 crc kubenswrapper[4827]: I0131 04:06:32.116367 4827 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/0b0acb78-5ef0-41e1-8d1a-1f4329c6e07a-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"0b0acb78-5ef0-41e1-8d1a-1f4329c6e07a\") " pod="openstack/ceilometer-0" Jan 31 04:06:32 crc kubenswrapper[4827]: I0131 04:06:32.116474 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0b0acb78-5ef0-41e1-8d1a-1f4329c6e07a-log-httpd\") pod \"ceilometer-0\" (UID: \"0b0acb78-5ef0-41e1-8d1a-1f4329c6e07a\") " pod="openstack/ceilometer-0" Jan 31 04:06:32 crc kubenswrapper[4827]: I0131 04:06:32.116552 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0b0acb78-5ef0-41e1-8d1a-1f4329c6e07a-run-httpd\") pod \"ceilometer-0\" (UID: \"0b0acb78-5ef0-41e1-8d1a-1f4329c6e07a\") " pod="openstack/ceilometer-0" Jan 31 04:06:32 crc kubenswrapper[4827]: I0131 04:06:32.116641 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dfs56\" (UniqueName: \"kubernetes.io/projected/0b0acb78-5ef0-41e1-8d1a-1f4329c6e07a-kube-api-access-dfs56\") pod \"ceilometer-0\" (UID: \"0b0acb78-5ef0-41e1-8d1a-1f4329c6e07a\") " pod="openstack/ceilometer-0" Jan 31 04:06:32 crc kubenswrapper[4827]: I0131 04:06:32.116669 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0b0acb78-5ef0-41e1-8d1a-1f4329c6e07a-scripts\") pod \"ceilometer-0\" (UID: \"0b0acb78-5ef0-41e1-8d1a-1f4329c6e07a\") " pod="openstack/ceilometer-0" Jan 31 04:06:32 crc kubenswrapper[4827]: I0131 04:06:32.123107 4827 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="069a86e2-2b0b-492f-af82-1b055b22fde2" 
path="/var/lib/kubelet/pods/069a86e2-2b0b-492f-af82-1b055b22fde2/volumes" Jan 31 04:06:32 crc kubenswrapper[4827]: I0131 04:06:32.218512 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dfs56\" (UniqueName: \"kubernetes.io/projected/0b0acb78-5ef0-41e1-8d1a-1f4329c6e07a-kube-api-access-dfs56\") pod \"ceilometer-0\" (UID: \"0b0acb78-5ef0-41e1-8d1a-1f4329c6e07a\") " pod="openstack/ceilometer-0" Jan 31 04:06:32 crc kubenswrapper[4827]: I0131 04:06:32.218561 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0b0acb78-5ef0-41e1-8d1a-1f4329c6e07a-scripts\") pod \"ceilometer-0\" (UID: \"0b0acb78-5ef0-41e1-8d1a-1f4329c6e07a\") " pod="openstack/ceilometer-0" Jan 31 04:06:32 crc kubenswrapper[4827]: I0131 04:06:32.218612 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0b0acb78-5ef0-41e1-8d1a-1f4329c6e07a-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"0b0acb78-5ef0-41e1-8d1a-1f4329c6e07a\") " pod="openstack/ceilometer-0" Jan 31 04:06:32 crc kubenswrapper[4827]: I0131 04:06:32.218636 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0b0acb78-5ef0-41e1-8d1a-1f4329c6e07a-config-data\") pod \"ceilometer-0\" (UID: \"0b0acb78-5ef0-41e1-8d1a-1f4329c6e07a\") " pod="openstack/ceilometer-0" Jan 31 04:06:32 crc kubenswrapper[4827]: I0131 04:06:32.218650 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/0b0acb78-5ef0-41e1-8d1a-1f4329c6e07a-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"0b0acb78-5ef0-41e1-8d1a-1f4329c6e07a\") " pod="openstack/ceilometer-0" Jan 31 04:06:32 crc kubenswrapper[4827]: I0131 04:06:32.218693 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0b0acb78-5ef0-41e1-8d1a-1f4329c6e07a-log-httpd\") pod \"ceilometer-0\" (UID: \"0b0acb78-5ef0-41e1-8d1a-1f4329c6e07a\") " pod="openstack/ceilometer-0" Jan 31 04:06:32 crc kubenswrapper[4827]: I0131 04:06:32.218730 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0b0acb78-5ef0-41e1-8d1a-1f4329c6e07a-run-httpd\") pod \"ceilometer-0\" (UID: \"0b0acb78-5ef0-41e1-8d1a-1f4329c6e07a\") " pod="openstack/ceilometer-0" Jan 31 04:06:32 crc kubenswrapper[4827]: I0131 04:06:32.219211 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0b0acb78-5ef0-41e1-8d1a-1f4329c6e07a-run-httpd\") pod \"ceilometer-0\" (UID: \"0b0acb78-5ef0-41e1-8d1a-1f4329c6e07a\") " pod="openstack/ceilometer-0" Jan 31 04:06:32 crc kubenswrapper[4827]: I0131 04:06:32.219326 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0b0acb78-5ef0-41e1-8d1a-1f4329c6e07a-log-httpd\") pod \"ceilometer-0\" (UID: \"0b0acb78-5ef0-41e1-8d1a-1f4329c6e07a\") " pod="openstack/ceilometer-0" Jan 31 04:06:32 crc kubenswrapper[4827]: I0131 04:06:32.230945 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/0b0acb78-5ef0-41e1-8d1a-1f4329c6e07a-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"0b0acb78-5ef0-41e1-8d1a-1f4329c6e07a\") " pod="openstack/ceilometer-0" Jan 31 04:06:32 crc kubenswrapper[4827]: I0131 04:06:32.231086 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0b0acb78-5ef0-41e1-8d1a-1f4329c6e07a-scripts\") pod \"ceilometer-0\" (UID: \"0b0acb78-5ef0-41e1-8d1a-1f4329c6e07a\") " pod="openstack/ceilometer-0" Jan 31 04:06:32 crc kubenswrapper[4827]: I0131 04:06:32.231704 4827 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0b0acb78-5ef0-41e1-8d1a-1f4329c6e07a-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"0b0acb78-5ef0-41e1-8d1a-1f4329c6e07a\") " pod="openstack/ceilometer-0" Jan 31 04:06:32 crc kubenswrapper[4827]: I0131 04:06:32.234050 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dfs56\" (UniqueName: \"kubernetes.io/projected/0b0acb78-5ef0-41e1-8d1a-1f4329c6e07a-kube-api-access-dfs56\") pod \"ceilometer-0\" (UID: \"0b0acb78-5ef0-41e1-8d1a-1f4329c6e07a\") " pod="openstack/ceilometer-0" Jan 31 04:06:32 crc kubenswrapper[4827]: I0131 04:06:32.243807 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0b0acb78-5ef0-41e1-8d1a-1f4329c6e07a-config-data\") pod \"ceilometer-0\" (UID: \"0b0acb78-5ef0-41e1-8d1a-1f4329c6e07a\") " pod="openstack/ceilometer-0" Jan 31 04:06:32 crc kubenswrapper[4827]: I0131 04:06:32.365219 4827 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 31 04:06:32 crc kubenswrapper[4827]: I0131 04:06:32.824483 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 31 04:06:32 crc kubenswrapper[4827]: W0131 04:06:32.830279 4827 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0b0acb78_5ef0_41e1_8d1a_1f4329c6e07a.slice/crio-e95f356b558d4fb3da0b06ea6b45a87ef991a0f7fadf7c813bf99397e83f2f7e WatchSource:0}: Error finding container e95f356b558d4fb3da0b06ea6b45a87ef991a0f7fadf7c813bf99397e83f2f7e: Status 404 returned error can't find the container with id e95f356b558d4fb3da0b06ea6b45a87ef991a0f7fadf7c813bf99397e83f2f7e Jan 31 04:06:32 crc kubenswrapper[4827]: I0131 04:06:32.967523 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"0b0acb78-5ef0-41e1-8d1a-1f4329c6e07a","Type":"ContainerStarted","Data":"e95f356b558d4fb3da0b06ea6b45a87ef991a0f7fadf7c813bf99397e83f2f7e"} Jan 31 04:06:33 crc kubenswrapper[4827]: I0131 04:06:33.976745 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"0b0acb78-5ef0-41e1-8d1a-1f4329c6e07a","Type":"ContainerStarted","Data":"06de984296c65210f4d7a9e638706b14e4ad00a9a28ad936cd0c1d31e356133e"} Jan 31 04:06:34 crc kubenswrapper[4827]: I0131 04:06:34.987803 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"0b0acb78-5ef0-41e1-8d1a-1f4329c6e07a","Type":"ContainerStarted","Data":"f830be3f0fea9fbb19807639071df46885faa4dc100ebf44dd8cc63e8fcbfd9b"} Jan 31 04:06:35 crc kubenswrapper[4827]: I0131 04:06:35.996147 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"0b0acb78-5ef0-41e1-8d1a-1f4329c6e07a","Type":"ContainerStarted","Data":"89cbb767c6db1df039c0ddca93f73535488680723449f3b9931ccb7bcf3ba3c4"} Jan 31 04:06:37 crc kubenswrapper[4827]: I0131 
04:06:37.083664 4827 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-db-create-jcb2z"] Jan 31 04:06:37 crc kubenswrapper[4827]: I0131 04:06:37.085024 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-jcb2z" Jan 31 04:06:37 crc kubenswrapper[4827]: I0131 04:06:37.095262 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-db-create-jcb2z"] Jan 31 04:06:37 crc kubenswrapper[4827]: I0131 04:06:37.177930 4827 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-db-create-rsvl4"] Jan 31 04:06:37 crc kubenswrapper[4827]: I0131 04:06:37.178848 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-rsvl4" Jan 31 04:06:37 crc kubenswrapper[4827]: I0131 04:06:37.186913 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-db-create-rsvl4"] Jan 31 04:06:37 crc kubenswrapper[4827]: I0131 04:06:37.196002 4827 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-051e-account-create-update-bdjbn"] Jan 31 04:06:37 crc kubenswrapper[4827]: I0131 04:06:37.199901 4827 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-051e-account-create-update-bdjbn" Jan 31 04:06:37 crc kubenswrapper[4827]: I0131 04:06:37.206654 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-db-secret" Jan 31 04:06:37 crc kubenswrapper[4827]: I0131 04:06:37.207248 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-051e-account-create-update-bdjbn"] Jan 31 04:06:37 crc kubenswrapper[4827]: I0131 04:06:37.222598 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3d339ae6-8776-4cfc-8ed1-baea0d422d07-operator-scripts\") pod \"nova-api-db-create-jcb2z\" (UID: \"3d339ae6-8776-4cfc-8ed1-baea0d422d07\") " pod="openstack/nova-api-db-create-jcb2z" Jan 31 04:06:37 crc kubenswrapper[4827]: I0131 04:06:37.222689 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kqkd9\" (UniqueName: \"kubernetes.io/projected/3d339ae6-8776-4cfc-8ed1-baea0d422d07-kube-api-access-kqkd9\") pod \"nova-api-db-create-jcb2z\" (UID: \"3d339ae6-8776-4cfc-8ed1-baea0d422d07\") " pod="openstack/nova-api-db-create-jcb2z" Jan 31 04:06:37 crc kubenswrapper[4827]: I0131 04:06:37.323723 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4k96d\" (UniqueName: \"kubernetes.io/projected/781f3a29-4f57-4592-b5c7-c492daf58f6c-kube-api-access-4k96d\") pod \"nova-cell0-db-create-rsvl4\" (UID: \"781f3a29-4f57-4592-b5c7-c492daf58f6c\") " pod="openstack/nova-cell0-db-create-rsvl4" Jan 31 04:06:37 crc kubenswrapper[4827]: I0131 04:06:37.324041 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w5vrm\" (UniqueName: \"kubernetes.io/projected/4c685922-00e2-4bd1-91d2-8810a5d45da3-kube-api-access-w5vrm\") pod \"nova-api-051e-account-create-update-bdjbn\" 
(UID: \"4c685922-00e2-4bd1-91d2-8810a5d45da3\") " pod="openstack/nova-api-051e-account-create-update-bdjbn" Jan 31 04:06:37 crc kubenswrapper[4827]: I0131 04:06:37.324084 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4c685922-00e2-4bd1-91d2-8810a5d45da3-operator-scripts\") pod \"nova-api-051e-account-create-update-bdjbn\" (UID: \"4c685922-00e2-4bd1-91d2-8810a5d45da3\") " pod="openstack/nova-api-051e-account-create-update-bdjbn" Jan 31 04:06:37 crc kubenswrapper[4827]: I0131 04:06:37.324112 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3d339ae6-8776-4cfc-8ed1-baea0d422d07-operator-scripts\") pod \"nova-api-db-create-jcb2z\" (UID: \"3d339ae6-8776-4cfc-8ed1-baea0d422d07\") " pod="openstack/nova-api-db-create-jcb2z" Jan 31 04:06:37 crc kubenswrapper[4827]: I0131 04:06:37.324162 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kqkd9\" (UniqueName: \"kubernetes.io/projected/3d339ae6-8776-4cfc-8ed1-baea0d422d07-kube-api-access-kqkd9\") pod \"nova-api-db-create-jcb2z\" (UID: \"3d339ae6-8776-4cfc-8ed1-baea0d422d07\") " pod="openstack/nova-api-db-create-jcb2z" Jan 31 04:06:37 crc kubenswrapper[4827]: I0131 04:06:37.324223 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/781f3a29-4f57-4592-b5c7-c492daf58f6c-operator-scripts\") pod \"nova-cell0-db-create-rsvl4\" (UID: \"781f3a29-4f57-4592-b5c7-c492daf58f6c\") " pod="openstack/nova-cell0-db-create-rsvl4" Jan 31 04:06:37 crc kubenswrapper[4827]: I0131 04:06:37.324988 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3d339ae6-8776-4cfc-8ed1-baea0d422d07-operator-scripts\") pod 
\"nova-api-db-create-jcb2z\" (UID: \"3d339ae6-8776-4cfc-8ed1-baea0d422d07\") " pod="openstack/nova-api-db-create-jcb2z" Jan 31 04:06:37 crc kubenswrapper[4827]: I0131 04:06:37.345402 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kqkd9\" (UniqueName: \"kubernetes.io/projected/3d339ae6-8776-4cfc-8ed1-baea0d422d07-kube-api-access-kqkd9\") pod \"nova-api-db-create-jcb2z\" (UID: \"3d339ae6-8776-4cfc-8ed1-baea0d422d07\") " pod="openstack/nova-api-db-create-jcb2z" Jan 31 04:06:37 crc kubenswrapper[4827]: I0131 04:06:37.380455 4827 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-db-create-ckptz"] Jan 31 04:06:37 crc kubenswrapper[4827]: I0131 04:06:37.381532 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-ckptz" Jan 31 04:06:37 crc kubenswrapper[4827]: I0131 04:06:37.390090 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-db-create-ckptz"] Jan 31 04:06:37 crc kubenswrapper[4827]: I0131 04:06:37.402626 4827 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-05ff-account-create-update-sgdb6"] Jan 31 04:06:37 crc kubenswrapper[4827]: I0131 04:06:37.403670 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-05ff-account-create-update-sgdb6" Jan 31 04:06:37 crc kubenswrapper[4827]: I0131 04:06:37.408229 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-db-secret" Jan 31 04:06:37 crc kubenswrapper[4827]: I0131 04:06:37.426979 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-05ff-account-create-update-sgdb6"] Jan 31 04:06:37 crc kubenswrapper[4827]: I0131 04:06:37.436077 4827 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-db-create-jcb2z" Jan 31 04:06:37 crc kubenswrapper[4827]: I0131 04:06:37.436846 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4k96d\" (UniqueName: \"kubernetes.io/projected/781f3a29-4f57-4592-b5c7-c492daf58f6c-kube-api-access-4k96d\") pod \"nova-cell0-db-create-rsvl4\" (UID: \"781f3a29-4f57-4592-b5c7-c492daf58f6c\") " pod="openstack/nova-cell0-db-create-rsvl4" Jan 31 04:06:37 crc kubenswrapper[4827]: I0131 04:06:37.436899 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w5vrm\" (UniqueName: \"kubernetes.io/projected/4c685922-00e2-4bd1-91d2-8810a5d45da3-kube-api-access-w5vrm\") pod \"nova-api-051e-account-create-update-bdjbn\" (UID: \"4c685922-00e2-4bd1-91d2-8810a5d45da3\") " pod="openstack/nova-api-051e-account-create-update-bdjbn" Jan 31 04:06:37 crc kubenswrapper[4827]: I0131 04:06:37.436949 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4c685922-00e2-4bd1-91d2-8810a5d45da3-operator-scripts\") pod \"nova-api-051e-account-create-update-bdjbn\" (UID: \"4c685922-00e2-4bd1-91d2-8810a5d45da3\") " pod="openstack/nova-api-051e-account-create-update-bdjbn" Jan 31 04:06:37 crc kubenswrapper[4827]: I0131 04:06:37.437073 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/781f3a29-4f57-4592-b5c7-c492daf58f6c-operator-scripts\") pod \"nova-cell0-db-create-rsvl4\" (UID: \"781f3a29-4f57-4592-b5c7-c492daf58f6c\") " pod="openstack/nova-cell0-db-create-rsvl4" Jan 31 04:06:37 crc kubenswrapper[4827]: I0131 04:06:37.438002 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/781f3a29-4f57-4592-b5c7-c492daf58f6c-operator-scripts\") pod \"nova-cell0-db-create-rsvl4\" (UID: 
\"781f3a29-4f57-4592-b5c7-c492daf58f6c\") " pod="openstack/nova-cell0-db-create-rsvl4" Jan 31 04:06:37 crc kubenswrapper[4827]: I0131 04:06:37.438330 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4c685922-00e2-4bd1-91d2-8810a5d45da3-operator-scripts\") pod \"nova-api-051e-account-create-update-bdjbn\" (UID: \"4c685922-00e2-4bd1-91d2-8810a5d45da3\") " pod="openstack/nova-api-051e-account-create-update-bdjbn" Jan 31 04:06:37 crc kubenswrapper[4827]: I0131 04:06:37.491842 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4k96d\" (UniqueName: \"kubernetes.io/projected/781f3a29-4f57-4592-b5c7-c492daf58f6c-kube-api-access-4k96d\") pod \"nova-cell0-db-create-rsvl4\" (UID: \"781f3a29-4f57-4592-b5c7-c492daf58f6c\") " pod="openstack/nova-cell0-db-create-rsvl4" Jan 31 04:06:37 crc kubenswrapper[4827]: I0131 04:06:37.492868 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-rsvl4" Jan 31 04:06:37 crc kubenswrapper[4827]: I0131 04:06:37.493636 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w5vrm\" (UniqueName: \"kubernetes.io/projected/4c685922-00e2-4bd1-91d2-8810a5d45da3-kube-api-access-w5vrm\") pod \"nova-api-051e-account-create-update-bdjbn\" (UID: \"4c685922-00e2-4bd1-91d2-8810a5d45da3\") " pod="openstack/nova-api-051e-account-create-update-bdjbn" Jan 31 04:06:37 crc kubenswrapper[4827]: I0131 04:06:37.531275 4827 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-051e-account-create-update-bdjbn" Jan 31 04:06:37 crc kubenswrapper[4827]: I0131 04:06:37.538367 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fkl8h\" (UniqueName: \"kubernetes.io/projected/81308c8d-3b7b-43d2-9364-b8e732430dfd-kube-api-access-fkl8h\") pod \"nova-cell1-db-create-ckptz\" (UID: \"81308c8d-3b7b-43d2-9364-b8e732430dfd\") " pod="openstack/nova-cell1-db-create-ckptz" Jan 31 04:06:37 crc kubenswrapper[4827]: I0131 04:06:37.538465 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jwh4q\" (UniqueName: \"kubernetes.io/projected/965e3f3a-3029-44dc-bdb2-9889b8f90fbe-kube-api-access-jwh4q\") pod \"nova-cell0-05ff-account-create-update-sgdb6\" (UID: \"965e3f3a-3029-44dc-bdb2-9889b8f90fbe\") " pod="openstack/nova-cell0-05ff-account-create-update-sgdb6" Jan 31 04:06:37 crc kubenswrapper[4827]: I0131 04:06:37.538567 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/81308c8d-3b7b-43d2-9364-b8e732430dfd-operator-scripts\") pod \"nova-cell1-db-create-ckptz\" (UID: \"81308c8d-3b7b-43d2-9364-b8e732430dfd\") " pod="openstack/nova-cell1-db-create-ckptz" Jan 31 04:06:37 crc kubenswrapper[4827]: I0131 04:06:37.538589 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/965e3f3a-3029-44dc-bdb2-9889b8f90fbe-operator-scripts\") pod \"nova-cell0-05ff-account-create-update-sgdb6\" (UID: \"965e3f3a-3029-44dc-bdb2-9889b8f90fbe\") " pod="openstack/nova-cell0-05ff-account-create-update-sgdb6" Jan 31 04:06:37 crc kubenswrapper[4827]: I0131 04:06:37.600648 4827 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-de72-account-create-update-h2jtp"] Jan 31 04:06:37 crc 
kubenswrapper[4827]: I0131 04:06:37.602008 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-de72-account-create-update-h2jtp" Jan 31 04:06:37 crc kubenswrapper[4827]: I0131 04:06:37.610320 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-de72-account-create-update-h2jtp"] Jan 31 04:06:37 crc kubenswrapper[4827]: I0131 04:06:37.610738 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-db-secret" Jan 31 04:06:37 crc kubenswrapper[4827]: I0131 04:06:37.639992 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/81308c8d-3b7b-43d2-9364-b8e732430dfd-operator-scripts\") pod \"nova-cell1-db-create-ckptz\" (UID: \"81308c8d-3b7b-43d2-9364-b8e732430dfd\") " pod="openstack/nova-cell1-db-create-ckptz" Jan 31 04:06:37 crc kubenswrapper[4827]: I0131 04:06:37.640032 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/965e3f3a-3029-44dc-bdb2-9889b8f90fbe-operator-scripts\") pod \"nova-cell0-05ff-account-create-update-sgdb6\" (UID: \"965e3f3a-3029-44dc-bdb2-9889b8f90fbe\") " pod="openstack/nova-cell0-05ff-account-create-update-sgdb6" Jan 31 04:06:37 crc kubenswrapper[4827]: I0131 04:06:37.640068 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fkl8h\" (UniqueName: \"kubernetes.io/projected/81308c8d-3b7b-43d2-9364-b8e732430dfd-kube-api-access-fkl8h\") pod \"nova-cell1-db-create-ckptz\" (UID: \"81308c8d-3b7b-43d2-9364-b8e732430dfd\") " pod="openstack/nova-cell1-db-create-ckptz" Jan 31 04:06:37 crc kubenswrapper[4827]: I0131 04:06:37.640118 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jwh4q\" (UniqueName: 
\"kubernetes.io/projected/965e3f3a-3029-44dc-bdb2-9889b8f90fbe-kube-api-access-jwh4q\") pod \"nova-cell0-05ff-account-create-update-sgdb6\" (UID: \"965e3f3a-3029-44dc-bdb2-9889b8f90fbe\") " pod="openstack/nova-cell0-05ff-account-create-update-sgdb6" Jan 31 04:06:37 crc kubenswrapper[4827]: I0131 04:06:37.641298 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/81308c8d-3b7b-43d2-9364-b8e732430dfd-operator-scripts\") pod \"nova-cell1-db-create-ckptz\" (UID: \"81308c8d-3b7b-43d2-9364-b8e732430dfd\") " pod="openstack/nova-cell1-db-create-ckptz" Jan 31 04:06:37 crc kubenswrapper[4827]: I0131 04:06:37.656157 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/965e3f3a-3029-44dc-bdb2-9889b8f90fbe-operator-scripts\") pod \"nova-cell0-05ff-account-create-update-sgdb6\" (UID: \"965e3f3a-3029-44dc-bdb2-9889b8f90fbe\") " pod="openstack/nova-cell0-05ff-account-create-update-sgdb6" Jan 31 04:06:37 crc kubenswrapper[4827]: I0131 04:06:37.673242 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fkl8h\" (UniqueName: \"kubernetes.io/projected/81308c8d-3b7b-43d2-9364-b8e732430dfd-kube-api-access-fkl8h\") pod \"nova-cell1-db-create-ckptz\" (UID: \"81308c8d-3b7b-43d2-9364-b8e732430dfd\") " pod="openstack/nova-cell1-db-create-ckptz" Jan 31 04:06:37 crc kubenswrapper[4827]: I0131 04:06:37.680364 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jwh4q\" (UniqueName: \"kubernetes.io/projected/965e3f3a-3029-44dc-bdb2-9889b8f90fbe-kube-api-access-jwh4q\") pod \"nova-cell0-05ff-account-create-update-sgdb6\" (UID: \"965e3f3a-3029-44dc-bdb2-9889b8f90fbe\") " pod="openstack/nova-cell0-05ff-account-create-update-sgdb6" Jan 31 04:06:37 crc kubenswrapper[4827]: I0131 04:06:37.697736 4827 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-db-create-ckptz" Jan 31 04:06:37 crc kubenswrapper[4827]: I0131 04:06:37.721899 4827 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-77b487f776-cjb8n" Jan 31 04:06:37 crc kubenswrapper[4827]: I0131 04:06:37.745361 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/313bafd6-600c-448b-aa68-7cf6a5742a68-operator-scripts\") pod \"nova-cell1-de72-account-create-update-h2jtp\" (UID: \"313bafd6-600c-448b-aa68-7cf6a5742a68\") " pod="openstack/nova-cell1-de72-account-create-update-h2jtp" Jan 31 04:06:37 crc kubenswrapper[4827]: I0131 04:06:37.745450 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g6gtw\" (UniqueName: \"kubernetes.io/projected/313bafd6-600c-448b-aa68-7cf6a5742a68-kube-api-access-g6gtw\") pod \"nova-cell1-de72-account-create-update-h2jtp\" (UID: \"313bafd6-600c-448b-aa68-7cf6a5742a68\") " pod="openstack/nova-cell1-de72-account-create-update-h2jtp" Jan 31 04:06:37 crc kubenswrapper[4827]: I0131 04:06:37.846936 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/313bafd6-600c-448b-aa68-7cf6a5742a68-operator-scripts\") pod \"nova-cell1-de72-account-create-update-h2jtp\" (UID: \"313bafd6-600c-448b-aa68-7cf6a5742a68\") " pod="openstack/nova-cell1-de72-account-create-update-h2jtp" Jan 31 04:06:37 crc kubenswrapper[4827]: I0131 04:06:37.847278 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g6gtw\" (UniqueName: \"kubernetes.io/projected/313bafd6-600c-448b-aa68-7cf6a5742a68-kube-api-access-g6gtw\") pod \"nova-cell1-de72-account-create-update-h2jtp\" (UID: \"313bafd6-600c-448b-aa68-7cf6a5742a68\") " pod="openstack/nova-cell1-de72-account-create-update-h2jtp" 
Jan 31 04:06:37 crc kubenswrapper[4827]: I0131 04:06:37.849916 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/313bafd6-600c-448b-aa68-7cf6a5742a68-operator-scripts\") pod \"nova-cell1-de72-account-create-update-h2jtp\" (UID: \"313bafd6-600c-448b-aa68-7cf6a5742a68\") " pod="openstack/nova-cell1-de72-account-create-update-h2jtp" Jan 31 04:06:37 crc kubenswrapper[4827]: I0131 04:06:37.868616 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g6gtw\" (UniqueName: \"kubernetes.io/projected/313bafd6-600c-448b-aa68-7cf6a5742a68-kube-api-access-g6gtw\") pod \"nova-cell1-de72-account-create-update-h2jtp\" (UID: \"313bafd6-600c-448b-aa68-7cf6a5742a68\") " pod="openstack/nova-cell1-de72-account-create-update-h2jtp" Jan 31 04:06:37 crc kubenswrapper[4827]: I0131 04:06:37.878526 4827 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-77b487f776-cjb8n" Jan 31 04:06:37 crc kubenswrapper[4827]: I0131 04:06:37.881432 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-05ff-account-create-update-sgdb6" Jan 31 04:06:37 crc kubenswrapper[4827]: I0131 04:06:37.962234 4827 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-de72-account-create-update-h2jtp" Jan 31 04:06:37 crc kubenswrapper[4827]: I0131 04:06:37.963981 4827 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-69d4f5d848-hjbmc"] Jan 31 04:06:37 crc kubenswrapper[4827]: I0131 04:06:37.964265 4827 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/placement-69d4f5d848-hjbmc" podUID="c0898c62-7d0f-447a-84b0-7627b4b78457" containerName="placement-log" containerID="cri-o://b20d2f86ea05804700f4777cbe97ce191a4645edded5f4dcbcc633866e8b2536" gracePeriod=30 Jan 31 04:06:37 crc kubenswrapper[4827]: I0131 04:06:37.964345 4827 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/placement-69d4f5d848-hjbmc" podUID="c0898c62-7d0f-447a-84b0-7627b4b78457" containerName="placement-api" containerID="cri-o://1361b3a6a3130a7bf2a34329ee1f22ef50673a80037a4802feebfce21bc7579e" gracePeriod=30 Jan 31 04:06:38 crc kubenswrapper[4827]: I0131 04:06:38.019138 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"0b0acb78-5ef0-41e1-8d1a-1f4329c6e07a","Type":"ContainerStarted","Data":"1300dc9289418101828461e018210d0fd7e233c1d0d4eb15c287f91163f34fd4"} Jan 31 04:06:38 crc kubenswrapper[4827]: I0131 04:06:38.019660 4827 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Jan 31 04:06:38 crc kubenswrapper[4827]: I0131 04:06:38.044968 4827 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=1.458052766 podStartE2EDuration="6.044954429s" podCreationTimestamp="2026-01-31 04:06:32 +0000 UTC" firstStartedPulling="2026-01-31 04:06:32.833023678 +0000 UTC m=+1185.520104127" lastFinishedPulling="2026-01-31 04:06:37.419925351 +0000 UTC m=+1190.107005790" observedRunningTime="2026-01-31 04:06:38.035721677 +0000 UTC m=+1190.722802126" watchObservedRunningTime="2026-01-31 
04:06:38.044954429 +0000 UTC m=+1190.732034878" Jan 31 04:06:38 crc kubenswrapper[4827]: I0131 04:06:38.102581 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-db-create-jcb2z"] Jan 31 04:06:38 crc kubenswrapper[4827]: I0131 04:06:38.225962 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-db-create-rsvl4"] Jan 31 04:06:38 crc kubenswrapper[4827]: I0131 04:06:38.396982 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-051e-account-create-update-bdjbn"] Jan 31 04:06:38 crc kubenswrapper[4827]: I0131 04:06:38.445017 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-db-create-ckptz"] Jan 31 04:06:38 crc kubenswrapper[4827]: I0131 04:06:38.470601 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-05ff-account-create-update-sgdb6"] Jan 31 04:06:38 crc kubenswrapper[4827]: I0131 04:06:38.802460 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-de72-account-create-update-h2jtp"] Jan 31 04:06:39 crc kubenswrapper[4827]: I0131 04:06:39.039651 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-05ff-account-create-update-sgdb6" event={"ID":"965e3f3a-3029-44dc-bdb2-9889b8f90fbe","Type":"ContainerStarted","Data":"663eb4744b59d4bce6c3dd1d4f8b6c5a46f37a515e993bd92241f912a81aea97"} Jan 31 04:06:39 crc kubenswrapper[4827]: I0131 04:06:39.039807 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-05ff-account-create-update-sgdb6" event={"ID":"965e3f3a-3029-44dc-bdb2-9889b8f90fbe","Type":"ContainerStarted","Data":"cf00305d775d4984af7d0edaee27c630ba0cb41736795807711bd8db3a5ecd30"} Jan 31 04:06:39 crc kubenswrapper[4827]: I0131 04:06:39.046498 4827 generic.go:334] "Generic (PLEG): container finished" podID="c0898c62-7d0f-447a-84b0-7627b4b78457" containerID="b20d2f86ea05804700f4777cbe97ce191a4645edded5f4dcbcc633866e8b2536" exitCode=143 Jan 31 
04:06:39 crc kubenswrapper[4827]: I0131 04:06:39.046575 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-69d4f5d848-hjbmc" event={"ID":"c0898c62-7d0f-447a-84b0-7627b4b78457","Type":"ContainerDied","Data":"b20d2f86ea05804700f4777cbe97ce191a4645edded5f4dcbcc633866e8b2536"} Jan 31 04:06:39 crc kubenswrapper[4827]: I0131 04:06:39.049173 4827 generic.go:334] "Generic (PLEG): container finished" podID="781f3a29-4f57-4592-b5c7-c492daf58f6c" containerID="4320f6f041a9d9e1712bd7029b4e366165bb1d147029047465b735cd12d5620d" exitCode=0 Jan 31 04:06:39 crc kubenswrapper[4827]: I0131 04:06:39.049218 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-rsvl4" event={"ID":"781f3a29-4f57-4592-b5c7-c492daf58f6c","Type":"ContainerDied","Data":"4320f6f041a9d9e1712bd7029b4e366165bb1d147029047465b735cd12d5620d"} Jan 31 04:06:39 crc kubenswrapper[4827]: I0131 04:06:39.049233 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-rsvl4" event={"ID":"781f3a29-4f57-4592-b5c7-c492daf58f6c","Type":"ContainerStarted","Data":"13705f2e9b3d5d3850369efc48dafd07c9d1fd17e7d2cdef8b1a632b9e502e37"} Jan 31 04:06:39 crc kubenswrapper[4827]: I0131 04:06:39.051561 4827 generic.go:334] "Generic (PLEG): container finished" podID="3d339ae6-8776-4cfc-8ed1-baea0d422d07" containerID="255c20d1b4302c4a2a4b1f51128524c5fc0025055214b94554a7fd5fb0148e46" exitCode=0 Jan 31 04:06:39 crc kubenswrapper[4827]: I0131 04:06:39.051641 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-jcb2z" event={"ID":"3d339ae6-8776-4cfc-8ed1-baea0d422d07","Type":"ContainerDied","Data":"255c20d1b4302c4a2a4b1f51128524c5fc0025055214b94554a7fd5fb0148e46"} Jan 31 04:06:39 crc kubenswrapper[4827]: I0131 04:06:39.051710 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-jcb2z" 
event={"ID":"3d339ae6-8776-4cfc-8ed1-baea0d422d07","Type":"ContainerStarted","Data":"5a976d3c56f1622e400779bda7f93b05978ea2c310bf47eddb459edef2f9e833"} Jan 31 04:06:39 crc kubenswrapper[4827]: I0131 04:06:39.055246 4827 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-05ff-account-create-update-sgdb6" podStartSLOduration=2.055230973 podStartE2EDuration="2.055230973s" podCreationTimestamp="2026-01-31 04:06:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 04:06:39.05510584 +0000 UTC m=+1191.742186289" watchObservedRunningTime="2026-01-31 04:06:39.055230973 +0000 UTC m=+1191.742311422" Jan 31 04:06:39 crc kubenswrapper[4827]: I0131 04:06:39.056962 4827 generic.go:334] "Generic (PLEG): container finished" podID="81308c8d-3b7b-43d2-9364-b8e732430dfd" containerID="923d1d9733a967a69a6ab240a89912116fc736c7ddbd55c624ccc26449046231" exitCode=0 Jan 31 04:06:39 crc kubenswrapper[4827]: I0131 04:06:39.057023 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-ckptz" event={"ID":"81308c8d-3b7b-43d2-9364-b8e732430dfd","Type":"ContainerDied","Data":"923d1d9733a967a69a6ab240a89912116fc736c7ddbd55c624ccc26449046231"} Jan 31 04:06:39 crc kubenswrapper[4827]: I0131 04:06:39.057048 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-ckptz" event={"ID":"81308c8d-3b7b-43d2-9364-b8e732430dfd","Type":"ContainerStarted","Data":"9e5de2944d8528a9a4650d0c091acc3c287c634b2877f5b1190a1e08b42a6ac9"} Jan 31 04:06:39 crc kubenswrapper[4827]: I0131 04:06:39.064660 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-de72-account-create-update-h2jtp" event={"ID":"313bafd6-600c-448b-aa68-7cf6a5742a68","Type":"ContainerStarted","Data":"a2ee41b03be62056bbfb829764ff685e5d407e417826e887a7f88f044e47856f"} Jan 31 04:06:39 crc kubenswrapper[4827]: I0131 
04:06:39.064701 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-de72-account-create-update-h2jtp" event={"ID":"313bafd6-600c-448b-aa68-7cf6a5742a68","Type":"ContainerStarted","Data":"952fcf41aba554f227251f917ab39687d89e022c8a89aea9221bee5a572d7dad"} Jan 31 04:06:39 crc kubenswrapper[4827]: I0131 04:06:39.071327 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-051e-account-create-update-bdjbn" event={"ID":"4c685922-00e2-4bd1-91d2-8810a5d45da3","Type":"ContainerStarted","Data":"207a0998d4ef930dda1db4d2f34029681d29c6de17e1e5691dd889131e4ce178"} Jan 31 04:06:39 crc kubenswrapper[4827]: I0131 04:06:39.071370 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-051e-account-create-update-bdjbn" event={"ID":"4c685922-00e2-4bd1-91d2-8810a5d45da3","Type":"ContainerStarted","Data":"1584ca5a0f2f98f7a87e30d1f12333b10106c96865f94e8d224d89cc4d15964f"} Jan 31 04:06:39 crc kubenswrapper[4827]: I0131 04:06:39.109338 4827 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-de72-account-create-update-h2jtp" podStartSLOduration=2.10932263 podStartE2EDuration="2.10932263s" podCreationTimestamp="2026-01-31 04:06:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 04:06:39.106924792 +0000 UTC m=+1191.794005251" watchObservedRunningTime="2026-01-31 04:06:39.10932263 +0000 UTC m=+1191.796403079" Jan 31 04:06:39 crc kubenswrapper[4827]: I0131 04:06:39.396947 4827 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 31 04:06:40 crc kubenswrapper[4827]: I0131 04:06:40.078647 4827 generic.go:334] "Generic (PLEG): container finished" podID="313bafd6-600c-448b-aa68-7cf6a5742a68" containerID="a2ee41b03be62056bbfb829764ff685e5d407e417826e887a7f88f044e47856f" exitCode=0 Jan 31 04:06:40 crc kubenswrapper[4827]: I0131 04:06:40.078739 4827 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-de72-account-create-update-h2jtp" event={"ID":"313bafd6-600c-448b-aa68-7cf6a5742a68","Type":"ContainerDied","Data":"a2ee41b03be62056bbfb829764ff685e5d407e417826e887a7f88f044e47856f"} Jan 31 04:06:40 crc kubenswrapper[4827]: I0131 04:06:40.080645 4827 generic.go:334] "Generic (PLEG): container finished" podID="4c685922-00e2-4bd1-91d2-8810a5d45da3" containerID="207a0998d4ef930dda1db4d2f34029681d29c6de17e1e5691dd889131e4ce178" exitCode=0 Jan 31 04:06:40 crc kubenswrapper[4827]: I0131 04:06:40.080707 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-051e-account-create-update-bdjbn" event={"ID":"4c685922-00e2-4bd1-91d2-8810a5d45da3","Type":"ContainerDied","Data":"207a0998d4ef930dda1db4d2f34029681d29c6de17e1e5691dd889131e4ce178"} Jan 31 04:06:40 crc kubenswrapper[4827]: I0131 04:06:40.081977 4827 generic.go:334] "Generic (PLEG): container finished" podID="965e3f3a-3029-44dc-bdb2-9889b8f90fbe" containerID="663eb4744b59d4bce6c3dd1d4f8b6c5a46f37a515e993bd92241f912a81aea97" exitCode=0 Jan 31 04:06:40 crc kubenswrapper[4827]: I0131 04:06:40.082168 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-05ff-account-create-update-sgdb6" event={"ID":"965e3f3a-3029-44dc-bdb2-9889b8f90fbe","Type":"ContainerDied","Data":"663eb4744b59d4bce6c3dd1d4f8b6c5a46f37a515e993bd92241f912a81aea97"} Jan 31 04:06:40 crc kubenswrapper[4827]: I0131 04:06:40.082281 4827 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="0b0acb78-5ef0-41e1-8d1a-1f4329c6e07a" containerName="ceilometer-central-agent" containerID="cri-o://06de984296c65210f4d7a9e638706b14e4ad00a9a28ad936cd0c1d31e356133e" gracePeriod=30 Jan 31 04:06:40 crc kubenswrapper[4827]: I0131 04:06:40.082365 4827 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="0b0acb78-5ef0-41e1-8d1a-1f4329c6e07a" 
containerName="proxy-httpd" containerID="cri-o://1300dc9289418101828461e018210d0fd7e233c1d0d4eb15c287f91163f34fd4" gracePeriod=30 Jan 31 04:06:40 crc kubenswrapper[4827]: I0131 04:06:40.082408 4827 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="0b0acb78-5ef0-41e1-8d1a-1f4329c6e07a" containerName="sg-core" containerID="cri-o://89cbb767c6db1df039c0ddca93f73535488680723449f3b9931ccb7bcf3ba3c4" gracePeriod=30 Jan 31 04:06:40 crc kubenswrapper[4827]: I0131 04:06:40.082440 4827 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="0b0acb78-5ef0-41e1-8d1a-1f4329c6e07a" containerName="ceilometer-notification-agent" containerID="cri-o://f830be3f0fea9fbb19807639071df46885faa4dc100ebf44dd8cc63e8fcbfd9b" gracePeriod=30 Jan 31 04:06:40 crc kubenswrapper[4827]: I0131 04:06:40.597200 4827 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/cinder-api-0" Jan 31 04:06:40 crc kubenswrapper[4827]: I0131 04:06:40.709386 4827 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-ckptz" Jan 31 04:06:40 crc kubenswrapper[4827]: I0131 04:06:40.731675 4827 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-rsvl4" Jan 31 04:06:40 crc kubenswrapper[4827]: I0131 04:06:40.734699 4827 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-051e-account-create-update-bdjbn" Jan 31 04:06:40 crc kubenswrapper[4827]: I0131 04:06:40.742790 4827 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-db-create-jcb2z" Jan 31 04:06:40 crc kubenswrapper[4827]: I0131 04:06:40.818533 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3d339ae6-8776-4cfc-8ed1-baea0d422d07-operator-scripts\") pod \"3d339ae6-8776-4cfc-8ed1-baea0d422d07\" (UID: \"3d339ae6-8776-4cfc-8ed1-baea0d422d07\") " Jan 31 04:06:40 crc kubenswrapper[4827]: I0131 04:06:40.818608 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/781f3a29-4f57-4592-b5c7-c492daf58f6c-operator-scripts\") pod \"781f3a29-4f57-4592-b5c7-c492daf58f6c\" (UID: \"781f3a29-4f57-4592-b5c7-c492daf58f6c\") " Jan 31 04:06:40 crc kubenswrapper[4827]: I0131 04:06:40.818637 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4k96d\" (UniqueName: \"kubernetes.io/projected/781f3a29-4f57-4592-b5c7-c492daf58f6c-kube-api-access-4k96d\") pod \"781f3a29-4f57-4592-b5c7-c492daf58f6c\" (UID: \"781f3a29-4f57-4592-b5c7-c492daf58f6c\") " Jan 31 04:06:40 crc kubenswrapper[4827]: I0131 04:06:40.818683 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4c685922-00e2-4bd1-91d2-8810a5d45da3-operator-scripts\") pod \"4c685922-00e2-4bd1-91d2-8810a5d45da3\" (UID: \"4c685922-00e2-4bd1-91d2-8810a5d45da3\") " Jan 31 04:06:40 crc kubenswrapper[4827]: I0131 04:06:40.818723 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/81308c8d-3b7b-43d2-9364-b8e732430dfd-operator-scripts\") pod \"81308c8d-3b7b-43d2-9364-b8e732430dfd\" (UID: \"81308c8d-3b7b-43d2-9364-b8e732430dfd\") " Jan 31 04:06:40 crc kubenswrapper[4827]: I0131 04:06:40.818764 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume 
started for volume \"kube-api-access-fkl8h\" (UniqueName: \"kubernetes.io/projected/81308c8d-3b7b-43d2-9364-b8e732430dfd-kube-api-access-fkl8h\") pod \"81308c8d-3b7b-43d2-9364-b8e732430dfd\" (UID: \"81308c8d-3b7b-43d2-9364-b8e732430dfd\") " Jan 31 04:06:40 crc kubenswrapper[4827]: I0131 04:06:40.818798 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w5vrm\" (UniqueName: \"kubernetes.io/projected/4c685922-00e2-4bd1-91d2-8810a5d45da3-kube-api-access-w5vrm\") pod \"4c685922-00e2-4bd1-91d2-8810a5d45da3\" (UID: \"4c685922-00e2-4bd1-91d2-8810a5d45da3\") " Jan 31 04:06:40 crc kubenswrapper[4827]: I0131 04:06:40.818827 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kqkd9\" (UniqueName: \"kubernetes.io/projected/3d339ae6-8776-4cfc-8ed1-baea0d422d07-kube-api-access-kqkd9\") pod \"3d339ae6-8776-4cfc-8ed1-baea0d422d07\" (UID: \"3d339ae6-8776-4cfc-8ed1-baea0d422d07\") " Jan 31 04:06:40 crc kubenswrapper[4827]: I0131 04:06:40.819411 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/781f3a29-4f57-4592-b5c7-c492daf58f6c-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "781f3a29-4f57-4592-b5c7-c492daf58f6c" (UID: "781f3a29-4f57-4592-b5c7-c492daf58f6c"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 04:06:40 crc kubenswrapper[4827]: I0131 04:06:40.819427 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/81308c8d-3b7b-43d2-9364-b8e732430dfd-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "81308c8d-3b7b-43d2-9364-b8e732430dfd" (UID: "81308c8d-3b7b-43d2-9364-b8e732430dfd"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 04:06:40 crc kubenswrapper[4827]: I0131 04:06:40.819546 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4c685922-00e2-4bd1-91d2-8810a5d45da3-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "4c685922-00e2-4bd1-91d2-8810a5d45da3" (UID: "4c685922-00e2-4bd1-91d2-8810a5d45da3"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 04:06:40 crc kubenswrapper[4827]: I0131 04:06:40.819825 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3d339ae6-8776-4cfc-8ed1-baea0d422d07-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "3d339ae6-8776-4cfc-8ed1-baea0d422d07" (UID: "3d339ae6-8776-4cfc-8ed1-baea0d422d07"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 04:06:40 crc kubenswrapper[4827]: I0131 04:06:40.825513 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4c685922-00e2-4bd1-91d2-8810a5d45da3-kube-api-access-w5vrm" (OuterVolumeSpecName: "kube-api-access-w5vrm") pod "4c685922-00e2-4bd1-91d2-8810a5d45da3" (UID: "4c685922-00e2-4bd1-91d2-8810a5d45da3"). InnerVolumeSpecName "kube-api-access-w5vrm". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 04:06:40 crc kubenswrapper[4827]: I0131 04:06:40.826576 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/781f3a29-4f57-4592-b5c7-c492daf58f6c-kube-api-access-4k96d" (OuterVolumeSpecName: "kube-api-access-4k96d") pod "781f3a29-4f57-4592-b5c7-c492daf58f6c" (UID: "781f3a29-4f57-4592-b5c7-c492daf58f6c"). InnerVolumeSpecName "kube-api-access-4k96d". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 04:06:40 crc kubenswrapper[4827]: I0131 04:06:40.828264 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/81308c8d-3b7b-43d2-9364-b8e732430dfd-kube-api-access-fkl8h" (OuterVolumeSpecName: "kube-api-access-fkl8h") pod "81308c8d-3b7b-43d2-9364-b8e732430dfd" (UID: "81308c8d-3b7b-43d2-9364-b8e732430dfd"). InnerVolumeSpecName "kube-api-access-fkl8h". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 04:06:40 crc kubenswrapper[4827]: I0131 04:06:40.834326 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3d339ae6-8776-4cfc-8ed1-baea0d422d07-kube-api-access-kqkd9" (OuterVolumeSpecName: "kube-api-access-kqkd9") pod "3d339ae6-8776-4cfc-8ed1-baea0d422d07" (UID: "3d339ae6-8776-4cfc-8ed1-baea0d422d07"). InnerVolumeSpecName "kube-api-access-kqkd9". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 04:06:40 crc kubenswrapper[4827]: I0131 04:06:40.927974 4827 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/81308c8d-3b7b-43d2-9364-b8e732430dfd-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 31 04:06:40 crc kubenswrapper[4827]: I0131 04:06:40.928017 4827 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fkl8h\" (UniqueName: \"kubernetes.io/projected/81308c8d-3b7b-43d2-9364-b8e732430dfd-kube-api-access-fkl8h\") on node \"crc\" DevicePath \"\"" Jan 31 04:06:40 crc kubenswrapper[4827]: I0131 04:06:40.928032 4827 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w5vrm\" (UniqueName: \"kubernetes.io/projected/4c685922-00e2-4bd1-91d2-8810a5d45da3-kube-api-access-w5vrm\") on node \"crc\" DevicePath \"\"" Jan 31 04:06:40 crc kubenswrapper[4827]: I0131 04:06:40.928043 4827 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kqkd9\" (UniqueName: 
\"kubernetes.io/projected/3d339ae6-8776-4cfc-8ed1-baea0d422d07-kube-api-access-kqkd9\") on node \"crc\" DevicePath \"\"" Jan 31 04:06:40 crc kubenswrapper[4827]: I0131 04:06:40.928055 4827 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3d339ae6-8776-4cfc-8ed1-baea0d422d07-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 31 04:06:40 crc kubenswrapper[4827]: I0131 04:06:40.928066 4827 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/781f3a29-4f57-4592-b5c7-c492daf58f6c-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 31 04:06:40 crc kubenswrapper[4827]: I0131 04:06:40.928076 4827 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4k96d\" (UniqueName: \"kubernetes.io/projected/781f3a29-4f57-4592-b5c7-c492daf58f6c-kube-api-access-4k96d\") on node \"crc\" DevicePath \"\"" Jan 31 04:06:40 crc kubenswrapper[4827]: I0131 04:06:40.928087 4827 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4c685922-00e2-4bd1-91d2-8810a5d45da3-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 31 04:06:40 crc kubenswrapper[4827]: I0131 04:06:40.949861 4827 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 31 04:06:41 crc kubenswrapper[4827]: I0131 04:06:41.029311 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0b0acb78-5ef0-41e1-8d1a-1f4329c6e07a-config-data\") pod \"0b0acb78-5ef0-41e1-8d1a-1f4329c6e07a\" (UID: \"0b0acb78-5ef0-41e1-8d1a-1f4329c6e07a\") " Jan 31 04:06:41 crc kubenswrapper[4827]: I0131 04:06:41.029383 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0b0acb78-5ef0-41e1-8d1a-1f4329c6e07a-combined-ca-bundle\") pod \"0b0acb78-5ef0-41e1-8d1a-1f4329c6e07a\" (UID: \"0b0acb78-5ef0-41e1-8d1a-1f4329c6e07a\") " Jan 31 04:06:41 crc kubenswrapper[4827]: I0131 04:06:41.029425 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dfs56\" (UniqueName: \"kubernetes.io/projected/0b0acb78-5ef0-41e1-8d1a-1f4329c6e07a-kube-api-access-dfs56\") pod \"0b0acb78-5ef0-41e1-8d1a-1f4329c6e07a\" (UID: \"0b0acb78-5ef0-41e1-8d1a-1f4329c6e07a\") " Jan 31 04:06:41 crc kubenswrapper[4827]: I0131 04:06:41.029563 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0b0acb78-5ef0-41e1-8d1a-1f4329c6e07a-run-httpd\") pod \"0b0acb78-5ef0-41e1-8d1a-1f4329c6e07a\" (UID: \"0b0acb78-5ef0-41e1-8d1a-1f4329c6e07a\") " Jan 31 04:06:41 crc kubenswrapper[4827]: I0131 04:06:41.029601 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/0b0acb78-5ef0-41e1-8d1a-1f4329c6e07a-sg-core-conf-yaml\") pod \"0b0acb78-5ef0-41e1-8d1a-1f4329c6e07a\" (UID: \"0b0acb78-5ef0-41e1-8d1a-1f4329c6e07a\") " Jan 31 04:06:41 crc kubenswrapper[4827]: I0131 04:06:41.029683 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: 
\"kubernetes.io/empty-dir/0b0acb78-5ef0-41e1-8d1a-1f4329c6e07a-log-httpd\") pod \"0b0acb78-5ef0-41e1-8d1a-1f4329c6e07a\" (UID: \"0b0acb78-5ef0-41e1-8d1a-1f4329c6e07a\") " Jan 31 04:06:41 crc kubenswrapper[4827]: I0131 04:06:41.029741 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0b0acb78-5ef0-41e1-8d1a-1f4329c6e07a-scripts\") pod \"0b0acb78-5ef0-41e1-8d1a-1f4329c6e07a\" (UID: \"0b0acb78-5ef0-41e1-8d1a-1f4329c6e07a\") " Jan 31 04:06:41 crc kubenswrapper[4827]: I0131 04:06:41.030051 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0b0acb78-5ef0-41e1-8d1a-1f4329c6e07a-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "0b0acb78-5ef0-41e1-8d1a-1f4329c6e07a" (UID: "0b0acb78-5ef0-41e1-8d1a-1f4329c6e07a"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 04:06:41 crc kubenswrapper[4827]: I0131 04:06:41.030418 4827 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0b0acb78-5ef0-41e1-8d1a-1f4329c6e07a-run-httpd\") on node \"crc\" DevicePath \"\"" Jan 31 04:06:41 crc kubenswrapper[4827]: I0131 04:06:41.030451 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0b0acb78-5ef0-41e1-8d1a-1f4329c6e07a-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "0b0acb78-5ef0-41e1-8d1a-1f4329c6e07a" (UID: "0b0acb78-5ef0-41e1-8d1a-1f4329c6e07a"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 04:06:41 crc kubenswrapper[4827]: I0131 04:06:41.034438 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b0acb78-5ef0-41e1-8d1a-1f4329c6e07a-kube-api-access-dfs56" (OuterVolumeSpecName: "kube-api-access-dfs56") pod "0b0acb78-5ef0-41e1-8d1a-1f4329c6e07a" (UID: "0b0acb78-5ef0-41e1-8d1a-1f4329c6e07a"). 
InnerVolumeSpecName "kube-api-access-dfs56". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 04:06:41 crc kubenswrapper[4827]: I0131 04:06:41.034806 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b0acb78-5ef0-41e1-8d1a-1f4329c6e07a-scripts" (OuterVolumeSpecName: "scripts") pod "0b0acb78-5ef0-41e1-8d1a-1f4329c6e07a" (UID: "0b0acb78-5ef0-41e1-8d1a-1f4329c6e07a"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 04:06:41 crc kubenswrapper[4827]: I0131 04:06:41.058259 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b0acb78-5ef0-41e1-8d1a-1f4329c6e07a-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "0b0acb78-5ef0-41e1-8d1a-1f4329c6e07a" (UID: "0b0acb78-5ef0-41e1-8d1a-1f4329c6e07a"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 04:06:41 crc kubenswrapper[4827]: I0131 04:06:41.099785 4827 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-db-create-rsvl4" Jan 31 04:06:41 crc kubenswrapper[4827]: I0131 04:06:41.101987 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-rsvl4" event={"ID":"781f3a29-4f57-4592-b5c7-c492daf58f6c","Type":"ContainerDied","Data":"13705f2e9b3d5d3850369efc48dafd07c9d1fd17e7d2cdef8b1a632b9e502e37"} Jan 31 04:06:41 crc kubenswrapper[4827]: I0131 04:06:41.102038 4827 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="13705f2e9b3d5d3850369efc48dafd07c9d1fd17e7d2cdef8b1a632b9e502e37" Jan 31 04:06:41 crc kubenswrapper[4827]: I0131 04:06:41.102437 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b0acb78-5ef0-41e1-8d1a-1f4329c6e07a-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "0b0acb78-5ef0-41e1-8d1a-1f4329c6e07a" (UID: "0b0acb78-5ef0-41e1-8d1a-1f4329c6e07a"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 04:06:41 crc kubenswrapper[4827]: I0131 04:06:41.104070 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-jcb2z" event={"ID":"3d339ae6-8776-4cfc-8ed1-baea0d422d07","Type":"ContainerDied","Data":"5a976d3c56f1622e400779bda7f93b05978ea2c310bf47eddb459edef2f9e833"} Jan 31 04:06:41 crc kubenswrapper[4827]: I0131 04:06:41.104108 4827 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5a976d3c56f1622e400779bda7f93b05978ea2c310bf47eddb459edef2f9e833" Jan 31 04:06:41 crc kubenswrapper[4827]: I0131 04:06:41.104163 4827 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-jcb2z" Jan 31 04:06:41 crc kubenswrapper[4827]: I0131 04:06:41.107420 4827 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-db-create-ckptz" Jan 31 04:06:41 crc kubenswrapper[4827]: I0131 04:06:41.107407 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-ckptz" event={"ID":"81308c8d-3b7b-43d2-9364-b8e732430dfd","Type":"ContainerDied","Data":"9e5de2944d8528a9a4650d0c091acc3c287c634b2877f5b1190a1e08b42a6ac9"} Jan 31 04:06:41 crc kubenswrapper[4827]: I0131 04:06:41.107570 4827 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9e5de2944d8528a9a4650d0c091acc3c287c634b2877f5b1190a1e08b42a6ac9" Jan 31 04:06:41 crc kubenswrapper[4827]: I0131 04:06:41.109431 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-051e-account-create-update-bdjbn" event={"ID":"4c685922-00e2-4bd1-91d2-8810a5d45da3","Type":"ContainerDied","Data":"1584ca5a0f2f98f7a87e30d1f12333b10106c96865f94e8d224d89cc4d15964f"} Jan 31 04:06:41 crc kubenswrapper[4827]: I0131 04:06:41.109459 4827 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1584ca5a0f2f98f7a87e30d1f12333b10106c96865f94e8d224d89cc4d15964f" Jan 31 04:06:41 crc kubenswrapper[4827]: I0131 04:06:41.109642 4827 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-051e-account-create-update-bdjbn" Jan 31 04:06:41 crc kubenswrapper[4827]: I0131 04:06:41.114448 4827 generic.go:334] "Generic (PLEG): container finished" podID="0b0acb78-5ef0-41e1-8d1a-1f4329c6e07a" containerID="1300dc9289418101828461e018210d0fd7e233c1d0d4eb15c287f91163f34fd4" exitCode=0 Jan 31 04:06:41 crc kubenswrapper[4827]: I0131 04:06:41.114496 4827 generic.go:334] "Generic (PLEG): container finished" podID="0b0acb78-5ef0-41e1-8d1a-1f4329c6e07a" containerID="89cbb767c6db1df039c0ddca93f73535488680723449f3b9931ccb7bcf3ba3c4" exitCode=2 Jan 31 04:06:41 crc kubenswrapper[4827]: I0131 04:06:41.114506 4827 generic.go:334] "Generic (PLEG): container finished" podID="0b0acb78-5ef0-41e1-8d1a-1f4329c6e07a" containerID="f830be3f0fea9fbb19807639071df46885faa4dc100ebf44dd8cc63e8fcbfd9b" exitCode=0 Jan 31 04:06:41 crc kubenswrapper[4827]: I0131 04:06:41.114514 4827 generic.go:334] "Generic (PLEG): container finished" podID="0b0acb78-5ef0-41e1-8d1a-1f4329c6e07a" containerID="06de984296c65210f4d7a9e638706b14e4ad00a9a28ad936cd0c1d31e356133e" exitCode=0 Jan 31 04:06:41 crc kubenswrapper[4827]: I0131 04:06:41.114970 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"0b0acb78-5ef0-41e1-8d1a-1f4329c6e07a","Type":"ContainerDied","Data":"1300dc9289418101828461e018210d0fd7e233c1d0d4eb15c287f91163f34fd4"} Jan 31 04:06:41 crc kubenswrapper[4827]: I0131 04:06:41.115089 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"0b0acb78-5ef0-41e1-8d1a-1f4329c6e07a","Type":"ContainerDied","Data":"89cbb767c6db1df039c0ddca93f73535488680723449f3b9931ccb7bcf3ba3c4"} Jan 31 04:06:41 crc kubenswrapper[4827]: I0131 04:06:41.115107 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"0b0acb78-5ef0-41e1-8d1a-1f4329c6e07a","Type":"ContainerDied","Data":"f830be3f0fea9fbb19807639071df46885faa4dc100ebf44dd8cc63e8fcbfd9b"} Jan 
31 04:06:41 crc kubenswrapper[4827]: I0131 04:06:41.115120 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"0b0acb78-5ef0-41e1-8d1a-1f4329c6e07a","Type":"ContainerDied","Data":"06de984296c65210f4d7a9e638706b14e4ad00a9a28ad936cd0c1d31e356133e"} Jan 31 04:06:41 crc kubenswrapper[4827]: I0131 04:06:41.115133 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"0b0acb78-5ef0-41e1-8d1a-1f4329c6e07a","Type":"ContainerDied","Data":"e95f356b558d4fb3da0b06ea6b45a87ef991a0f7fadf7c813bf99397e83f2f7e"} Jan 31 04:06:41 crc kubenswrapper[4827]: I0131 04:06:41.115164 4827 scope.go:117] "RemoveContainer" containerID="1300dc9289418101828461e018210d0fd7e233c1d0d4eb15c287f91163f34fd4" Jan 31 04:06:41 crc kubenswrapper[4827]: I0131 04:06:41.115202 4827 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 31 04:06:41 crc kubenswrapper[4827]: I0131 04:06:41.132400 4827 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0b0acb78-5ef0-41e1-8d1a-1f4329c6e07a-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 31 04:06:41 crc kubenswrapper[4827]: I0131 04:06:41.132497 4827 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dfs56\" (UniqueName: \"kubernetes.io/projected/0b0acb78-5ef0-41e1-8d1a-1f4329c6e07a-kube-api-access-dfs56\") on node \"crc\" DevicePath \"\"" Jan 31 04:06:41 crc kubenswrapper[4827]: I0131 04:06:41.132512 4827 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/0b0acb78-5ef0-41e1-8d1a-1f4329c6e07a-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Jan 31 04:06:41 crc kubenswrapper[4827]: I0131 04:06:41.132525 4827 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0b0acb78-5ef0-41e1-8d1a-1f4329c6e07a-log-httpd\") on node 
\"crc\" DevicePath \"\"" Jan 31 04:06:41 crc kubenswrapper[4827]: I0131 04:06:41.132537 4827 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0b0acb78-5ef0-41e1-8d1a-1f4329c6e07a-scripts\") on node \"crc\" DevicePath \"\"" Jan 31 04:06:41 crc kubenswrapper[4827]: I0131 04:06:41.141079 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b0acb78-5ef0-41e1-8d1a-1f4329c6e07a-config-data" (OuterVolumeSpecName: "config-data") pod "0b0acb78-5ef0-41e1-8d1a-1f4329c6e07a" (UID: "0b0acb78-5ef0-41e1-8d1a-1f4329c6e07a"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 04:06:41 crc kubenswrapper[4827]: I0131 04:06:41.195245 4827 scope.go:117] "RemoveContainer" containerID="89cbb767c6db1df039c0ddca93f73535488680723449f3b9931ccb7bcf3ba3c4" Jan 31 04:06:41 crc kubenswrapper[4827]: I0131 04:06:41.233869 4827 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0b0acb78-5ef0-41e1-8d1a-1f4329c6e07a-config-data\") on node \"crc\" DevicePath \"\"" Jan 31 04:06:41 crc kubenswrapper[4827]: I0131 04:06:41.235549 4827 scope.go:117] "RemoveContainer" containerID="f830be3f0fea9fbb19807639071df46885faa4dc100ebf44dd8cc63e8fcbfd9b" Jan 31 04:06:41 crc kubenswrapper[4827]: I0131 04:06:41.261001 4827 scope.go:117] "RemoveContainer" containerID="06de984296c65210f4d7a9e638706b14e4ad00a9a28ad936cd0c1d31e356133e" Jan 31 04:06:41 crc kubenswrapper[4827]: I0131 04:06:41.287463 4827 scope.go:117] "RemoveContainer" containerID="1300dc9289418101828461e018210d0fd7e233c1d0d4eb15c287f91163f34fd4" Jan 31 04:06:41 crc kubenswrapper[4827]: E0131 04:06:41.288658 4827 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1300dc9289418101828461e018210d0fd7e233c1d0d4eb15c287f91163f34fd4\": container with ID starting with 
1300dc9289418101828461e018210d0fd7e233c1d0d4eb15c287f91163f34fd4 not found: ID does not exist" containerID="1300dc9289418101828461e018210d0fd7e233c1d0d4eb15c287f91163f34fd4" Jan 31 04:06:41 crc kubenswrapper[4827]: I0131 04:06:41.288711 4827 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1300dc9289418101828461e018210d0fd7e233c1d0d4eb15c287f91163f34fd4"} err="failed to get container status \"1300dc9289418101828461e018210d0fd7e233c1d0d4eb15c287f91163f34fd4\": rpc error: code = NotFound desc = could not find container \"1300dc9289418101828461e018210d0fd7e233c1d0d4eb15c287f91163f34fd4\": container with ID starting with 1300dc9289418101828461e018210d0fd7e233c1d0d4eb15c287f91163f34fd4 not found: ID does not exist" Jan 31 04:06:41 crc kubenswrapper[4827]: I0131 04:06:41.288743 4827 scope.go:117] "RemoveContainer" containerID="89cbb767c6db1df039c0ddca93f73535488680723449f3b9931ccb7bcf3ba3c4" Jan 31 04:06:41 crc kubenswrapper[4827]: E0131 04:06:41.289094 4827 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"89cbb767c6db1df039c0ddca93f73535488680723449f3b9931ccb7bcf3ba3c4\": container with ID starting with 89cbb767c6db1df039c0ddca93f73535488680723449f3b9931ccb7bcf3ba3c4 not found: ID does not exist" containerID="89cbb767c6db1df039c0ddca93f73535488680723449f3b9931ccb7bcf3ba3c4" Jan 31 04:06:41 crc kubenswrapper[4827]: I0131 04:06:41.289115 4827 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"89cbb767c6db1df039c0ddca93f73535488680723449f3b9931ccb7bcf3ba3c4"} err="failed to get container status \"89cbb767c6db1df039c0ddca93f73535488680723449f3b9931ccb7bcf3ba3c4\": rpc error: code = NotFound desc = could not find container \"89cbb767c6db1df039c0ddca93f73535488680723449f3b9931ccb7bcf3ba3c4\": container with ID starting with 89cbb767c6db1df039c0ddca93f73535488680723449f3b9931ccb7bcf3ba3c4 not found: ID does not 
exist" Jan 31 04:06:41 crc kubenswrapper[4827]: I0131 04:06:41.289131 4827 scope.go:117] "RemoveContainer" containerID="f830be3f0fea9fbb19807639071df46885faa4dc100ebf44dd8cc63e8fcbfd9b" Jan 31 04:06:41 crc kubenswrapper[4827]: E0131 04:06:41.289394 4827 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f830be3f0fea9fbb19807639071df46885faa4dc100ebf44dd8cc63e8fcbfd9b\": container with ID starting with f830be3f0fea9fbb19807639071df46885faa4dc100ebf44dd8cc63e8fcbfd9b not found: ID does not exist" containerID="f830be3f0fea9fbb19807639071df46885faa4dc100ebf44dd8cc63e8fcbfd9b" Jan 31 04:06:41 crc kubenswrapper[4827]: I0131 04:06:41.289413 4827 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f830be3f0fea9fbb19807639071df46885faa4dc100ebf44dd8cc63e8fcbfd9b"} err="failed to get container status \"f830be3f0fea9fbb19807639071df46885faa4dc100ebf44dd8cc63e8fcbfd9b\": rpc error: code = NotFound desc = could not find container \"f830be3f0fea9fbb19807639071df46885faa4dc100ebf44dd8cc63e8fcbfd9b\": container with ID starting with f830be3f0fea9fbb19807639071df46885faa4dc100ebf44dd8cc63e8fcbfd9b not found: ID does not exist" Jan 31 04:06:41 crc kubenswrapper[4827]: I0131 04:06:41.289428 4827 scope.go:117] "RemoveContainer" containerID="06de984296c65210f4d7a9e638706b14e4ad00a9a28ad936cd0c1d31e356133e" Jan 31 04:06:41 crc kubenswrapper[4827]: E0131 04:06:41.289656 4827 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"06de984296c65210f4d7a9e638706b14e4ad00a9a28ad936cd0c1d31e356133e\": container with ID starting with 06de984296c65210f4d7a9e638706b14e4ad00a9a28ad936cd0c1d31e356133e not found: ID does not exist" containerID="06de984296c65210f4d7a9e638706b14e4ad00a9a28ad936cd0c1d31e356133e" Jan 31 04:06:41 crc kubenswrapper[4827]: I0131 04:06:41.289673 4827 pod_container_deletor.go:53] 
"DeleteContainer returned error" containerID={"Type":"cri-o","ID":"06de984296c65210f4d7a9e638706b14e4ad00a9a28ad936cd0c1d31e356133e"} err="failed to get container status \"06de984296c65210f4d7a9e638706b14e4ad00a9a28ad936cd0c1d31e356133e\": rpc error: code = NotFound desc = could not find container \"06de984296c65210f4d7a9e638706b14e4ad00a9a28ad936cd0c1d31e356133e\": container with ID starting with 06de984296c65210f4d7a9e638706b14e4ad00a9a28ad936cd0c1d31e356133e not found: ID does not exist" Jan 31 04:06:41 crc kubenswrapper[4827]: I0131 04:06:41.289690 4827 scope.go:117] "RemoveContainer" containerID="1300dc9289418101828461e018210d0fd7e233c1d0d4eb15c287f91163f34fd4" Jan 31 04:06:41 crc kubenswrapper[4827]: I0131 04:06:41.289969 4827 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1300dc9289418101828461e018210d0fd7e233c1d0d4eb15c287f91163f34fd4"} err="failed to get container status \"1300dc9289418101828461e018210d0fd7e233c1d0d4eb15c287f91163f34fd4\": rpc error: code = NotFound desc = could not find container \"1300dc9289418101828461e018210d0fd7e233c1d0d4eb15c287f91163f34fd4\": container with ID starting with 1300dc9289418101828461e018210d0fd7e233c1d0d4eb15c287f91163f34fd4 not found: ID does not exist" Jan 31 04:06:41 crc kubenswrapper[4827]: I0131 04:06:41.289997 4827 scope.go:117] "RemoveContainer" containerID="89cbb767c6db1df039c0ddca93f73535488680723449f3b9931ccb7bcf3ba3c4" Jan 31 04:06:41 crc kubenswrapper[4827]: I0131 04:06:41.290366 4827 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"89cbb767c6db1df039c0ddca93f73535488680723449f3b9931ccb7bcf3ba3c4"} err="failed to get container status \"89cbb767c6db1df039c0ddca93f73535488680723449f3b9931ccb7bcf3ba3c4\": rpc error: code = NotFound desc = could not find container \"89cbb767c6db1df039c0ddca93f73535488680723449f3b9931ccb7bcf3ba3c4\": container with ID starting with 
89cbb767c6db1df039c0ddca93f73535488680723449f3b9931ccb7bcf3ba3c4 not found: ID does not exist" Jan 31 04:06:41 crc kubenswrapper[4827]: I0131 04:06:41.290429 4827 scope.go:117] "RemoveContainer" containerID="f830be3f0fea9fbb19807639071df46885faa4dc100ebf44dd8cc63e8fcbfd9b" Jan 31 04:06:41 crc kubenswrapper[4827]: I0131 04:06:41.290842 4827 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f830be3f0fea9fbb19807639071df46885faa4dc100ebf44dd8cc63e8fcbfd9b"} err="failed to get container status \"f830be3f0fea9fbb19807639071df46885faa4dc100ebf44dd8cc63e8fcbfd9b\": rpc error: code = NotFound desc = could not find container \"f830be3f0fea9fbb19807639071df46885faa4dc100ebf44dd8cc63e8fcbfd9b\": container with ID starting with f830be3f0fea9fbb19807639071df46885faa4dc100ebf44dd8cc63e8fcbfd9b not found: ID does not exist" Jan 31 04:06:41 crc kubenswrapper[4827]: I0131 04:06:41.290899 4827 scope.go:117] "RemoveContainer" containerID="06de984296c65210f4d7a9e638706b14e4ad00a9a28ad936cd0c1d31e356133e" Jan 31 04:06:41 crc kubenswrapper[4827]: I0131 04:06:41.291271 4827 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"06de984296c65210f4d7a9e638706b14e4ad00a9a28ad936cd0c1d31e356133e"} err="failed to get container status \"06de984296c65210f4d7a9e638706b14e4ad00a9a28ad936cd0c1d31e356133e\": rpc error: code = NotFound desc = could not find container \"06de984296c65210f4d7a9e638706b14e4ad00a9a28ad936cd0c1d31e356133e\": container with ID starting with 06de984296c65210f4d7a9e638706b14e4ad00a9a28ad936cd0c1d31e356133e not found: ID does not exist" Jan 31 04:06:41 crc kubenswrapper[4827]: I0131 04:06:41.291310 4827 scope.go:117] "RemoveContainer" containerID="1300dc9289418101828461e018210d0fd7e233c1d0d4eb15c287f91163f34fd4" Jan 31 04:06:41 crc kubenswrapper[4827]: I0131 04:06:41.291677 4827 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"1300dc9289418101828461e018210d0fd7e233c1d0d4eb15c287f91163f34fd4"} err="failed to get container status \"1300dc9289418101828461e018210d0fd7e233c1d0d4eb15c287f91163f34fd4\": rpc error: code = NotFound desc = could not find container \"1300dc9289418101828461e018210d0fd7e233c1d0d4eb15c287f91163f34fd4\": container with ID starting with 1300dc9289418101828461e018210d0fd7e233c1d0d4eb15c287f91163f34fd4 not found: ID does not exist" Jan 31 04:06:41 crc kubenswrapper[4827]: I0131 04:06:41.291766 4827 scope.go:117] "RemoveContainer" containerID="89cbb767c6db1df039c0ddca93f73535488680723449f3b9931ccb7bcf3ba3c4" Jan 31 04:06:41 crc kubenswrapper[4827]: I0131 04:06:41.292131 4827 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"89cbb767c6db1df039c0ddca93f73535488680723449f3b9931ccb7bcf3ba3c4"} err="failed to get container status \"89cbb767c6db1df039c0ddca93f73535488680723449f3b9931ccb7bcf3ba3c4\": rpc error: code = NotFound desc = could not find container \"89cbb767c6db1df039c0ddca93f73535488680723449f3b9931ccb7bcf3ba3c4\": container with ID starting with 89cbb767c6db1df039c0ddca93f73535488680723449f3b9931ccb7bcf3ba3c4 not found: ID does not exist" Jan 31 04:06:41 crc kubenswrapper[4827]: I0131 04:06:41.292152 4827 scope.go:117] "RemoveContainer" containerID="f830be3f0fea9fbb19807639071df46885faa4dc100ebf44dd8cc63e8fcbfd9b" Jan 31 04:06:41 crc kubenswrapper[4827]: I0131 04:06:41.292454 4827 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f830be3f0fea9fbb19807639071df46885faa4dc100ebf44dd8cc63e8fcbfd9b"} err="failed to get container status \"f830be3f0fea9fbb19807639071df46885faa4dc100ebf44dd8cc63e8fcbfd9b\": rpc error: code = NotFound desc = could not find container \"f830be3f0fea9fbb19807639071df46885faa4dc100ebf44dd8cc63e8fcbfd9b\": container with ID starting with f830be3f0fea9fbb19807639071df46885faa4dc100ebf44dd8cc63e8fcbfd9b not found: ID does not 
exist" Jan 31 04:06:41 crc kubenswrapper[4827]: I0131 04:06:41.292472 4827 scope.go:117] "RemoveContainer" containerID="06de984296c65210f4d7a9e638706b14e4ad00a9a28ad936cd0c1d31e356133e" Jan 31 04:06:41 crc kubenswrapper[4827]: I0131 04:06:41.292805 4827 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"06de984296c65210f4d7a9e638706b14e4ad00a9a28ad936cd0c1d31e356133e"} err="failed to get container status \"06de984296c65210f4d7a9e638706b14e4ad00a9a28ad936cd0c1d31e356133e\": rpc error: code = NotFound desc = could not find container \"06de984296c65210f4d7a9e638706b14e4ad00a9a28ad936cd0c1d31e356133e\": container with ID starting with 06de984296c65210f4d7a9e638706b14e4ad00a9a28ad936cd0c1d31e356133e not found: ID does not exist" Jan 31 04:06:41 crc kubenswrapper[4827]: I0131 04:06:41.292830 4827 scope.go:117] "RemoveContainer" containerID="1300dc9289418101828461e018210d0fd7e233c1d0d4eb15c287f91163f34fd4" Jan 31 04:06:41 crc kubenswrapper[4827]: I0131 04:06:41.296136 4827 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1300dc9289418101828461e018210d0fd7e233c1d0d4eb15c287f91163f34fd4"} err="failed to get container status \"1300dc9289418101828461e018210d0fd7e233c1d0d4eb15c287f91163f34fd4\": rpc error: code = NotFound desc = could not find container \"1300dc9289418101828461e018210d0fd7e233c1d0d4eb15c287f91163f34fd4\": container with ID starting with 1300dc9289418101828461e018210d0fd7e233c1d0d4eb15c287f91163f34fd4 not found: ID does not exist" Jan 31 04:06:41 crc kubenswrapper[4827]: I0131 04:06:41.296159 4827 scope.go:117] "RemoveContainer" containerID="89cbb767c6db1df039c0ddca93f73535488680723449f3b9931ccb7bcf3ba3c4" Jan 31 04:06:41 crc kubenswrapper[4827]: I0131 04:06:41.297124 4827 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"89cbb767c6db1df039c0ddca93f73535488680723449f3b9931ccb7bcf3ba3c4"} err="failed to get container status 
\"89cbb767c6db1df039c0ddca93f73535488680723449f3b9931ccb7bcf3ba3c4\": rpc error: code = NotFound desc = could not find container \"89cbb767c6db1df039c0ddca93f73535488680723449f3b9931ccb7bcf3ba3c4\": container with ID starting with 89cbb767c6db1df039c0ddca93f73535488680723449f3b9931ccb7bcf3ba3c4 not found: ID does not exist" Jan 31 04:06:41 crc kubenswrapper[4827]: I0131 04:06:41.297142 4827 scope.go:117] "RemoveContainer" containerID="f830be3f0fea9fbb19807639071df46885faa4dc100ebf44dd8cc63e8fcbfd9b" Jan 31 04:06:41 crc kubenswrapper[4827]: I0131 04:06:41.297355 4827 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f830be3f0fea9fbb19807639071df46885faa4dc100ebf44dd8cc63e8fcbfd9b"} err="failed to get container status \"f830be3f0fea9fbb19807639071df46885faa4dc100ebf44dd8cc63e8fcbfd9b\": rpc error: code = NotFound desc = could not find container \"f830be3f0fea9fbb19807639071df46885faa4dc100ebf44dd8cc63e8fcbfd9b\": container with ID starting with f830be3f0fea9fbb19807639071df46885faa4dc100ebf44dd8cc63e8fcbfd9b not found: ID does not exist" Jan 31 04:06:41 crc kubenswrapper[4827]: I0131 04:06:41.297374 4827 scope.go:117] "RemoveContainer" containerID="06de984296c65210f4d7a9e638706b14e4ad00a9a28ad936cd0c1d31e356133e" Jan 31 04:06:41 crc kubenswrapper[4827]: I0131 04:06:41.297614 4827 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"06de984296c65210f4d7a9e638706b14e4ad00a9a28ad936cd0c1d31e356133e"} err="failed to get container status \"06de984296c65210f4d7a9e638706b14e4ad00a9a28ad936cd0c1d31e356133e\": rpc error: code = NotFound desc = could not find container \"06de984296c65210f4d7a9e638706b14e4ad00a9a28ad936cd0c1d31e356133e\": container with ID starting with 06de984296c65210f4d7a9e638706b14e4ad00a9a28ad936cd0c1d31e356133e not found: ID does not exist" Jan 31 04:06:41 crc kubenswrapper[4827]: I0131 04:06:41.420217 4827 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-05ff-account-create-update-sgdb6" Jan 31 04:06:41 crc kubenswrapper[4827]: I0131 04:06:41.448708 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/965e3f3a-3029-44dc-bdb2-9889b8f90fbe-operator-scripts\") pod \"965e3f3a-3029-44dc-bdb2-9889b8f90fbe\" (UID: \"965e3f3a-3029-44dc-bdb2-9889b8f90fbe\") " Jan 31 04:06:41 crc kubenswrapper[4827]: I0131 04:06:41.448807 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jwh4q\" (UniqueName: \"kubernetes.io/projected/965e3f3a-3029-44dc-bdb2-9889b8f90fbe-kube-api-access-jwh4q\") pod \"965e3f3a-3029-44dc-bdb2-9889b8f90fbe\" (UID: \"965e3f3a-3029-44dc-bdb2-9889b8f90fbe\") " Jan 31 04:06:41 crc kubenswrapper[4827]: I0131 04:06:41.449268 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/965e3f3a-3029-44dc-bdb2-9889b8f90fbe-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "965e3f3a-3029-44dc-bdb2-9889b8f90fbe" (UID: "965e3f3a-3029-44dc-bdb2-9889b8f90fbe"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 04:06:41 crc kubenswrapper[4827]: I0131 04:06:41.454399 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/965e3f3a-3029-44dc-bdb2-9889b8f90fbe-kube-api-access-jwh4q" (OuterVolumeSpecName: "kube-api-access-jwh4q") pod "965e3f3a-3029-44dc-bdb2-9889b8f90fbe" (UID: "965e3f3a-3029-44dc-bdb2-9889b8f90fbe"). InnerVolumeSpecName "kube-api-access-jwh4q". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 04:06:41 crc kubenswrapper[4827]: I0131 04:06:41.454799 4827 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jwh4q\" (UniqueName: \"kubernetes.io/projected/965e3f3a-3029-44dc-bdb2-9889b8f90fbe-kube-api-access-jwh4q\") on node \"crc\" DevicePath \"\"" Jan 31 04:06:41 crc kubenswrapper[4827]: I0131 04:06:41.454832 4827 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/965e3f3a-3029-44dc-bdb2-9889b8f90fbe-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 31 04:06:41 crc kubenswrapper[4827]: I0131 04:06:41.579322 4827 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 31 04:06:41 crc kubenswrapper[4827]: I0131 04:06:41.585133 4827 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Jan 31 04:06:41 crc kubenswrapper[4827]: I0131 04:06:41.585820 4827 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-de72-account-create-update-h2jtp" Jan 31 04:06:41 crc kubenswrapper[4827]: I0131 04:06:41.612567 4827 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Jan 31 04:06:41 crc kubenswrapper[4827]: E0131 04:06:41.612978 4827 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0b0acb78-5ef0-41e1-8d1a-1f4329c6e07a" containerName="proxy-httpd" Jan 31 04:06:41 crc kubenswrapper[4827]: I0131 04:06:41.612992 4827 state_mem.go:107] "Deleted CPUSet assignment" podUID="0b0acb78-5ef0-41e1-8d1a-1f4329c6e07a" containerName="proxy-httpd" Jan 31 04:06:41 crc kubenswrapper[4827]: E0131 04:06:41.613004 4827 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0b0acb78-5ef0-41e1-8d1a-1f4329c6e07a" containerName="ceilometer-notification-agent" Jan 31 04:06:41 crc kubenswrapper[4827]: I0131 04:06:41.613011 4827 state_mem.go:107] "Deleted CPUSet assignment" podUID="0b0acb78-5ef0-41e1-8d1a-1f4329c6e07a" containerName="ceilometer-notification-agent" Jan 31 04:06:41 crc kubenswrapper[4827]: E0131 04:06:41.613027 4827 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0b0acb78-5ef0-41e1-8d1a-1f4329c6e07a" containerName="sg-core" Jan 31 04:06:41 crc kubenswrapper[4827]: I0131 04:06:41.613033 4827 state_mem.go:107] "Deleted CPUSet assignment" podUID="0b0acb78-5ef0-41e1-8d1a-1f4329c6e07a" containerName="sg-core" Jan 31 04:06:41 crc kubenswrapper[4827]: E0131 04:06:41.613041 4827 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="781f3a29-4f57-4592-b5c7-c492daf58f6c" containerName="mariadb-database-create" Jan 31 04:06:41 crc kubenswrapper[4827]: I0131 04:06:41.613047 4827 state_mem.go:107] "Deleted CPUSet assignment" podUID="781f3a29-4f57-4592-b5c7-c492daf58f6c" containerName="mariadb-database-create" Jan 31 04:06:41 crc kubenswrapper[4827]: E0131 04:06:41.613058 4827 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="313bafd6-600c-448b-aa68-7cf6a5742a68" containerName="mariadb-account-create-update" Jan 31 04:06:41 crc kubenswrapper[4827]: I0131 04:06:41.613063 4827 state_mem.go:107] "Deleted CPUSet assignment" podUID="313bafd6-600c-448b-aa68-7cf6a5742a68" containerName="mariadb-account-create-update" Jan 31 04:06:41 crc kubenswrapper[4827]: E0131 04:06:41.613071 4827 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="965e3f3a-3029-44dc-bdb2-9889b8f90fbe" containerName="mariadb-account-create-update" Jan 31 04:06:41 crc kubenswrapper[4827]: I0131 04:06:41.613077 4827 state_mem.go:107] "Deleted CPUSet assignment" podUID="965e3f3a-3029-44dc-bdb2-9889b8f90fbe" containerName="mariadb-account-create-update" Jan 31 04:06:41 crc kubenswrapper[4827]: E0131 04:06:41.613086 4827 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4c685922-00e2-4bd1-91d2-8810a5d45da3" containerName="mariadb-account-create-update" Jan 31 04:06:41 crc kubenswrapper[4827]: I0131 04:06:41.613092 4827 state_mem.go:107] "Deleted CPUSet assignment" podUID="4c685922-00e2-4bd1-91d2-8810a5d45da3" containerName="mariadb-account-create-update" Jan 31 04:06:41 crc kubenswrapper[4827]: E0131 04:06:41.613106 4827 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0b0acb78-5ef0-41e1-8d1a-1f4329c6e07a" containerName="ceilometer-central-agent" Jan 31 04:06:41 crc kubenswrapper[4827]: I0131 04:06:41.613111 4827 state_mem.go:107] "Deleted CPUSet assignment" podUID="0b0acb78-5ef0-41e1-8d1a-1f4329c6e07a" containerName="ceilometer-central-agent" Jan 31 04:06:41 crc kubenswrapper[4827]: E0131 04:06:41.613119 4827 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3d339ae6-8776-4cfc-8ed1-baea0d422d07" containerName="mariadb-database-create" Jan 31 04:06:41 crc kubenswrapper[4827]: I0131 04:06:41.613125 4827 state_mem.go:107] "Deleted CPUSet assignment" podUID="3d339ae6-8776-4cfc-8ed1-baea0d422d07" containerName="mariadb-database-create" Jan 31 04:06:41 crc 
kubenswrapper[4827]: E0131 04:06:41.613136 4827 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="81308c8d-3b7b-43d2-9364-b8e732430dfd" containerName="mariadb-database-create" Jan 31 04:06:41 crc kubenswrapper[4827]: I0131 04:06:41.613142 4827 state_mem.go:107] "Deleted CPUSet assignment" podUID="81308c8d-3b7b-43d2-9364-b8e732430dfd" containerName="mariadb-database-create" Jan 31 04:06:41 crc kubenswrapper[4827]: I0131 04:06:41.613289 4827 memory_manager.go:354] "RemoveStaleState removing state" podUID="3d339ae6-8776-4cfc-8ed1-baea0d422d07" containerName="mariadb-database-create" Jan 31 04:06:41 crc kubenswrapper[4827]: I0131 04:06:41.613299 4827 memory_manager.go:354] "RemoveStaleState removing state" podUID="781f3a29-4f57-4592-b5c7-c492daf58f6c" containerName="mariadb-database-create" Jan 31 04:06:41 crc kubenswrapper[4827]: I0131 04:06:41.613311 4827 memory_manager.go:354] "RemoveStaleState removing state" podUID="965e3f3a-3029-44dc-bdb2-9889b8f90fbe" containerName="mariadb-account-create-update" Jan 31 04:06:41 crc kubenswrapper[4827]: I0131 04:06:41.613324 4827 memory_manager.go:354] "RemoveStaleState removing state" podUID="0b0acb78-5ef0-41e1-8d1a-1f4329c6e07a" containerName="proxy-httpd" Jan 31 04:06:41 crc kubenswrapper[4827]: I0131 04:06:41.613335 4827 memory_manager.go:354] "RemoveStaleState removing state" podUID="0b0acb78-5ef0-41e1-8d1a-1f4329c6e07a" containerName="sg-core" Jan 31 04:06:41 crc kubenswrapper[4827]: I0131 04:06:41.613345 4827 memory_manager.go:354] "RemoveStaleState removing state" podUID="0b0acb78-5ef0-41e1-8d1a-1f4329c6e07a" containerName="ceilometer-notification-agent" Jan 31 04:06:41 crc kubenswrapper[4827]: I0131 04:06:41.613353 4827 memory_manager.go:354] "RemoveStaleState removing state" podUID="4c685922-00e2-4bd1-91d2-8810a5d45da3" containerName="mariadb-account-create-update" Jan 31 04:06:41 crc kubenswrapper[4827]: I0131 04:06:41.613361 4827 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="81308c8d-3b7b-43d2-9364-b8e732430dfd" containerName="mariadb-database-create" Jan 31 04:06:41 crc kubenswrapper[4827]: I0131 04:06:41.613370 4827 memory_manager.go:354] "RemoveStaleState removing state" podUID="0b0acb78-5ef0-41e1-8d1a-1f4329c6e07a" containerName="ceilometer-central-agent" Jan 31 04:06:41 crc kubenswrapper[4827]: I0131 04:06:41.613378 4827 memory_manager.go:354] "RemoveStaleState removing state" podUID="313bafd6-600c-448b-aa68-7cf6a5742a68" containerName="mariadb-account-create-update" Jan 31 04:06:41 crc kubenswrapper[4827]: I0131 04:06:41.614955 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 31 04:06:41 crc kubenswrapper[4827]: I0131 04:06:41.617465 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Jan 31 04:06:41 crc kubenswrapper[4827]: I0131 04:06:41.617736 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Jan 31 04:06:41 crc kubenswrapper[4827]: I0131 04:06:41.638092 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 31 04:06:41 crc kubenswrapper[4827]: I0131 04:06:41.659722 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/313bafd6-600c-448b-aa68-7cf6a5742a68-operator-scripts\") pod \"313bafd6-600c-448b-aa68-7cf6a5742a68\" (UID: \"313bafd6-600c-448b-aa68-7cf6a5742a68\") " Jan 31 04:06:41 crc kubenswrapper[4827]: I0131 04:06:41.659996 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-g6gtw\" (UniqueName: \"kubernetes.io/projected/313bafd6-600c-448b-aa68-7cf6a5742a68-kube-api-access-g6gtw\") pod \"313bafd6-600c-448b-aa68-7cf6a5742a68\" (UID: \"313bafd6-600c-448b-aa68-7cf6a5742a68\") " Jan 31 04:06:41 crc kubenswrapper[4827]: I0131 04:06:41.660243 4827 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0c70cf0b-7e54-41ae-806b-0109d6780de7-log-httpd\") pod \"ceilometer-0\" (UID: \"0c70cf0b-7e54-41ae-806b-0109d6780de7\") " pod="openstack/ceilometer-0" Jan 31 04:06:41 crc kubenswrapper[4827]: I0131 04:06:41.660336 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0c70cf0b-7e54-41ae-806b-0109d6780de7-scripts\") pod \"ceilometer-0\" (UID: \"0c70cf0b-7e54-41ae-806b-0109d6780de7\") " pod="openstack/ceilometer-0" Jan 31 04:06:41 crc kubenswrapper[4827]: I0131 04:06:41.660456 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0c70cf0b-7e54-41ae-806b-0109d6780de7-config-data\") pod \"ceilometer-0\" (UID: \"0c70cf0b-7e54-41ae-806b-0109d6780de7\") " pod="openstack/ceilometer-0" Jan 31 04:06:41 crc kubenswrapper[4827]: I0131 04:06:41.660502 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0c70cf0b-7e54-41ae-806b-0109d6780de7-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"0c70cf0b-7e54-41ae-806b-0109d6780de7\") " pod="openstack/ceilometer-0" Jan 31 04:06:41 crc kubenswrapper[4827]: I0131 04:06:41.660533 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-24w4n\" (UniqueName: \"kubernetes.io/projected/0c70cf0b-7e54-41ae-806b-0109d6780de7-kube-api-access-24w4n\") pod \"ceilometer-0\" (UID: \"0c70cf0b-7e54-41ae-806b-0109d6780de7\") " pod="openstack/ceilometer-0" Jan 31 04:06:41 crc kubenswrapper[4827]: I0131 04:06:41.660552 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: 
\"kubernetes.io/secret/0c70cf0b-7e54-41ae-806b-0109d6780de7-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"0c70cf0b-7e54-41ae-806b-0109d6780de7\") " pod="openstack/ceilometer-0" Jan 31 04:06:41 crc kubenswrapper[4827]: I0131 04:06:41.660571 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0c70cf0b-7e54-41ae-806b-0109d6780de7-run-httpd\") pod \"ceilometer-0\" (UID: \"0c70cf0b-7e54-41ae-806b-0109d6780de7\") " pod="openstack/ceilometer-0" Jan 31 04:06:41 crc kubenswrapper[4827]: I0131 04:06:41.660928 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/313bafd6-600c-448b-aa68-7cf6a5742a68-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "313bafd6-600c-448b-aa68-7cf6a5742a68" (UID: "313bafd6-600c-448b-aa68-7cf6a5742a68"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 04:06:41 crc kubenswrapper[4827]: I0131 04:06:41.662722 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/313bafd6-600c-448b-aa68-7cf6a5742a68-kube-api-access-g6gtw" (OuterVolumeSpecName: "kube-api-access-g6gtw") pod "313bafd6-600c-448b-aa68-7cf6a5742a68" (UID: "313bafd6-600c-448b-aa68-7cf6a5742a68"). InnerVolumeSpecName "kube-api-access-g6gtw". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 04:06:41 crc kubenswrapper[4827]: I0131 04:06:41.698211 4827 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-69d4f5d848-hjbmc" Jan 31 04:06:41 crc kubenswrapper[4827]: I0131 04:06:41.761568 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c0898c62-7d0f-447a-84b0-7627b4b78457-scripts\") pod \"c0898c62-7d0f-447a-84b0-7627b4b78457\" (UID: \"c0898c62-7d0f-447a-84b0-7627b4b78457\") " Jan 31 04:06:41 crc kubenswrapper[4827]: I0131 04:06:41.761635 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c0898c62-7d0f-447a-84b0-7627b4b78457-logs\") pod \"c0898c62-7d0f-447a-84b0-7627b4b78457\" (UID: \"c0898c62-7d0f-447a-84b0-7627b4b78457\") " Jan 31 04:06:41 crc kubenswrapper[4827]: I0131 04:06:41.761678 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c0898c62-7d0f-447a-84b0-7627b4b78457-config-data\") pod \"c0898c62-7d0f-447a-84b0-7627b4b78457\" (UID: \"c0898c62-7d0f-447a-84b0-7627b4b78457\") " Jan 31 04:06:41 crc kubenswrapper[4827]: I0131 04:06:41.761704 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/c0898c62-7d0f-447a-84b0-7627b4b78457-internal-tls-certs\") pod \"c0898c62-7d0f-447a-84b0-7627b4b78457\" (UID: \"c0898c62-7d0f-447a-84b0-7627b4b78457\") " Jan 31 04:06:41 crc kubenswrapper[4827]: I0131 04:06:41.761739 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/c0898c62-7d0f-447a-84b0-7627b4b78457-public-tls-certs\") pod \"c0898c62-7d0f-447a-84b0-7627b4b78457\" (UID: \"c0898c62-7d0f-447a-84b0-7627b4b78457\") " Jan 31 04:06:41 crc kubenswrapper[4827]: I0131 04:06:41.761798 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mwtxv\" (UniqueName: 
\"kubernetes.io/projected/c0898c62-7d0f-447a-84b0-7627b4b78457-kube-api-access-mwtxv\") pod \"c0898c62-7d0f-447a-84b0-7627b4b78457\" (UID: \"c0898c62-7d0f-447a-84b0-7627b4b78457\") " Jan 31 04:06:41 crc kubenswrapper[4827]: I0131 04:06:41.761836 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c0898c62-7d0f-447a-84b0-7627b4b78457-combined-ca-bundle\") pod \"c0898c62-7d0f-447a-84b0-7627b4b78457\" (UID: \"c0898c62-7d0f-447a-84b0-7627b4b78457\") " Jan 31 04:06:41 crc kubenswrapper[4827]: I0131 04:06:41.762064 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0c70cf0b-7e54-41ae-806b-0109d6780de7-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"0c70cf0b-7e54-41ae-806b-0109d6780de7\") " pod="openstack/ceilometer-0" Jan 31 04:06:41 crc kubenswrapper[4827]: I0131 04:06:41.762112 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-24w4n\" (UniqueName: \"kubernetes.io/projected/0c70cf0b-7e54-41ae-806b-0109d6780de7-kube-api-access-24w4n\") pod \"ceilometer-0\" (UID: \"0c70cf0b-7e54-41ae-806b-0109d6780de7\") " pod="openstack/ceilometer-0" Jan 31 04:06:41 crc kubenswrapper[4827]: I0131 04:06:41.762141 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/0c70cf0b-7e54-41ae-806b-0109d6780de7-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"0c70cf0b-7e54-41ae-806b-0109d6780de7\") " pod="openstack/ceilometer-0" Jan 31 04:06:41 crc kubenswrapper[4827]: I0131 04:06:41.762170 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0c70cf0b-7e54-41ae-806b-0109d6780de7-run-httpd\") pod \"ceilometer-0\" (UID: \"0c70cf0b-7e54-41ae-806b-0109d6780de7\") " pod="openstack/ceilometer-0" Jan 31 04:06:41 
crc kubenswrapper[4827]: I0131 04:06:41.762235 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0c70cf0b-7e54-41ae-806b-0109d6780de7-log-httpd\") pod \"ceilometer-0\" (UID: \"0c70cf0b-7e54-41ae-806b-0109d6780de7\") " pod="openstack/ceilometer-0" Jan 31 04:06:41 crc kubenswrapper[4827]: I0131 04:06:41.762276 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c0898c62-7d0f-447a-84b0-7627b4b78457-logs" (OuterVolumeSpecName: "logs") pod "c0898c62-7d0f-447a-84b0-7627b4b78457" (UID: "c0898c62-7d0f-447a-84b0-7627b4b78457"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 04:06:41 crc kubenswrapper[4827]: I0131 04:06:41.762351 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0c70cf0b-7e54-41ae-806b-0109d6780de7-scripts\") pod \"ceilometer-0\" (UID: \"0c70cf0b-7e54-41ae-806b-0109d6780de7\") " pod="openstack/ceilometer-0" Jan 31 04:06:41 crc kubenswrapper[4827]: I0131 04:06:41.762383 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0c70cf0b-7e54-41ae-806b-0109d6780de7-config-data\") pod \"ceilometer-0\" (UID: \"0c70cf0b-7e54-41ae-806b-0109d6780de7\") " pod="openstack/ceilometer-0" Jan 31 04:06:41 crc kubenswrapper[4827]: I0131 04:06:41.762469 4827 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-g6gtw\" (UniqueName: \"kubernetes.io/projected/313bafd6-600c-448b-aa68-7cf6a5742a68-kube-api-access-g6gtw\") on node \"crc\" DevicePath \"\"" Jan 31 04:06:41 crc kubenswrapper[4827]: I0131 04:06:41.762486 4827 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/313bafd6-600c-448b-aa68-7cf6a5742a68-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 31 
04:06:41 crc kubenswrapper[4827]: I0131 04:06:41.762498 4827 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c0898c62-7d0f-447a-84b0-7627b4b78457-logs\") on node \"crc\" DevicePath \"\"" Jan 31 04:06:41 crc kubenswrapper[4827]: I0131 04:06:41.762650 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0c70cf0b-7e54-41ae-806b-0109d6780de7-log-httpd\") pod \"ceilometer-0\" (UID: \"0c70cf0b-7e54-41ae-806b-0109d6780de7\") " pod="openstack/ceilometer-0" Jan 31 04:06:41 crc kubenswrapper[4827]: I0131 04:06:41.763272 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0c70cf0b-7e54-41ae-806b-0109d6780de7-run-httpd\") pod \"ceilometer-0\" (UID: \"0c70cf0b-7e54-41ae-806b-0109d6780de7\") " pod="openstack/ceilometer-0" Jan 31 04:06:41 crc kubenswrapper[4827]: I0131 04:06:41.771788 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/0c70cf0b-7e54-41ae-806b-0109d6780de7-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"0c70cf0b-7e54-41ae-806b-0109d6780de7\") " pod="openstack/ceilometer-0" Jan 31 04:06:41 crc kubenswrapper[4827]: I0131 04:06:41.772155 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c0898c62-7d0f-447a-84b0-7627b4b78457-scripts" (OuterVolumeSpecName: "scripts") pod "c0898c62-7d0f-447a-84b0-7627b4b78457" (UID: "c0898c62-7d0f-447a-84b0-7627b4b78457"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 04:06:41 crc kubenswrapper[4827]: I0131 04:06:41.772425 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0c70cf0b-7e54-41ae-806b-0109d6780de7-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"0c70cf0b-7e54-41ae-806b-0109d6780de7\") " pod="openstack/ceilometer-0" Jan 31 04:06:41 crc kubenswrapper[4827]: I0131 04:06:41.774480 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c0898c62-7d0f-447a-84b0-7627b4b78457-kube-api-access-mwtxv" (OuterVolumeSpecName: "kube-api-access-mwtxv") pod "c0898c62-7d0f-447a-84b0-7627b4b78457" (UID: "c0898c62-7d0f-447a-84b0-7627b4b78457"). InnerVolumeSpecName "kube-api-access-mwtxv". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 04:06:41 crc kubenswrapper[4827]: I0131 04:06:41.775087 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0c70cf0b-7e54-41ae-806b-0109d6780de7-scripts\") pod \"ceilometer-0\" (UID: \"0c70cf0b-7e54-41ae-806b-0109d6780de7\") " pod="openstack/ceilometer-0" Jan 31 04:06:41 crc kubenswrapper[4827]: I0131 04:06:41.776602 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0c70cf0b-7e54-41ae-806b-0109d6780de7-config-data\") pod \"ceilometer-0\" (UID: \"0c70cf0b-7e54-41ae-806b-0109d6780de7\") " pod="openstack/ceilometer-0" Jan 31 04:06:41 crc kubenswrapper[4827]: I0131 04:06:41.796089 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-24w4n\" (UniqueName: \"kubernetes.io/projected/0c70cf0b-7e54-41ae-806b-0109d6780de7-kube-api-access-24w4n\") pod \"ceilometer-0\" (UID: \"0c70cf0b-7e54-41ae-806b-0109d6780de7\") " pod="openstack/ceilometer-0" Jan 31 04:06:41 crc kubenswrapper[4827]: I0131 04:06:41.834117 4827 
operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c0898c62-7d0f-447a-84b0-7627b4b78457-config-data" (OuterVolumeSpecName: "config-data") pod "c0898c62-7d0f-447a-84b0-7627b4b78457" (UID: "c0898c62-7d0f-447a-84b0-7627b4b78457"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 04:06:41 crc kubenswrapper[4827]: I0131 04:06:41.836360 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c0898c62-7d0f-447a-84b0-7627b4b78457-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "c0898c62-7d0f-447a-84b0-7627b4b78457" (UID: "c0898c62-7d0f-447a-84b0-7627b4b78457"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 04:06:41 crc kubenswrapper[4827]: I0131 04:06:41.851399 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c0898c62-7d0f-447a-84b0-7627b4b78457-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "c0898c62-7d0f-447a-84b0-7627b4b78457" (UID: "c0898c62-7d0f-447a-84b0-7627b4b78457"). InnerVolumeSpecName "public-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 04:06:41 crc kubenswrapper[4827]: I0131 04:06:41.864158 4827 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c0898c62-7d0f-447a-84b0-7627b4b78457-config-data\") on node \"crc\" DevicePath \"\"" Jan 31 04:06:41 crc kubenswrapper[4827]: I0131 04:06:41.864181 4827 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/c0898c62-7d0f-447a-84b0-7627b4b78457-public-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 31 04:06:41 crc kubenswrapper[4827]: I0131 04:06:41.864191 4827 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mwtxv\" (UniqueName: \"kubernetes.io/projected/c0898c62-7d0f-447a-84b0-7627b4b78457-kube-api-access-mwtxv\") on node \"crc\" DevicePath \"\"" Jan 31 04:06:41 crc kubenswrapper[4827]: I0131 04:06:41.864201 4827 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c0898c62-7d0f-447a-84b0-7627b4b78457-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 31 04:06:41 crc kubenswrapper[4827]: I0131 04:06:41.864208 4827 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c0898c62-7d0f-447a-84b0-7627b4b78457-scripts\") on node \"crc\" DevicePath \"\"" Jan 31 04:06:41 crc kubenswrapper[4827]: I0131 04:06:41.889808 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c0898c62-7d0f-447a-84b0-7627b4b78457-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "c0898c62-7d0f-447a-84b0-7627b4b78457" (UID: "c0898c62-7d0f-447a-84b0-7627b4b78457"). InnerVolumeSpecName "internal-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 04:06:41 crc kubenswrapper[4827]: I0131 04:06:41.965677 4827 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/c0898c62-7d0f-447a-84b0-7627b4b78457-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 31 04:06:42 crc kubenswrapper[4827]: I0131 04:06:42.011628 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 31 04:06:42 crc kubenswrapper[4827]: I0131 04:06:42.134759 4827 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-de72-account-create-update-h2jtp" Jan 31 04:06:42 crc kubenswrapper[4827]: I0131 04:06:42.140629 4827 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-05ff-account-create-update-sgdb6" Jan 31 04:06:42 crc kubenswrapper[4827]: I0131 04:06:42.163517 4827 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b0acb78-5ef0-41e1-8d1a-1f4329c6e07a" path="/var/lib/kubelet/pods/0b0acb78-5ef0-41e1-8d1a-1f4329c6e07a/volumes" Jan 31 04:06:42 crc kubenswrapper[4827]: I0131 04:06:42.165357 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-de72-account-create-update-h2jtp" event={"ID":"313bafd6-600c-448b-aa68-7cf6a5742a68","Type":"ContainerDied","Data":"952fcf41aba554f227251f917ab39687d89e022c8a89aea9221bee5a572d7dad"} Jan 31 04:06:42 crc kubenswrapper[4827]: I0131 04:06:42.165382 4827 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="952fcf41aba554f227251f917ab39687d89e022c8a89aea9221bee5a572d7dad" Jan 31 04:06:42 crc kubenswrapper[4827]: I0131 04:06:42.165393 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-05ff-account-create-update-sgdb6" 
event={"ID":"965e3f3a-3029-44dc-bdb2-9889b8f90fbe","Type":"ContainerDied","Data":"cf00305d775d4984af7d0edaee27c630ba0cb41736795807711bd8db3a5ecd30"} Jan 31 04:06:42 crc kubenswrapper[4827]: I0131 04:06:42.165403 4827 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="cf00305d775d4984af7d0edaee27c630ba0cb41736795807711bd8db3a5ecd30" Jan 31 04:06:42 crc kubenswrapper[4827]: I0131 04:06:42.192069 4827 generic.go:334] "Generic (PLEG): container finished" podID="c0898c62-7d0f-447a-84b0-7627b4b78457" containerID="1361b3a6a3130a7bf2a34329ee1f22ef50673a80037a4802feebfce21bc7579e" exitCode=0 Jan 31 04:06:42 crc kubenswrapper[4827]: I0131 04:06:42.192121 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-69d4f5d848-hjbmc" event={"ID":"c0898c62-7d0f-447a-84b0-7627b4b78457","Type":"ContainerDied","Data":"1361b3a6a3130a7bf2a34329ee1f22ef50673a80037a4802feebfce21bc7579e"} Jan 31 04:06:42 crc kubenswrapper[4827]: I0131 04:06:42.192130 4827 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-69d4f5d848-hjbmc" Jan 31 04:06:42 crc kubenswrapper[4827]: I0131 04:06:42.192151 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-69d4f5d848-hjbmc" event={"ID":"c0898c62-7d0f-447a-84b0-7627b4b78457","Type":"ContainerDied","Data":"122b62f3408d2975430c8302aaad839c1d7389fda1f4ef40f64126a9a21a487e"} Jan 31 04:06:42 crc kubenswrapper[4827]: I0131 04:06:42.192174 4827 scope.go:117] "RemoveContainer" containerID="1361b3a6a3130a7bf2a34329ee1f22ef50673a80037a4802feebfce21bc7579e" Jan 31 04:06:42 crc kubenswrapper[4827]: I0131 04:06:42.238313 4827 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-69d4f5d848-hjbmc"] Jan 31 04:06:42 crc kubenswrapper[4827]: I0131 04:06:42.239073 4827 scope.go:117] "RemoveContainer" containerID="b20d2f86ea05804700f4777cbe97ce191a4645edded5f4dcbcc633866e8b2536" Jan 31 04:06:42 crc kubenswrapper[4827]: I0131 04:06:42.246538 4827 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-69d4f5d848-hjbmc"] Jan 31 04:06:42 crc kubenswrapper[4827]: I0131 04:06:42.259023 4827 scope.go:117] "RemoveContainer" containerID="1361b3a6a3130a7bf2a34329ee1f22ef50673a80037a4802feebfce21bc7579e" Jan 31 04:06:42 crc kubenswrapper[4827]: E0131 04:06:42.259453 4827 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1361b3a6a3130a7bf2a34329ee1f22ef50673a80037a4802feebfce21bc7579e\": container with ID starting with 1361b3a6a3130a7bf2a34329ee1f22ef50673a80037a4802feebfce21bc7579e not found: ID does not exist" containerID="1361b3a6a3130a7bf2a34329ee1f22ef50673a80037a4802feebfce21bc7579e" Jan 31 04:06:42 crc kubenswrapper[4827]: I0131 04:06:42.259479 4827 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1361b3a6a3130a7bf2a34329ee1f22ef50673a80037a4802feebfce21bc7579e"} err="failed to get container status 
\"1361b3a6a3130a7bf2a34329ee1f22ef50673a80037a4802feebfce21bc7579e\": rpc error: code = NotFound desc = could not find container \"1361b3a6a3130a7bf2a34329ee1f22ef50673a80037a4802feebfce21bc7579e\": container with ID starting with 1361b3a6a3130a7bf2a34329ee1f22ef50673a80037a4802feebfce21bc7579e not found: ID does not exist" Jan 31 04:06:42 crc kubenswrapper[4827]: I0131 04:06:42.259501 4827 scope.go:117] "RemoveContainer" containerID="b20d2f86ea05804700f4777cbe97ce191a4645edded5f4dcbcc633866e8b2536" Jan 31 04:06:42 crc kubenswrapper[4827]: E0131 04:06:42.259834 4827 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b20d2f86ea05804700f4777cbe97ce191a4645edded5f4dcbcc633866e8b2536\": container with ID starting with b20d2f86ea05804700f4777cbe97ce191a4645edded5f4dcbcc633866e8b2536 not found: ID does not exist" containerID="b20d2f86ea05804700f4777cbe97ce191a4645edded5f4dcbcc633866e8b2536" Jan 31 04:06:42 crc kubenswrapper[4827]: I0131 04:06:42.259852 4827 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b20d2f86ea05804700f4777cbe97ce191a4645edded5f4dcbcc633866e8b2536"} err="failed to get container status \"b20d2f86ea05804700f4777cbe97ce191a4645edded5f4dcbcc633866e8b2536\": rpc error: code = NotFound desc = could not find container \"b20d2f86ea05804700f4777cbe97ce191a4645edded5f4dcbcc633866e8b2536\": container with ID starting with b20d2f86ea05804700f4777cbe97ce191a4645edded5f4dcbcc633866e8b2536 not found: ID does not exist" Jan 31 04:06:42 crc kubenswrapper[4827]: I0131 04:06:42.489749 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 31 04:06:42 crc kubenswrapper[4827]: W0131 04:06:42.492179 4827 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0c70cf0b_7e54_41ae_806b_0109d6780de7.slice/crio-63a275f8a83afdfbb2b3f2d3546092d059163a6809dd47cbb2394ad4dfaa2cad WatchSource:0}: Error finding container 63a275f8a83afdfbb2b3f2d3546092d059163a6809dd47cbb2394ad4dfaa2cad: Status 404 returned error can't find the container with id 63a275f8a83afdfbb2b3f2d3546092d059163a6809dd47cbb2394ad4dfaa2cad Jan 31 04:06:42 crc kubenswrapper[4827]: I0131 04:06:42.649124 4827 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-conductor-db-sync-n7pzh"] Jan 31 04:06:42 crc kubenswrapper[4827]: E0131 04:06:42.650773 4827 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c0898c62-7d0f-447a-84b0-7627b4b78457" containerName="placement-log" Jan 31 04:06:42 crc kubenswrapper[4827]: I0131 04:06:42.650863 4827 state_mem.go:107] "Deleted CPUSet assignment" podUID="c0898c62-7d0f-447a-84b0-7627b4b78457" containerName="placement-log" Jan 31 04:06:42 crc kubenswrapper[4827]: E0131 04:06:42.650959 4827 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c0898c62-7d0f-447a-84b0-7627b4b78457" containerName="placement-api" Jan 31 04:06:42 crc kubenswrapper[4827]: I0131 04:06:42.651040 4827 state_mem.go:107] "Deleted CPUSet assignment" podUID="c0898c62-7d0f-447a-84b0-7627b4b78457" containerName="placement-api" Jan 31 04:06:42 crc kubenswrapper[4827]: I0131 04:06:42.651290 4827 memory_manager.go:354] "RemoveStaleState removing state" podUID="c0898c62-7d0f-447a-84b0-7627b4b78457" containerName="placement-api" Jan 31 04:06:42 crc kubenswrapper[4827]: I0131 04:06:42.651416 4827 memory_manager.go:354] "RemoveStaleState removing state" podUID="c0898c62-7d0f-447a-84b0-7627b4b78457" containerName="placement-log" Jan 31 04:06:42 crc kubenswrapper[4827]: I0131 04:06:42.652126 4827 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-n7pzh" Jan 31 04:06:42 crc kubenswrapper[4827]: I0131 04:06:42.654921 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-scripts" Jan 31 04:06:42 crc kubenswrapper[4827]: I0131 04:06:42.655191 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-config-data" Jan 31 04:06:42 crc kubenswrapper[4827]: I0131 04:06:42.655297 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-nova-dockercfg-hp995" Jan 31 04:06:42 crc kubenswrapper[4827]: I0131 04:06:42.656599 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-n7pzh"] Jan 31 04:06:42 crc kubenswrapper[4827]: I0131 04:06:42.677349 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/570fdf01-16f4-4a1b-91a8-88b5e9447309-scripts\") pod \"nova-cell0-conductor-db-sync-n7pzh\" (UID: \"570fdf01-16f4-4a1b-91a8-88b5e9447309\") " pod="openstack/nova-cell0-conductor-db-sync-n7pzh" Jan 31 04:06:42 crc kubenswrapper[4827]: I0131 04:06:42.677393 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-msrqq\" (UniqueName: \"kubernetes.io/projected/570fdf01-16f4-4a1b-91a8-88b5e9447309-kube-api-access-msrqq\") pod \"nova-cell0-conductor-db-sync-n7pzh\" (UID: \"570fdf01-16f4-4a1b-91a8-88b5e9447309\") " pod="openstack/nova-cell0-conductor-db-sync-n7pzh" Jan 31 04:06:42 crc kubenswrapper[4827]: I0131 04:06:42.677438 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/570fdf01-16f4-4a1b-91a8-88b5e9447309-config-data\") pod \"nova-cell0-conductor-db-sync-n7pzh\" (UID: \"570fdf01-16f4-4a1b-91a8-88b5e9447309\") " 
pod="openstack/nova-cell0-conductor-db-sync-n7pzh" Jan 31 04:06:42 crc kubenswrapper[4827]: I0131 04:06:42.677565 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/570fdf01-16f4-4a1b-91a8-88b5e9447309-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-n7pzh\" (UID: \"570fdf01-16f4-4a1b-91a8-88b5e9447309\") " pod="openstack/nova-cell0-conductor-db-sync-n7pzh" Jan 31 04:06:42 crc kubenswrapper[4827]: I0131 04:06:42.779195 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/570fdf01-16f4-4a1b-91a8-88b5e9447309-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-n7pzh\" (UID: \"570fdf01-16f4-4a1b-91a8-88b5e9447309\") " pod="openstack/nova-cell0-conductor-db-sync-n7pzh" Jan 31 04:06:42 crc kubenswrapper[4827]: I0131 04:06:42.779252 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/570fdf01-16f4-4a1b-91a8-88b5e9447309-scripts\") pod \"nova-cell0-conductor-db-sync-n7pzh\" (UID: \"570fdf01-16f4-4a1b-91a8-88b5e9447309\") " pod="openstack/nova-cell0-conductor-db-sync-n7pzh" Jan 31 04:06:42 crc kubenswrapper[4827]: I0131 04:06:42.779273 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-msrqq\" (UniqueName: \"kubernetes.io/projected/570fdf01-16f4-4a1b-91a8-88b5e9447309-kube-api-access-msrqq\") pod \"nova-cell0-conductor-db-sync-n7pzh\" (UID: \"570fdf01-16f4-4a1b-91a8-88b5e9447309\") " pod="openstack/nova-cell0-conductor-db-sync-n7pzh" Jan 31 04:06:42 crc kubenswrapper[4827]: I0131 04:06:42.779314 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/570fdf01-16f4-4a1b-91a8-88b5e9447309-config-data\") pod \"nova-cell0-conductor-db-sync-n7pzh\" (UID: 
\"570fdf01-16f4-4a1b-91a8-88b5e9447309\") " pod="openstack/nova-cell0-conductor-db-sync-n7pzh" Jan 31 04:06:42 crc kubenswrapper[4827]: I0131 04:06:42.786651 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/570fdf01-16f4-4a1b-91a8-88b5e9447309-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-n7pzh\" (UID: \"570fdf01-16f4-4a1b-91a8-88b5e9447309\") " pod="openstack/nova-cell0-conductor-db-sync-n7pzh" Jan 31 04:06:42 crc kubenswrapper[4827]: I0131 04:06:42.789756 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/570fdf01-16f4-4a1b-91a8-88b5e9447309-scripts\") pod \"nova-cell0-conductor-db-sync-n7pzh\" (UID: \"570fdf01-16f4-4a1b-91a8-88b5e9447309\") " pod="openstack/nova-cell0-conductor-db-sync-n7pzh" Jan 31 04:06:42 crc kubenswrapper[4827]: I0131 04:06:42.796103 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/570fdf01-16f4-4a1b-91a8-88b5e9447309-config-data\") pod \"nova-cell0-conductor-db-sync-n7pzh\" (UID: \"570fdf01-16f4-4a1b-91a8-88b5e9447309\") " pod="openstack/nova-cell0-conductor-db-sync-n7pzh" Jan 31 04:06:42 crc kubenswrapper[4827]: I0131 04:06:42.805017 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-msrqq\" (UniqueName: \"kubernetes.io/projected/570fdf01-16f4-4a1b-91a8-88b5e9447309-kube-api-access-msrqq\") pod \"nova-cell0-conductor-db-sync-n7pzh\" (UID: \"570fdf01-16f4-4a1b-91a8-88b5e9447309\") " pod="openstack/nova-cell0-conductor-db-sync-n7pzh" Jan 31 04:06:42 crc kubenswrapper[4827]: I0131 04:06:42.993077 4827 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-n7pzh" Jan 31 04:06:43 crc kubenswrapper[4827]: I0131 04:06:43.210745 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"0c70cf0b-7e54-41ae-806b-0109d6780de7","Type":"ContainerStarted","Data":"e9b3fd692eed24c7f472665bbc610ed24d532870cf7c03ed3f2624b06ffbf257"} Jan 31 04:06:43 crc kubenswrapper[4827]: I0131 04:06:43.211021 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"0c70cf0b-7e54-41ae-806b-0109d6780de7","Type":"ContainerStarted","Data":"63a275f8a83afdfbb2b3f2d3546092d059163a6809dd47cbb2394ad4dfaa2cad"} Jan 31 04:06:43 crc kubenswrapper[4827]: W0131 04:06:43.478302 4827 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod570fdf01_16f4_4a1b_91a8_88b5e9447309.slice/crio-22b1c779eaf3f61f598ab07fde5e46480e713712aab944bc77b2b0031de7328b WatchSource:0}: Error finding container 22b1c779eaf3f61f598ab07fde5e46480e713712aab944bc77b2b0031de7328b: Status 404 returned error can't find the container with id 22b1c779eaf3f61f598ab07fde5e46480e713712aab944bc77b2b0031de7328b Jan 31 04:06:43 crc kubenswrapper[4827]: I0131 04:06:43.485698 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-n7pzh"] Jan 31 04:06:44 crc kubenswrapper[4827]: I0131 04:06:44.134465 4827 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c0898c62-7d0f-447a-84b0-7627b4b78457" path="/var/lib/kubelet/pods/c0898c62-7d0f-447a-84b0-7627b4b78457/volumes" Jan 31 04:06:44 crc kubenswrapper[4827]: I0131 04:06:44.223681 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-n7pzh" event={"ID":"570fdf01-16f4-4a1b-91a8-88b5e9447309","Type":"ContainerStarted","Data":"22b1c779eaf3f61f598ab07fde5e46480e713712aab944bc77b2b0031de7328b"} Jan 31 04:06:44 crc kubenswrapper[4827]: 
I0131 04:06:44.226599 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"0c70cf0b-7e54-41ae-806b-0109d6780de7","Type":"ContainerStarted","Data":"96f01035c869850edfe28fab6e37a212298de89f6b88cc69429532ce5e3d6d6c"} Jan 31 04:06:45 crc kubenswrapper[4827]: I0131 04:06:45.236966 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"0c70cf0b-7e54-41ae-806b-0109d6780de7","Type":"ContainerStarted","Data":"49466a6602a02adde6074a9fa8b2f701805cc12afdd0f6756c57514e6fbe4759"} Jan 31 04:06:52 crc kubenswrapper[4827]: I0131 04:06:52.327622 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-n7pzh" event={"ID":"570fdf01-16f4-4a1b-91a8-88b5e9447309","Type":"ContainerStarted","Data":"de3cf9d90c91f1d425385ec340296bbab81a49f9c3c6b9735c0e676fe068ac3f"} Jan 31 04:06:52 crc kubenswrapper[4827]: I0131 04:06:52.329289 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"0c70cf0b-7e54-41ae-806b-0109d6780de7","Type":"ContainerStarted","Data":"4c775c37505c1760d93030a67e4fcbca28e7e7e95d23d1057b2b573c9445f24a"} Jan 31 04:06:52 crc kubenswrapper[4827]: I0131 04:06:52.329477 4827 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Jan 31 04:06:52 crc kubenswrapper[4827]: I0131 04:06:52.349600 4827 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-conductor-db-sync-n7pzh" podStartSLOduration=2.327644574 podStartE2EDuration="10.349583364s" podCreationTimestamp="2026-01-31 04:06:42 +0000 UTC" firstStartedPulling="2026-01-31 04:06:43.484642352 +0000 UTC m=+1196.171722801" lastFinishedPulling="2026-01-31 04:06:51.506581142 +0000 UTC m=+1204.193661591" observedRunningTime="2026-01-31 04:06:52.344777216 +0000 UTC m=+1205.031857655" watchObservedRunningTime="2026-01-31 04:06:52.349583364 +0000 UTC m=+1205.036663813" Jan 31 04:06:52 crc 
kubenswrapper[4827]: I0131 04:06:52.363016 4827 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.351362986 podStartE2EDuration="11.363000785s" podCreationTimestamp="2026-01-31 04:06:41 +0000 UTC" firstStartedPulling="2026-01-31 04:06:42.494457948 +0000 UTC m=+1195.181538397" lastFinishedPulling="2026-01-31 04:06:51.506095737 +0000 UTC m=+1204.193176196" observedRunningTime="2026-01-31 04:06:52.36182862 +0000 UTC m=+1205.048909059" watchObservedRunningTime="2026-01-31 04:06:52.363000785 +0000 UTC m=+1205.050081234" Jan 31 04:07:06 crc kubenswrapper[4827]: I0131 04:07:06.458960 4827 generic.go:334] "Generic (PLEG): container finished" podID="570fdf01-16f4-4a1b-91a8-88b5e9447309" containerID="de3cf9d90c91f1d425385ec340296bbab81a49f9c3c6b9735c0e676fe068ac3f" exitCode=0 Jan 31 04:07:06 crc kubenswrapper[4827]: I0131 04:07:06.459032 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-n7pzh" event={"ID":"570fdf01-16f4-4a1b-91a8-88b5e9447309","Type":"ContainerDied","Data":"de3cf9d90c91f1d425385ec340296bbab81a49f9c3c6b9735c0e676fe068ac3f"} Jan 31 04:07:07 crc kubenswrapper[4827]: I0131 04:07:07.827063 4827 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-n7pzh" Jan 31 04:07:07 crc kubenswrapper[4827]: I0131 04:07:07.998456 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/570fdf01-16f4-4a1b-91a8-88b5e9447309-scripts\") pod \"570fdf01-16f4-4a1b-91a8-88b5e9447309\" (UID: \"570fdf01-16f4-4a1b-91a8-88b5e9447309\") " Jan 31 04:07:07 crc kubenswrapper[4827]: I0131 04:07:07.998571 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/570fdf01-16f4-4a1b-91a8-88b5e9447309-config-data\") pod \"570fdf01-16f4-4a1b-91a8-88b5e9447309\" (UID: \"570fdf01-16f4-4a1b-91a8-88b5e9447309\") " Jan 31 04:07:07 crc kubenswrapper[4827]: I0131 04:07:07.998700 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/570fdf01-16f4-4a1b-91a8-88b5e9447309-combined-ca-bundle\") pod \"570fdf01-16f4-4a1b-91a8-88b5e9447309\" (UID: \"570fdf01-16f4-4a1b-91a8-88b5e9447309\") " Jan 31 04:07:07 crc kubenswrapper[4827]: I0131 04:07:07.998777 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-msrqq\" (UniqueName: \"kubernetes.io/projected/570fdf01-16f4-4a1b-91a8-88b5e9447309-kube-api-access-msrqq\") pod \"570fdf01-16f4-4a1b-91a8-88b5e9447309\" (UID: \"570fdf01-16f4-4a1b-91a8-88b5e9447309\") " Jan 31 04:07:08 crc kubenswrapper[4827]: I0131 04:07:08.004737 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/570fdf01-16f4-4a1b-91a8-88b5e9447309-scripts" (OuterVolumeSpecName: "scripts") pod "570fdf01-16f4-4a1b-91a8-88b5e9447309" (UID: "570fdf01-16f4-4a1b-91a8-88b5e9447309"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 04:07:08 crc kubenswrapper[4827]: I0131 04:07:08.017291 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/570fdf01-16f4-4a1b-91a8-88b5e9447309-kube-api-access-msrqq" (OuterVolumeSpecName: "kube-api-access-msrqq") pod "570fdf01-16f4-4a1b-91a8-88b5e9447309" (UID: "570fdf01-16f4-4a1b-91a8-88b5e9447309"). InnerVolumeSpecName "kube-api-access-msrqq". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 04:07:08 crc kubenswrapper[4827]: I0131 04:07:08.030823 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/570fdf01-16f4-4a1b-91a8-88b5e9447309-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "570fdf01-16f4-4a1b-91a8-88b5e9447309" (UID: "570fdf01-16f4-4a1b-91a8-88b5e9447309"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 04:07:08 crc kubenswrapper[4827]: I0131 04:07:08.030913 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/570fdf01-16f4-4a1b-91a8-88b5e9447309-config-data" (OuterVolumeSpecName: "config-data") pod "570fdf01-16f4-4a1b-91a8-88b5e9447309" (UID: "570fdf01-16f4-4a1b-91a8-88b5e9447309"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 04:07:08 crc kubenswrapper[4827]: I0131 04:07:08.101437 4827 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/570fdf01-16f4-4a1b-91a8-88b5e9447309-scripts\") on node \"crc\" DevicePath \"\"" Jan 31 04:07:08 crc kubenswrapper[4827]: I0131 04:07:08.101475 4827 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/570fdf01-16f4-4a1b-91a8-88b5e9447309-config-data\") on node \"crc\" DevicePath \"\"" Jan 31 04:07:08 crc kubenswrapper[4827]: I0131 04:07:08.101490 4827 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/570fdf01-16f4-4a1b-91a8-88b5e9447309-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 31 04:07:08 crc kubenswrapper[4827]: I0131 04:07:08.101502 4827 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-msrqq\" (UniqueName: \"kubernetes.io/projected/570fdf01-16f4-4a1b-91a8-88b5e9447309-kube-api-access-msrqq\") on node \"crc\" DevicePath \"\"" Jan 31 04:07:08 crc kubenswrapper[4827]: I0131 04:07:08.477481 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-n7pzh" event={"ID":"570fdf01-16f4-4a1b-91a8-88b5e9447309","Type":"ContainerDied","Data":"22b1c779eaf3f61f598ab07fde5e46480e713712aab944bc77b2b0031de7328b"} Jan 31 04:07:08 crc kubenswrapper[4827]: I0131 04:07:08.477796 4827 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="22b1c779eaf3f61f598ab07fde5e46480e713712aab944bc77b2b0031de7328b" Jan 31 04:07:08 crc kubenswrapper[4827]: I0131 04:07:08.477612 4827 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-n7pzh" Jan 31 04:07:08 crc kubenswrapper[4827]: I0131 04:07:08.664154 4827 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-conductor-0"] Jan 31 04:07:08 crc kubenswrapper[4827]: E0131 04:07:08.664496 4827 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="570fdf01-16f4-4a1b-91a8-88b5e9447309" containerName="nova-cell0-conductor-db-sync" Jan 31 04:07:08 crc kubenswrapper[4827]: I0131 04:07:08.664514 4827 state_mem.go:107] "Deleted CPUSet assignment" podUID="570fdf01-16f4-4a1b-91a8-88b5e9447309" containerName="nova-cell0-conductor-db-sync" Jan 31 04:07:08 crc kubenswrapper[4827]: I0131 04:07:08.664667 4827 memory_manager.go:354] "RemoveStaleState removing state" podUID="570fdf01-16f4-4a1b-91a8-88b5e9447309" containerName="nova-cell0-conductor-db-sync" Jan 31 04:07:08 crc kubenswrapper[4827]: I0131 04:07:08.665205 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-0" Jan 31 04:07:08 crc kubenswrapper[4827]: I0131 04:07:08.667860 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-nova-dockercfg-hp995" Jan 31 04:07:08 crc kubenswrapper[4827]: I0131 04:07:08.673051 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-config-data" Jan 31 04:07:08 crc kubenswrapper[4827]: I0131 04:07:08.676899 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"] Jan 31 04:07:08 crc kubenswrapper[4827]: I0131 04:07:08.811998 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1af38ab0-fbfd-463c-8349-39b3ca0d7f9e-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"1af38ab0-fbfd-463c-8349-39b3ca0d7f9e\") " pod="openstack/nova-cell0-conductor-0" Jan 31 04:07:08 crc kubenswrapper[4827]: I0131 
04:07:08.812120 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1af38ab0-fbfd-463c-8349-39b3ca0d7f9e-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"1af38ab0-fbfd-463c-8349-39b3ca0d7f9e\") " pod="openstack/nova-cell0-conductor-0" Jan 31 04:07:08 crc kubenswrapper[4827]: I0131 04:07:08.812161 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g57wl\" (UniqueName: \"kubernetes.io/projected/1af38ab0-fbfd-463c-8349-39b3ca0d7f9e-kube-api-access-g57wl\") pod \"nova-cell0-conductor-0\" (UID: \"1af38ab0-fbfd-463c-8349-39b3ca0d7f9e\") " pod="openstack/nova-cell0-conductor-0" Jan 31 04:07:08 crc kubenswrapper[4827]: I0131 04:07:08.913787 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1af38ab0-fbfd-463c-8349-39b3ca0d7f9e-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"1af38ab0-fbfd-463c-8349-39b3ca0d7f9e\") " pod="openstack/nova-cell0-conductor-0" Jan 31 04:07:08 crc kubenswrapper[4827]: I0131 04:07:08.913891 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1af38ab0-fbfd-463c-8349-39b3ca0d7f9e-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"1af38ab0-fbfd-463c-8349-39b3ca0d7f9e\") " pod="openstack/nova-cell0-conductor-0" Jan 31 04:07:08 crc kubenswrapper[4827]: I0131 04:07:08.913929 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g57wl\" (UniqueName: \"kubernetes.io/projected/1af38ab0-fbfd-463c-8349-39b3ca0d7f9e-kube-api-access-g57wl\") pod \"nova-cell0-conductor-0\" (UID: \"1af38ab0-fbfd-463c-8349-39b3ca0d7f9e\") " pod="openstack/nova-cell0-conductor-0" Jan 31 04:07:08 crc kubenswrapper[4827]: I0131 04:07:08.922792 4827 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1af38ab0-fbfd-463c-8349-39b3ca0d7f9e-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"1af38ab0-fbfd-463c-8349-39b3ca0d7f9e\") " pod="openstack/nova-cell0-conductor-0" Jan 31 04:07:08 crc kubenswrapper[4827]: I0131 04:07:08.926777 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1af38ab0-fbfd-463c-8349-39b3ca0d7f9e-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"1af38ab0-fbfd-463c-8349-39b3ca0d7f9e\") " pod="openstack/nova-cell0-conductor-0" Jan 31 04:07:08 crc kubenswrapper[4827]: I0131 04:07:08.934454 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g57wl\" (UniqueName: \"kubernetes.io/projected/1af38ab0-fbfd-463c-8349-39b3ca0d7f9e-kube-api-access-g57wl\") pod \"nova-cell0-conductor-0\" (UID: \"1af38ab0-fbfd-463c-8349-39b3ca0d7f9e\") " pod="openstack/nova-cell0-conductor-0" Jan 31 04:07:08 crc kubenswrapper[4827]: I0131 04:07:08.984662 4827 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-0" Jan 31 04:07:09 crc kubenswrapper[4827]: I0131 04:07:09.462082 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"] Jan 31 04:07:09 crc kubenswrapper[4827]: I0131 04:07:09.493519 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"1af38ab0-fbfd-463c-8349-39b3ca0d7f9e","Type":"ContainerStarted","Data":"74734fbf106448874d3e1d2cd2c6a34cacfeaa05aaf48abb52cd41d7d1df9453"} Jan 31 04:07:10 crc kubenswrapper[4827]: I0131 04:07:10.514529 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"1af38ab0-fbfd-463c-8349-39b3ca0d7f9e","Type":"ContainerStarted","Data":"408df9fada103f908c2cb5eecb4b220585274089a8948013aac0c5e07390c6ec"} Jan 31 04:07:10 crc kubenswrapper[4827]: I0131 04:07:10.515155 4827 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell0-conductor-0" Jan 31 04:07:10 crc kubenswrapper[4827]: I0131 04:07:10.541184 4827 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-conductor-0" podStartSLOduration=2.541164014 podStartE2EDuration="2.541164014s" podCreationTimestamp="2026-01-31 04:07:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 04:07:10.534568533 +0000 UTC m=+1223.221649002" watchObservedRunningTime="2026-01-31 04:07:10.541164014 +0000 UTC m=+1223.228244463" Jan 31 04:07:12 crc kubenswrapper[4827]: I0131 04:07:12.017549 4827 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0" Jan 31 04:07:14 crc kubenswrapper[4827]: I0131 04:07:14.503323 4827 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/kube-state-metrics-0"] Jan 31 04:07:14 crc kubenswrapper[4827]: I0131 04:07:14.503920 4827 kuberuntime_container.go:808] 
"Killing container with a grace period" pod="openstack/kube-state-metrics-0" podUID="46ae0722-07b9-42d3-9ca9-d6a07cd0aa12" containerName="kube-state-metrics" containerID="cri-o://48c89a72ab34c2cc54e43adfe24b2018621efb76c90000d5efd07d87c5ed1a44" gracePeriod=30 Jan 31 04:07:14 crc kubenswrapper[4827]: I0131 04:07:14.979019 4827 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Jan 31 04:07:15 crc kubenswrapper[4827]: I0131 04:07:15.127968 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pl7gv\" (UniqueName: \"kubernetes.io/projected/46ae0722-07b9-42d3-9ca9-d6a07cd0aa12-kube-api-access-pl7gv\") pod \"46ae0722-07b9-42d3-9ca9-d6a07cd0aa12\" (UID: \"46ae0722-07b9-42d3-9ca9-d6a07cd0aa12\") " Jan 31 04:07:15 crc kubenswrapper[4827]: I0131 04:07:15.133180 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/46ae0722-07b9-42d3-9ca9-d6a07cd0aa12-kube-api-access-pl7gv" (OuterVolumeSpecName: "kube-api-access-pl7gv") pod "46ae0722-07b9-42d3-9ca9-d6a07cd0aa12" (UID: "46ae0722-07b9-42d3-9ca9-d6a07cd0aa12"). InnerVolumeSpecName "kube-api-access-pl7gv". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 04:07:15 crc kubenswrapper[4827]: I0131 04:07:15.231052 4827 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pl7gv\" (UniqueName: \"kubernetes.io/projected/46ae0722-07b9-42d3-9ca9-d6a07cd0aa12-kube-api-access-pl7gv\") on node \"crc\" DevicePath \"\"" Jan 31 04:07:15 crc kubenswrapper[4827]: I0131 04:07:15.496061 4827 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 31 04:07:15 crc kubenswrapper[4827]: I0131 04:07:15.496352 4827 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="0c70cf0b-7e54-41ae-806b-0109d6780de7" containerName="ceilometer-central-agent" containerID="cri-o://e9b3fd692eed24c7f472665bbc610ed24d532870cf7c03ed3f2624b06ffbf257" gracePeriod=30 Jan 31 04:07:15 crc kubenswrapper[4827]: I0131 04:07:15.496485 4827 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="0c70cf0b-7e54-41ae-806b-0109d6780de7" containerName="proxy-httpd" containerID="cri-o://4c775c37505c1760d93030a67e4fcbca28e7e7e95d23d1057b2b573c9445f24a" gracePeriod=30 Jan 31 04:07:15 crc kubenswrapper[4827]: I0131 04:07:15.496580 4827 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="0c70cf0b-7e54-41ae-806b-0109d6780de7" containerName="sg-core" containerID="cri-o://49466a6602a02adde6074a9fa8b2f701805cc12afdd0f6756c57514e6fbe4759" gracePeriod=30 Jan 31 04:07:15 crc kubenswrapper[4827]: I0131 04:07:15.496540 4827 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="0c70cf0b-7e54-41ae-806b-0109d6780de7" containerName="ceilometer-notification-agent" containerID="cri-o://96f01035c869850edfe28fab6e37a212298de89f6b88cc69429532ce5e3d6d6c" gracePeriod=30 Jan 31 04:07:15 crc kubenswrapper[4827]: I0131 04:07:15.575681 4827 generic.go:334] "Generic (PLEG): container 
finished" podID="46ae0722-07b9-42d3-9ca9-d6a07cd0aa12" containerID="48c89a72ab34c2cc54e43adfe24b2018621efb76c90000d5efd07d87c5ed1a44" exitCode=2 Jan 31 04:07:15 crc kubenswrapper[4827]: I0131 04:07:15.575737 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"46ae0722-07b9-42d3-9ca9-d6a07cd0aa12","Type":"ContainerDied","Data":"48c89a72ab34c2cc54e43adfe24b2018621efb76c90000d5efd07d87c5ed1a44"} Jan 31 04:07:15 crc kubenswrapper[4827]: I0131 04:07:15.575759 4827 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Jan 31 04:07:15 crc kubenswrapper[4827]: I0131 04:07:15.575782 4827 scope.go:117] "RemoveContainer" containerID="48c89a72ab34c2cc54e43adfe24b2018621efb76c90000d5efd07d87c5ed1a44" Jan 31 04:07:15 crc kubenswrapper[4827]: I0131 04:07:15.575769 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"46ae0722-07b9-42d3-9ca9-d6a07cd0aa12","Type":"ContainerDied","Data":"d37ebb304da853edf5de57595a7c6030fe346d3e4c07e79f492ae147599fb1c1"} Jan 31 04:07:15 crc kubenswrapper[4827]: I0131 04:07:15.608084 4827 scope.go:117] "RemoveContainer" containerID="48c89a72ab34c2cc54e43adfe24b2018621efb76c90000d5efd07d87c5ed1a44" Jan 31 04:07:15 crc kubenswrapper[4827]: I0131 04:07:15.610305 4827 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/kube-state-metrics-0"] Jan 31 04:07:15 crc kubenswrapper[4827]: E0131 04:07:15.610934 4827 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"48c89a72ab34c2cc54e43adfe24b2018621efb76c90000d5efd07d87c5ed1a44\": container with ID starting with 48c89a72ab34c2cc54e43adfe24b2018621efb76c90000d5efd07d87c5ed1a44 not found: ID does not exist" containerID="48c89a72ab34c2cc54e43adfe24b2018621efb76c90000d5efd07d87c5ed1a44" Jan 31 04:07:15 crc kubenswrapper[4827]: I0131 04:07:15.611022 4827 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"48c89a72ab34c2cc54e43adfe24b2018621efb76c90000d5efd07d87c5ed1a44"} err="failed to get container status \"48c89a72ab34c2cc54e43adfe24b2018621efb76c90000d5efd07d87c5ed1a44\": rpc error: code = NotFound desc = could not find container \"48c89a72ab34c2cc54e43adfe24b2018621efb76c90000d5efd07d87c5ed1a44\": container with ID starting with 48c89a72ab34c2cc54e43adfe24b2018621efb76c90000d5efd07d87c5ed1a44 not found: ID does not exist" Jan 31 04:07:15 crc kubenswrapper[4827]: I0131 04:07:15.621237 4827 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/kube-state-metrics-0"] Jan 31 04:07:15 crc kubenswrapper[4827]: I0131 04:07:15.630284 4827 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/kube-state-metrics-0"] Jan 31 04:07:15 crc kubenswrapper[4827]: E0131 04:07:15.630919 4827 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="46ae0722-07b9-42d3-9ca9-d6a07cd0aa12" containerName="kube-state-metrics" Jan 31 04:07:15 crc kubenswrapper[4827]: I0131 04:07:15.630936 4827 state_mem.go:107] "Deleted CPUSet assignment" podUID="46ae0722-07b9-42d3-9ca9-d6a07cd0aa12" containerName="kube-state-metrics" Jan 31 04:07:15 crc kubenswrapper[4827]: I0131 04:07:15.631165 4827 memory_manager.go:354] "RemoveStaleState removing state" podUID="46ae0722-07b9-42d3-9ca9-d6a07cd0aa12" containerName="kube-state-metrics" Jan 31 04:07:15 crc kubenswrapper[4827]: I0131 04:07:15.632038 4827 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/kube-state-metrics-0" Jan 31 04:07:15 crc kubenswrapper[4827]: I0131 04:07:15.636307 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"kube-state-metrics-tls-config" Jan 31 04:07:15 crc kubenswrapper[4827]: I0131 04:07:15.641780 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-kube-state-metrics-svc" Jan 31 04:07:15 crc kubenswrapper[4827]: I0131 04:07:15.644264 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Jan 31 04:07:15 crc kubenswrapper[4827]: I0131 04:07:15.743277 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qfcqb\" (UniqueName: \"kubernetes.io/projected/41b5f4d3-c8f6-44ee-8edf-dc49c5b9698a-kube-api-access-qfcqb\") pod \"kube-state-metrics-0\" (UID: \"41b5f4d3-c8f6-44ee-8edf-dc49c5b9698a\") " pod="openstack/kube-state-metrics-0" Jan 31 04:07:15 crc kubenswrapper[4827]: I0131 04:07:15.743671 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/41b5f4d3-c8f6-44ee-8edf-dc49c5b9698a-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"41b5f4d3-c8f6-44ee-8edf-dc49c5b9698a\") " pod="openstack/kube-state-metrics-0" Jan 31 04:07:15 crc kubenswrapper[4827]: I0131 04:07:15.743766 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/41b5f4d3-c8f6-44ee-8edf-dc49c5b9698a-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"41b5f4d3-c8f6-44ee-8edf-dc49c5b9698a\") " pod="openstack/kube-state-metrics-0" Jan 31 04:07:15 crc kubenswrapper[4827]: I0131 04:07:15.743945 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/41b5f4d3-c8f6-44ee-8edf-dc49c5b9698a-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"41b5f4d3-c8f6-44ee-8edf-dc49c5b9698a\") " pod="openstack/kube-state-metrics-0" Jan 31 04:07:15 crc kubenswrapper[4827]: I0131 04:07:15.845455 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qfcqb\" (UniqueName: \"kubernetes.io/projected/41b5f4d3-c8f6-44ee-8edf-dc49c5b9698a-kube-api-access-qfcqb\") pod \"kube-state-metrics-0\" (UID: \"41b5f4d3-c8f6-44ee-8edf-dc49c5b9698a\") " pod="openstack/kube-state-metrics-0" Jan 31 04:07:15 crc kubenswrapper[4827]: I0131 04:07:15.845715 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/41b5f4d3-c8f6-44ee-8edf-dc49c5b9698a-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"41b5f4d3-c8f6-44ee-8edf-dc49c5b9698a\") " pod="openstack/kube-state-metrics-0" Jan 31 04:07:15 crc kubenswrapper[4827]: I0131 04:07:15.845805 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/41b5f4d3-c8f6-44ee-8edf-dc49c5b9698a-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"41b5f4d3-c8f6-44ee-8edf-dc49c5b9698a\") " pod="openstack/kube-state-metrics-0" Jan 31 04:07:15 crc kubenswrapper[4827]: I0131 04:07:15.845978 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/41b5f4d3-c8f6-44ee-8edf-dc49c5b9698a-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"41b5f4d3-c8f6-44ee-8edf-dc49c5b9698a\") " pod="openstack/kube-state-metrics-0" Jan 31 04:07:15 crc kubenswrapper[4827]: I0131 04:07:15.850158 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-tls-config\" (UniqueName: 
\"kubernetes.io/secret/41b5f4d3-c8f6-44ee-8edf-dc49c5b9698a-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"41b5f4d3-c8f6-44ee-8edf-dc49c5b9698a\") " pod="openstack/kube-state-metrics-0" Jan 31 04:07:15 crc kubenswrapper[4827]: I0131 04:07:15.857559 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/41b5f4d3-c8f6-44ee-8edf-dc49c5b9698a-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"41b5f4d3-c8f6-44ee-8edf-dc49c5b9698a\") " pod="openstack/kube-state-metrics-0" Jan 31 04:07:15 crc kubenswrapper[4827]: I0131 04:07:15.860624 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/41b5f4d3-c8f6-44ee-8edf-dc49c5b9698a-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"41b5f4d3-c8f6-44ee-8edf-dc49c5b9698a\") " pod="openstack/kube-state-metrics-0" Jan 31 04:07:15 crc kubenswrapper[4827]: I0131 04:07:15.880265 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qfcqb\" (UniqueName: \"kubernetes.io/projected/41b5f4d3-c8f6-44ee-8edf-dc49c5b9698a-kube-api-access-qfcqb\") pod \"kube-state-metrics-0\" (UID: \"41b5f4d3-c8f6-44ee-8edf-dc49c5b9698a\") " pod="openstack/kube-state-metrics-0" Jan 31 04:07:16 crc kubenswrapper[4827]: I0131 04:07:16.010057 4827 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/kube-state-metrics-0" Jan 31 04:07:16 crc kubenswrapper[4827]: I0131 04:07:16.131217 4827 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="46ae0722-07b9-42d3-9ca9-d6a07cd0aa12" path="/var/lib/kubelet/pods/46ae0722-07b9-42d3-9ca9-d6a07cd0aa12/volumes" Jan 31 04:07:16 crc kubenswrapper[4827]: I0131 04:07:16.585296 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Jan 31 04:07:16 crc kubenswrapper[4827]: I0131 04:07:16.588421 4827 generic.go:334] "Generic (PLEG): container finished" podID="0c70cf0b-7e54-41ae-806b-0109d6780de7" containerID="4c775c37505c1760d93030a67e4fcbca28e7e7e95d23d1057b2b573c9445f24a" exitCode=0 Jan 31 04:07:16 crc kubenswrapper[4827]: I0131 04:07:16.588460 4827 generic.go:334] "Generic (PLEG): container finished" podID="0c70cf0b-7e54-41ae-806b-0109d6780de7" containerID="49466a6602a02adde6074a9fa8b2f701805cc12afdd0f6756c57514e6fbe4759" exitCode=2 Jan 31 04:07:16 crc kubenswrapper[4827]: I0131 04:07:16.588470 4827 generic.go:334] "Generic (PLEG): container finished" podID="0c70cf0b-7e54-41ae-806b-0109d6780de7" containerID="e9b3fd692eed24c7f472665bbc610ed24d532870cf7c03ed3f2624b06ffbf257" exitCode=0 Jan 31 04:07:16 crc kubenswrapper[4827]: I0131 04:07:16.588517 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"0c70cf0b-7e54-41ae-806b-0109d6780de7","Type":"ContainerDied","Data":"4c775c37505c1760d93030a67e4fcbca28e7e7e95d23d1057b2b573c9445f24a"} Jan 31 04:07:16 crc kubenswrapper[4827]: I0131 04:07:16.588576 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"0c70cf0b-7e54-41ae-806b-0109d6780de7","Type":"ContainerDied","Data":"49466a6602a02adde6074a9fa8b2f701805cc12afdd0f6756c57514e6fbe4759"} Jan 31 04:07:16 crc kubenswrapper[4827]: I0131 04:07:16.588588 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"0c70cf0b-7e54-41ae-806b-0109d6780de7","Type":"ContainerDied","Data":"e9b3fd692eed24c7f472665bbc610ed24d532870cf7c03ed3f2624b06ffbf257"} Jan 31 04:07:16 crc kubenswrapper[4827]: W0131 04:07:16.591443 4827 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod41b5f4d3_c8f6_44ee_8edf_dc49c5b9698a.slice/crio-080abf79f1d4bdd11160856af7331dca2e82347d59b76704ccf293eda253584a WatchSource:0}: Error finding container 080abf79f1d4bdd11160856af7331dca2e82347d59b76704ccf293eda253584a: Status 404 returned error can't find the container with id 080abf79f1d4bdd11160856af7331dca2e82347d59b76704ccf293eda253584a Jan 31 04:07:17 crc kubenswrapper[4827]: I0131 04:07:17.596719 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"41b5f4d3-c8f6-44ee-8edf-dc49c5b9698a","Type":"ContainerStarted","Data":"9d9a9cd63dc1a985f3290d0b14c10e56cec0604183916aca047644589dc2ca71"} Jan 31 04:07:17 crc kubenswrapper[4827]: I0131 04:07:17.597037 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"41b5f4d3-c8f6-44ee-8edf-dc49c5b9698a","Type":"ContainerStarted","Data":"080abf79f1d4bdd11160856af7331dca2e82347d59b76704ccf293eda253584a"} Jan 31 04:07:17 crc kubenswrapper[4827]: I0131 04:07:17.597059 4827 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/kube-state-metrics-0" Jan 31 04:07:17 crc kubenswrapper[4827]: I0131 04:07:17.611502 4827 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/kube-state-metrics-0" podStartSLOduration=2.274794892 podStartE2EDuration="2.611485909s" podCreationTimestamp="2026-01-31 04:07:15 +0000 UTC" firstStartedPulling="2026-01-31 04:07:16.593555096 +0000 UTC m=+1229.280635545" lastFinishedPulling="2026-01-31 04:07:16.930246113 +0000 UTC m=+1229.617326562" observedRunningTime="2026-01-31 04:07:17.610102986 +0000 UTC 
m=+1230.297183435" watchObservedRunningTime="2026-01-31 04:07:17.611485909 +0000 UTC m=+1230.298566358" Jan 31 04:07:19 crc kubenswrapper[4827]: I0131 04:07:19.014182 4827 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell0-conductor-0" Jan 31 04:07:19 crc kubenswrapper[4827]: I0131 04:07:19.463550 4827 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-cell-mapping-4ggj8"] Jan 31 04:07:19 crc kubenswrapper[4827]: I0131 04:07:19.464684 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-4ggj8" Jan 31 04:07:19 crc kubenswrapper[4827]: I0131 04:07:19.469393 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-manage-scripts" Jan 31 04:07:19 crc kubenswrapper[4827]: I0131 04:07:19.478844 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-manage-config-data" Jan 31 04:07:19 crc kubenswrapper[4827]: I0131 04:07:19.491600 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-cell-mapping-4ggj8"] Jan 31 04:07:19 crc kubenswrapper[4827]: I0131 04:07:19.607178 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d4cc6b8f-4ec0-408e-a494-3e3eb5ca0ecb-scripts\") pod \"nova-cell0-cell-mapping-4ggj8\" (UID: \"d4cc6b8f-4ec0-408e-a494-3e3eb5ca0ecb\") " pod="openstack/nova-cell0-cell-mapping-4ggj8" Jan 31 04:07:19 crc kubenswrapper[4827]: I0131 04:07:19.607479 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d4cc6b8f-4ec0-408e-a494-3e3eb5ca0ecb-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-4ggj8\" (UID: \"d4cc6b8f-4ec0-408e-a494-3e3eb5ca0ecb\") " pod="openstack/nova-cell0-cell-mapping-4ggj8" Jan 31 04:07:19 crc kubenswrapper[4827]: I0131 
04:07:19.607606 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d4cc6b8f-4ec0-408e-a494-3e3eb5ca0ecb-config-data\") pod \"nova-cell0-cell-mapping-4ggj8\" (UID: \"d4cc6b8f-4ec0-408e-a494-3e3eb5ca0ecb\") " pod="openstack/nova-cell0-cell-mapping-4ggj8" Jan 31 04:07:19 crc kubenswrapper[4827]: I0131 04:07:19.607626 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ht5x4\" (UniqueName: \"kubernetes.io/projected/d4cc6b8f-4ec0-408e-a494-3e3eb5ca0ecb-kube-api-access-ht5x4\") pod \"nova-cell0-cell-mapping-4ggj8\" (UID: \"d4cc6b8f-4ec0-408e-a494-3e3eb5ca0ecb\") " pod="openstack/nova-cell0-cell-mapping-4ggj8" Jan 31 04:07:19 crc kubenswrapper[4827]: I0131 04:07:19.641228 4827 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Jan 31 04:07:19 crc kubenswrapper[4827]: I0131 04:07:19.642636 4827 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Jan 31 04:07:19 crc kubenswrapper[4827]: I0131 04:07:19.653947 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Jan 31 04:07:19 crc kubenswrapper[4827]: I0131 04:07:19.654797 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 31 04:07:19 crc kubenswrapper[4827]: I0131 04:07:19.696453 4827 generic.go:334] "Generic (PLEG): container finished" podID="0c70cf0b-7e54-41ae-806b-0109d6780de7" containerID="96f01035c869850edfe28fab6e37a212298de89f6b88cc69429532ce5e3d6d6c" exitCode=0 Jan 31 04:07:19 crc kubenswrapper[4827]: I0131 04:07:19.696495 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"0c70cf0b-7e54-41ae-806b-0109d6780de7","Type":"ContainerDied","Data":"96f01035c869850edfe28fab6e37a212298de89f6b88cc69429532ce5e3d6d6c"} Jan 31 04:07:19 crc kubenswrapper[4827]: I0131 04:07:19.708858 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d4cc6b8f-4ec0-408e-a494-3e3eb5ca0ecb-config-data\") pod \"nova-cell0-cell-mapping-4ggj8\" (UID: \"d4cc6b8f-4ec0-408e-a494-3e3eb5ca0ecb\") " pod="openstack/nova-cell0-cell-mapping-4ggj8" Jan 31 04:07:19 crc kubenswrapper[4827]: I0131 04:07:19.708937 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ht5x4\" (UniqueName: \"kubernetes.io/projected/d4cc6b8f-4ec0-408e-a494-3e3eb5ca0ecb-kube-api-access-ht5x4\") pod \"nova-cell0-cell-mapping-4ggj8\" (UID: \"d4cc6b8f-4ec0-408e-a494-3e3eb5ca0ecb\") " pod="openstack/nova-cell0-cell-mapping-4ggj8" Jan 31 04:07:19 crc kubenswrapper[4827]: I0131 04:07:19.709030 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d4cc6b8f-4ec0-408e-a494-3e3eb5ca0ecb-scripts\") pod \"nova-cell0-cell-mapping-4ggj8\" (UID: 
\"d4cc6b8f-4ec0-408e-a494-3e3eb5ca0ecb\") " pod="openstack/nova-cell0-cell-mapping-4ggj8" Jan 31 04:07:19 crc kubenswrapper[4827]: I0131 04:07:19.709051 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d4cc6b8f-4ec0-408e-a494-3e3eb5ca0ecb-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-4ggj8\" (UID: \"d4cc6b8f-4ec0-408e-a494-3e3eb5ca0ecb\") " pod="openstack/nova-cell0-cell-mapping-4ggj8" Jan 31 04:07:19 crc kubenswrapper[4827]: I0131 04:07:19.716800 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d4cc6b8f-4ec0-408e-a494-3e3eb5ca0ecb-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-4ggj8\" (UID: \"d4cc6b8f-4ec0-408e-a494-3e3eb5ca0ecb\") " pod="openstack/nova-cell0-cell-mapping-4ggj8" Jan 31 04:07:19 crc kubenswrapper[4827]: I0131 04:07:19.719632 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d4cc6b8f-4ec0-408e-a494-3e3eb5ca0ecb-config-data\") pod \"nova-cell0-cell-mapping-4ggj8\" (UID: \"d4cc6b8f-4ec0-408e-a494-3e3eb5ca0ecb\") " pod="openstack/nova-cell0-cell-mapping-4ggj8" Jan 31 04:07:19 crc kubenswrapper[4827]: I0131 04:07:19.744953 4827 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Jan 31 04:07:19 crc kubenswrapper[4827]: I0131 04:07:19.746256 4827 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Jan 31 04:07:19 crc kubenswrapper[4827]: I0131 04:07:19.745052 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d4cc6b8f-4ec0-408e-a494-3e3eb5ca0ecb-scripts\") pod \"nova-cell0-cell-mapping-4ggj8\" (UID: \"d4cc6b8f-4ec0-408e-a494-3e3eb5ca0ecb\") " pod="openstack/nova-cell0-cell-mapping-4ggj8" Jan 31 04:07:19 crc kubenswrapper[4827]: I0131 04:07:19.750577 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ht5x4\" (UniqueName: \"kubernetes.io/projected/d4cc6b8f-4ec0-408e-a494-3e3eb5ca0ecb-kube-api-access-ht5x4\") pod \"nova-cell0-cell-mapping-4ggj8\" (UID: \"d4cc6b8f-4ec0-408e-a494-3e3eb5ca0ecb\") " pod="openstack/nova-cell0-cell-mapping-4ggj8" Jan 31 04:07:19 crc kubenswrapper[4827]: I0131 04:07:19.753328 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Jan 31 04:07:19 crc kubenswrapper[4827]: I0131 04:07:19.772018 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Jan 31 04:07:19 crc kubenswrapper[4827]: I0131 04:07:19.811236 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/929e59f8-3e14-46cc-bbaf-fda67b6b9d80-config-data\") pod \"nova-api-0\" (UID: \"929e59f8-3e14-46cc-bbaf-fda67b6b9d80\") " pod="openstack/nova-api-0" Jan 31 04:07:19 crc kubenswrapper[4827]: I0131 04:07:19.811296 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/929e59f8-3e14-46cc-bbaf-fda67b6b9d80-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"929e59f8-3e14-46cc-bbaf-fda67b6b9d80\") " pod="openstack/nova-api-0" Jan 31 04:07:19 crc kubenswrapper[4827]: I0131 04:07:19.811352 4827 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zgwlf\" (UniqueName: \"kubernetes.io/projected/929e59f8-3e14-46cc-bbaf-fda67b6b9d80-kube-api-access-zgwlf\") pod \"nova-api-0\" (UID: \"929e59f8-3e14-46cc-bbaf-fda67b6b9d80\") " pod="openstack/nova-api-0" Jan 31 04:07:19 crc kubenswrapper[4827]: I0131 04:07:19.811420 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/929e59f8-3e14-46cc-bbaf-fda67b6b9d80-logs\") pod \"nova-api-0\" (UID: \"929e59f8-3e14-46cc-bbaf-fda67b6b9d80\") " pod="openstack/nova-api-0" Jan 31 04:07:19 crc kubenswrapper[4827]: I0131 04:07:19.815187 4827 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 31 04:07:19 crc kubenswrapper[4827]: I0131 04:07:19.816188 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Jan 31 04:07:19 crc kubenswrapper[4827]: I0131 04:07:19.826134 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-novncproxy-config-data" Jan 31 04:07:19 crc kubenswrapper[4827]: I0131 04:07:19.869788 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 31 04:07:19 crc kubenswrapper[4827]: I0131 04:07:19.873787 4827 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-cell-mapping-4ggj8" Jan 31 04:07:19 crc kubenswrapper[4827]: I0131 04:07:19.916589 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/929e59f8-3e14-46cc-bbaf-fda67b6b9d80-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"929e59f8-3e14-46cc-bbaf-fda67b6b9d80\") " pod="openstack/nova-api-0" Jan 31 04:07:19 crc kubenswrapper[4827]: I0131 04:07:19.916666 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/76102370-40f5-4616-9298-f2e4a0fb668e-config-data\") pod \"nova-scheduler-0\" (UID: \"76102370-40f5-4616-9298-f2e4a0fb668e\") " pod="openstack/nova-scheduler-0" Jan 31 04:07:19 crc kubenswrapper[4827]: I0131 04:07:19.916695 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zgwlf\" (UniqueName: \"kubernetes.io/projected/929e59f8-3e14-46cc-bbaf-fda67b6b9d80-kube-api-access-zgwlf\") pod \"nova-api-0\" (UID: \"929e59f8-3e14-46cc-bbaf-fda67b6b9d80\") " pod="openstack/nova-api-0" Jan 31 04:07:19 crc kubenswrapper[4827]: I0131 04:07:19.916709 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/727577e7-9491-4e91-915f-359cd9b7f0be-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"727577e7-9491-4e91-915f-359cd9b7f0be\") " pod="openstack/nova-cell1-novncproxy-0" Jan 31 04:07:19 crc kubenswrapper[4827]: I0131 04:07:19.916732 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rxqwh\" (UniqueName: \"kubernetes.io/projected/76102370-40f5-4616-9298-f2e4a0fb668e-kube-api-access-rxqwh\") pod \"nova-scheduler-0\" (UID: \"76102370-40f5-4616-9298-f2e4a0fb668e\") " pod="openstack/nova-scheduler-0" Jan 31 04:07:19 crc kubenswrapper[4827]: 
I0131 04:07:19.916772 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/929e59f8-3e14-46cc-bbaf-fda67b6b9d80-logs\") pod \"nova-api-0\" (UID: \"929e59f8-3e14-46cc-bbaf-fda67b6b9d80\") " pod="openstack/nova-api-0" Jan 31 04:07:19 crc kubenswrapper[4827]: I0131 04:07:19.916793 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/727577e7-9491-4e91-915f-359cd9b7f0be-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"727577e7-9491-4e91-915f-359cd9b7f0be\") " pod="openstack/nova-cell1-novncproxy-0" Jan 31 04:07:19 crc kubenswrapper[4827]: I0131 04:07:19.916841 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-286f2\" (UniqueName: \"kubernetes.io/projected/727577e7-9491-4e91-915f-359cd9b7f0be-kube-api-access-286f2\") pod \"nova-cell1-novncproxy-0\" (UID: \"727577e7-9491-4e91-915f-359cd9b7f0be\") " pod="openstack/nova-cell1-novncproxy-0" Jan 31 04:07:19 crc kubenswrapper[4827]: I0131 04:07:19.916861 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/929e59f8-3e14-46cc-bbaf-fda67b6b9d80-config-data\") pod \"nova-api-0\" (UID: \"929e59f8-3e14-46cc-bbaf-fda67b6b9d80\") " pod="openstack/nova-api-0" Jan 31 04:07:19 crc kubenswrapper[4827]: I0131 04:07:19.916901 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/76102370-40f5-4616-9298-f2e4a0fb668e-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"76102370-40f5-4616-9298-f2e4a0fb668e\") " pod="openstack/nova-scheduler-0" Jan 31 04:07:19 crc kubenswrapper[4827]: I0131 04:07:19.917402 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: 
\"kubernetes.io/empty-dir/929e59f8-3e14-46cc-bbaf-fda67b6b9d80-logs\") pod \"nova-api-0\" (UID: \"929e59f8-3e14-46cc-bbaf-fda67b6b9d80\") " pod="openstack/nova-api-0" Jan 31 04:07:19 crc kubenswrapper[4827]: I0131 04:07:19.934690 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/929e59f8-3e14-46cc-bbaf-fda67b6b9d80-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"929e59f8-3e14-46cc-bbaf-fda67b6b9d80\") " pod="openstack/nova-api-0" Jan 31 04:07:19 crc kubenswrapper[4827]: I0131 04:07:19.960768 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zgwlf\" (UniqueName: \"kubernetes.io/projected/929e59f8-3e14-46cc-bbaf-fda67b6b9d80-kube-api-access-zgwlf\") pod \"nova-api-0\" (UID: \"929e59f8-3e14-46cc-bbaf-fda67b6b9d80\") " pod="openstack/nova-api-0" Jan 31 04:07:19 crc kubenswrapper[4827]: I0131 04:07:19.984053 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/929e59f8-3e14-46cc-bbaf-fda67b6b9d80-config-data\") pod \"nova-api-0\" (UID: \"929e59f8-3e14-46cc-bbaf-fda67b6b9d80\") " pod="openstack/nova-api-0" Jan 31 04:07:19 crc kubenswrapper[4827]: I0131 04:07:19.994290 4827 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Jan 31 04:07:20 crc kubenswrapper[4827]: I0131 04:07:20.018868 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-286f2\" (UniqueName: \"kubernetes.io/projected/727577e7-9491-4e91-915f-359cd9b7f0be-kube-api-access-286f2\") pod \"nova-cell1-novncproxy-0\" (UID: \"727577e7-9491-4e91-915f-359cd9b7f0be\") " pod="openstack/nova-cell1-novncproxy-0" Jan 31 04:07:20 crc kubenswrapper[4827]: I0131 04:07:20.018943 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/76102370-40f5-4616-9298-f2e4a0fb668e-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"76102370-40f5-4616-9298-f2e4a0fb668e\") " pod="openstack/nova-scheduler-0" Jan 31 04:07:20 crc kubenswrapper[4827]: I0131 04:07:20.019006 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/76102370-40f5-4616-9298-f2e4a0fb668e-config-data\") pod \"nova-scheduler-0\" (UID: \"76102370-40f5-4616-9298-f2e4a0fb668e\") " pod="openstack/nova-scheduler-0" Jan 31 04:07:20 crc kubenswrapper[4827]: I0131 04:07:20.019034 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/727577e7-9491-4e91-915f-359cd9b7f0be-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"727577e7-9491-4e91-915f-359cd9b7f0be\") " pod="openstack/nova-cell1-novncproxy-0" Jan 31 04:07:20 crc kubenswrapper[4827]: I0131 04:07:20.019053 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rxqwh\" (UniqueName: \"kubernetes.io/projected/76102370-40f5-4616-9298-f2e4a0fb668e-kube-api-access-rxqwh\") pod \"nova-scheduler-0\" (UID: \"76102370-40f5-4616-9298-f2e4a0fb668e\") " pod="openstack/nova-scheduler-0" Jan 31 04:07:20 crc kubenswrapper[4827]: I0131 04:07:20.019093 4827 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/727577e7-9491-4e91-915f-359cd9b7f0be-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"727577e7-9491-4e91-915f-359cd9b7f0be\") " pod="openstack/nova-cell1-novncproxy-0" Jan 31 04:07:20 crc kubenswrapper[4827]: I0131 04:07:20.031250 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/727577e7-9491-4e91-915f-359cd9b7f0be-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"727577e7-9491-4e91-915f-359cd9b7f0be\") " pod="openstack/nova-cell1-novncproxy-0" Jan 31 04:07:20 crc kubenswrapper[4827]: I0131 04:07:20.032456 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/76102370-40f5-4616-9298-f2e4a0fb668e-config-data\") pod \"nova-scheduler-0\" (UID: \"76102370-40f5-4616-9298-f2e4a0fb668e\") " pod="openstack/nova-scheduler-0" Jan 31 04:07:20 crc kubenswrapper[4827]: I0131 04:07:20.036018 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/727577e7-9491-4e91-915f-359cd9b7f0be-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"727577e7-9491-4e91-915f-359cd9b7f0be\") " pod="openstack/nova-cell1-novncproxy-0" Jan 31 04:07:20 crc kubenswrapper[4827]: I0131 04:07:20.037370 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/76102370-40f5-4616-9298-f2e4a0fb668e-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"76102370-40f5-4616-9298-f2e4a0fb668e\") " pod="openstack/nova-scheduler-0" Jan 31 04:07:20 crc kubenswrapper[4827]: I0131 04:07:20.073335 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-286f2\" (UniqueName: 
\"kubernetes.io/projected/727577e7-9491-4e91-915f-359cd9b7f0be-kube-api-access-286f2\") pod \"nova-cell1-novncproxy-0\" (UID: \"727577e7-9491-4e91-915f-359cd9b7f0be\") " pod="openstack/nova-cell1-novncproxy-0" Jan 31 04:07:20 crc kubenswrapper[4827]: I0131 04:07:20.084653 4827 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Jan 31 04:07:20 crc kubenswrapper[4827]: I0131 04:07:20.086105 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Jan 31 04:07:20 crc kubenswrapper[4827]: I0131 04:07:20.091732 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rxqwh\" (UniqueName: \"kubernetes.io/projected/76102370-40f5-4616-9298-f2e4a0fb668e-kube-api-access-rxqwh\") pod \"nova-scheduler-0\" (UID: \"76102370-40f5-4616-9298-f2e4a0fb668e\") " pod="openstack/nova-scheduler-0" Jan 31 04:07:20 crc kubenswrapper[4827]: I0131 04:07:20.094149 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Jan 31 04:07:20 crc kubenswrapper[4827]: I0131 04:07:20.094207 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Jan 31 04:07:20 crc kubenswrapper[4827]: I0131 04:07:20.149987 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Jan 31 04:07:20 crc kubenswrapper[4827]: I0131 04:07:20.184061 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Jan 31 04:07:20 crc kubenswrapper[4827]: I0131 04:07:20.196447 4827 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-566b5b7845-24dd9"] Jan 31 04:07:20 crc kubenswrapper[4827]: I0131 04:07:20.197845 4827 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-566b5b7845-24dd9" Jan 31 04:07:20 crc kubenswrapper[4827]: I0131 04:07:20.223004 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/457c9c4f-0d32-47db-8d97-c28694c89936-config-data\") pod \"nova-metadata-0\" (UID: \"457c9c4f-0d32-47db-8d97-c28694c89936\") " pod="openstack/nova-metadata-0" Jan 31 04:07:20 crc kubenswrapper[4827]: I0131 04:07:20.223061 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/457c9c4f-0d32-47db-8d97-c28694c89936-logs\") pod \"nova-metadata-0\" (UID: \"457c9c4f-0d32-47db-8d97-c28694c89936\") " pod="openstack/nova-metadata-0" Jan 31 04:07:20 crc kubenswrapper[4827]: I0131 04:07:20.223084 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/457c9c4f-0d32-47db-8d97-c28694c89936-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"457c9c4f-0d32-47db-8d97-c28694c89936\") " pod="openstack/nova-metadata-0" Jan 31 04:07:20 crc kubenswrapper[4827]: I0131 04:07:20.223128 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j292l\" (UniqueName: \"kubernetes.io/projected/457c9c4f-0d32-47db-8d97-c28694c89936-kube-api-access-j292l\") pod \"nova-metadata-0\" (UID: \"457c9c4f-0d32-47db-8d97-c28694c89936\") " pod="openstack/nova-metadata-0" Jan 31 04:07:20 crc kubenswrapper[4827]: I0131 04:07:20.228177 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-566b5b7845-24dd9"] Jan 31 04:07:20 crc kubenswrapper[4827]: I0131 04:07:20.235221 4827 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 31 04:07:20 crc kubenswrapper[4827]: I0131 04:07:20.324759 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0c70cf0b-7e54-41ae-806b-0109d6780de7-config-data\") pod \"0c70cf0b-7e54-41ae-806b-0109d6780de7\" (UID: \"0c70cf0b-7e54-41ae-806b-0109d6780de7\") " Jan 31 04:07:20 crc kubenswrapper[4827]: I0131 04:07:20.325178 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0c70cf0b-7e54-41ae-806b-0109d6780de7-log-httpd\") pod \"0c70cf0b-7e54-41ae-806b-0109d6780de7\" (UID: \"0c70cf0b-7e54-41ae-806b-0109d6780de7\") " Jan 31 04:07:20 crc kubenswrapper[4827]: I0131 04:07:20.325204 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/0c70cf0b-7e54-41ae-806b-0109d6780de7-sg-core-conf-yaml\") pod \"0c70cf0b-7e54-41ae-806b-0109d6780de7\" (UID: \"0c70cf0b-7e54-41ae-806b-0109d6780de7\") " Jan 31 04:07:20 crc kubenswrapper[4827]: I0131 04:07:20.325239 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0c70cf0b-7e54-41ae-806b-0109d6780de7-scripts\") pod \"0c70cf0b-7e54-41ae-806b-0109d6780de7\" (UID: \"0c70cf0b-7e54-41ae-806b-0109d6780de7\") " Jan 31 04:07:20 crc kubenswrapper[4827]: I0131 04:07:20.325279 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0c70cf0b-7e54-41ae-806b-0109d6780de7-combined-ca-bundle\") pod \"0c70cf0b-7e54-41ae-806b-0109d6780de7\" (UID: \"0c70cf0b-7e54-41ae-806b-0109d6780de7\") " Jan 31 04:07:20 crc kubenswrapper[4827]: I0131 04:07:20.325307 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-24w4n\" (UniqueName: 
\"kubernetes.io/projected/0c70cf0b-7e54-41ae-806b-0109d6780de7-kube-api-access-24w4n\") pod \"0c70cf0b-7e54-41ae-806b-0109d6780de7\" (UID: \"0c70cf0b-7e54-41ae-806b-0109d6780de7\") " Jan 31 04:07:20 crc kubenswrapper[4827]: I0131 04:07:20.325434 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0c70cf0b-7e54-41ae-806b-0109d6780de7-run-httpd\") pod \"0c70cf0b-7e54-41ae-806b-0109d6780de7\" (UID: \"0c70cf0b-7e54-41ae-806b-0109d6780de7\") " Jan 31 04:07:20 crc kubenswrapper[4827]: I0131 04:07:20.325705 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0c70cf0b-7e54-41ae-806b-0109d6780de7-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "0c70cf0b-7e54-41ae-806b-0109d6780de7" (UID: "0c70cf0b-7e54-41ae-806b-0109d6780de7"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 04:07:20 crc kubenswrapper[4827]: I0131 04:07:20.326428 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0c70cf0b-7e54-41ae-806b-0109d6780de7-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "0c70cf0b-7e54-41ae-806b-0109d6780de7" (UID: "0c70cf0b-7e54-41ae-806b-0109d6780de7"). InnerVolumeSpecName "run-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 04:07:20 crc kubenswrapper[4827]: I0131 04:07:20.329006 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j292l\" (UniqueName: \"kubernetes.io/projected/457c9c4f-0d32-47db-8d97-c28694c89936-kube-api-access-j292l\") pod \"nova-metadata-0\" (UID: \"457c9c4f-0d32-47db-8d97-c28694c89936\") " pod="openstack/nova-metadata-0" Jan 31 04:07:20 crc kubenswrapper[4827]: I0131 04:07:20.329106 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/477674ec-e729-48e8-801c-497f49e9a6c8-ovsdbserver-nb\") pod \"dnsmasq-dns-566b5b7845-24dd9\" (UID: \"477674ec-e729-48e8-801c-497f49e9a6c8\") " pod="openstack/dnsmasq-dns-566b5b7845-24dd9" Jan 31 04:07:20 crc kubenswrapper[4827]: I0131 04:07:20.329223 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qhhhm\" (UniqueName: \"kubernetes.io/projected/477674ec-e729-48e8-801c-497f49e9a6c8-kube-api-access-qhhhm\") pod \"dnsmasq-dns-566b5b7845-24dd9\" (UID: \"477674ec-e729-48e8-801c-497f49e9a6c8\") " pod="openstack/dnsmasq-dns-566b5b7845-24dd9" Jan 31 04:07:20 crc kubenswrapper[4827]: I0131 04:07:20.329285 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/477674ec-e729-48e8-801c-497f49e9a6c8-dns-svc\") pod \"dnsmasq-dns-566b5b7845-24dd9\" (UID: \"477674ec-e729-48e8-801c-497f49e9a6c8\") " pod="openstack/dnsmasq-dns-566b5b7845-24dd9" Jan 31 04:07:20 crc kubenswrapper[4827]: I0131 04:07:20.329367 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/457c9c4f-0d32-47db-8d97-c28694c89936-config-data\") pod \"nova-metadata-0\" (UID: \"457c9c4f-0d32-47db-8d97-c28694c89936\") " 
pod="openstack/nova-metadata-0" Jan 31 04:07:20 crc kubenswrapper[4827]: I0131 04:07:20.329417 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/477674ec-e729-48e8-801c-497f49e9a6c8-config\") pod \"dnsmasq-dns-566b5b7845-24dd9\" (UID: \"477674ec-e729-48e8-801c-497f49e9a6c8\") " pod="openstack/dnsmasq-dns-566b5b7845-24dd9" Jan 31 04:07:20 crc kubenswrapper[4827]: I0131 04:07:20.329468 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/457c9c4f-0d32-47db-8d97-c28694c89936-logs\") pod \"nova-metadata-0\" (UID: \"457c9c4f-0d32-47db-8d97-c28694c89936\") " pod="openstack/nova-metadata-0" Jan 31 04:07:20 crc kubenswrapper[4827]: I0131 04:07:20.329502 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/457c9c4f-0d32-47db-8d97-c28694c89936-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"457c9c4f-0d32-47db-8d97-c28694c89936\") " pod="openstack/nova-metadata-0" Jan 31 04:07:20 crc kubenswrapper[4827]: I0131 04:07:20.329530 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/477674ec-e729-48e8-801c-497f49e9a6c8-ovsdbserver-sb\") pod \"dnsmasq-dns-566b5b7845-24dd9\" (UID: \"477674ec-e729-48e8-801c-497f49e9a6c8\") " pod="openstack/dnsmasq-dns-566b5b7845-24dd9" Jan 31 04:07:20 crc kubenswrapper[4827]: I0131 04:07:20.329596 4827 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0c70cf0b-7e54-41ae-806b-0109d6780de7-run-httpd\") on node \"crc\" DevicePath \"\"" Jan 31 04:07:20 crc kubenswrapper[4827]: I0131 04:07:20.329606 4827 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: 
\"kubernetes.io/empty-dir/0c70cf0b-7e54-41ae-806b-0109d6780de7-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 31 04:07:20 crc kubenswrapper[4827]: I0131 04:07:20.331535 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/457c9c4f-0d32-47db-8d97-c28694c89936-logs\") pod \"nova-metadata-0\" (UID: \"457c9c4f-0d32-47db-8d97-c28694c89936\") " pod="openstack/nova-metadata-0" Jan 31 04:07:20 crc kubenswrapper[4827]: I0131 04:07:20.332679 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0c70cf0b-7e54-41ae-806b-0109d6780de7-scripts" (OuterVolumeSpecName: "scripts") pod "0c70cf0b-7e54-41ae-806b-0109d6780de7" (UID: "0c70cf0b-7e54-41ae-806b-0109d6780de7"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 04:07:20 crc kubenswrapper[4827]: I0131 04:07:20.333534 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/457c9c4f-0d32-47db-8d97-c28694c89936-config-data\") pod \"nova-metadata-0\" (UID: \"457c9c4f-0d32-47db-8d97-c28694c89936\") " pod="openstack/nova-metadata-0" Jan 31 04:07:20 crc kubenswrapper[4827]: I0131 04:07:20.336447 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/457c9c4f-0d32-47db-8d97-c28694c89936-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"457c9c4f-0d32-47db-8d97-c28694c89936\") " pod="openstack/nova-metadata-0" Jan 31 04:07:20 crc kubenswrapper[4827]: I0131 04:07:20.339149 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0c70cf0b-7e54-41ae-806b-0109d6780de7-kube-api-access-24w4n" (OuterVolumeSpecName: "kube-api-access-24w4n") pod "0c70cf0b-7e54-41ae-806b-0109d6780de7" (UID: "0c70cf0b-7e54-41ae-806b-0109d6780de7"). InnerVolumeSpecName "kube-api-access-24w4n". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 04:07:20 crc kubenswrapper[4827]: I0131 04:07:20.350305 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j292l\" (UniqueName: \"kubernetes.io/projected/457c9c4f-0d32-47db-8d97-c28694c89936-kube-api-access-j292l\") pod \"nova-metadata-0\" (UID: \"457c9c4f-0d32-47db-8d97-c28694c89936\") " pod="openstack/nova-metadata-0" Jan 31 04:07:20 crc kubenswrapper[4827]: I0131 04:07:20.368079 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0c70cf0b-7e54-41ae-806b-0109d6780de7-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "0c70cf0b-7e54-41ae-806b-0109d6780de7" (UID: "0c70cf0b-7e54-41ae-806b-0109d6780de7"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 04:07:20 crc kubenswrapper[4827]: I0131 04:07:20.430685 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0c70cf0b-7e54-41ae-806b-0109d6780de7-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "0c70cf0b-7e54-41ae-806b-0109d6780de7" (UID: "0c70cf0b-7e54-41ae-806b-0109d6780de7"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 04:07:20 crc kubenswrapper[4827]: I0131 04:07:20.431382 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qhhhm\" (UniqueName: \"kubernetes.io/projected/477674ec-e729-48e8-801c-497f49e9a6c8-kube-api-access-qhhhm\") pod \"dnsmasq-dns-566b5b7845-24dd9\" (UID: \"477674ec-e729-48e8-801c-497f49e9a6c8\") " pod="openstack/dnsmasq-dns-566b5b7845-24dd9" Jan 31 04:07:20 crc kubenswrapper[4827]: I0131 04:07:20.431461 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/477674ec-e729-48e8-801c-497f49e9a6c8-dns-svc\") pod \"dnsmasq-dns-566b5b7845-24dd9\" (UID: \"477674ec-e729-48e8-801c-497f49e9a6c8\") " pod="openstack/dnsmasq-dns-566b5b7845-24dd9" Jan 31 04:07:20 crc kubenswrapper[4827]: I0131 04:07:20.431509 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/477674ec-e729-48e8-801c-497f49e9a6c8-config\") pod \"dnsmasq-dns-566b5b7845-24dd9\" (UID: \"477674ec-e729-48e8-801c-497f49e9a6c8\") " pod="openstack/dnsmasq-dns-566b5b7845-24dd9" Jan 31 04:07:20 crc kubenswrapper[4827]: I0131 04:07:20.431543 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/477674ec-e729-48e8-801c-497f49e9a6c8-ovsdbserver-sb\") pod \"dnsmasq-dns-566b5b7845-24dd9\" (UID: \"477674ec-e729-48e8-801c-497f49e9a6c8\") " pod="openstack/dnsmasq-dns-566b5b7845-24dd9" Jan 31 04:07:20 crc kubenswrapper[4827]: I0131 04:07:20.431599 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/477674ec-e729-48e8-801c-497f49e9a6c8-ovsdbserver-nb\") pod \"dnsmasq-dns-566b5b7845-24dd9\" (UID: \"477674ec-e729-48e8-801c-497f49e9a6c8\") " pod="openstack/dnsmasq-dns-566b5b7845-24dd9" Jan 31 04:07:20 
crc kubenswrapper[4827]: I0131 04:07:20.431652 4827 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/0c70cf0b-7e54-41ae-806b-0109d6780de7-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Jan 31 04:07:20 crc kubenswrapper[4827]: I0131 04:07:20.431663 4827 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0c70cf0b-7e54-41ae-806b-0109d6780de7-scripts\") on node \"crc\" DevicePath \"\"" Jan 31 04:07:20 crc kubenswrapper[4827]: I0131 04:07:20.431672 4827 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0c70cf0b-7e54-41ae-806b-0109d6780de7-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 31 04:07:20 crc kubenswrapper[4827]: I0131 04:07:20.431680 4827 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-24w4n\" (UniqueName: \"kubernetes.io/projected/0c70cf0b-7e54-41ae-806b-0109d6780de7-kube-api-access-24w4n\") on node \"crc\" DevicePath \"\"" Jan 31 04:07:20 crc kubenswrapper[4827]: I0131 04:07:20.432443 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/477674ec-e729-48e8-801c-497f49e9a6c8-ovsdbserver-nb\") pod \"dnsmasq-dns-566b5b7845-24dd9\" (UID: \"477674ec-e729-48e8-801c-497f49e9a6c8\") " pod="openstack/dnsmasq-dns-566b5b7845-24dd9" Jan 31 04:07:20 crc kubenswrapper[4827]: I0131 04:07:20.434122 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/477674ec-e729-48e8-801c-497f49e9a6c8-dns-svc\") pod \"dnsmasq-dns-566b5b7845-24dd9\" (UID: \"477674ec-e729-48e8-801c-497f49e9a6c8\") " pod="openstack/dnsmasq-dns-566b5b7845-24dd9" Jan 31 04:07:20 crc kubenswrapper[4827]: I0131 04:07:20.434808 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: 
\"kubernetes.io/configmap/477674ec-e729-48e8-801c-497f49e9a6c8-ovsdbserver-sb\") pod \"dnsmasq-dns-566b5b7845-24dd9\" (UID: \"477674ec-e729-48e8-801c-497f49e9a6c8\") " pod="openstack/dnsmasq-dns-566b5b7845-24dd9" Jan 31 04:07:20 crc kubenswrapper[4827]: I0131 04:07:20.435368 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/477674ec-e729-48e8-801c-497f49e9a6c8-config\") pod \"dnsmasq-dns-566b5b7845-24dd9\" (UID: \"477674ec-e729-48e8-801c-497f49e9a6c8\") " pod="openstack/dnsmasq-dns-566b5b7845-24dd9" Jan 31 04:07:20 crc kubenswrapper[4827]: I0131 04:07:20.444692 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Jan 31 04:07:20 crc kubenswrapper[4827]: I0131 04:07:20.451727 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qhhhm\" (UniqueName: \"kubernetes.io/projected/477674ec-e729-48e8-801c-497f49e9a6c8-kube-api-access-qhhhm\") pod \"dnsmasq-dns-566b5b7845-24dd9\" (UID: \"477674ec-e729-48e8-801c-497f49e9a6c8\") " pod="openstack/dnsmasq-dns-566b5b7845-24dd9" Jan 31 04:07:20 crc kubenswrapper[4827]: I0131 04:07:20.494858 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0c70cf0b-7e54-41ae-806b-0109d6780de7-config-data" (OuterVolumeSpecName: "config-data") pod "0c70cf0b-7e54-41ae-806b-0109d6780de7" (UID: "0c70cf0b-7e54-41ae-806b-0109d6780de7"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 04:07:20 crc kubenswrapper[4827]: I0131 04:07:20.539199 4827 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0c70cf0b-7e54-41ae-806b-0109d6780de7-config-data\") on node \"crc\" DevicePath \"\"" Jan 31 04:07:20 crc kubenswrapper[4827]: I0131 04:07:20.558094 4827 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-566b5b7845-24dd9" Jan 31 04:07:20 crc kubenswrapper[4827]: I0131 04:07:20.712268 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"0c70cf0b-7e54-41ae-806b-0109d6780de7","Type":"ContainerDied","Data":"63a275f8a83afdfbb2b3f2d3546092d059163a6809dd47cbb2394ad4dfaa2cad"} Jan 31 04:07:20 crc kubenswrapper[4827]: I0131 04:07:20.712324 4827 scope.go:117] "RemoveContainer" containerID="4c775c37505c1760d93030a67e4fcbca28e7e7e95d23d1057b2b573c9445f24a" Jan 31 04:07:20 crc kubenswrapper[4827]: I0131 04:07:20.712456 4827 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 31 04:07:20 crc kubenswrapper[4827]: I0131 04:07:20.714959 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 31 04:07:20 crc kubenswrapper[4827]: I0131 04:07:20.725714 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Jan 31 04:07:20 crc kubenswrapper[4827]: W0131 04:07:20.727168 4827 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod76102370_40f5_4616_9298_f2e4a0fb668e.slice/crio-508ef637a79d21edeba17aa397dc604a9648e9cc6c15431866be777a0625ffbe WatchSource:0}: Error finding container 508ef637a79d21edeba17aa397dc604a9648e9cc6c15431866be777a0625ffbe: Status 404 returned error can't find the container with id 508ef637a79d21edeba17aa397dc604a9648e9cc6c15431866be777a0625ffbe Jan 31 04:07:20 crc kubenswrapper[4827]: I0131 04:07:20.791503 4827 scope.go:117] "RemoveContainer" containerID="49466a6602a02adde6074a9fa8b2f701805cc12afdd0f6756c57514e6fbe4759" Jan 31 04:07:20 crc kubenswrapper[4827]: I0131 04:07:20.815504 4827 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 31 04:07:20 crc kubenswrapper[4827]: I0131 04:07:20.816760 4827 scope.go:117] "RemoveContainer" 
containerID="96f01035c869850edfe28fab6e37a212298de89f6b88cc69429532ce5e3d6d6c" Jan 31 04:07:20 crc kubenswrapper[4827]: I0131 04:07:20.847273 4827 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Jan 31 04:07:20 crc kubenswrapper[4827]: I0131 04:07:20.866679 4827 scope.go:117] "RemoveContainer" containerID="e9b3fd692eed24c7f472665bbc610ed24d532870cf7c03ed3f2624b06ffbf257" Jan 31 04:07:20 crc kubenswrapper[4827]: I0131 04:07:20.877959 4827 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Jan 31 04:07:20 crc kubenswrapper[4827]: E0131 04:07:20.878363 4827 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0c70cf0b-7e54-41ae-806b-0109d6780de7" containerName="ceilometer-central-agent" Jan 31 04:07:20 crc kubenswrapper[4827]: I0131 04:07:20.878382 4827 state_mem.go:107] "Deleted CPUSet assignment" podUID="0c70cf0b-7e54-41ae-806b-0109d6780de7" containerName="ceilometer-central-agent" Jan 31 04:07:20 crc kubenswrapper[4827]: E0131 04:07:20.878408 4827 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0c70cf0b-7e54-41ae-806b-0109d6780de7" containerName="ceilometer-notification-agent" Jan 31 04:07:20 crc kubenswrapper[4827]: I0131 04:07:20.878415 4827 state_mem.go:107] "Deleted CPUSet assignment" podUID="0c70cf0b-7e54-41ae-806b-0109d6780de7" containerName="ceilometer-notification-agent" Jan 31 04:07:20 crc kubenswrapper[4827]: E0131 04:07:20.878422 4827 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0c70cf0b-7e54-41ae-806b-0109d6780de7" containerName="sg-core" Jan 31 04:07:20 crc kubenswrapper[4827]: I0131 04:07:20.878428 4827 state_mem.go:107] "Deleted CPUSet assignment" podUID="0c70cf0b-7e54-41ae-806b-0109d6780de7" containerName="sg-core" Jan 31 04:07:20 crc kubenswrapper[4827]: E0131 04:07:20.878437 4827 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0c70cf0b-7e54-41ae-806b-0109d6780de7" containerName="proxy-httpd" Jan 31 04:07:20 crc 
kubenswrapper[4827]: I0131 04:07:20.878444 4827 state_mem.go:107] "Deleted CPUSet assignment" podUID="0c70cf0b-7e54-41ae-806b-0109d6780de7" containerName="proxy-httpd" Jan 31 04:07:20 crc kubenswrapper[4827]: I0131 04:07:20.878582 4827 memory_manager.go:354] "RemoveStaleState removing state" podUID="0c70cf0b-7e54-41ae-806b-0109d6780de7" containerName="proxy-httpd" Jan 31 04:07:20 crc kubenswrapper[4827]: I0131 04:07:20.878596 4827 memory_manager.go:354] "RemoveStaleState removing state" podUID="0c70cf0b-7e54-41ae-806b-0109d6780de7" containerName="ceilometer-notification-agent" Jan 31 04:07:20 crc kubenswrapper[4827]: I0131 04:07:20.878606 4827 memory_manager.go:354] "RemoveStaleState removing state" podUID="0c70cf0b-7e54-41ae-806b-0109d6780de7" containerName="ceilometer-central-agent" Jan 31 04:07:20 crc kubenswrapper[4827]: I0131 04:07:20.878615 4827 memory_manager.go:354] "RemoveStaleState removing state" podUID="0c70cf0b-7e54-41ae-806b-0109d6780de7" containerName="sg-core" Jan 31 04:07:20 crc kubenswrapper[4827]: I0131 04:07:20.880091 4827 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 31 04:07:20 crc kubenswrapper[4827]: I0131 04:07:20.885325 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-cell-mapping-4ggj8"] Jan 31 04:07:20 crc kubenswrapper[4827]: I0131 04:07:20.889678 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc" Jan 31 04:07:20 crc kubenswrapper[4827]: I0131 04:07:20.890423 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Jan 31 04:07:20 crc kubenswrapper[4827]: I0131 04:07:20.891096 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 31 04:07:20 crc kubenswrapper[4827]: I0131 04:07:20.912032 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Jan 31 04:07:20 crc kubenswrapper[4827]: I0131 04:07:20.968832 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/96f29ca8-8584-4f9f-9a5a-f026a8772c07-config-data\") pod \"ceilometer-0\" (UID: \"96f29ca8-8584-4f9f-9a5a-f026a8772c07\") " pod="openstack/ceilometer-0" Jan 31 04:07:20 crc kubenswrapper[4827]: I0131 04:07:20.968873 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/96f29ca8-8584-4f9f-9a5a-f026a8772c07-scripts\") pod \"ceilometer-0\" (UID: \"96f29ca8-8584-4f9f-9a5a-f026a8772c07\") " pod="openstack/ceilometer-0" Jan 31 04:07:20 crc kubenswrapper[4827]: I0131 04:07:20.968926 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/96f29ca8-8584-4f9f-9a5a-f026a8772c07-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"96f29ca8-8584-4f9f-9a5a-f026a8772c07\") " pod="openstack/ceilometer-0" Jan 31 04:07:20 crc 
kubenswrapper[4827]: I0131 04:07:20.968967 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/96f29ca8-8584-4f9f-9a5a-f026a8772c07-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"96f29ca8-8584-4f9f-9a5a-f026a8772c07\") " pod="openstack/ceilometer-0" Jan 31 04:07:20 crc kubenswrapper[4827]: I0131 04:07:20.968984 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/96f29ca8-8584-4f9f-9a5a-f026a8772c07-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"96f29ca8-8584-4f9f-9a5a-f026a8772c07\") " pod="openstack/ceilometer-0" Jan 31 04:07:20 crc kubenswrapper[4827]: I0131 04:07:20.969001 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/96f29ca8-8584-4f9f-9a5a-f026a8772c07-run-httpd\") pod \"ceilometer-0\" (UID: \"96f29ca8-8584-4f9f-9a5a-f026a8772c07\") " pod="openstack/ceilometer-0" Jan 31 04:07:20 crc kubenswrapper[4827]: I0131 04:07:20.969021 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/96f29ca8-8584-4f9f-9a5a-f026a8772c07-log-httpd\") pod \"ceilometer-0\" (UID: \"96f29ca8-8584-4f9f-9a5a-f026a8772c07\") " pod="openstack/ceilometer-0" Jan 31 04:07:20 crc kubenswrapper[4827]: I0131 04:07:20.969081 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p88kv\" (UniqueName: \"kubernetes.io/projected/96f29ca8-8584-4f9f-9a5a-f026a8772c07-kube-api-access-p88kv\") pod \"ceilometer-0\" (UID: \"96f29ca8-8584-4f9f-9a5a-f026a8772c07\") " pod="openstack/ceilometer-0" Jan 31 04:07:21 crc kubenswrapper[4827]: I0131 04:07:21.010643 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openstack/nova-cell1-novncproxy-0"] Jan 31 04:07:21 crc kubenswrapper[4827]: I0131 04:07:21.041807 4827 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-conductor-db-sync-dhwln"] Jan 31 04:07:21 crc kubenswrapper[4827]: I0131 04:07:21.042848 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-dhwln" Jan 31 04:07:21 crc kubenswrapper[4827]: I0131 04:07:21.048921 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-scripts" Jan 31 04:07:21 crc kubenswrapper[4827]: I0131 04:07:21.048978 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-config-data" Jan 31 04:07:21 crc kubenswrapper[4827]: I0131 04:07:21.057672 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-dhwln"] Jan 31 04:07:21 crc kubenswrapper[4827]: I0131 04:07:21.070400 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/96f29ca8-8584-4f9f-9a5a-f026a8772c07-scripts\") pod \"ceilometer-0\" (UID: \"96f29ca8-8584-4f9f-9a5a-f026a8772c07\") " pod="openstack/ceilometer-0" Jan 31 04:07:21 crc kubenswrapper[4827]: I0131 04:07:21.071229 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/96f29ca8-8584-4f9f-9a5a-f026a8772c07-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"96f29ca8-8584-4f9f-9a5a-f026a8772c07\") " pod="openstack/ceilometer-0" Jan 31 04:07:21 crc kubenswrapper[4827]: I0131 04:07:21.071421 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/96f29ca8-8584-4f9f-9a5a-f026a8772c07-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"96f29ca8-8584-4f9f-9a5a-f026a8772c07\") " pod="openstack/ceilometer-0" Jan 31 04:07:21 
crc kubenswrapper[4827]: I0131 04:07:21.071535 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/96f29ca8-8584-4f9f-9a5a-f026a8772c07-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"96f29ca8-8584-4f9f-9a5a-f026a8772c07\") " pod="openstack/ceilometer-0" Jan 31 04:07:21 crc kubenswrapper[4827]: I0131 04:07:21.071604 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/96f29ca8-8584-4f9f-9a5a-f026a8772c07-run-httpd\") pod \"ceilometer-0\" (UID: \"96f29ca8-8584-4f9f-9a5a-f026a8772c07\") " pod="openstack/ceilometer-0" Jan 31 04:07:21 crc kubenswrapper[4827]: I0131 04:07:21.071678 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/96f29ca8-8584-4f9f-9a5a-f026a8772c07-log-httpd\") pod \"ceilometer-0\" (UID: \"96f29ca8-8584-4f9f-9a5a-f026a8772c07\") " pod="openstack/ceilometer-0" Jan 31 04:07:21 crc kubenswrapper[4827]: I0131 04:07:21.071847 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p88kv\" (UniqueName: \"kubernetes.io/projected/96f29ca8-8584-4f9f-9a5a-f026a8772c07-kube-api-access-p88kv\") pod \"ceilometer-0\" (UID: \"96f29ca8-8584-4f9f-9a5a-f026a8772c07\") " pod="openstack/ceilometer-0" Jan 31 04:07:21 crc kubenswrapper[4827]: I0131 04:07:21.072968 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/96f29ca8-8584-4f9f-9a5a-f026a8772c07-log-httpd\") pod \"ceilometer-0\" (UID: \"96f29ca8-8584-4f9f-9a5a-f026a8772c07\") " pod="openstack/ceilometer-0" Jan 31 04:07:21 crc kubenswrapper[4827]: I0131 04:07:21.072570 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/96f29ca8-8584-4f9f-9a5a-f026a8772c07-run-httpd\") pod 
\"ceilometer-0\" (UID: \"96f29ca8-8584-4f9f-9a5a-f026a8772c07\") " pod="openstack/ceilometer-0" Jan 31 04:07:21 crc kubenswrapper[4827]: I0131 04:07:21.073303 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/96f29ca8-8584-4f9f-9a5a-f026a8772c07-config-data\") pod \"ceilometer-0\" (UID: \"96f29ca8-8584-4f9f-9a5a-f026a8772c07\") " pod="openstack/ceilometer-0" Jan 31 04:07:21 crc kubenswrapper[4827]: I0131 04:07:21.075589 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/96f29ca8-8584-4f9f-9a5a-f026a8772c07-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"96f29ca8-8584-4f9f-9a5a-f026a8772c07\") " pod="openstack/ceilometer-0" Jan 31 04:07:21 crc kubenswrapper[4827]: I0131 04:07:21.076696 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/96f29ca8-8584-4f9f-9a5a-f026a8772c07-scripts\") pod \"ceilometer-0\" (UID: \"96f29ca8-8584-4f9f-9a5a-f026a8772c07\") " pod="openstack/ceilometer-0" Jan 31 04:07:21 crc kubenswrapper[4827]: I0131 04:07:21.082719 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/96f29ca8-8584-4f9f-9a5a-f026a8772c07-config-data\") pod \"ceilometer-0\" (UID: \"96f29ca8-8584-4f9f-9a5a-f026a8772c07\") " pod="openstack/ceilometer-0" Jan 31 04:07:21 crc kubenswrapper[4827]: I0131 04:07:21.085905 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/96f29ca8-8584-4f9f-9a5a-f026a8772c07-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"96f29ca8-8584-4f9f-9a5a-f026a8772c07\") " pod="openstack/ceilometer-0" Jan 31 04:07:21 crc kubenswrapper[4827]: I0131 04:07:21.091722 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: 
\"kubernetes.io/secret/96f29ca8-8584-4f9f-9a5a-f026a8772c07-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"96f29ca8-8584-4f9f-9a5a-f026a8772c07\") " pod="openstack/ceilometer-0" Jan 31 04:07:21 crc kubenswrapper[4827]: I0131 04:07:21.092035 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p88kv\" (UniqueName: \"kubernetes.io/projected/96f29ca8-8584-4f9f-9a5a-f026a8772c07-kube-api-access-p88kv\") pod \"ceilometer-0\" (UID: \"96f29ca8-8584-4f9f-9a5a-f026a8772c07\") " pod="openstack/ceilometer-0" Jan 31 04:07:21 crc kubenswrapper[4827]: I0131 04:07:21.135297 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-566b5b7845-24dd9"] Jan 31 04:07:21 crc kubenswrapper[4827]: I0131 04:07:21.155541 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Jan 31 04:07:21 crc kubenswrapper[4827]: W0131 04:07:21.162512 4827 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod457c9c4f_0d32_47db_8d97_c28694c89936.slice/crio-293aca025c72fb756b03daa8d5b8f36c018191d59fe2edf5977e8de631914647 WatchSource:0}: Error finding container 293aca025c72fb756b03daa8d5b8f36c018191d59fe2edf5977e8de631914647: Status 404 returned error can't find the container with id 293aca025c72fb756b03daa8d5b8f36c018191d59fe2edf5977e8de631914647 Jan 31 04:07:21 crc kubenswrapper[4827]: I0131 04:07:21.175296 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w9pww\" (UniqueName: \"kubernetes.io/projected/455ea1a0-7a10-4f2c-ae49-9a52d3e72771-kube-api-access-w9pww\") pod \"nova-cell1-conductor-db-sync-dhwln\" (UID: \"455ea1a0-7a10-4f2c-ae49-9a52d3e72771\") " pod="openstack/nova-cell1-conductor-db-sync-dhwln" Jan 31 04:07:21 crc kubenswrapper[4827]: I0131 04:07:21.175542 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"scripts\" (UniqueName: \"kubernetes.io/secret/455ea1a0-7a10-4f2c-ae49-9a52d3e72771-scripts\") pod \"nova-cell1-conductor-db-sync-dhwln\" (UID: \"455ea1a0-7a10-4f2c-ae49-9a52d3e72771\") " pod="openstack/nova-cell1-conductor-db-sync-dhwln" Jan 31 04:07:21 crc kubenswrapper[4827]: I0131 04:07:21.175612 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/455ea1a0-7a10-4f2c-ae49-9a52d3e72771-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-dhwln\" (UID: \"455ea1a0-7a10-4f2c-ae49-9a52d3e72771\") " pod="openstack/nova-cell1-conductor-db-sync-dhwln" Jan 31 04:07:21 crc kubenswrapper[4827]: I0131 04:07:21.175679 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/455ea1a0-7a10-4f2c-ae49-9a52d3e72771-config-data\") pod \"nova-cell1-conductor-db-sync-dhwln\" (UID: \"455ea1a0-7a10-4f2c-ae49-9a52d3e72771\") " pod="openstack/nova-cell1-conductor-db-sync-dhwln" Jan 31 04:07:21 crc kubenswrapper[4827]: I0131 04:07:21.203659 4827 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 31 04:07:21 crc kubenswrapper[4827]: I0131 04:07:21.277575 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/455ea1a0-7a10-4f2c-ae49-9a52d3e72771-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-dhwln\" (UID: \"455ea1a0-7a10-4f2c-ae49-9a52d3e72771\") " pod="openstack/nova-cell1-conductor-db-sync-dhwln" Jan 31 04:07:21 crc kubenswrapper[4827]: I0131 04:07:21.277672 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/455ea1a0-7a10-4f2c-ae49-9a52d3e72771-config-data\") pod \"nova-cell1-conductor-db-sync-dhwln\" (UID: \"455ea1a0-7a10-4f2c-ae49-9a52d3e72771\") " pod="openstack/nova-cell1-conductor-db-sync-dhwln" Jan 31 04:07:21 crc kubenswrapper[4827]: I0131 04:07:21.277756 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w9pww\" (UniqueName: \"kubernetes.io/projected/455ea1a0-7a10-4f2c-ae49-9a52d3e72771-kube-api-access-w9pww\") pod \"nova-cell1-conductor-db-sync-dhwln\" (UID: \"455ea1a0-7a10-4f2c-ae49-9a52d3e72771\") " pod="openstack/nova-cell1-conductor-db-sync-dhwln" Jan 31 04:07:21 crc kubenswrapper[4827]: I0131 04:07:21.277789 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/455ea1a0-7a10-4f2c-ae49-9a52d3e72771-scripts\") pod \"nova-cell1-conductor-db-sync-dhwln\" (UID: \"455ea1a0-7a10-4f2c-ae49-9a52d3e72771\") " pod="openstack/nova-cell1-conductor-db-sync-dhwln" Jan 31 04:07:21 crc kubenswrapper[4827]: I0131 04:07:21.293689 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/455ea1a0-7a10-4f2c-ae49-9a52d3e72771-scripts\") pod \"nova-cell1-conductor-db-sync-dhwln\" (UID: \"455ea1a0-7a10-4f2c-ae49-9a52d3e72771\") " 
pod="openstack/nova-cell1-conductor-db-sync-dhwln" Jan 31 04:07:21 crc kubenswrapper[4827]: I0131 04:07:21.294076 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/455ea1a0-7a10-4f2c-ae49-9a52d3e72771-config-data\") pod \"nova-cell1-conductor-db-sync-dhwln\" (UID: \"455ea1a0-7a10-4f2c-ae49-9a52d3e72771\") " pod="openstack/nova-cell1-conductor-db-sync-dhwln" Jan 31 04:07:21 crc kubenswrapper[4827]: I0131 04:07:21.295435 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/455ea1a0-7a10-4f2c-ae49-9a52d3e72771-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-dhwln\" (UID: \"455ea1a0-7a10-4f2c-ae49-9a52d3e72771\") " pod="openstack/nova-cell1-conductor-db-sync-dhwln" Jan 31 04:07:21 crc kubenswrapper[4827]: I0131 04:07:21.300344 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w9pww\" (UniqueName: \"kubernetes.io/projected/455ea1a0-7a10-4f2c-ae49-9a52d3e72771-kube-api-access-w9pww\") pod \"nova-cell1-conductor-db-sync-dhwln\" (UID: \"455ea1a0-7a10-4f2c-ae49-9a52d3e72771\") " pod="openstack/nova-cell1-conductor-db-sync-dhwln" Jan 31 04:07:21 crc kubenswrapper[4827]: I0131 04:07:21.383084 4827 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-dhwln" Jan 31 04:07:21 crc kubenswrapper[4827]: I0131 04:07:21.644300 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 31 04:07:21 crc kubenswrapper[4827]: I0131 04:07:21.722156 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"929e59f8-3e14-46cc-bbaf-fda67b6b9d80","Type":"ContainerStarted","Data":"6e79cc287c6a4b42eb94094ecb14aab7ae275b40e518a9f271829e378bb513e3"} Jan 31 04:07:21 crc kubenswrapper[4827]: I0131 04:07:21.723316 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"96f29ca8-8584-4f9f-9a5a-f026a8772c07","Type":"ContainerStarted","Data":"46a949499bcc6660577fc13295ef9381290b9142da52b3753d0f3bb6dc21fade"} Jan 31 04:07:21 crc kubenswrapper[4827]: I0131 04:07:21.724438 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-4ggj8" event={"ID":"d4cc6b8f-4ec0-408e-a494-3e3eb5ca0ecb","Type":"ContainerStarted","Data":"019c6ed39d7653912087b16a1b28c8c5ff694ea01e2675fba3ee41184a2d78bc"} Jan 31 04:07:21 crc kubenswrapper[4827]: I0131 04:07:21.724458 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-4ggj8" event={"ID":"d4cc6b8f-4ec0-408e-a494-3e3eb5ca0ecb","Type":"ContainerStarted","Data":"85d7fda53aef33be457a9a558a099ec0e5388005d482058eed2e551692d7ace4"} Jan 31 04:07:21 crc kubenswrapper[4827]: I0131 04:07:21.741866 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"76102370-40f5-4616-9298-f2e4a0fb668e","Type":"ContainerStarted","Data":"508ef637a79d21edeba17aa397dc604a9648e9cc6c15431866be777a0625ffbe"} Jan 31 04:07:21 crc kubenswrapper[4827]: I0131 04:07:21.744781 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" 
event={"ID":"457c9c4f-0d32-47db-8d97-c28694c89936","Type":"ContainerStarted","Data":"293aca025c72fb756b03daa8d5b8f36c018191d59fe2edf5977e8de631914647"} Jan 31 04:07:21 crc kubenswrapper[4827]: I0131 04:07:21.746839 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"727577e7-9491-4e91-915f-359cd9b7f0be","Type":"ContainerStarted","Data":"8583dc7b3b3b1f213d64ae2796332557793015ffbae783e8b95da9dd791f0638"} Jan 31 04:07:21 crc kubenswrapper[4827]: I0131 04:07:21.748178 4827 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-cell-mapping-4ggj8" podStartSLOduration=2.748155521 podStartE2EDuration="2.748155521s" podCreationTimestamp="2026-01-31 04:07:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 04:07:21.742128955 +0000 UTC m=+1234.429209404" watchObservedRunningTime="2026-01-31 04:07:21.748155521 +0000 UTC m=+1234.435235960" Jan 31 04:07:21 crc kubenswrapper[4827]: I0131 04:07:21.751572 4827 generic.go:334] "Generic (PLEG): container finished" podID="477674ec-e729-48e8-801c-497f49e9a6c8" containerID="41842c647403bf3daf7553ad05bbea2c6e6089ce25427a39e08cce7703e1182b" exitCode=0 Jan 31 04:07:21 crc kubenswrapper[4827]: I0131 04:07:21.751648 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-566b5b7845-24dd9" event={"ID":"477674ec-e729-48e8-801c-497f49e9a6c8","Type":"ContainerDied","Data":"41842c647403bf3daf7553ad05bbea2c6e6089ce25427a39e08cce7703e1182b"} Jan 31 04:07:21 crc kubenswrapper[4827]: I0131 04:07:21.752545 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-566b5b7845-24dd9" event={"ID":"477674ec-e729-48e8-801c-497f49e9a6c8","Type":"ContainerStarted","Data":"c6f09ce769715977ec4a90d80f5cb6d1fed5afa316cb89315b5284b75ab781f1"} Jan 31 04:07:21 crc kubenswrapper[4827]: I0131 04:07:21.887636 4827 
kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-dhwln"] Jan 31 04:07:22 crc kubenswrapper[4827]: I0131 04:07:22.122846 4827 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0c70cf0b-7e54-41ae-806b-0109d6780de7" path="/var/lib/kubelet/pods/0c70cf0b-7e54-41ae-806b-0109d6780de7/volumes" Jan 31 04:07:22 crc kubenswrapper[4827]: I0131 04:07:22.803110 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-dhwln" event={"ID":"455ea1a0-7a10-4f2c-ae49-9a52d3e72771","Type":"ContainerStarted","Data":"4b0cd2aec0cce926d5bb6a28f74d048fa11b20b8a36f620f951358ef2a3a7a1f"} Jan 31 04:07:22 crc kubenswrapper[4827]: I0131 04:07:22.803158 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-dhwln" event={"ID":"455ea1a0-7a10-4f2c-ae49-9a52d3e72771","Type":"ContainerStarted","Data":"590dd8276e83e09f7f46b0e967b69d71c49acb8396964183e577a3e50a036de5"} Jan 31 04:07:22 crc kubenswrapper[4827]: I0131 04:07:22.844521 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-566b5b7845-24dd9" event={"ID":"477674ec-e729-48e8-801c-497f49e9a6c8","Type":"ContainerStarted","Data":"7dce2b51248bd789b174a40f27675b78519c1b3ad09660a2bbe51d2525bea123"} Jan 31 04:07:22 crc kubenswrapper[4827]: I0131 04:07:22.846066 4827 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-566b5b7845-24dd9" Jan 31 04:07:22 crc kubenswrapper[4827]: I0131 04:07:22.854723 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"96f29ca8-8584-4f9f-9a5a-f026a8772c07","Type":"ContainerStarted","Data":"4e47616a5c53b4363f494134ec1d8e0dd17e84ef1b44f91c26bb6c00a1319493"} Jan 31 04:07:22 crc kubenswrapper[4827]: I0131 04:07:22.880760 4827 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-conductor-db-sync-dhwln" 
podStartSLOduration=1.880742041 podStartE2EDuration="1.880742041s" podCreationTimestamp="2026-01-31 04:07:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 04:07:22.845727872 +0000 UTC m=+1235.532808321" watchObservedRunningTime="2026-01-31 04:07:22.880742041 +0000 UTC m=+1235.567822490" Jan 31 04:07:22 crc kubenswrapper[4827]: I0131 04:07:22.887979 4827 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-566b5b7845-24dd9" podStartSLOduration=2.887961503 podStartE2EDuration="2.887961503s" podCreationTimestamp="2026-01-31 04:07:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 04:07:22.878261623 +0000 UTC m=+1235.565342072" watchObservedRunningTime="2026-01-31 04:07:22.887961503 +0000 UTC m=+1235.575041952" Jan 31 04:07:23 crc kubenswrapper[4827]: I0131 04:07:23.557686 4827 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Jan 31 04:07:23 crc kubenswrapper[4827]: I0131 04:07:23.573707 4827 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 31 04:07:24 crc kubenswrapper[4827]: I0131 04:07:24.872185 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"96f29ca8-8584-4f9f-9a5a-f026a8772c07","Type":"ContainerStarted","Data":"4e9c9584fea5c75ae3ae32c5c512051d545bcfcd435b2105a7f10266e80b05c6"} Jan 31 04:07:24 crc kubenswrapper[4827]: I0131 04:07:24.875184 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"76102370-40f5-4616-9298-f2e4a0fb668e","Type":"ContainerStarted","Data":"9d3605bea248bfd0d8604f9046a8db4d747e419979fec6d0308451ddfb15dc83"} Jan 31 04:07:24 crc kubenswrapper[4827]: I0131 04:07:24.877989 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/nova-metadata-0" event={"ID":"457c9c4f-0d32-47db-8d97-c28694c89936","Type":"ContainerStarted","Data":"edf9dedd39b34ddb4d3ebee6628f87aba8cf14c925158d8f8e7743376266123b"} Jan 31 04:07:24 crc kubenswrapper[4827]: I0131 04:07:24.878018 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"457c9c4f-0d32-47db-8d97-c28694c89936","Type":"ContainerStarted","Data":"286d68671f2beb12b39d23810b06e433eb5b3e3216bef55ec99ec39c6166f0e5"} Jan 31 04:07:24 crc kubenswrapper[4827]: I0131 04:07:24.878120 4827 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="457c9c4f-0d32-47db-8d97-c28694c89936" containerName="nova-metadata-log" containerID="cri-o://286d68671f2beb12b39d23810b06e433eb5b3e3216bef55ec99ec39c6166f0e5" gracePeriod=30 Jan 31 04:07:24 crc kubenswrapper[4827]: I0131 04:07:24.878377 4827 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="457c9c4f-0d32-47db-8d97-c28694c89936" containerName="nova-metadata-metadata" containerID="cri-o://edf9dedd39b34ddb4d3ebee6628f87aba8cf14c925158d8f8e7743376266123b" gracePeriod=30 Jan 31 04:07:24 crc kubenswrapper[4827]: I0131 04:07:24.880156 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"727577e7-9491-4e91-915f-359cd9b7f0be","Type":"ContainerStarted","Data":"41adc229054f196f5f4020b4baaf3e98b5a0c650dc5cfa3d9ea846188f09c5b8"} Jan 31 04:07:24 crc kubenswrapper[4827]: I0131 04:07:24.880257 4827 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-cell1-novncproxy-0" podUID="727577e7-9491-4e91-915f-359cd9b7f0be" containerName="nova-cell1-novncproxy-novncproxy" containerID="cri-o://41adc229054f196f5f4020b4baaf3e98b5a0c650dc5cfa3d9ea846188f09c5b8" gracePeriod=30 Jan 31 04:07:24 crc kubenswrapper[4827]: I0131 04:07:24.883033 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/nova-api-0" event={"ID":"929e59f8-3e14-46cc-bbaf-fda67b6b9d80","Type":"ContainerStarted","Data":"ed3bc2156673f4df424ac5ea4cea74bb4d691cb3bc4a146b0e7d40f6c072e277"} Jan 31 04:07:24 crc kubenswrapper[4827]: I0131 04:07:24.883061 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"929e59f8-3e14-46cc-bbaf-fda67b6b9d80","Type":"ContainerStarted","Data":"0000d2a727f5c592650e72974494ea3f13ed2562b2db4b38735bce7eab44e396"} Jan 31 04:07:24 crc kubenswrapper[4827]: I0131 04:07:24.892051 4827 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=2.379238637 podStartE2EDuration="5.892031658s" podCreationTimestamp="2026-01-31 04:07:19 +0000 UTC" firstStartedPulling="2026-01-31 04:07:20.735161906 +0000 UTC m=+1233.422242355" lastFinishedPulling="2026-01-31 04:07:24.247954927 +0000 UTC m=+1236.935035376" observedRunningTime="2026-01-31 04:07:24.89113544 +0000 UTC m=+1237.578215889" watchObservedRunningTime="2026-01-31 04:07:24.892031658 +0000 UTC m=+1237.579112107" Jan 31 04:07:24 crc kubenswrapper[4827]: I0131 04:07:24.920134 4827 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.842785836 podStartE2EDuration="5.920113913s" podCreationTimestamp="2026-01-31 04:07:19 +0000 UTC" firstStartedPulling="2026-01-31 04:07:21.170785145 +0000 UTC m=+1233.857865594" lastFinishedPulling="2026-01-31 04:07:24.248113222 +0000 UTC m=+1236.935193671" observedRunningTime="2026-01-31 04:07:24.913586962 +0000 UTC m=+1237.600667431" watchObservedRunningTime="2026-01-31 04:07:24.920113913 +0000 UTC m=+1237.607194362" Jan 31 04:07:24 crc kubenswrapper[4827]: I0131 04:07:24.937170 4827 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-novncproxy-0" podStartSLOduration=2.725380679 podStartE2EDuration="5.937153168s" podCreationTimestamp="2026-01-31 04:07:19 +0000 UTC" 
firstStartedPulling="2026-01-31 04:07:21.023762836 +0000 UTC m=+1233.710843285" lastFinishedPulling="2026-01-31 04:07:24.235535325 +0000 UTC m=+1236.922615774" observedRunningTime="2026-01-31 04:07:24.93136268 +0000 UTC m=+1237.618443139" watchObservedRunningTime="2026-01-31 04:07:24.937153168 +0000 UTC m=+1237.624233617" Jan 31 04:07:24 crc kubenswrapper[4827]: I0131 04:07:24.960274 4827 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.446598463 podStartE2EDuration="5.96025236s" podCreationTimestamp="2026-01-31 04:07:19 +0000 UTC" firstStartedPulling="2026-01-31 04:07:20.736359483 +0000 UTC m=+1233.423439932" lastFinishedPulling="2026-01-31 04:07:24.25001338 +0000 UTC m=+1236.937093829" observedRunningTime="2026-01-31 04:07:24.954233205 +0000 UTC m=+1237.641313684" watchObservedRunningTime="2026-01-31 04:07:24.96025236 +0000 UTC m=+1237.647332809" Jan 31 04:07:25 crc kubenswrapper[4827]: I0131 04:07:25.150934 4827 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Jan 31 04:07:25 crc kubenswrapper[4827]: I0131 04:07:25.184268 4827 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-novncproxy-0" Jan 31 04:07:25 crc kubenswrapper[4827]: I0131 04:07:25.446188 4827 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Jan 31 04:07:25 crc kubenswrapper[4827]: I0131 04:07:25.446270 4827 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Jan 31 04:07:25 crc kubenswrapper[4827]: I0131 04:07:25.891651 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"96f29ca8-8584-4f9f-9a5a-f026a8772c07","Type":"ContainerStarted","Data":"c3eefca60adf7fb4628988a3ff3f6e8b20a170306170ac72e34b96ac30bbc393"} Jan 31 04:07:25 crc kubenswrapper[4827]: I0131 04:07:25.893266 4827 generic.go:334] "Generic (PLEG): 
container finished" podID="457c9c4f-0d32-47db-8d97-c28694c89936" containerID="286d68671f2beb12b39d23810b06e433eb5b3e3216bef55ec99ec39c6166f0e5" exitCode=143 Jan 31 04:07:25 crc kubenswrapper[4827]: I0131 04:07:25.893318 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"457c9c4f-0d32-47db-8d97-c28694c89936","Type":"ContainerDied","Data":"286d68671f2beb12b39d23810b06e433eb5b3e3216bef55ec99ec39c6166f0e5"} Jan 31 04:07:26 crc kubenswrapper[4827]: I0131 04:07:26.025086 4827 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/kube-state-metrics-0" Jan 31 04:07:27 crc kubenswrapper[4827]: I0131 04:07:27.916004 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"96f29ca8-8584-4f9f-9a5a-f026a8772c07","Type":"ContainerStarted","Data":"bb0aff0b9841d4e2ea7b0b0d809302a366b0ac59ef67ef97a79097637c4ee7d8"} Jan 31 04:07:27 crc kubenswrapper[4827]: I0131 04:07:27.916390 4827 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Jan 31 04:07:27 crc kubenswrapper[4827]: I0131 04:07:27.946935 4827 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.053176717 podStartE2EDuration="7.946913234s" podCreationTimestamp="2026-01-31 04:07:20 +0000 UTC" firstStartedPulling="2026-01-31 04:07:21.657726525 +0000 UTC m=+1234.344806964" lastFinishedPulling="2026-01-31 04:07:27.551463012 +0000 UTC m=+1240.238543481" observedRunningTime="2026-01-31 04:07:27.940821976 +0000 UTC m=+1240.627902445" watchObservedRunningTime="2026-01-31 04:07:27.946913234 +0000 UTC m=+1240.633993703" Jan 31 04:07:28 crc kubenswrapper[4827]: I0131 04:07:28.941633 4827 generic.go:334] "Generic (PLEG): container finished" podID="d4cc6b8f-4ec0-408e-a494-3e3eb5ca0ecb" containerID="019c6ed39d7653912087b16a1b28c8c5ff694ea01e2675fba3ee41184a2d78bc" exitCode=0 Jan 31 04:07:28 crc kubenswrapper[4827]: 
I0131 04:07:28.941865 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-4ggj8" event={"ID":"d4cc6b8f-4ec0-408e-a494-3e3eb5ca0ecb","Type":"ContainerDied","Data":"019c6ed39d7653912087b16a1b28c8c5ff694ea01e2675fba3ee41184a2d78bc"} Jan 31 04:07:29 crc kubenswrapper[4827]: I0131 04:07:29.955965 4827 generic.go:334] "Generic (PLEG): container finished" podID="455ea1a0-7a10-4f2c-ae49-9a52d3e72771" containerID="4b0cd2aec0cce926d5bb6a28f74d048fa11b20b8a36f620f951358ef2a3a7a1f" exitCode=0 Jan 31 04:07:29 crc kubenswrapper[4827]: I0131 04:07:29.956083 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-dhwln" event={"ID":"455ea1a0-7a10-4f2c-ae49-9a52d3e72771","Type":"ContainerDied","Data":"4b0cd2aec0cce926d5bb6a28f74d048fa11b20b8a36f620f951358ef2a3a7a1f"} Jan 31 04:07:29 crc kubenswrapper[4827]: I0131 04:07:29.995185 4827 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Jan 31 04:07:29 crc kubenswrapper[4827]: I0131 04:07:29.995241 4827 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Jan 31 04:07:30 crc kubenswrapper[4827]: I0131 04:07:30.151710 4827 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Jan 31 04:07:30 crc kubenswrapper[4827]: I0131 04:07:30.183377 4827 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Jan 31 04:07:30 crc kubenswrapper[4827]: I0131 04:07:30.395162 4827 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-cell-mapping-4ggj8" Jan 31 04:07:30 crc kubenswrapper[4827]: I0131 04:07:30.457810 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ht5x4\" (UniqueName: \"kubernetes.io/projected/d4cc6b8f-4ec0-408e-a494-3e3eb5ca0ecb-kube-api-access-ht5x4\") pod \"d4cc6b8f-4ec0-408e-a494-3e3eb5ca0ecb\" (UID: \"d4cc6b8f-4ec0-408e-a494-3e3eb5ca0ecb\") " Jan 31 04:07:30 crc kubenswrapper[4827]: I0131 04:07:30.457862 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d4cc6b8f-4ec0-408e-a494-3e3eb5ca0ecb-config-data\") pod \"d4cc6b8f-4ec0-408e-a494-3e3eb5ca0ecb\" (UID: \"d4cc6b8f-4ec0-408e-a494-3e3eb5ca0ecb\") " Jan 31 04:07:30 crc kubenswrapper[4827]: I0131 04:07:30.458047 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d4cc6b8f-4ec0-408e-a494-3e3eb5ca0ecb-combined-ca-bundle\") pod \"d4cc6b8f-4ec0-408e-a494-3e3eb5ca0ecb\" (UID: \"d4cc6b8f-4ec0-408e-a494-3e3eb5ca0ecb\") " Jan 31 04:07:30 crc kubenswrapper[4827]: I0131 04:07:30.458094 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d4cc6b8f-4ec0-408e-a494-3e3eb5ca0ecb-scripts\") pod \"d4cc6b8f-4ec0-408e-a494-3e3eb5ca0ecb\" (UID: \"d4cc6b8f-4ec0-408e-a494-3e3eb5ca0ecb\") " Jan 31 04:07:30 crc kubenswrapper[4827]: I0131 04:07:30.464084 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d4cc6b8f-4ec0-408e-a494-3e3eb5ca0ecb-kube-api-access-ht5x4" (OuterVolumeSpecName: "kube-api-access-ht5x4") pod "d4cc6b8f-4ec0-408e-a494-3e3eb5ca0ecb" (UID: "d4cc6b8f-4ec0-408e-a494-3e3eb5ca0ecb"). InnerVolumeSpecName "kube-api-access-ht5x4". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 04:07:30 crc kubenswrapper[4827]: I0131 04:07:30.489022 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d4cc6b8f-4ec0-408e-a494-3e3eb5ca0ecb-scripts" (OuterVolumeSpecName: "scripts") pod "d4cc6b8f-4ec0-408e-a494-3e3eb5ca0ecb" (UID: "d4cc6b8f-4ec0-408e-a494-3e3eb5ca0ecb"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 04:07:30 crc kubenswrapper[4827]: I0131 04:07:30.493287 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d4cc6b8f-4ec0-408e-a494-3e3eb5ca0ecb-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "d4cc6b8f-4ec0-408e-a494-3e3eb5ca0ecb" (UID: "d4cc6b8f-4ec0-408e-a494-3e3eb5ca0ecb"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 04:07:30 crc kubenswrapper[4827]: I0131 04:07:30.511471 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d4cc6b8f-4ec0-408e-a494-3e3eb5ca0ecb-config-data" (OuterVolumeSpecName: "config-data") pod "d4cc6b8f-4ec0-408e-a494-3e3eb5ca0ecb" (UID: "d4cc6b8f-4ec0-408e-a494-3e3eb5ca0ecb"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 04:07:30 crc kubenswrapper[4827]: I0131 04:07:30.559828 4827 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-566b5b7845-24dd9" Jan 31 04:07:30 crc kubenswrapper[4827]: I0131 04:07:30.561792 4827 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d4cc6b8f-4ec0-408e-a494-3e3eb5ca0ecb-config-data\") on node \"crc\" DevicePath \"\"" Jan 31 04:07:30 crc kubenswrapper[4827]: I0131 04:07:30.561815 4827 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d4cc6b8f-4ec0-408e-a494-3e3eb5ca0ecb-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 31 04:07:30 crc kubenswrapper[4827]: I0131 04:07:30.561825 4827 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d4cc6b8f-4ec0-408e-a494-3e3eb5ca0ecb-scripts\") on node \"crc\" DevicePath \"\"" Jan 31 04:07:30 crc kubenswrapper[4827]: I0131 04:07:30.561834 4827 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ht5x4\" (UniqueName: \"kubernetes.io/projected/d4cc6b8f-4ec0-408e-a494-3e3eb5ca0ecb-kube-api-access-ht5x4\") on node \"crc\" DevicePath \"\"" Jan 31 04:07:30 crc kubenswrapper[4827]: I0131 04:07:30.649070 4827 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6d97fcdd8f-ht8c2"] Jan 31 04:07:30 crc kubenswrapper[4827]: I0131 04:07:30.649285 4827 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-6d97fcdd8f-ht8c2" podUID="5f38aa0f-0fc6-4caf-a17c-d4d1f034e6f8" containerName="dnsmasq-dns" containerID="cri-o://06d7363ec90e1783a9f34fabc72fa9b099a6766da31bf2cd8daf11fcfbb67584" gracePeriod=10 Jan 31 04:07:30 crc kubenswrapper[4827]: I0131 04:07:30.966151 4827 generic.go:334] "Generic (PLEG): container finished" podID="5f38aa0f-0fc6-4caf-a17c-d4d1f034e6f8" 
containerID="06d7363ec90e1783a9f34fabc72fa9b099a6766da31bf2cd8daf11fcfbb67584" exitCode=0 Jan 31 04:07:30 crc kubenswrapper[4827]: I0131 04:07:30.966207 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6d97fcdd8f-ht8c2" event={"ID":"5f38aa0f-0fc6-4caf-a17c-d4d1f034e6f8","Type":"ContainerDied","Data":"06d7363ec90e1783a9f34fabc72fa9b099a6766da31bf2cd8daf11fcfbb67584"} Jan 31 04:07:30 crc kubenswrapper[4827]: I0131 04:07:30.968846 4827 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-4ggj8" Jan 31 04:07:30 crc kubenswrapper[4827]: I0131 04:07:30.982393 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-4ggj8" event={"ID":"d4cc6b8f-4ec0-408e-a494-3e3eb5ca0ecb","Type":"ContainerDied","Data":"85d7fda53aef33be457a9a558a099ec0e5388005d482058eed2e551692d7ace4"} Jan 31 04:07:30 crc kubenswrapper[4827]: I0131 04:07:30.982451 4827 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="85d7fda53aef33be457a9a558a099ec0e5388005d482058eed2e551692d7ace4" Jan 31 04:07:31 crc kubenswrapper[4827]: I0131 04:07:31.017415 4827 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Jan 31 04:07:31 crc kubenswrapper[4827]: I0131 04:07:31.081305 4827 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="929e59f8-3e14-46cc-bbaf-fda67b6b9d80" containerName="nova-api-log" probeResult="failure" output="Get \"http://10.217.0.177:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 31 04:07:31 crc kubenswrapper[4827]: I0131 04:07:31.081842 4827 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="929e59f8-3e14-46cc-bbaf-fda67b6b9d80" containerName="nova-api-api" probeResult="failure" output="Get \"http://10.217.0.177:8774/\": context deadline exceeded (Client.Timeout exceeded while 
awaiting headers)" Jan 31 04:07:31 crc kubenswrapper[4827]: I0131 04:07:31.218632 4827 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Jan 31 04:07:31 crc kubenswrapper[4827]: I0131 04:07:31.219096 4827 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="929e59f8-3e14-46cc-bbaf-fda67b6b9d80" containerName="nova-api-log" containerID="cri-o://0000d2a727f5c592650e72974494ea3f13ed2562b2db4b38735bce7eab44e396" gracePeriod=30 Jan 31 04:07:31 crc kubenswrapper[4827]: I0131 04:07:31.219241 4827 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="929e59f8-3e14-46cc-bbaf-fda67b6b9d80" containerName="nova-api-api" containerID="cri-o://ed3bc2156673f4df424ac5ea4cea74bb4d691cb3bc4a146b0e7d40f6c072e277" gracePeriod=30 Jan 31 04:07:31 crc kubenswrapper[4827]: I0131 04:07:31.414047 4827 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-dhwln" Jan 31 04:07:31 crc kubenswrapper[4827]: I0131 04:07:31.500540 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w9pww\" (UniqueName: \"kubernetes.io/projected/455ea1a0-7a10-4f2c-ae49-9a52d3e72771-kube-api-access-w9pww\") pod \"455ea1a0-7a10-4f2c-ae49-9a52d3e72771\" (UID: \"455ea1a0-7a10-4f2c-ae49-9a52d3e72771\") " Jan 31 04:07:31 crc kubenswrapper[4827]: I0131 04:07:31.500701 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/455ea1a0-7a10-4f2c-ae49-9a52d3e72771-config-data\") pod \"455ea1a0-7a10-4f2c-ae49-9a52d3e72771\" (UID: \"455ea1a0-7a10-4f2c-ae49-9a52d3e72771\") " Jan 31 04:07:31 crc kubenswrapper[4827]: I0131 04:07:31.500831 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/455ea1a0-7a10-4f2c-ae49-9a52d3e72771-combined-ca-bundle\") pod \"455ea1a0-7a10-4f2c-ae49-9a52d3e72771\" (UID: \"455ea1a0-7a10-4f2c-ae49-9a52d3e72771\") " Jan 31 04:07:31 crc kubenswrapper[4827]: I0131 04:07:31.500858 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/455ea1a0-7a10-4f2c-ae49-9a52d3e72771-scripts\") pod \"455ea1a0-7a10-4f2c-ae49-9a52d3e72771\" (UID: \"455ea1a0-7a10-4f2c-ae49-9a52d3e72771\") " Jan 31 04:07:31 crc kubenswrapper[4827]: I0131 04:07:31.522144 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/455ea1a0-7a10-4f2c-ae49-9a52d3e72771-scripts" (OuterVolumeSpecName: "scripts") pod "455ea1a0-7a10-4f2c-ae49-9a52d3e72771" (UID: "455ea1a0-7a10-4f2c-ae49-9a52d3e72771"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 04:07:31 crc kubenswrapper[4827]: I0131 04:07:31.522218 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/455ea1a0-7a10-4f2c-ae49-9a52d3e72771-kube-api-access-w9pww" (OuterVolumeSpecName: "kube-api-access-w9pww") pod "455ea1a0-7a10-4f2c-ae49-9a52d3e72771" (UID: "455ea1a0-7a10-4f2c-ae49-9a52d3e72771"). InnerVolumeSpecName "kube-api-access-w9pww". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 04:07:31 crc kubenswrapper[4827]: I0131 04:07:31.553798 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/455ea1a0-7a10-4f2c-ae49-9a52d3e72771-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "455ea1a0-7a10-4f2c-ae49-9a52d3e72771" (UID: "455ea1a0-7a10-4f2c-ae49-9a52d3e72771"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 04:07:31 crc kubenswrapper[4827]: I0131 04:07:31.560983 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/455ea1a0-7a10-4f2c-ae49-9a52d3e72771-config-data" (OuterVolumeSpecName: "config-data") pod "455ea1a0-7a10-4f2c-ae49-9a52d3e72771" (UID: "455ea1a0-7a10-4f2c-ae49-9a52d3e72771"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 04:07:31 crc kubenswrapper[4827]: I0131 04:07:31.603833 4827 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/455ea1a0-7a10-4f2c-ae49-9a52d3e72771-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 31 04:07:31 crc kubenswrapper[4827]: I0131 04:07:31.603997 4827 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/455ea1a0-7a10-4f2c-ae49-9a52d3e72771-scripts\") on node \"crc\" DevicePath \"\"" Jan 31 04:07:31 crc kubenswrapper[4827]: I0131 04:07:31.604014 4827 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w9pww\" (UniqueName: \"kubernetes.io/projected/455ea1a0-7a10-4f2c-ae49-9a52d3e72771-kube-api-access-w9pww\") on node \"crc\" DevicePath \"\"" Jan 31 04:07:31 crc kubenswrapper[4827]: I0131 04:07:31.604026 4827 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/455ea1a0-7a10-4f2c-ae49-9a52d3e72771-config-data\") on node \"crc\" DevicePath \"\"" Jan 31 04:07:31 crc kubenswrapper[4827]: I0131 04:07:31.613341 4827 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Jan 31 04:07:31 crc kubenswrapper[4827]: I0131 04:07:31.648456 4827 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6d97fcdd8f-ht8c2" Jan 31 04:07:31 crc kubenswrapper[4827]: I0131 04:07:31.705863 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/5f38aa0f-0fc6-4caf-a17c-d4d1f034e6f8-ovsdbserver-sb\") pod \"5f38aa0f-0fc6-4caf-a17c-d4d1f034e6f8\" (UID: \"5f38aa0f-0fc6-4caf-a17c-d4d1f034e6f8\") " Jan 31 04:07:31 crc kubenswrapper[4827]: I0131 04:07:31.706047 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/5f38aa0f-0fc6-4caf-a17c-d4d1f034e6f8-ovsdbserver-nb\") pod \"5f38aa0f-0fc6-4caf-a17c-d4d1f034e6f8\" (UID: \"5f38aa0f-0fc6-4caf-a17c-d4d1f034e6f8\") " Jan 31 04:07:31 crc kubenswrapper[4827]: I0131 04:07:31.706205 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5f38aa0f-0fc6-4caf-a17c-d4d1f034e6f8-dns-svc\") pod \"5f38aa0f-0fc6-4caf-a17c-d4d1f034e6f8\" (UID: \"5f38aa0f-0fc6-4caf-a17c-d4d1f034e6f8\") " Jan 31 04:07:31 crc kubenswrapper[4827]: I0131 04:07:31.706253 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5f38aa0f-0fc6-4caf-a17c-d4d1f034e6f8-config\") pod \"5f38aa0f-0fc6-4caf-a17c-d4d1f034e6f8\" (UID: \"5f38aa0f-0fc6-4caf-a17c-d4d1f034e6f8\") " Jan 31 04:07:31 crc kubenswrapper[4827]: I0131 04:07:31.706368 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pdsqp\" (UniqueName: \"kubernetes.io/projected/5f38aa0f-0fc6-4caf-a17c-d4d1f034e6f8-kube-api-access-pdsqp\") pod \"5f38aa0f-0fc6-4caf-a17c-d4d1f034e6f8\" (UID: \"5f38aa0f-0fc6-4caf-a17c-d4d1f034e6f8\") " Jan 31 04:07:31 crc kubenswrapper[4827]: I0131 04:07:31.714626 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/projected/5f38aa0f-0fc6-4caf-a17c-d4d1f034e6f8-kube-api-access-pdsqp" (OuterVolumeSpecName: "kube-api-access-pdsqp") pod "5f38aa0f-0fc6-4caf-a17c-d4d1f034e6f8" (UID: "5f38aa0f-0fc6-4caf-a17c-d4d1f034e6f8"). InnerVolumeSpecName "kube-api-access-pdsqp". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 04:07:31 crc kubenswrapper[4827]: I0131 04:07:31.749552 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5f38aa0f-0fc6-4caf-a17c-d4d1f034e6f8-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "5f38aa0f-0fc6-4caf-a17c-d4d1f034e6f8" (UID: "5f38aa0f-0fc6-4caf-a17c-d4d1f034e6f8"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 04:07:31 crc kubenswrapper[4827]: I0131 04:07:31.753538 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5f38aa0f-0fc6-4caf-a17c-d4d1f034e6f8-config" (OuterVolumeSpecName: "config") pod "5f38aa0f-0fc6-4caf-a17c-d4d1f034e6f8" (UID: "5f38aa0f-0fc6-4caf-a17c-d4d1f034e6f8"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 04:07:31 crc kubenswrapper[4827]: I0131 04:07:31.758309 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5f38aa0f-0fc6-4caf-a17c-d4d1f034e6f8-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "5f38aa0f-0fc6-4caf-a17c-d4d1f034e6f8" (UID: "5f38aa0f-0fc6-4caf-a17c-d4d1f034e6f8"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 04:07:31 crc kubenswrapper[4827]: I0131 04:07:31.770573 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5f38aa0f-0fc6-4caf-a17c-d4d1f034e6f8-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "5f38aa0f-0fc6-4caf-a17c-d4d1f034e6f8" (UID: "5f38aa0f-0fc6-4caf-a17c-d4d1f034e6f8"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 04:07:31 crc kubenswrapper[4827]: I0131 04:07:31.808694 4827 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5f38aa0f-0fc6-4caf-a17c-d4d1f034e6f8-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 31 04:07:31 crc kubenswrapper[4827]: I0131 04:07:31.809041 4827 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5f38aa0f-0fc6-4caf-a17c-d4d1f034e6f8-config\") on node \"crc\" DevicePath \"\"" Jan 31 04:07:31 crc kubenswrapper[4827]: I0131 04:07:31.809051 4827 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pdsqp\" (UniqueName: \"kubernetes.io/projected/5f38aa0f-0fc6-4caf-a17c-d4d1f034e6f8-kube-api-access-pdsqp\") on node \"crc\" DevicePath \"\"" Jan 31 04:07:31 crc kubenswrapper[4827]: I0131 04:07:31.809062 4827 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/5f38aa0f-0fc6-4caf-a17c-d4d1f034e6f8-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 31 04:07:31 crc kubenswrapper[4827]: I0131 04:07:31.809071 4827 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/5f38aa0f-0fc6-4caf-a17c-d4d1f034e6f8-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 31 04:07:31 crc kubenswrapper[4827]: I0131 04:07:31.979472 4827 generic.go:334] "Generic (PLEG): container finished" podID="929e59f8-3e14-46cc-bbaf-fda67b6b9d80" containerID="0000d2a727f5c592650e72974494ea3f13ed2562b2db4b38735bce7eab44e396" exitCode=143 Jan 31 04:07:31 crc kubenswrapper[4827]: I0131 04:07:31.979555 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"929e59f8-3e14-46cc-bbaf-fda67b6b9d80","Type":"ContainerDied","Data":"0000d2a727f5c592650e72974494ea3f13ed2562b2db4b38735bce7eab44e396"} Jan 31 04:07:31 crc kubenswrapper[4827]: I0131 04:07:31.982908 4827 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6d97fcdd8f-ht8c2" event={"ID":"5f38aa0f-0fc6-4caf-a17c-d4d1f034e6f8","Type":"ContainerDied","Data":"3460906c008a42e6c4478cdc19bdcf87ac27655df1d9455ff2dfbda649a13e1a"} Jan 31 04:07:31 crc kubenswrapper[4827]: I0131 04:07:31.982934 4827 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6d97fcdd8f-ht8c2" Jan 31 04:07:31 crc kubenswrapper[4827]: I0131 04:07:31.982944 4827 scope.go:117] "RemoveContainer" containerID="06d7363ec90e1783a9f34fabc72fa9b099a6766da31bf2cd8daf11fcfbb67584" Jan 31 04:07:31 crc kubenswrapper[4827]: I0131 04:07:31.985699 4827 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-dhwln" Jan 31 04:07:31 crc kubenswrapper[4827]: I0131 04:07:31.996743 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-dhwln" event={"ID":"455ea1a0-7a10-4f2c-ae49-9a52d3e72771","Type":"ContainerDied","Data":"590dd8276e83e09f7f46b0e967b69d71c49acb8396964183e577a3e50a036de5"} Jan 31 04:07:31 crc kubenswrapper[4827]: I0131 04:07:31.996793 4827 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="590dd8276e83e09f7f46b0e967b69d71c49acb8396964183e577a3e50a036de5" Jan 31 04:07:32 crc kubenswrapper[4827]: I0131 04:07:32.007269 4827 scope.go:117] "RemoveContainer" containerID="3dd58151dfee712c62f3ca80baa25f2d49f0a28073b8c21a23d5ff2e895c2244" Jan 31 04:07:32 crc kubenswrapper[4827]: I0131 04:07:32.037692 4827 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6d97fcdd8f-ht8c2"] Jan 31 04:07:32 crc kubenswrapper[4827]: I0131 04:07:32.043869 4827 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-6d97fcdd8f-ht8c2"] Jan 31 04:07:32 crc kubenswrapper[4827]: I0131 04:07:32.108255 4827 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openstack/nova-cell1-conductor-0"] Jan 31 04:07:32 crc kubenswrapper[4827]: E0131 04:07:32.108662 4827 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5f38aa0f-0fc6-4caf-a17c-d4d1f034e6f8" containerName="init" Jan 31 04:07:32 crc kubenswrapper[4827]: I0131 04:07:32.108682 4827 state_mem.go:107] "Deleted CPUSet assignment" podUID="5f38aa0f-0fc6-4caf-a17c-d4d1f034e6f8" containerName="init" Jan 31 04:07:32 crc kubenswrapper[4827]: E0131 04:07:32.108698 4827 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d4cc6b8f-4ec0-408e-a494-3e3eb5ca0ecb" containerName="nova-manage" Jan 31 04:07:32 crc kubenswrapper[4827]: I0131 04:07:32.108708 4827 state_mem.go:107] "Deleted CPUSet assignment" podUID="d4cc6b8f-4ec0-408e-a494-3e3eb5ca0ecb" containerName="nova-manage" Jan 31 04:07:32 crc kubenswrapper[4827]: E0131 04:07:32.108733 4827 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="455ea1a0-7a10-4f2c-ae49-9a52d3e72771" containerName="nova-cell1-conductor-db-sync" Jan 31 04:07:32 crc kubenswrapper[4827]: I0131 04:07:32.108741 4827 state_mem.go:107] "Deleted CPUSet assignment" podUID="455ea1a0-7a10-4f2c-ae49-9a52d3e72771" containerName="nova-cell1-conductor-db-sync" Jan 31 04:07:32 crc kubenswrapper[4827]: E0131 04:07:32.108753 4827 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5f38aa0f-0fc6-4caf-a17c-d4d1f034e6f8" containerName="dnsmasq-dns" Jan 31 04:07:32 crc kubenswrapper[4827]: I0131 04:07:32.108761 4827 state_mem.go:107] "Deleted CPUSet assignment" podUID="5f38aa0f-0fc6-4caf-a17c-d4d1f034e6f8" containerName="dnsmasq-dns" Jan 31 04:07:32 crc kubenswrapper[4827]: I0131 04:07:32.109024 4827 memory_manager.go:354] "RemoveStaleState removing state" podUID="455ea1a0-7a10-4f2c-ae49-9a52d3e72771" containerName="nova-cell1-conductor-db-sync" Jan 31 04:07:32 crc kubenswrapper[4827]: I0131 04:07:32.109056 4827 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="5f38aa0f-0fc6-4caf-a17c-d4d1f034e6f8" containerName="dnsmasq-dns" Jan 31 04:07:32 crc kubenswrapper[4827]: I0131 04:07:32.109085 4827 memory_manager.go:354] "RemoveStaleState removing state" podUID="d4cc6b8f-4ec0-408e-a494-3e3eb5ca0ecb" containerName="nova-manage" Jan 31 04:07:32 crc kubenswrapper[4827]: I0131 04:07:32.113011 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-0" Jan 31 04:07:32 crc kubenswrapper[4827]: I0131 04:07:32.119261 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-config-data" Jan 31 04:07:32 crc kubenswrapper[4827]: I0131 04:07:32.146767 4827 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5f38aa0f-0fc6-4caf-a17c-d4d1f034e6f8" path="/var/lib/kubelet/pods/5f38aa0f-0fc6-4caf-a17c-d4d1f034e6f8/volumes" Jan 31 04:07:32 crc kubenswrapper[4827]: I0131 04:07:32.147369 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-0"] Jan 31 04:07:32 crc kubenswrapper[4827]: I0131 04:07:32.237204 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/70377532-a0c3-4b3f-abda-b712b33df5e5-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"70377532-a0c3-4b3f-abda-b712b33df5e5\") " pod="openstack/nova-cell1-conductor-0" Jan 31 04:07:32 crc kubenswrapper[4827]: I0131 04:07:32.237298 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wk8mt\" (UniqueName: \"kubernetes.io/projected/70377532-a0c3-4b3f-abda-b712b33df5e5-kube-api-access-wk8mt\") pod \"nova-cell1-conductor-0\" (UID: \"70377532-a0c3-4b3f-abda-b712b33df5e5\") " pod="openstack/nova-cell1-conductor-0" Jan 31 04:07:32 crc kubenswrapper[4827]: I0131 04:07:32.237374 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/70377532-a0c3-4b3f-abda-b712b33df5e5-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"70377532-a0c3-4b3f-abda-b712b33df5e5\") " pod="openstack/nova-cell1-conductor-0" Jan 31 04:07:32 crc kubenswrapper[4827]: I0131 04:07:32.338785 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/70377532-a0c3-4b3f-abda-b712b33df5e5-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"70377532-a0c3-4b3f-abda-b712b33df5e5\") " pod="openstack/nova-cell1-conductor-0" Jan 31 04:07:32 crc kubenswrapper[4827]: I0131 04:07:32.338925 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/70377532-a0c3-4b3f-abda-b712b33df5e5-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"70377532-a0c3-4b3f-abda-b712b33df5e5\") " pod="openstack/nova-cell1-conductor-0" Jan 31 04:07:32 crc kubenswrapper[4827]: I0131 04:07:32.338996 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wk8mt\" (UniqueName: \"kubernetes.io/projected/70377532-a0c3-4b3f-abda-b712b33df5e5-kube-api-access-wk8mt\") pod \"nova-cell1-conductor-0\" (UID: \"70377532-a0c3-4b3f-abda-b712b33df5e5\") " pod="openstack/nova-cell1-conductor-0" Jan 31 04:07:32 crc kubenswrapper[4827]: I0131 04:07:32.344483 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/70377532-a0c3-4b3f-abda-b712b33df5e5-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"70377532-a0c3-4b3f-abda-b712b33df5e5\") " pod="openstack/nova-cell1-conductor-0" Jan 31 04:07:32 crc kubenswrapper[4827]: I0131 04:07:32.346592 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/70377532-a0c3-4b3f-abda-b712b33df5e5-combined-ca-bundle\") pod 
\"nova-cell1-conductor-0\" (UID: \"70377532-a0c3-4b3f-abda-b712b33df5e5\") " pod="openstack/nova-cell1-conductor-0" Jan 31 04:07:32 crc kubenswrapper[4827]: I0131 04:07:32.359024 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wk8mt\" (UniqueName: \"kubernetes.io/projected/70377532-a0c3-4b3f-abda-b712b33df5e5-kube-api-access-wk8mt\") pod \"nova-cell1-conductor-0\" (UID: \"70377532-a0c3-4b3f-abda-b712b33df5e5\") " pod="openstack/nova-cell1-conductor-0" Jan 31 04:07:32 crc kubenswrapper[4827]: I0131 04:07:32.444218 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-0" Jan 31 04:07:32 crc kubenswrapper[4827]: I0131 04:07:32.903691 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-0"] Jan 31 04:07:32 crc kubenswrapper[4827]: W0131 04:07:32.905763 4827 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod70377532_a0c3_4b3f_abda_b712b33df5e5.slice/crio-1cf5ae838986901525279466a1a60bbff3ce14a4832b49e48be89d2d4a2c3581 WatchSource:0}: Error finding container 1cf5ae838986901525279466a1a60bbff3ce14a4832b49e48be89d2d4a2c3581: Status 404 returned error can't find the container with id 1cf5ae838986901525279466a1a60bbff3ce14a4832b49e48be89d2d4a2c3581 Jan 31 04:07:33 crc kubenswrapper[4827]: I0131 04:07:33.002460 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"70377532-a0c3-4b3f-abda-b712b33df5e5","Type":"ContainerStarted","Data":"1cf5ae838986901525279466a1a60bbff3ce14a4832b49e48be89d2d4a2c3581"} Jan 31 04:07:33 crc kubenswrapper[4827]: I0131 04:07:33.004325 4827 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-scheduler-0" podUID="76102370-40f5-4616-9298-f2e4a0fb668e" containerName="nova-scheduler-scheduler" 
containerID="cri-o://9d3605bea248bfd0d8604f9046a8db4d747e419979fec6d0308451ddfb15dc83" gracePeriod=30 Jan 31 04:07:34 crc kubenswrapper[4827]: I0131 04:07:34.016551 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"70377532-a0c3-4b3f-abda-b712b33df5e5","Type":"ContainerStarted","Data":"871021a24a48031dfa7c6eeda1f0f0db4dbc9a56a6f2d4b6b553aa48b03b8574"} Jan 31 04:07:34 crc kubenswrapper[4827]: I0131 04:07:34.016791 4827 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-conductor-0" Jan 31 04:07:34 crc kubenswrapper[4827]: I0131 04:07:34.031539 4827 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-conductor-0" podStartSLOduration=2.031519262 podStartE2EDuration="2.031519262s" podCreationTimestamp="2026-01-31 04:07:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 04:07:34.030321854 +0000 UTC m=+1246.717402303" watchObservedRunningTime="2026-01-31 04:07:34.031519262 +0000 UTC m=+1246.718599711" Jan 31 04:07:35 crc kubenswrapper[4827]: E0131 04:07:35.153217 4827 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="9d3605bea248bfd0d8604f9046a8db4d747e419979fec6d0308451ddfb15dc83" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Jan 31 04:07:35 crc kubenswrapper[4827]: E0131 04:07:35.155858 4827 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="9d3605bea248bfd0d8604f9046a8db4d747e419979fec6d0308451ddfb15dc83" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Jan 31 04:07:35 crc kubenswrapper[4827]: E0131 04:07:35.156821 4827 
log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="9d3605bea248bfd0d8604f9046a8db4d747e419979fec6d0308451ddfb15dc83" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Jan 31 04:07:35 crc kubenswrapper[4827]: E0131 04:07:35.156866 4827 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/nova-scheduler-0" podUID="76102370-40f5-4616-9298-f2e4a0fb668e" containerName="nova-scheduler-scheduler" Jan 31 04:07:36 crc kubenswrapper[4827]: I0131 04:07:36.041145 4827 generic.go:334] "Generic (PLEG): container finished" podID="76102370-40f5-4616-9298-f2e4a0fb668e" containerID="9d3605bea248bfd0d8604f9046a8db4d747e419979fec6d0308451ddfb15dc83" exitCode=0 Jan 31 04:07:36 crc kubenswrapper[4827]: I0131 04:07:36.041244 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"76102370-40f5-4616-9298-f2e4a0fb668e","Type":"ContainerDied","Data":"9d3605bea248bfd0d8604f9046a8db4d747e419979fec6d0308451ddfb15dc83"} Jan 31 04:07:36 crc kubenswrapper[4827]: I0131 04:07:36.269725 4827 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Jan 31 04:07:36 crc kubenswrapper[4827]: I0131 04:07:36.416611 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/76102370-40f5-4616-9298-f2e4a0fb668e-config-data\") pod \"76102370-40f5-4616-9298-f2e4a0fb668e\" (UID: \"76102370-40f5-4616-9298-f2e4a0fb668e\") " Jan 31 04:07:36 crc kubenswrapper[4827]: I0131 04:07:36.416671 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rxqwh\" (UniqueName: \"kubernetes.io/projected/76102370-40f5-4616-9298-f2e4a0fb668e-kube-api-access-rxqwh\") pod \"76102370-40f5-4616-9298-f2e4a0fb668e\" (UID: \"76102370-40f5-4616-9298-f2e4a0fb668e\") " Jan 31 04:07:36 crc kubenswrapper[4827]: I0131 04:07:36.416711 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/76102370-40f5-4616-9298-f2e4a0fb668e-combined-ca-bundle\") pod \"76102370-40f5-4616-9298-f2e4a0fb668e\" (UID: \"76102370-40f5-4616-9298-f2e4a0fb668e\") " Jan 31 04:07:36 crc kubenswrapper[4827]: I0131 04:07:36.423192 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/76102370-40f5-4616-9298-f2e4a0fb668e-kube-api-access-rxqwh" (OuterVolumeSpecName: "kube-api-access-rxqwh") pod "76102370-40f5-4616-9298-f2e4a0fb668e" (UID: "76102370-40f5-4616-9298-f2e4a0fb668e"). InnerVolumeSpecName "kube-api-access-rxqwh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 04:07:36 crc kubenswrapper[4827]: I0131 04:07:36.442524 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/76102370-40f5-4616-9298-f2e4a0fb668e-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "76102370-40f5-4616-9298-f2e4a0fb668e" (UID: "76102370-40f5-4616-9298-f2e4a0fb668e"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 04:07:36 crc kubenswrapper[4827]: I0131 04:07:36.449424 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/76102370-40f5-4616-9298-f2e4a0fb668e-config-data" (OuterVolumeSpecName: "config-data") pod "76102370-40f5-4616-9298-f2e4a0fb668e" (UID: "76102370-40f5-4616-9298-f2e4a0fb668e"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 04:07:36 crc kubenswrapper[4827]: I0131 04:07:36.519824 4827 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/76102370-40f5-4616-9298-f2e4a0fb668e-config-data\") on node \"crc\" DevicePath \"\"" Jan 31 04:07:36 crc kubenswrapper[4827]: I0131 04:07:36.519859 4827 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rxqwh\" (UniqueName: \"kubernetes.io/projected/76102370-40f5-4616-9298-f2e4a0fb668e-kube-api-access-rxqwh\") on node \"crc\" DevicePath \"\"" Jan 31 04:07:36 crc kubenswrapper[4827]: I0131 04:07:36.519872 4827 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/76102370-40f5-4616-9298-f2e4a0fb668e-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 31 04:07:37 crc kubenswrapper[4827]: I0131 04:07:37.053935 4827 generic.go:334] "Generic (PLEG): container finished" podID="929e59f8-3e14-46cc-bbaf-fda67b6b9d80" containerID="ed3bc2156673f4df424ac5ea4cea74bb4d691cb3bc4a146b0e7d40f6c072e277" exitCode=0 Jan 31 04:07:37 crc kubenswrapper[4827]: I0131 04:07:37.054332 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"929e59f8-3e14-46cc-bbaf-fda67b6b9d80","Type":"ContainerDied","Data":"ed3bc2156673f4df424ac5ea4cea74bb4d691cb3bc4a146b0e7d40f6c072e277"} Jan 31 04:07:37 crc kubenswrapper[4827]: I0131 04:07:37.054364 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" 
event={"ID":"929e59f8-3e14-46cc-bbaf-fda67b6b9d80","Type":"ContainerDied","Data":"6e79cc287c6a4b42eb94094ecb14aab7ae275b40e518a9f271829e378bb513e3"} Jan 31 04:07:37 crc kubenswrapper[4827]: I0131 04:07:37.054378 4827 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6e79cc287c6a4b42eb94094ecb14aab7ae275b40e518a9f271829e378bb513e3" Jan 31 04:07:37 crc kubenswrapper[4827]: I0131 04:07:37.056551 4827 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Jan 31 04:07:37 crc kubenswrapper[4827]: I0131 04:07:37.056540 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"76102370-40f5-4616-9298-f2e4a0fb668e","Type":"ContainerDied","Data":"508ef637a79d21edeba17aa397dc604a9648e9cc6c15431866be777a0625ffbe"} Jan 31 04:07:37 crc kubenswrapper[4827]: I0131 04:07:37.056636 4827 scope.go:117] "RemoveContainer" containerID="9d3605bea248bfd0d8604f9046a8db4d747e419979fec6d0308451ddfb15dc83" Jan 31 04:07:37 crc kubenswrapper[4827]: I0131 04:07:37.138037 4827 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Jan 31 04:07:37 crc kubenswrapper[4827]: I0131 04:07:37.167362 4827 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Jan 31 04:07:37 crc kubenswrapper[4827]: I0131 04:07:37.179168 4827 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-scheduler-0"] Jan 31 04:07:37 crc kubenswrapper[4827]: I0131 04:07:37.196702 4827 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Jan 31 04:07:37 crc kubenswrapper[4827]: E0131 04:07:37.197191 4827 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="76102370-40f5-4616-9298-f2e4a0fb668e" containerName="nova-scheduler-scheduler" Jan 31 04:07:37 crc kubenswrapper[4827]: I0131 04:07:37.197214 4827 state_mem.go:107] "Deleted CPUSet assignment" podUID="76102370-40f5-4616-9298-f2e4a0fb668e" containerName="nova-scheduler-scheduler" Jan 31 04:07:37 crc kubenswrapper[4827]: E0131 04:07:37.197233 4827 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="929e59f8-3e14-46cc-bbaf-fda67b6b9d80" containerName="nova-api-log" Jan 31 04:07:37 crc kubenswrapper[4827]: I0131 04:07:37.197242 4827 state_mem.go:107] "Deleted CPUSet assignment" podUID="929e59f8-3e14-46cc-bbaf-fda67b6b9d80" containerName="nova-api-log" Jan 31 04:07:37 crc kubenswrapper[4827]: E0131 04:07:37.197263 4827 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="929e59f8-3e14-46cc-bbaf-fda67b6b9d80" containerName="nova-api-api" Jan 31 04:07:37 crc kubenswrapper[4827]: I0131 04:07:37.197270 4827 state_mem.go:107] "Deleted CPUSet assignment" podUID="929e59f8-3e14-46cc-bbaf-fda67b6b9d80" containerName="nova-api-api" Jan 31 04:07:37 crc kubenswrapper[4827]: I0131 04:07:37.197470 4827 memory_manager.go:354] "RemoveStaleState removing state" podUID="929e59f8-3e14-46cc-bbaf-fda67b6b9d80" containerName="nova-api-api" Jan 31 04:07:37 crc kubenswrapper[4827]: I0131 04:07:37.197518 4827 memory_manager.go:354] 
"RemoveStaleState removing state" podUID="929e59f8-3e14-46cc-bbaf-fda67b6b9d80" containerName="nova-api-log" Jan 31 04:07:37 crc kubenswrapper[4827]: I0131 04:07:37.197548 4827 memory_manager.go:354] "RemoveStaleState removing state" podUID="76102370-40f5-4616-9298-f2e4a0fb668e" containerName="nova-scheduler-scheduler" Jan 31 04:07:37 crc kubenswrapper[4827]: I0131 04:07:37.198387 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Jan 31 04:07:37 crc kubenswrapper[4827]: I0131 04:07:37.202141 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Jan 31 04:07:37 crc kubenswrapper[4827]: I0131 04:07:37.208385 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Jan 31 04:07:37 crc kubenswrapper[4827]: I0131 04:07:37.230650 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/929e59f8-3e14-46cc-bbaf-fda67b6b9d80-config-data\") pod \"929e59f8-3e14-46cc-bbaf-fda67b6b9d80\" (UID: \"929e59f8-3e14-46cc-bbaf-fda67b6b9d80\") " Jan 31 04:07:37 crc kubenswrapper[4827]: I0131 04:07:37.230754 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/929e59f8-3e14-46cc-bbaf-fda67b6b9d80-combined-ca-bundle\") pod \"929e59f8-3e14-46cc-bbaf-fda67b6b9d80\" (UID: \"929e59f8-3e14-46cc-bbaf-fda67b6b9d80\") " Jan 31 04:07:37 crc kubenswrapper[4827]: I0131 04:07:37.230896 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zgwlf\" (UniqueName: \"kubernetes.io/projected/929e59f8-3e14-46cc-bbaf-fda67b6b9d80-kube-api-access-zgwlf\") pod \"929e59f8-3e14-46cc-bbaf-fda67b6b9d80\" (UID: \"929e59f8-3e14-46cc-bbaf-fda67b6b9d80\") " Jan 31 04:07:37 crc kubenswrapper[4827]: I0131 04:07:37.230977 4827 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/929e59f8-3e14-46cc-bbaf-fda67b6b9d80-logs\") pod \"929e59f8-3e14-46cc-bbaf-fda67b6b9d80\" (UID: \"929e59f8-3e14-46cc-bbaf-fda67b6b9d80\") " Jan 31 04:07:37 crc kubenswrapper[4827]: I0131 04:07:37.234413 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/929e59f8-3e14-46cc-bbaf-fda67b6b9d80-logs" (OuterVolumeSpecName: "logs") pod "929e59f8-3e14-46cc-bbaf-fda67b6b9d80" (UID: "929e59f8-3e14-46cc-bbaf-fda67b6b9d80"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 04:07:37 crc kubenswrapper[4827]: I0131 04:07:37.265100 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/929e59f8-3e14-46cc-bbaf-fda67b6b9d80-kube-api-access-zgwlf" (OuterVolumeSpecName: "kube-api-access-zgwlf") pod "929e59f8-3e14-46cc-bbaf-fda67b6b9d80" (UID: "929e59f8-3e14-46cc-bbaf-fda67b6b9d80"). InnerVolumeSpecName "kube-api-access-zgwlf". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 04:07:37 crc kubenswrapper[4827]: I0131 04:07:37.332376 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/929e59f8-3e14-46cc-bbaf-fda67b6b9d80-config-data" (OuterVolumeSpecName: "config-data") pod "929e59f8-3e14-46cc-bbaf-fda67b6b9d80" (UID: "929e59f8-3e14-46cc-bbaf-fda67b6b9d80"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 04:07:37 crc kubenswrapper[4827]: I0131 04:07:37.333375 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4e012783-53bb-44bb-a946-c664a0db3587-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"4e012783-53bb-44bb-a946-c664a0db3587\") " pod="openstack/nova-scheduler-0" Jan 31 04:07:37 crc kubenswrapper[4827]: I0131 04:07:37.333482 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4e012783-53bb-44bb-a946-c664a0db3587-config-data\") pod \"nova-scheduler-0\" (UID: \"4e012783-53bb-44bb-a946-c664a0db3587\") " pod="openstack/nova-scheduler-0" Jan 31 04:07:37 crc kubenswrapper[4827]: I0131 04:07:37.333518 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z6ths\" (UniqueName: \"kubernetes.io/projected/4e012783-53bb-44bb-a946-c664a0db3587-kube-api-access-z6ths\") pod \"nova-scheduler-0\" (UID: \"4e012783-53bb-44bb-a946-c664a0db3587\") " pod="openstack/nova-scheduler-0" Jan 31 04:07:37 crc kubenswrapper[4827]: I0131 04:07:37.333600 4827 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zgwlf\" (UniqueName: \"kubernetes.io/projected/929e59f8-3e14-46cc-bbaf-fda67b6b9d80-kube-api-access-zgwlf\") on node \"crc\" DevicePath \"\"" Jan 31 04:07:37 crc kubenswrapper[4827]: I0131 04:07:37.333619 4827 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/929e59f8-3e14-46cc-bbaf-fda67b6b9d80-logs\") on node \"crc\" DevicePath \"\"" Jan 31 04:07:37 crc kubenswrapper[4827]: I0131 04:07:37.333630 4827 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/929e59f8-3e14-46cc-bbaf-fda67b6b9d80-config-data\") on node \"crc\" DevicePath 
\"\"" Jan 31 04:07:37 crc kubenswrapper[4827]: I0131 04:07:37.348099 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/929e59f8-3e14-46cc-bbaf-fda67b6b9d80-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "929e59f8-3e14-46cc-bbaf-fda67b6b9d80" (UID: "929e59f8-3e14-46cc-bbaf-fda67b6b9d80"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 04:07:37 crc kubenswrapper[4827]: I0131 04:07:37.434829 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4e012783-53bb-44bb-a946-c664a0db3587-config-data\") pod \"nova-scheduler-0\" (UID: \"4e012783-53bb-44bb-a946-c664a0db3587\") " pod="openstack/nova-scheduler-0" Jan 31 04:07:37 crc kubenswrapper[4827]: I0131 04:07:37.434901 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z6ths\" (UniqueName: \"kubernetes.io/projected/4e012783-53bb-44bb-a946-c664a0db3587-kube-api-access-z6ths\") pod \"nova-scheduler-0\" (UID: \"4e012783-53bb-44bb-a946-c664a0db3587\") " pod="openstack/nova-scheduler-0" Jan 31 04:07:37 crc kubenswrapper[4827]: I0131 04:07:37.434999 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4e012783-53bb-44bb-a946-c664a0db3587-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"4e012783-53bb-44bb-a946-c664a0db3587\") " pod="openstack/nova-scheduler-0" Jan 31 04:07:37 crc kubenswrapper[4827]: I0131 04:07:37.435118 4827 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/929e59f8-3e14-46cc-bbaf-fda67b6b9d80-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 31 04:07:37 crc kubenswrapper[4827]: I0131 04:07:37.441552 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" 
(UniqueName: \"kubernetes.io/secret/4e012783-53bb-44bb-a946-c664a0db3587-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"4e012783-53bb-44bb-a946-c664a0db3587\") " pod="openstack/nova-scheduler-0" Jan 31 04:07:37 crc kubenswrapper[4827]: I0131 04:07:37.441637 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4e012783-53bb-44bb-a946-c664a0db3587-config-data\") pod \"nova-scheduler-0\" (UID: \"4e012783-53bb-44bb-a946-c664a0db3587\") " pod="openstack/nova-scheduler-0" Jan 31 04:07:37 crc kubenswrapper[4827]: I0131 04:07:37.451207 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z6ths\" (UniqueName: \"kubernetes.io/projected/4e012783-53bb-44bb-a946-c664a0db3587-kube-api-access-z6ths\") pod \"nova-scheduler-0\" (UID: \"4e012783-53bb-44bb-a946-c664a0db3587\") " pod="openstack/nova-scheduler-0" Jan 31 04:07:37 crc kubenswrapper[4827]: I0131 04:07:37.514744 4827 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Jan 31 04:07:37 crc kubenswrapper[4827]: I0131 04:07:37.947705 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Jan 31 04:07:37 crc kubenswrapper[4827]: W0131 04:07:37.957320 4827 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4e012783_53bb_44bb_a946_c664a0db3587.slice/crio-73c0f19f6e4d3b41a33508f7f819bb7efc5ec78fcad5d31e76ce968764ba05c5 WatchSource:0}: Error finding container 73c0f19f6e4d3b41a33508f7f819bb7efc5ec78fcad5d31e76ce968764ba05c5: Status 404 returned error can't find the container with id 73c0f19f6e4d3b41a33508f7f819bb7efc5ec78fcad5d31e76ce968764ba05c5 Jan 31 04:07:38 crc kubenswrapper[4827]: I0131 04:07:38.073082 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"4e012783-53bb-44bb-a946-c664a0db3587","Type":"ContainerStarted","Data":"73c0f19f6e4d3b41a33508f7f819bb7efc5ec78fcad5d31e76ce968764ba05c5"} Jan 31 04:07:38 crc kubenswrapper[4827]: I0131 04:07:38.073122 4827 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Jan 31 04:07:38 crc kubenswrapper[4827]: I0131 04:07:38.144305 4827 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="76102370-40f5-4616-9298-f2e4a0fb668e" path="/var/lib/kubelet/pods/76102370-40f5-4616-9298-f2e4a0fb668e/volumes" Jan 31 04:07:38 crc kubenswrapper[4827]: I0131 04:07:38.147608 4827 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Jan 31 04:07:38 crc kubenswrapper[4827]: I0131 04:07:38.153090 4827 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Jan 31 04:07:38 crc kubenswrapper[4827]: I0131 04:07:38.166859 4827 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Jan 31 04:07:38 crc kubenswrapper[4827]: I0131 04:07:38.168680 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 31 04:07:38 crc kubenswrapper[4827]: I0131 04:07:38.174641 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Jan 31 04:07:38 crc kubenswrapper[4827]: I0131 04:07:38.175773 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 31 04:07:38 crc kubenswrapper[4827]: I0131 04:07:38.248746 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d85adf9f-9e3f-4432-9190-7d5a8d8b2fc4-logs\") pod \"nova-api-0\" (UID: \"d85adf9f-9e3f-4432-9190-7d5a8d8b2fc4\") " pod="openstack/nova-api-0" Jan 31 04:07:38 crc kubenswrapper[4827]: I0131 04:07:38.249192 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n9g4b\" (UniqueName: \"kubernetes.io/projected/d85adf9f-9e3f-4432-9190-7d5a8d8b2fc4-kube-api-access-n9g4b\") pod \"nova-api-0\" (UID: \"d85adf9f-9e3f-4432-9190-7d5a8d8b2fc4\") " pod="openstack/nova-api-0" Jan 31 04:07:38 crc kubenswrapper[4827]: I0131 
04:07:38.249236 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d85adf9f-9e3f-4432-9190-7d5a8d8b2fc4-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"d85adf9f-9e3f-4432-9190-7d5a8d8b2fc4\") " pod="openstack/nova-api-0" Jan 31 04:07:38 crc kubenswrapper[4827]: I0131 04:07:38.249274 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d85adf9f-9e3f-4432-9190-7d5a8d8b2fc4-config-data\") pod \"nova-api-0\" (UID: \"d85adf9f-9e3f-4432-9190-7d5a8d8b2fc4\") " pod="openstack/nova-api-0" Jan 31 04:07:38 crc kubenswrapper[4827]: I0131 04:07:38.351234 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d85adf9f-9e3f-4432-9190-7d5a8d8b2fc4-logs\") pod \"nova-api-0\" (UID: \"d85adf9f-9e3f-4432-9190-7d5a8d8b2fc4\") " pod="openstack/nova-api-0" Jan 31 04:07:38 crc kubenswrapper[4827]: I0131 04:07:38.351318 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n9g4b\" (UniqueName: \"kubernetes.io/projected/d85adf9f-9e3f-4432-9190-7d5a8d8b2fc4-kube-api-access-n9g4b\") pod \"nova-api-0\" (UID: \"d85adf9f-9e3f-4432-9190-7d5a8d8b2fc4\") " pod="openstack/nova-api-0" Jan 31 04:07:38 crc kubenswrapper[4827]: I0131 04:07:38.351356 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d85adf9f-9e3f-4432-9190-7d5a8d8b2fc4-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"d85adf9f-9e3f-4432-9190-7d5a8d8b2fc4\") " pod="openstack/nova-api-0" Jan 31 04:07:38 crc kubenswrapper[4827]: I0131 04:07:38.351400 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d85adf9f-9e3f-4432-9190-7d5a8d8b2fc4-config-data\") 
pod \"nova-api-0\" (UID: \"d85adf9f-9e3f-4432-9190-7d5a8d8b2fc4\") " pod="openstack/nova-api-0" Jan 31 04:07:38 crc kubenswrapper[4827]: I0131 04:07:38.352757 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d85adf9f-9e3f-4432-9190-7d5a8d8b2fc4-logs\") pod \"nova-api-0\" (UID: \"d85adf9f-9e3f-4432-9190-7d5a8d8b2fc4\") " pod="openstack/nova-api-0" Jan 31 04:07:38 crc kubenswrapper[4827]: I0131 04:07:38.364109 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d85adf9f-9e3f-4432-9190-7d5a8d8b2fc4-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"d85adf9f-9e3f-4432-9190-7d5a8d8b2fc4\") " pod="openstack/nova-api-0" Jan 31 04:07:38 crc kubenswrapper[4827]: I0131 04:07:38.364608 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d85adf9f-9e3f-4432-9190-7d5a8d8b2fc4-config-data\") pod \"nova-api-0\" (UID: \"d85adf9f-9e3f-4432-9190-7d5a8d8b2fc4\") " pod="openstack/nova-api-0" Jan 31 04:07:38 crc kubenswrapper[4827]: I0131 04:07:38.366047 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n9g4b\" (UniqueName: \"kubernetes.io/projected/d85adf9f-9e3f-4432-9190-7d5a8d8b2fc4-kube-api-access-n9g4b\") pod \"nova-api-0\" (UID: \"d85adf9f-9e3f-4432-9190-7d5a8d8b2fc4\") " pod="openstack/nova-api-0" Jan 31 04:07:38 crc kubenswrapper[4827]: I0131 04:07:38.492018 4827 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Jan 31 04:07:39 crc kubenswrapper[4827]: I0131 04:07:39.026392 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 31 04:07:39 crc kubenswrapper[4827]: W0131 04:07:39.034423 4827 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd85adf9f_9e3f_4432_9190_7d5a8d8b2fc4.slice/crio-64c69097c2254109550ee48ab02482fa8b4a6cb304deedb13e5ff3a6235fe854 WatchSource:0}: Error finding container 64c69097c2254109550ee48ab02482fa8b4a6cb304deedb13e5ff3a6235fe854: Status 404 returned error can't find the container with id 64c69097c2254109550ee48ab02482fa8b4a6cb304deedb13e5ff3a6235fe854 Jan 31 04:07:39 crc kubenswrapper[4827]: I0131 04:07:39.087541 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"d85adf9f-9e3f-4432-9190-7d5a8d8b2fc4","Type":"ContainerStarted","Data":"64c69097c2254109550ee48ab02482fa8b4a6cb304deedb13e5ff3a6235fe854"} Jan 31 04:07:39 crc kubenswrapper[4827]: I0131 04:07:39.089632 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"4e012783-53bb-44bb-a946-c664a0db3587","Type":"ContainerStarted","Data":"bcca0921bbffa8547e9dc1ecbf7761aab6925b0c44e5fb8f8b45cfb5db033d4c"} Jan 31 04:07:39 crc kubenswrapper[4827]: I0131 04:07:39.108693 4827 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=2.108675054 podStartE2EDuration="2.108675054s" podCreationTimestamp="2026-01-31 04:07:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 04:07:39.104366161 +0000 UTC m=+1251.791446630" watchObservedRunningTime="2026-01-31 04:07:39.108675054 +0000 UTC m=+1251.795755513" Jan 31 04:07:40 crc kubenswrapper[4827]: I0131 04:07:40.129145 4827 kubelet_volumes.go:163] "Cleaned 
up orphaned pod volumes dir" podUID="929e59f8-3e14-46cc-bbaf-fda67b6b9d80" path="/var/lib/kubelet/pods/929e59f8-3e14-46cc-bbaf-fda67b6b9d80/volumes" Jan 31 04:07:40 crc kubenswrapper[4827]: I0131 04:07:40.131443 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"d85adf9f-9e3f-4432-9190-7d5a8d8b2fc4","Type":"ContainerStarted","Data":"e55459fef9da43d981e1ba3636d0774bd3fe1c2186f61423ec81b11d4e7c901c"} Jan 31 04:07:40 crc kubenswrapper[4827]: I0131 04:07:40.131524 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"d85adf9f-9e3f-4432-9190-7d5a8d8b2fc4","Type":"ContainerStarted","Data":"b30a539a15169a32f782b45c8ae8a22d09b067d724a3479f046da8643a130484"} Jan 31 04:07:40 crc kubenswrapper[4827]: I0131 04:07:40.165270 4827 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.165256622 podStartE2EDuration="2.165256622s" podCreationTimestamp="2026-01-31 04:07:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 04:07:40.164778257 +0000 UTC m=+1252.851858706" watchObservedRunningTime="2026-01-31 04:07:40.165256622 +0000 UTC m=+1252.852337071" Jan 31 04:07:42 crc kubenswrapper[4827]: I0131 04:07:42.470295 4827 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell1-conductor-0" Jan 31 04:07:42 crc kubenswrapper[4827]: I0131 04:07:42.515736 4827 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Jan 31 04:07:47 crc kubenswrapper[4827]: I0131 04:07:47.515432 4827 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Jan 31 04:07:47 crc kubenswrapper[4827]: I0131 04:07:47.539509 4827 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Jan 31 04:07:48 crc 
kubenswrapper[4827]: I0131 04:07:48.223431 4827 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Jan 31 04:07:48 crc kubenswrapper[4827]: I0131 04:07:48.493139 4827 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Jan 31 04:07:48 crc kubenswrapper[4827]: I0131 04:07:48.493231 4827 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Jan 31 04:07:49 crc kubenswrapper[4827]: I0131 04:07:49.535255 4827 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="d85adf9f-9e3f-4432-9190-7d5a8d8b2fc4" containerName="nova-api-api" probeResult="failure" output="Get \"http://10.217.0.186:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 31 04:07:49 crc kubenswrapper[4827]: I0131 04:07:49.576167 4827 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="d85adf9f-9e3f-4432-9190-7d5a8d8b2fc4" containerName="nova-api-log" probeResult="failure" output="Get \"http://10.217.0.186:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 31 04:07:51 crc kubenswrapper[4827]: I0131 04:07:51.213248 4827 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0" Jan 31 04:07:55 crc kubenswrapper[4827]: E0131 04:07:55.196855 4827 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod727577e7_9491_4e91_915f_359cd9b7f0be.slice/crio-conmon-41adc229054f196f5f4020b4baaf3e98b5a0c650dc5cfa3d9ea846188f09c5b8.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod727577e7_9491_4e91_915f_359cd9b7f0be.slice/crio-41adc229054f196f5f4020b4baaf3e98b5a0c650dc5cfa3d9ea846188f09c5b8.scope\": RecentStats: 
unable to find data in memory cache]" Jan 31 04:07:55 crc kubenswrapper[4827]: I0131 04:07:55.258212 4827 generic.go:334] "Generic (PLEG): container finished" podID="457c9c4f-0d32-47db-8d97-c28694c89936" containerID="edf9dedd39b34ddb4d3ebee6628f87aba8cf14c925158d8f8e7743376266123b" exitCode=137 Jan 31 04:07:55 crc kubenswrapper[4827]: I0131 04:07:55.258261 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"457c9c4f-0d32-47db-8d97-c28694c89936","Type":"ContainerDied","Data":"edf9dedd39b34ddb4d3ebee6628f87aba8cf14c925158d8f8e7743376266123b"} Jan 31 04:07:55 crc kubenswrapper[4827]: I0131 04:07:55.261172 4827 generic.go:334] "Generic (PLEG): container finished" podID="727577e7-9491-4e91-915f-359cd9b7f0be" containerID="41adc229054f196f5f4020b4baaf3e98b5a0c650dc5cfa3d9ea846188f09c5b8" exitCode=137 Jan 31 04:07:55 crc kubenswrapper[4827]: I0131 04:07:55.261225 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"727577e7-9491-4e91-915f-359cd9b7f0be","Type":"ContainerDied","Data":"41adc229054f196f5f4020b4baaf3e98b5a0c650dc5cfa3d9ea846188f09c5b8"} Jan 31 04:07:55 crc kubenswrapper[4827]: I0131 04:07:55.362959 4827 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Jan 31 04:07:55 crc kubenswrapper[4827]: I0131 04:07:55.368077 4827 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Jan 31 04:07:55 crc kubenswrapper[4827]: I0131 04:07:55.510957 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-286f2\" (UniqueName: \"kubernetes.io/projected/727577e7-9491-4e91-915f-359cd9b7f0be-kube-api-access-286f2\") pod \"727577e7-9491-4e91-915f-359cd9b7f0be\" (UID: \"727577e7-9491-4e91-915f-359cd9b7f0be\") " Jan 31 04:07:55 crc kubenswrapper[4827]: I0131 04:07:55.511496 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/727577e7-9491-4e91-915f-359cd9b7f0be-combined-ca-bundle\") pod \"727577e7-9491-4e91-915f-359cd9b7f0be\" (UID: \"727577e7-9491-4e91-915f-359cd9b7f0be\") " Jan 31 04:07:55 crc kubenswrapper[4827]: I0131 04:07:55.511628 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-j292l\" (UniqueName: \"kubernetes.io/projected/457c9c4f-0d32-47db-8d97-c28694c89936-kube-api-access-j292l\") pod \"457c9c4f-0d32-47db-8d97-c28694c89936\" (UID: \"457c9c4f-0d32-47db-8d97-c28694c89936\") " Jan 31 04:07:55 crc kubenswrapper[4827]: I0131 04:07:55.511746 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/457c9c4f-0d32-47db-8d97-c28694c89936-combined-ca-bundle\") pod \"457c9c4f-0d32-47db-8d97-c28694c89936\" (UID: \"457c9c4f-0d32-47db-8d97-c28694c89936\") " Jan 31 04:07:55 crc kubenswrapper[4827]: I0131 04:07:55.511950 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/457c9c4f-0d32-47db-8d97-c28694c89936-logs\") pod \"457c9c4f-0d32-47db-8d97-c28694c89936\" (UID: \"457c9c4f-0d32-47db-8d97-c28694c89936\") " Jan 31 04:07:55 crc kubenswrapper[4827]: I0131 04:07:55.512209 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"config-data\" (UniqueName: \"kubernetes.io/secret/727577e7-9491-4e91-915f-359cd9b7f0be-config-data\") pod \"727577e7-9491-4e91-915f-359cd9b7f0be\" (UID: \"727577e7-9491-4e91-915f-359cd9b7f0be\") " Jan 31 04:07:55 crc kubenswrapper[4827]: I0131 04:07:55.513386 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/457c9c4f-0d32-47db-8d97-c28694c89936-config-data\") pod \"457c9c4f-0d32-47db-8d97-c28694c89936\" (UID: \"457c9c4f-0d32-47db-8d97-c28694c89936\") " Jan 31 04:07:55 crc kubenswrapper[4827]: I0131 04:07:55.516264 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/457c9c4f-0d32-47db-8d97-c28694c89936-logs" (OuterVolumeSpecName: "logs") pod "457c9c4f-0d32-47db-8d97-c28694c89936" (UID: "457c9c4f-0d32-47db-8d97-c28694c89936"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 04:07:55 crc kubenswrapper[4827]: I0131 04:07:55.516526 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/727577e7-9491-4e91-915f-359cd9b7f0be-kube-api-access-286f2" (OuterVolumeSpecName: "kube-api-access-286f2") pod "727577e7-9491-4e91-915f-359cd9b7f0be" (UID: "727577e7-9491-4e91-915f-359cd9b7f0be"). InnerVolumeSpecName "kube-api-access-286f2". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 04:07:55 crc kubenswrapper[4827]: I0131 04:07:55.528665 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/457c9c4f-0d32-47db-8d97-c28694c89936-kube-api-access-j292l" (OuterVolumeSpecName: "kube-api-access-j292l") pod "457c9c4f-0d32-47db-8d97-c28694c89936" (UID: "457c9c4f-0d32-47db-8d97-c28694c89936"). InnerVolumeSpecName "kube-api-access-j292l". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 04:07:55 crc kubenswrapper[4827]: I0131 04:07:55.547723 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/457c9c4f-0d32-47db-8d97-c28694c89936-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "457c9c4f-0d32-47db-8d97-c28694c89936" (UID: "457c9c4f-0d32-47db-8d97-c28694c89936"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 04:07:55 crc kubenswrapper[4827]: I0131 04:07:55.548277 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/457c9c4f-0d32-47db-8d97-c28694c89936-config-data" (OuterVolumeSpecName: "config-data") pod "457c9c4f-0d32-47db-8d97-c28694c89936" (UID: "457c9c4f-0d32-47db-8d97-c28694c89936"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 04:07:55 crc kubenswrapper[4827]: I0131 04:07:55.548835 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/727577e7-9491-4e91-915f-359cd9b7f0be-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "727577e7-9491-4e91-915f-359cd9b7f0be" (UID: "727577e7-9491-4e91-915f-359cd9b7f0be"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 04:07:55 crc kubenswrapper[4827]: I0131 04:07:55.556648 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/727577e7-9491-4e91-915f-359cd9b7f0be-config-data" (OuterVolumeSpecName: "config-data") pod "727577e7-9491-4e91-915f-359cd9b7f0be" (UID: "727577e7-9491-4e91-915f-359cd9b7f0be"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 04:07:55 crc kubenswrapper[4827]: I0131 04:07:55.616075 4827 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/727577e7-9491-4e91-915f-359cd9b7f0be-config-data\") on node \"crc\" DevicePath \"\"" Jan 31 04:07:55 crc kubenswrapper[4827]: I0131 04:07:55.616113 4827 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/457c9c4f-0d32-47db-8d97-c28694c89936-config-data\") on node \"crc\" DevicePath \"\"" Jan 31 04:07:55 crc kubenswrapper[4827]: I0131 04:07:55.616127 4827 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-286f2\" (UniqueName: \"kubernetes.io/projected/727577e7-9491-4e91-915f-359cd9b7f0be-kube-api-access-286f2\") on node \"crc\" DevicePath \"\"" Jan 31 04:07:55 crc kubenswrapper[4827]: I0131 04:07:55.616145 4827 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/727577e7-9491-4e91-915f-359cd9b7f0be-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 31 04:07:55 crc kubenswrapper[4827]: I0131 04:07:55.616164 4827 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-j292l\" (UniqueName: \"kubernetes.io/projected/457c9c4f-0d32-47db-8d97-c28694c89936-kube-api-access-j292l\") on node \"crc\" DevicePath \"\"" Jan 31 04:07:55 crc kubenswrapper[4827]: I0131 04:07:55.616181 4827 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/457c9c4f-0d32-47db-8d97-c28694c89936-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 31 04:07:55 crc kubenswrapper[4827]: I0131 04:07:55.616235 4827 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/457c9c4f-0d32-47db-8d97-c28694c89936-logs\") on node \"crc\" DevicePath \"\"" Jan 31 04:07:56 crc kubenswrapper[4827]: I0131 04:07:56.273947 4827 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"457c9c4f-0d32-47db-8d97-c28694c89936","Type":"ContainerDied","Data":"293aca025c72fb756b03daa8d5b8f36c018191d59fe2edf5977e8de631914647"} Jan 31 04:07:56 crc kubenswrapper[4827]: I0131 04:07:56.274244 4827 scope.go:117] "RemoveContainer" containerID="edf9dedd39b34ddb4d3ebee6628f87aba8cf14c925158d8f8e7743376266123b" Jan 31 04:07:56 crc kubenswrapper[4827]: I0131 04:07:56.274004 4827 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Jan 31 04:07:56 crc kubenswrapper[4827]: I0131 04:07:56.280650 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"727577e7-9491-4e91-915f-359cd9b7f0be","Type":"ContainerDied","Data":"8583dc7b3b3b1f213d64ae2796332557793015ffbae783e8b95da9dd791f0638"} Jan 31 04:07:56 crc kubenswrapper[4827]: I0131 04:07:56.280736 4827 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Jan 31 04:07:56 crc kubenswrapper[4827]: I0131 04:07:56.323377 4827 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 31 04:07:56 crc kubenswrapper[4827]: I0131 04:07:56.330568 4827 scope.go:117] "RemoveContainer" containerID="286d68671f2beb12b39d23810b06e433eb5b3e3216bef55ec99ec39c6166f0e5" Jan 31 04:07:56 crc kubenswrapper[4827]: I0131 04:07:56.347193 4827 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 31 04:07:56 crc kubenswrapper[4827]: I0131 04:07:56.363835 4827 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Jan 31 04:07:56 crc kubenswrapper[4827]: I0131 04:07:56.365468 4827 scope.go:117] "RemoveContainer" containerID="41adc229054f196f5f4020b4baaf3e98b5a0c650dc5cfa3d9ea846188f09c5b8" Jan 31 04:07:56 crc kubenswrapper[4827]: I0131 04:07:56.378049 4827 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Jan 31 04:07:56 crc kubenswrapper[4827]: I0131 04:07:56.387145 4827 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 31 04:07:56 crc kubenswrapper[4827]: E0131 04:07:56.387520 4827 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="457c9c4f-0d32-47db-8d97-c28694c89936" containerName="nova-metadata-metadata" Jan 31 04:07:56 crc kubenswrapper[4827]: I0131 04:07:56.387536 4827 state_mem.go:107] "Deleted CPUSet assignment" podUID="457c9c4f-0d32-47db-8d97-c28694c89936" containerName="nova-metadata-metadata" Jan 31 04:07:56 crc kubenswrapper[4827]: E0131 04:07:56.387560 4827 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="457c9c4f-0d32-47db-8d97-c28694c89936" containerName="nova-metadata-log" Jan 31 04:07:56 crc kubenswrapper[4827]: I0131 04:07:56.387567 4827 state_mem.go:107] "Deleted CPUSet assignment" podUID="457c9c4f-0d32-47db-8d97-c28694c89936" 
containerName="nova-metadata-log" Jan 31 04:07:56 crc kubenswrapper[4827]: E0131 04:07:56.387597 4827 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="727577e7-9491-4e91-915f-359cd9b7f0be" containerName="nova-cell1-novncproxy-novncproxy" Jan 31 04:07:56 crc kubenswrapper[4827]: I0131 04:07:56.387603 4827 state_mem.go:107] "Deleted CPUSet assignment" podUID="727577e7-9491-4e91-915f-359cd9b7f0be" containerName="nova-cell1-novncproxy-novncproxy" Jan 31 04:07:56 crc kubenswrapper[4827]: I0131 04:07:56.387755 4827 memory_manager.go:354] "RemoveStaleState removing state" podUID="457c9c4f-0d32-47db-8d97-c28694c89936" containerName="nova-metadata-log" Jan 31 04:07:56 crc kubenswrapper[4827]: I0131 04:07:56.387783 4827 memory_manager.go:354] "RemoveStaleState removing state" podUID="727577e7-9491-4e91-915f-359cd9b7f0be" containerName="nova-cell1-novncproxy-novncproxy" Jan 31 04:07:56 crc kubenswrapper[4827]: I0131 04:07:56.387801 4827 memory_manager.go:354] "RemoveStaleState removing state" podUID="457c9c4f-0d32-47db-8d97-c28694c89936" containerName="nova-metadata-metadata" Jan 31 04:07:56 crc kubenswrapper[4827]: I0131 04:07:56.388381 4827 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Jan 31 04:07:56 crc kubenswrapper[4827]: I0131 04:07:56.390713 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-novncproxy-cell1-vencrypt" Jan 31 04:07:56 crc kubenswrapper[4827]: I0131 04:07:56.392117 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-novncproxy-config-data" Jan 31 04:07:56 crc kubenswrapper[4827]: I0131 04:07:56.394506 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-novncproxy-cell1-public-svc" Jan 31 04:07:56 crc kubenswrapper[4827]: I0131 04:07:56.395182 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 31 04:07:56 crc kubenswrapper[4827]: I0131 04:07:56.402827 4827 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Jan 31 04:07:56 crc kubenswrapper[4827]: I0131 04:07:56.404290 4827 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Jan 31 04:07:56 crc kubenswrapper[4827]: I0131 04:07:56.408839 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Jan 31 04:07:56 crc kubenswrapper[4827]: I0131 04:07:56.412239 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc" Jan 31 04:07:56 crc kubenswrapper[4827]: I0131 04:07:56.413401 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Jan 31 04:07:56 crc kubenswrapper[4827]: I0131 04:07:56.532729 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/61a51a6c-ddc6-4da3-8fcf-4ddb8e50fc14-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"61a51a6c-ddc6-4da3-8fcf-4ddb8e50fc14\") " pod="openstack/nova-cell1-novncproxy-0" Jan 31 04:07:56 crc kubenswrapper[4827]: I0131 04:07:56.533336 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/61a51a6c-ddc6-4da3-8fcf-4ddb8e50fc14-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"61a51a6c-ddc6-4da3-8fcf-4ddb8e50fc14\") " pod="openstack/nova-cell1-novncproxy-0" Jan 31 04:07:56 crc kubenswrapper[4827]: I0131 04:07:56.533482 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vc4mq\" (UniqueName: \"kubernetes.io/projected/61a51a6c-ddc6-4da3-8fcf-4ddb8e50fc14-kube-api-access-vc4mq\") pod \"nova-cell1-novncproxy-0\" (UID: \"61a51a6c-ddc6-4da3-8fcf-4ddb8e50fc14\") " pod="openstack/nova-cell1-novncproxy-0" Jan 31 04:07:56 crc kubenswrapper[4827]: I0131 04:07:56.533604 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/71e93017-ead3-407d-a5fe-e5459c46e6fb-config-data\") pod \"nova-metadata-0\" (UID: \"71e93017-ead3-407d-a5fe-e5459c46e6fb\") " pod="openstack/nova-metadata-0" Jan 31 04:07:56 crc kubenswrapper[4827]: I0131 04:07:56.533673 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/71e93017-ead3-407d-a5fe-e5459c46e6fb-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"71e93017-ead3-407d-a5fe-e5459c46e6fb\") " pod="openstack/nova-metadata-0" Jan 31 04:07:56 crc kubenswrapper[4827]: I0131 04:07:56.533709 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/71e93017-ead3-407d-a5fe-e5459c46e6fb-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"71e93017-ead3-407d-a5fe-e5459c46e6fb\") " pod="openstack/nova-metadata-0" Jan 31 04:07:56 crc kubenswrapper[4827]: I0131 04:07:56.533789 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/61a51a6c-ddc6-4da3-8fcf-4ddb8e50fc14-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"61a51a6c-ddc6-4da3-8fcf-4ddb8e50fc14\") " pod="openstack/nova-cell1-novncproxy-0" Jan 31 04:07:56 crc kubenswrapper[4827]: I0131 04:07:56.533831 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/71e93017-ead3-407d-a5fe-e5459c46e6fb-logs\") pod \"nova-metadata-0\" (UID: \"71e93017-ead3-407d-a5fe-e5459c46e6fb\") " pod="openstack/nova-metadata-0" Jan 31 04:07:56 crc kubenswrapper[4827]: I0131 04:07:56.533859 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/61a51a6c-ddc6-4da3-8fcf-4ddb8e50fc14-vencrypt-tls-certs\") 
pod \"nova-cell1-novncproxy-0\" (UID: \"61a51a6c-ddc6-4da3-8fcf-4ddb8e50fc14\") " pod="openstack/nova-cell1-novncproxy-0" Jan 31 04:07:56 crc kubenswrapper[4827]: I0131 04:07:56.533952 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n2vsw\" (UniqueName: \"kubernetes.io/projected/71e93017-ead3-407d-a5fe-e5459c46e6fb-kube-api-access-n2vsw\") pod \"nova-metadata-0\" (UID: \"71e93017-ead3-407d-a5fe-e5459c46e6fb\") " pod="openstack/nova-metadata-0" Jan 31 04:07:56 crc kubenswrapper[4827]: I0131 04:07:56.635102 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/61a51a6c-ddc6-4da3-8fcf-4ddb8e50fc14-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"61a51a6c-ddc6-4da3-8fcf-4ddb8e50fc14\") " pod="openstack/nova-cell1-novncproxy-0" Jan 31 04:07:56 crc kubenswrapper[4827]: I0131 04:07:56.635198 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/61a51a6c-ddc6-4da3-8fcf-4ddb8e50fc14-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"61a51a6c-ddc6-4da3-8fcf-4ddb8e50fc14\") " pod="openstack/nova-cell1-novncproxy-0" Jan 31 04:07:56 crc kubenswrapper[4827]: I0131 04:07:56.635229 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vc4mq\" (UniqueName: \"kubernetes.io/projected/61a51a6c-ddc6-4da3-8fcf-4ddb8e50fc14-kube-api-access-vc4mq\") pod \"nova-cell1-novncproxy-0\" (UID: \"61a51a6c-ddc6-4da3-8fcf-4ddb8e50fc14\") " pod="openstack/nova-cell1-novncproxy-0" Jan 31 04:07:56 crc kubenswrapper[4827]: I0131 04:07:56.635255 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/71e93017-ead3-407d-a5fe-e5459c46e6fb-config-data\") pod \"nova-metadata-0\" (UID: 
\"71e93017-ead3-407d-a5fe-e5459c46e6fb\") " pod="openstack/nova-metadata-0" Jan 31 04:07:56 crc kubenswrapper[4827]: I0131 04:07:56.635286 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/71e93017-ead3-407d-a5fe-e5459c46e6fb-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"71e93017-ead3-407d-a5fe-e5459c46e6fb\") " pod="openstack/nova-metadata-0" Jan 31 04:07:56 crc kubenswrapper[4827]: I0131 04:07:56.635312 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/71e93017-ead3-407d-a5fe-e5459c46e6fb-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"71e93017-ead3-407d-a5fe-e5459c46e6fb\") " pod="openstack/nova-metadata-0" Jan 31 04:07:56 crc kubenswrapper[4827]: I0131 04:07:56.635345 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/61a51a6c-ddc6-4da3-8fcf-4ddb8e50fc14-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"61a51a6c-ddc6-4da3-8fcf-4ddb8e50fc14\") " pod="openstack/nova-cell1-novncproxy-0" Jan 31 04:07:56 crc kubenswrapper[4827]: I0131 04:07:56.635363 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/71e93017-ead3-407d-a5fe-e5459c46e6fb-logs\") pod \"nova-metadata-0\" (UID: \"71e93017-ead3-407d-a5fe-e5459c46e6fb\") " pod="openstack/nova-metadata-0" Jan 31 04:07:56 crc kubenswrapper[4827]: I0131 04:07:56.635383 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/61a51a6c-ddc6-4da3-8fcf-4ddb8e50fc14-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"61a51a6c-ddc6-4da3-8fcf-4ddb8e50fc14\") " pod="openstack/nova-cell1-novncproxy-0" Jan 31 04:07:56 crc kubenswrapper[4827]: I0131 04:07:56.635418 4827 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n2vsw\" (UniqueName: \"kubernetes.io/projected/71e93017-ead3-407d-a5fe-e5459c46e6fb-kube-api-access-n2vsw\") pod \"nova-metadata-0\" (UID: \"71e93017-ead3-407d-a5fe-e5459c46e6fb\") " pod="openstack/nova-metadata-0" Jan 31 04:07:56 crc kubenswrapper[4827]: I0131 04:07:56.636329 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/71e93017-ead3-407d-a5fe-e5459c46e6fb-logs\") pod \"nova-metadata-0\" (UID: \"71e93017-ead3-407d-a5fe-e5459c46e6fb\") " pod="openstack/nova-metadata-0" Jan 31 04:07:56 crc kubenswrapper[4827]: I0131 04:07:56.641358 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/61a51a6c-ddc6-4da3-8fcf-4ddb8e50fc14-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"61a51a6c-ddc6-4da3-8fcf-4ddb8e50fc14\") " pod="openstack/nova-cell1-novncproxy-0" Jan 31 04:07:56 crc kubenswrapper[4827]: I0131 04:07:56.642548 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/61a51a6c-ddc6-4da3-8fcf-4ddb8e50fc14-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"61a51a6c-ddc6-4da3-8fcf-4ddb8e50fc14\") " pod="openstack/nova-cell1-novncproxy-0" Jan 31 04:07:56 crc kubenswrapper[4827]: I0131 04:07:56.642994 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/71e93017-ead3-407d-a5fe-e5459c46e6fb-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"71e93017-ead3-407d-a5fe-e5459c46e6fb\") " pod="openstack/nova-metadata-0" Jan 31 04:07:56 crc kubenswrapper[4827]: I0131 04:07:56.643257 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/71e93017-ead3-407d-a5fe-e5459c46e6fb-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"71e93017-ead3-407d-a5fe-e5459c46e6fb\") " pod="openstack/nova-metadata-0" Jan 31 04:07:56 crc kubenswrapper[4827]: I0131 04:07:56.648767 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/61a51a6c-ddc6-4da3-8fcf-4ddb8e50fc14-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"61a51a6c-ddc6-4da3-8fcf-4ddb8e50fc14\") " pod="openstack/nova-cell1-novncproxy-0" Jan 31 04:07:56 crc kubenswrapper[4827]: I0131 04:07:56.649252 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/61a51a6c-ddc6-4da3-8fcf-4ddb8e50fc14-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"61a51a6c-ddc6-4da3-8fcf-4ddb8e50fc14\") " pod="openstack/nova-cell1-novncproxy-0" Jan 31 04:07:56 crc kubenswrapper[4827]: I0131 04:07:56.649371 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/71e93017-ead3-407d-a5fe-e5459c46e6fb-config-data\") pod \"nova-metadata-0\" (UID: \"71e93017-ead3-407d-a5fe-e5459c46e6fb\") " pod="openstack/nova-metadata-0" Jan 31 04:07:56 crc kubenswrapper[4827]: I0131 04:07:56.652031 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vc4mq\" (UniqueName: \"kubernetes.io/projected/61a51a6c-ddc6-4da3-8fcf-4ddb8e50fc14-kube-api-access-vc4mq\") pod \"nova-cell1-novncproxy-0\" (UID: \"61a51a6c-ddc6-4da3-8fcf-4ddb8e50fc14\") " pod="openstack/nova-cell1-novncproxy-0" Jan 31 04:07:56 crc kubenswrapper[4827]: I0131 04:07:56.652194 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n2vsw\" (UniqueName: \"kubernetes.io/projected/71e93017-ead3-407d-a5fe-e5459c46e6fb-kube-api-access-n2vsw\") pod \"nova-metadata-0\" (UID: \"71e93017-ead3-407d-a5fe-e5459c46e6fb\") " 
pod="openstack/nova-metadata-0" Jan 31 04:07:56 crc kubenswrapper[4827]: I0131 04:07:56.705006 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Jan 31 04:07:56 crc kubenswrapper[4827]: I0131 04:07:56.722286 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Jan 31 04:07:57 crc kubenswrapper[4827]: I0131 04:07:57.162004 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 31 04:07:57 crc kubenswrapper[4827]: W0131 04:07:57.166384 4827 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod61a51a6c_ddc6_4da3_8fcf_4ddb8e50fc14.slice/crio-9c8d332dd83050a1f8717653c1199df03c6f679bce65f743069861ef3ebc1740 WatchSource:0}: Error finding container 9c8d332dd83050a1f8717653c1199df03c6f679bce65f743069861ef3ebc1740: Status 404 returned error can't find the container with id 9c8d332dd83050a1f8717653c1199df03c6f679bce65f743069861ef3ebc1740 Jan 31 04:07:57 crc kubenswrapper[4827]: I0131 04:07:57.228517 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Jan 31 04:07:57 crc kubenswrapper[4827]: W0131 04:07:57.240785 4827 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod71e93017_ead3_407d_a5fe_e5459c46e6fb.slice/crio-b8f48c6c6df2c35bd94f0078b7285888f7899bb56d7037518a486e3b57d84eff WatchSource:0}: Error finding container b8f48c6c6df2c35bd94f0078b7285888f7899bb56d7037518a486e3b57d84eff: Status 404 returned error can't find the container with id b8f48c6c6df2c35bd94f0078b7285888f7899bb56d7037518a486e3b57d84eff Jan 31 04:07:57 crc kubenswrapper[4827]: I0131 04:07:57.298570 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" 
event={"ID":"71e93017-ead3-407d-a5fe-e5459c46e6fb","Type":"ContainerStarted","Data":"b8f48c6c6df2c35bd94f0078b7285888f7899bb56d7037518a486e3b57d84eff"} Jan 31 04:07:57 crc kubenswrapper[4827]: I0131 04:07:57.303843 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"61a51a6c-ddc6-4da3-8fcf-4ddb8e50fc14","Type":"ContainerStarted","Data":"9c8d332dd83050a1f8717653c1199df03c6f679bce65f743069861ef3ebc1740"} Jan 31 04:07:58 crc kubenswrapper[4827]: I0131 04:07:58.138495 4827 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="457c9c4f-0d32-47db-8d97-c28694c89936" path="/var/lib/kubelet/pods/457c9c4f-0d32-47db-8d97-c28694c89936/volumes" Jan 31 04:07:58 crc kubenswrapper[4827]: I0131 04:07:58.139584 4827 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="727577e7-9491-4e91-915f-359cd9b7f0be" path="/var/lib/kubelet/pods/727577e7-9491-4e91-915f-359cd9b7f0be/volumes" Jan 31 04:07:58 crc kubenswrapper[4827]: I0131 04:07:58.326845 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"71e93017-ead3-407d-a5fe-e5459c46e6fb","Type":"ContainerStarted","Data":"3f090c24e952123933a71c45b5b0bb33654e9b9faff2186dce090d2bb13c98f9"} Jan 31 04:07:58 crc kubenswrapper[4827]: I0131 04:07:58.327565 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"71e93017-ead3-407d-a5fe-e5459c46e6fb","Type":"ContainerStarted","Data":"a34e071a98c5c5f764478f28a28a1ac75c96015ff85c2a0a75b859da77df50fb"} Jan 31 04:07:58 crc kubenswrapper[4827]: I0131 04:07:58.329200 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"61a51a6c-ddc6-4da3-8fcf-4ddb8e50fc14","Type":"ContainerStarted","Data":"f08079e2ee932e19d70f4300a2886b3d63f2f6aeecf2a16565536c08fadd697a"} Jan 31 04:07:58 crc kubenswrapper[4827]: I0131 04:07:58.352663 4827 pod_startup_latency_tracker.go:104] "Observed pod 
startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.3526404149999998 podStartE2EDuration="2.352640415s" podCreationTimestamp="2026-01-31 04:07:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 04:07:58.345467114 +0000 UTC m=+1271.032547563" watchObservedRunningTime="2026-01-31 04:07:58.352640415 +0000 UTC m=+1271.039720864" Jan 31 04:07:58 crc kubenswrapper[4827]: I0131 04:07:58.367152 4827 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-novncproxy-0" podStartSLOduration=2.367128101 podStartE2EDuration="2.367128101s" podCreationTimestamp="2026-01-31 04:07:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 04:07:58.363265142 +0000 UTC m=+1271.050345611" watchObservedRunningTime="2026-01-31 04:07:58.367128101 +0000 UTC m=+1271.054208550" Jan 31 04:07:58 crc kubenswrapper[4827]: I0131 04:07:58.496960 4827 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Jan 31 04:07:58 crc kubenswrapper[4827]: I0131 04:07:58.497729 4827 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Jan 31 04:07:58 crc kubenswrapper[4827]: I0131 04:07:58.501474 4827 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Jan 31 04:07:58 crc kubenswrapper[4827]: I0131 04:07:58.509711 4827 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Jan 31 04:07:59 crc kubenswrapper[4827]: I0131 04:07:59.338301 4827 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Jan 31 04:07:59 crc kubenswrapper[4827]: I0131 04:07:59.341800 4827 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Jan 31 
04:07:59 crc kubenswrapper[4827]: I0131 04:07:59.503423 4827 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5b856c5697-h8gmp"] Jan 31 04:07:59 crc kubenswrapper[4827]: I0131 04:07:59.509436 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5b856c5697-h8gmp" Jan 31 04:07:59 crc kubenswrapper[4827]: I0131 04:07:59.532835 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5b856c5697-h8gmp"] Jan 31 04:07:59 crc kubenswrapper[4827]: I0131 04:07:59.597551 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b9smz\" (UniqueName: \"kubernetes.io/projected/ab1251de-0ef5-48f1-b9db-0a68965651cd-kube-api-access-b9smz\") pod \"dnsmasq-dns-5b856c5697-h8gmp\" (UID: \"ab1251de-0ef5-48f1-b9db-0a68965651cd\") " pod="openstack/dnsmasq-dns-5b856c5697-h8gmp" Jan 31 04:07:59 crc kubenswrapper[4827]: I0131 04:07:59.597615 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ab1251de-0ef5-48f1-b9db-0a68965651cd-ovsdbserver-nb\") pod \"dnsmasq-dns-5b856c5697-h8gmp\" (UID: \"ab1251de-0ef5-48f1-b9db-0a68965651cd\") " pod="openstack/dnsmasq-dns-5b856c5697-h8gmp" Jan 31 04:07:59 crc kubenswrapper[4827]: I0131 04:07:59.597663 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ab1251de-0ef5-48f1-b9db-0a68965651cd-ovsdbserver-sb\") pod \"dnsmasq-dns-5b856c5697-h8gmp\" (UID: \"ab1251de-0ef5-48f1-b9db-0a68965651cd\") " pod="openstack/dnsmasq-dns-5b856c5697-h8gmp" Jan 31 04:07:59 crc kubenswrapper[4827]: I0131 04:07:59.597705 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ab1251de-0ef5-48f1-b9db-0a68965651cd-config\") 
pod \"dnsmasq-dns-5b856c5697-h8gmp\" (UID: \"ab1251de-0ef5-48f1-b9db-0a68965651cd\") " pod="openstack/dnsmasq-dns-5b856c5697-h8gmp" Jan 31 04:07:59 crc kubenswrapper[4827]: I0131 04:07:59.597750 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ab1251de-0ef5-48f1-b9db-0a68965651cd-dns-svc\") pod \"dnsmasq-dns-5b856c5697-h8gmp\" (UID: \"ab1251de-0ef5-48f1-b9db-0a68965651cd\") " pod="openstack/dnsmasq-dns-5b856c5697-h8gmp" Jan 31 04:07:59 crc kubenswrapper[4827]: I0131 04:07:59.698933 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ab1251de-0ef5-48f1-b9db-0a68965651cd-ovsdbserver-sb\") pod \"dnsmasq-dns-5b856c5697-h8gmp\" (UID: \"ab1251de-0ef5-48f1-b9db-0a68965651cd\") " pod="openstack/dnsmasq-dns-5b856c5697-h8gmp" Jan 31 04:07:59 crc kubenswrapper[4827]: I0131 04:07:59.698996 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ab1251de-0ef5-48f1-b9db-0a68965651cd-config\") pod \"dnsmasq-dns-5b856c5697-h8gmp\" (UID: \"ab1251de-0ef5-48f1-b9db-0a68965651cd\") " pod="openstack/dnsmasq-dns-5b856c5697-h8gmp" Jan 31 04:07:59 crc kubenswrapper[4827]: I0131 04:07:59.699035 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ab1251de-0ef5-48f1-b9db-0a68965651cd-dns-svc\") pod \"dnsmasq-dns-5b856c5697-h8gmp\" (UID: \"ab1251de-0ef5-48f1-b9db-0a68965651cd\") " pod="openstack/dnsmasq-dns-5b856c5697-h8gmp" Jan 31 04:07:59 crc kubenswrapper[4827]: I0131 04:07:59.699095 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b9smz\" (UniqueName: \"kubernetes.io/projected/ab1251de-0ef5-48f1-b9db-0a68965651cd-kube-api-access-b9smz\") pod \"dnsmasq-dns-5b856c5697-h8gmp\" (UID: 
\"ab1251de-0ef5-48f1-b9db-0a68965651cd\") " pod="openstack/dnsmasq-dns-5b856c5697-h8gmp" Jan 31 04:07:59 crc kubenswrapper[4827]: I0131 04:07:59.699128 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ab1251de-0ef5-48f1-b9db-0a68965651cd-ovsdbserver-nb\") pod \"dnsmasq-dns-5b856c5697-h8gmp\" (UID: \"ab1251de-0ef5-48f1-b9db-0a68965651cd\") " pod="openstack/dnsmasq-dns-5b856c5697-h8gmp" Jan 31 04:07:59 crc kubenswrapper[4827]: I0131 04:07:59.699835 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ab1251de-0ef5-48f1-b9db-0a68965651cd-ovsdbserver-nb\") pod \"dnsmasq-dns-5b856c5697-h8gmp\" (UID: \"ab1251de-0ef5-48f1-b9db-0a68965651cd\") " pod="openstack/dnsmasq-dns-5b856c5697-h8gmp" Jan 31 04:07:59 crc kubenswrapper[4827]: I0131 04:07:59.700386 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ab1251de-0ef5-48f1-b9db-0a68965651cd-ovsdbserver-sb\") pod \"dnsmasq-dns-5b856c5697-h8gmp\" (UID: \"ab1251de-0ef5-48f1-b9db-0a68965651cd\") " pod="openstack/dnsmasq-dns-5b856c5697-h8gmp" Jan 31 04:07:59 crc kubenswrapper[4827]: I0131 04:07:59.700913 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ab1251de-0ef5-48f1-b9db-0a68965651cd-config\") pod \"dnsmasq-dns-5b856c5697-h8gmp\" (UID: \"ab1251de-0ef5-48f1-b9db-0a68965651cd\") " pod="openstack/dnsmasq-dns-5b856c5697-h8gmp" Jan 31 04:07:59 crc kubenswrapper[4827]: I0131 04:07:59.701408 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ab1251de-0ef5-48f1-b9db-0a68965651cd-dns-svc\") pod \"dnsmasq-dns-5b856c5697-h8gmp\" (UID: \"ab1251de-0ef5-48f1-b9db-0a68965651cd\") " pod="openstack/dnsmasq-dns-5b856c5697-h8gmp" Jan 31 04:07:59 crc 
kubenswrapper[4827]: I0131 04:07:59.719915 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b9smz\" (UniqueName: \"kubernetes.io/projected/ab1251de-0ef5-48f1-b9db-0a68965651cd-kube-api-access-b9smz\") pod \"dnsmasq-dns-5b856c5697-h8gmp\" (UID: \"ab1251de-0ef5-48f1-b9db-0a68965651cd\") " pod="openstack/dnsmasq-dns-5b856c5697-h8gmp" Jan 31 04:07:59 crc kubenswrapper[4827]: I0131 04:07:59.859573 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5b856c5697-h8gmp" Jan 31 04:08:00 crc kubenswrapper[4827]: I0131 04:08:00.368839 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5b856c5697-h8gmp"] Jan 31 04:08:00 crc kubenswrapper[4827]: W0131 04:08:00.372057 4827 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podab1251de_0ef5_48f1_b9db_0a68965651cd.slice/crio-b31f352abdbe939f281df6bfabadfc18e4983d94758316aef0884ceb7652169f WatchSource:0}: Error finding container b31f352abdbe939f281df6bfabadfc18e4983d94758316aef0884ceb7652169f: Status 404 returned error can't find the container with id b31f352abdbe939f281df6bfabadfc18e4983d94758316aef0884ceb7652169f Jan 31 04:08:01 crc kubenswrapper[4827]: I0131 04:08:01.355417 4827 generic.go:334] "Generic (PLEG): container finished" podID="ab1251de-0ef5-48f1-b9db-0a68965651cd" containerID="e9695ee2732e95e9c05e12f45cd96a45fd620fb76456e30e6211294ed3348d01" exitCode=0 Jan 31 04:08:01 crc kubenswrapper[4827]: I0131 04:08:01.355508 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5b856c5697-h8gmp" event={"ID":"ab1251de-0ef5-48f1-b9db-0a68965651cd","Type":"ContainerDied","Data":"e9695ee2732e95e9c05e12f45cd96a45fd620fb76456e30e6211294ed3348d01"} Jan 31 04:08:01 crc kubenswrapper[4827]: I0131 04:08:01.355951 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5b856c5697-h8gmp" 
event={"ID":"ab1251de-0ef5-48f1-b9db-0a68965651cd","Type":"ContainerStarted","Data":"b31f352abdbe939f281df6bfabadfc18e4983d94758316aef0884ceb7652169f"} Jan 31 04:08:01 crc kubenswrapper[4827]: I0131 04:08:01.705912 4827 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-novncproxy-0" Jan 31 04:08:01 crc kubenswrapper[4827]: I0131 04:08:01.723364 4827 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Jan 31 04:08:01 crc kubenswrapper[4827]: I0131 04:08:01.723641 4827 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Jan 31 04:08:01 crc kubenswrapper[4827]: I0131 04:08:01.909861 4827 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 31 04:08:01 crc kubenswrapper[4827]: I0131 04:08:01.910181 4827 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="96f29ca8-8584-4f9f-9a5a-f026a8772c07" containerName="ceilometer-central-agent" containerID="cri-o://4e47616a5c53b4363f494134ec1d8e0dd17e84ef1b44f91c26bb6c00a1319493" gracePeriod=30 Jan 31 04:08:01 crc kubenswrapper[4827]: I0131 04:08:01.910335 4827 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="96f29ca8-8584-4f9f-9a5a-f026a8772c07" containerName="proxy-httpd" containerID="cri-o://bb0aff0b9841d4e2ea7b0b0d809302a366b0ac59ef67ef97a79097637c4ee7d8" gracePeriod=30 Jan 31 04:08:01 crc kubenswrapper[4827]: I0131 04:08:01.910377 4827 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="96f29ca8-8584-4f9f-9a5a-f026a8772c07" containerName="sg-core" containerID="cri-o://c3eefca60adf7fb4628988a3ff3f6e8b20a170306170ac72e34b96ac30bbc393" gracePeriod=30 Jan 31 04:08:01 crc kubenswrapper[4827]: I0131 04:08:01.910409 4827 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openstack/ceilometer-0" podUID="96f29ca8-8584-4f9f-9a5a-f026a8772c07" containerName="ceilometer-notification-agent" containerID="cri-o://4e9c9584fea5c75ae3ae32c5c512051d545bcfcd435b2105a7f10266e80b05c6" gracePeriod=30 Jan 31 04:08:01 crc kubenswrapper[4827]: I0131 04:08:01.969990 4827 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Jan 31 04:08:02 crc kubenswrapper[4827]: I0131 04:08:02.371183 4827 generic.go:334] "Generic (PLEG): container finished" podID="96f29ca8-8584-4f9f-9a5a-f026a8772c07" containerID="bb0aff0b9841d4e2ea7b0b0d809302a366b0ac59ef67ef97a79097637c4ee7d8" exitCode=0 Jan 31 04:08:02 crc kubenswrapper[4827]: I0131 04:08:02.371402 4827 generic.go:334] "Generic (PLEG): container finished" podID="96f29ca8-8584-4f9f-9a5a-f026a8772c07" containerID="c3eefca60adf7fb4628988a3ff3f6e8b20a170306170ac72e34b96ac30bbc393" exitCode=2 Jan 31 04:08:02 crc kubenswrapper[4827]: I0131 04:08:02.371411 4827 generic.go:334] "Generic (PLEG): container finished" podID="96f29ca8-8584-4f9f-9a5a-f026a8772c07" containerID="4e47616a5c53b4363f494134ec1d8e0dd17e84ef1b44f91c26bb6c00a1319493" exitCode=0 Jan 31 04:08:02 crc kubenswrapper[4827]: I0131 04:08:02.371270 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"96f29ca8-8584-4f9f-9a5a-f026a8772c07","Type":"ContainerDied","Data":"bb0aff0b9841d4e2ea7b0b0d809302a366b0ac59ef67ef97a79097637c4ee7d8"} Jan 31 04:08:02 crc kubenswrapper[4827]: I0131 04:08:02.371503 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"96f29ca8-8584-4f9f-9a5a-f026a8772c07","Type":"ContainerDied","Data":"c3eefca60adf7fb4628988a3ff3f6e8b20a170306170ac72e34b96ac30bbc393"} Jan 31 04:08:02 crc kubenswrapper[4827]: I0131 04:08:02.371516 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"96f29ca8-8584-4f9f-9a5a-f026a8772c07","Type":"ContainerDied","Data":"4e47616a5c53b4363f494134ec1d8e0dd17e84ef1b44f91c26bb6c00a1319493"} Jan 31 04:08:02 crc kubenswrapper[4827]: I0131 04:08:02.377599 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5b856c5697-h8gmp" event={"ID":"ab1251de-0ef5-48f1-b9db-0a68965651cd","Type":"ContainerStarted","Data":"80344761c75942bb693cd7fd41e8e9a1493d06e3e5e228e09bfa552823754ddd"} Jan 31 04:08:02 crc kubenswrapper[4827]: I0131 04:08:02.377923 4827 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="d85adf9f-9e3f-4432-9190-7d5a8d8b2fc4" containerName="nova-api-log" containerID="cri-o://b30a539a15169a32f782b45c8ae8a22d09b067d724a3479f046da8643a130484" gracePeriod=30 Jan 31 04:08:02 crc kubenswrapper[4827]: I0131 04:08:02.378025 4827 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="d85adf9f-9e3f-4432-9190-7d5a8d8b2fc4" containerName="nova-api-api" containerID="cri-o://e55459fef9da43d981e1ba3636d0774bd3fe1c2186f61423ec81b11d4e7c901c" gracePeriod=30 Jan 31 04:08:03 crc kubenswrapper[4827]: I0131 04:08:03.387226 4827 generic.go:334] "Generic (PLEG): container finished" podID="d85adf9f-9e3f-4432-9190-7d5a8d8b2fc4" containerID="b30a539a15169a32f782b45c8ae8a22d09b067d724a3479f046da8643a130484" exitCode=143 Jan 31 04:08:03 crc kubenswrapper[4827]: I0131 04:08:03.387321 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"d85adf9f-9e3f-4432-9190-7d5a8d8b2fc4","Type":"ContainerDied","Data":"b30a539a15169a32f782b45c8ae8a22d09b067d724a3479f046da8643a130484"} Jan 31 04:08:03 crc kubenswrapper[4827]: I0131 04:08:03.387474 4827 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-5b856c5697-h8gmp" Jan 31 04:08:05 crc kubenswrapper[4827]: I0131 04:08:05.989466 4827 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Jan 31 04:08:06 crc kubenswrapper[4827]: I0131 04:08:06.041103 4827 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-5b856c5697-h8gmp" podStartSLOduration=7.041081509 podStartE2EDuration="7.041081509s" podCreationTimestamp="2026-01-31 04:07:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 04:08:02.397026862 +0000 UTC m=+1275.084107311" watchObservedRunningTime="2026-01-31 04:08:06.041081509 +0000 UTC m=+1278.728161958" Jan 31 04:08:06 crc kubenswrapper[4827]: I0131 04:08:06.108531 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-n9g4b\" (UniqueName: \"kubernetes.io/projected/d85adf9f-9e3f-4432-9190-7d5a8d8b2fc4-kube-api-access-n9g4b\") pod \"d85adf9f-9e3f-4432-9190-7d5a8d8b2fc4\" (UID: \"d85adf9f-9e3f-4432-9190-7d5a8d8b2fc4\") " Jan 31 04:08:06 crc kubenswrapper[4827]: I0131 04:08:06.109700 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d85adf9f-9e3f-4432-9190-7d5a8d8b2fc4-logs\") pod \"d85adf9f-9e3f-4432-9190-7d5a8d8b2fc4\" (UID: \"d85adf9f-9e3f-4432-9190-7d5a8d8b2fc4\") " Jan 31 04:08:06 crc kubenswrapper[4827]: I0131 04:08:06.110148 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d85adf9f-9e3f-4432-9190-7d5a8d8b2fc4-combined-ca-bundle\") pod \"d85adf9f-9e3f-4432-9190-7d5a8d8b2fc4\" (UID: \"d85adf9f-9e3f-4432-9190-7d5a8d8b2fc4\") " Jan 31 04:08:06 crc kubenswrapper[4827]: I0131 04:08:06.110201 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d85adf9f-9e3f-4432-9190-7d5a8d8b2fc4-config-data\") pod \"d85adf9f-9e3f-4432-9190-7d5a8d8b2fc4\" (UID: 
\"d85adf9f-9e3f-4432-9190-7d5a8d8b2fc4\") " Jan 31 04:08:06 crc kubenswrapper[4827]: I0131 04:08:06.110287 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d85adf9f-9e3f-4432-9190-7d5a8d8b2fc4-logs" (OuterVolumeSpecName: "logs") pod "d85adf9f-9e3f-4432-9190-7d5a8d8b2fc4" (UID: "d85adf9f-9e3f-4432-9190-7d5a8d8b2fc4"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 04:08:06 crc kubenswrapper[4827]: I0131 04:08:06.110662 4827 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d85adf9f-9e3f-4432-9190-7d5a8d8b2fc4-logs\") on node \"crc\" DevicePath \"\"" Jan 31 04:08:06 crc kubenswrapper[4827]: I0131 04:08:06.114088 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d85adf9f-9e3f-4432-9190-7d5a8d8b2fc4-kube-api-access-n9g4b" (OuterVolumeSpecName: "kube-api-access-n9g4b") pod "d85adf9f-9e3f-4432-9190-7d5a8d8b2fc4" (UID: "d85adf9f-9e3f-4432-9190-7d5a8d8b2fc4"). InnerVolumeSpecName "kube-api-access-n9g4b". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 04:08:06 crc kubenswrapper[4827]: I0131 04:08:06.132158 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d85adf9f-9e3f-4432-9190-7d5a8d8b2fc4-config-data" (OuterVolumeSpecName: "config-data") pod "d85adf9f-9e3f-4432-9190-7d5a8d8b2fc4" (UID: "d85adf9f-9e3f-4432-9190-7d5a8d8b2fc4"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 04:08:06 crc kubenswrapper[4827]: I0131 04:08:06.142393 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d85adf9f-9e3f-4432-9190-7d5a8d8b2fc4-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "d85adf9f-9e3f-4432-9190-7d5a8d8b2fc4" (UID: "d85adf9f-9e3f-4432-9190-7d5a8d8b2fc4"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 04:08:06 crc kubenswrapper[4827]: I0131 04:08:06.212937 4827 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d85adf9f-9e3f-4432-9190-7d5a8d8b2fc4-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 31 04:08:06 crc kubenswrapper[4827]: I0131 04:08:06.212970 4827 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d85adf9f-9e3f-4432-9190-7d5a8d8b2fc4-config-data\") on node \"crc\" DevicePath \"\"" Jan 31 04:08:06 crc kubenswrapper[4827]: I0131 04:08:06.212980 4827 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-n9g4b\" (UniqueName: \"kubernetes.io/projected/d85adf9f-9e3f-4432-9190-7d5a8d8b2fc4-kube-api-access-n9g4b\") on node \"crc\" DevicePath \"\"" Jan 31 04:08:06 crc kubenswrapper[4827]: I0131 04:08:06.419745 4827 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 31 04:08:06 crc kubenswrapper[4827]: I0131 04:08:06.419761 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"d85adf9f-9e3f-4432-9190-7d5a8d8b2fc4","Type":"ContainerDied","Data":"e55459fef9da43d981e1ba3636d0774bd3fe1c2186f61423ec81b11d4e7c901c"} Jan 31 04:08:06 crc kubenswrapper[4827]: I0131 04:08:06.419845 4827 scope.go:117] "RemoveContainer" containerID="e55459fef9da43d981e1ba3636d0774bd3fe1c2186f61423ec81b11d4e7c901c" Jan 31 04:08:06 crc kubenswrapper[4827]: I0131 04:08:06.419741 4827 generic.go:334] "Generic (PLEG): container finished" podID="d85adf9f-9e3f-4432-9190-7d5a8d8b2fc4" containerID="e55459fef9da43d981e1ba3636d0774bd3fe1c2186f61423ec81b11d4e7c901c" exitCode=0 Jan 31 04:08:06 crc kubenswrapper[4827]: I0131 04:08:06.420838 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" 
event={"ID":"d85adf9f-9e3f-4432-9190-7d5a8d8b2fc4","Type":"ContainerDied","Data":"64c69097c2254109550ee48ab02482fa8b4a6cb304deedb13e5ff3a6235fe854"} Jan 31 04:08:06 crc kubenswrapper[4827]: I0131 04:08:06.452154 4827 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Jan 31 04:08:06 crc kubenswrapper[4827]: I0131 04:08:06.454660 4827 scope.go:117] "RemoveContainer" containerID="b30a539a15169a32f782b45c8ae8a22d09b067d724a3479f046da8643a130484" Jan 31 04:08:06 crc kubenswrapper[4827]: I0131 04:08:06.460982 4827 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Jan 31 04:08:06 crc kubenswrapper[4827]: I0131 04:08:06.473542 4827 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Jan 31 04:08:06 crc kubenswrapper[4827]: E0131 04:08:06.474230 4827 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d85adf9f-9e3f-4432-9190-7d5a8d8b2fc4" containerName="nova-api-api" Jan 31 04:08:06 crc kubenswrapper[4827]: I0131 04:08:06.474428 4827 state_mem.go:107] "Deleted CPUSet assignment" podUID="d85adf9f-9e3f-4432-9190-7d5a8d8b2fc4" containerName="nova-api-api" Jan 31 04:08:06 crc kubenswrapper[4827]: E0131 04:08:06.474506 4827 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d85adf9f-9e3f-4432-9190-7d5a8d8b2fc4" containerName="nova-api-log" Jan 31 04:08:06 crc kubenswrapper[4827]: I0131 04:08:06.474579 4827 state_mem.go:107] "Deleted CPUSet assignment" podUID="d85adf9f-9e3f-4432-9190-7d5a8d8b2fc4" containerName="nova-api-log" Jan 31 04:08:06 crc kubenswrapper[4827]: I0131 04:08:06.474839 4827 memory_manager.go:354] "RemoveStaleState removing state" podUID="d85adf9f-9e3f-4432-9190-7d5a8d8b2fc4" containerName="nova-api-api" Jan 31 04:08:06 crc kubenswrapper[4827]: I0131 04:08:06.474950 4827 memory_manager.go:354] "RemoveStaleState removing state" podUID="d85adf9f-9e3f-4432-9190-7d5a8d8b2fc4" containerName="nova-api-log" Jan 31 04:08:06 crc kubenswrapper[4827]: I0131 
04:08:06.476149 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 31 04:08:06 crc kubenswrapper[4827]: I0131 04:08:06.478751 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-internal-svc" Jan 31 04:08:06 crc kubenswrapper[4827]: I0131 04:08:06.479389 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Jan 31 04:08:06 crc kubenswrapper[4827]: I0131 04:08:06.481791 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-public-svc" Jan 31 04:08:06 crc kubenswrapper[4827]: I0131 04:08:06.485130 4827 scope.go:117] "RemoveContainer" containerID="e55459fef9da43d981e1ba3636d0774bd3fe1c2186f61423ec81b11d4e7c901c" Jan 31 04:08:06 crc kubenswrapper[4827]: E0131 04:08:06.485637 4827 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e55459fef9da43d981e1ba3636d0774bd3fe1c2186f61423ec81b11d4e7c901c\": container with ID starting with e55459fef9da43d981e1ba3636d0774bd3fe1c2186f61423ec81b11d4e7c901c not found: ID does not exist" containerID="e55459fef9da43d981e1ba3636d0774bd3fe1c2186f61423ec81b11d4e7c901c" Jan 31 04:08:06 crc kubenswrapper[4827]: I0131 04:08:06.485765 4827 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e55459fef9da43d981e1ba3636d0774bd3fe1c2186f61423ec81b11d4e7c901c"} err="failed to get container status \"e55459fef9da43d981e1ba3636d0774bd3fe1c2186f61423ec81b11d4e7c901c\": rpc error: code = NotFound desc = could not find container \"e55459fef9da43d981e1ba3636d0774bd3fe1c2186f61423ec81b11d4e7c901c\": container with ID starting with e55459fef9da43d981e1ba3636d0774bd3fe1c2186f61423ec81b11d4e7c901c not found: ID does not exist" Jan 31 04:08:06 crc kubenswrapper[4827]: I0131 04:08:06.485852 4827 scope.go:117] "RemoveContainer" 
containerID="b30a539a15169a32f782b45c8ae8a22d09b067d724a3479f046da8643a130484" Jan 31 04:08:06 crc kubenswrapper[4827]: E0131 04:08:06.486403 4827 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b30a539a15169a32f782b45c8ae8a22d09b067d724a3479f046da8643a130484\": container with ID starting with b30a539a15169a32f782b45c8ae8a22d09b067d724a3479f046da8643a130484 not found: ID does not exist" containerID="b30a539a15169a32f782b45c8ae8a22d09b067d724a3479f046da8643a130484" Jan 31 04:08:06 crc kubenswrapper[4827]: I0131 04:08:06.486451 4827 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b30a539a15169a32f782b45c8ae8a22d09b067d724a3479f046da8643a130484"} err="failed to get container status \"b30a539a15169a32f782b45c8ae8a22d09b067d724a3479f046da8643a130484\": rpc error: code = NotFound desc = could not find container \"b30a539a15169a32f782b45c8ae8a22d09b067d724a3479f046da8643a130484\": container with ID starting with b30a539a15169a32f782b45c8ae8a22d09b067d724a3479f046da8643a130484 not found: ID does not exist" Jan 31 04:08:06 crc kubenswrapper[4827]: I0131 04:08:06.494669 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 31 04:08:06 crc kubenswrapper[4827]: I0131 04:08:06.517459 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/74d2a678-a110-4936-ad93-99b487a52f5b-config-data\") pod \"nova-api-0\" (UID: \"74d2a678-a110-4936-ad93-99b487a52f5b\") " pod="openstack/nova-api-0" Jan 31 04:08:06 crc kubenswrapper[4827]: I0131 04:08:06.517561 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/74d2a678-a110-4936-ad93-99b487a52f5b-logs\") pod \"nova-api-0\" (UID: \"74d2a678-a110-4936-ad93-99b487a52f5b\") " pod="openstack/nova-api-0" 
Jan 31 04:08:06 crc kubenswrapper[4827]: I0131 04:08:06.517592 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/74d2a678-a110-4936-ad93-99b487a52f5b-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"74d2a678-a110-4936-ad93-99b487a52f5b\") " pod="openstack/nova-api-0" Jan 31 04:08:06 crc kubenswrapper[4827]: I0131 04:08:06.517610 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/74d2a678-a110-4936-ad93-99b487a52f5b-internal-tls-certs\") pod \"nova-api-0\" (UID: \"74d2a678-a110-4936-ad93-99b487a52f5b\") " pod="openstack/nova-api-0" Jan 31 04:08:06 crc kubenswrapper[4827]: I0131 04:08:06.517656 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9m6fr\" (UniqueName: \"kubernetes.io/projected/74d2a678-a110-4936-ad93-99b487a52f5b-kube-api-access-9m6fr\") pod \"nova-api-0\" (UID: \"74d2a678-a110-4936-ad93-99b487a52f5b\") " pod="openstack/nova-api-0" Jan 31 04:08:06 crc kubenswrapper[4827]: I0131 04:08:06.517674 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/74d2a678-a110-4936-ad93-99b487a52f5b-public-tls-certs\") pod \"nova-api-0\" (UID: \"74d2a678-a110-4936-ad93-99b487a52f5b\") " pod="openstack/nova-api-0" Jan 31 04:08:06 crc kubenswrapper[4827]: I0131 04:08:06.619059 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/74d2a678-a110-4936-ad93-99b487a52f5b-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"74d2a678-a110-4936-ad93-99b487a52f5b\") " pod="openstack/nova-api-0" Jan 31 04:08:06 crc kubenswrapper[4827]: I0131 04:08:06.619144 4827 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/74d2a678-a110-4936-ad93-99b487a52f5b-internal-tls-certs\") pod \"nova-api-0\" (UID: \"74d2a678-a110-4936-ad93-99b487a52f5b\") " pod="openstack/nova-api-0" Jan 31 04:08:06 crc kubenswrapper[4827]: I0131 04:08:06.619201 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9m6fr\" (UniqueName: \"kubernetes.io/projected/74d2a678-a110-4936-ad93-99b487a52f5b-kube-api-access-9m6fr\") pod \"nova-api-0\" (UID: \"74d2a678-a110-4936-ad93-99b487a52f5b\") " pod="openstack/nova-api-0" Jan 31 04:08:06 crc kubenswrapper[4827]: I0131 04:08:06.619218 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/74d2a678-a110-4936-ad93-99b487a52f5b-public-tls-certs\") pod \"nova-api-0\" (UID: \"74d2a678-a110-4936-ad93-99b487a52f5b\") " pod="openstack/nova-api-0" Jan 31 04:08:06 crc kubenswrapper[4827]: I0131 04:08:06.620023 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/74d2a678-a110-4936-ad93-99b487a52f5b-config-data\") pod \"nova-api-0\" (UID: \"74d2a678-a110-4936-ad93-99b487a52f5b\") " pod="openstack/nova-api-0" Jan 31 04:08:06 crc kubenswrapper[4827]: I0131 04:08:06.620131 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/74d2a678-a110-4936-ad93-99b487a52f5b-logs\") pod \"nova-api-0\" (UID: \"74d2a678-a110-4936-ad93-99b487a52f5b\") " pod="openstack/nova-api-0" Jan 31 04:08:06 crc kubenswrapper[4827]: I0131 04:08:06.620456 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/74d2a678-a110-4936-ad93-99b487a52f5b-logs\") pod \"nova-api-0\" (UID: \"74d2a678-a110-4936-ad93-99b487a52f5b\") " pod="openstack/nova-api-0" Jan 31 04:08:06 crc kubenswrapper[4827]: 
I0131 04:08:06.623386 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/74d2a678-a110-4936-ad93-99b487a52f5b-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"74d2a678-a110-4936-ad93-99b487a52f5b\") " pod="openstack/nova-api-0" Jan 31 04:08:06 crc kubenswrapper[4827]: I0131 04:08:06.624062 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/74d2a678-a110-4936-ad93-99b487a52f5b-internal-tls-certs\") pod \"nova-api-0\" (UID: \"74d2a678-a110-4936-ad93-99b487a52f5b\") " pod="openstack/nova-api-0" Jan 31 04:08:06 crc kubenswrapper[4827]: I0131 04:08:06.624147 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/74d2a678-a110-4936-ad93-99b487a52f5b-public-tls-certs\") pod \"nova-api-0\" (UID: \"74d2a678-a110-4936-ad93-99b487a52f5b\") " pod="openstack/nova-api-0" Jan 31 04:08:06 crc kubenswrapper[4827]: I0131 04:08:06.627188 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/74d2a678-a110-4936-ad93-99b487a52f5b-config-data\") pod \"nova-api-0\" (UID: \"74d2a678-a110-4936-ad93-99b487a52f5b\") " pod="openstack/nova-api-0" Jan 31 04:08:06 crc kubenswrapper[4827]: I0131 04:08:06.639627 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9m6fr\" (UniqueName: \"kubernetes.io/projected/74d2a678-a110-4936-ad93-99b487a52f5b-kube-api-access-9m6fr\") pod \"nova-api-0\" (UID: \"74d2a678-a110-4936-ad93-99b487a52f5b\") " pod="openstack/nova-api-0" Jan 31 04:08:06 crc kubenswrapper[4827]: I0131 04:08:06.705336 4827 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-cell1-novncproxy-0" Jan 31 04:08:06 crc kubenswrapper[4827]: I0131 04:08:06.722403 4827 kubelet.go:2542] "SyncLoop (probe)" probe="startup" 
status="started" pod="openstack/nova-cell1-novncproxy-0" Jan 31 04:08:06 crc kubenswrapper[4827]: I0131 04:08:06.722807 4827 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Jan 31 04:08:06 crc kubenswrapper[4827]: I0131 04:08:06.722828 4827 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Jan 31 04:08:06 crc kubenswrapper[4827]: I0131 04:08:06.795632 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 31 04:08:07 crc kubenswrapper[4827]: I0131 04:08:07.288980 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 31 04:08:07 crc kubenswrapper[4827]: I0131 04:08:07.434747 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"74d2a678-a110-4936-ad93-99b487a52f5b","Type":"ContainerStarted","Data":"b6ddc95f8f0833ba74cc7c7b8bb88f92f756b1fc94be91e4ec1382de654838ef"} Jan 31 04:08:07 crc kubenswrapper[4827]: I0131 04:08:07.451935 4827 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell1-novncproxy-0" Jan 31 04:08:07 crc kubenswrapper[4827]: I0131 04:08:07.665738 4827 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-cell-mapping-8kkgm"] Jan 31 04:08:07 crc kubenswrapper[4827]: I0131 04:08:07.666893 4827 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-cell-mapping-8kkgm" Jan 31 04:08:07 crc kubenswrapper[4827]: I0131 04:08:07.672201 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-manage-scripts" Jan 31 04:08:07 crc kubenswrapper[4827]: I0131 04:08:07.672390 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-manage-config-data" Jan 31 04:08:07 crc kubenswrapper[4827]: I0131 04:08:07.673059 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-cell-mapping-8kkgm"] Jan 31 04:08:07 crc kubenswrapper[4827]: I0131 04:08:07.736149 4827 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="71e93017-ead3-407d-a5fe-e5459c46e6fb" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.0.188:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 31 04:08:07 crc kubenswrapper[4827]: I0131 04:08:07.736231 4827 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="71e93017-ead3-407d-a5fe-e5459c46e6fb" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.188:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 31 04:08:07 crc kubenswrapper[4827]: I0131 04:08:07.743131 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/29a3dce1-3b8e-4c59-b718-5a8e43971938-scripts\") pod \"nova-cell1-cell-mapping-8kkgm\" (UID: \"29a3dce1-3b8e-4c59-b718-5a8e43971938\") " pod="openstack/nova-cell1-cell-mapping-8kkgm" Jan 31 04:08:07 crc kubenswrapper[4827]: I0131 04:08:07.743165 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/29a3dce1-3b8e-4c59-b718-5a8e43971938-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-8kkgm\" (UID: \"29a3dce1-3b8e-4c59-b718-5a8e43971938\") " pod="openstack/nova-cell1-cell-mapping-8kkgm" Jan 31 04:08:07 crc kubenswrapper[4827]: I0131 04:08:07.743195 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/29a3dce1-3b8e-4c59-b718-5a8e43971938-config-data\") pod \"nova-cell1-cell-mapping-8kkgm\" (UID: \"29a3dce1-3b8e-4c59-b718-5a8e43971938\") " pod="openstack/nova-cell1-cell-mapping-8kkgm" Jan 31 04:08:07 crc kubenswrapper[4827]: I0131 04:08:07.743213 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8d6qs\" (UniqueName: \"kubernetes.io/projected/29a3dce1-3b8e-4c59-b718-5a8e43971938-kube-api-access-8d6qs\") pod \"nova-cell1-cell-mapping-8kkgm\" (UID: \"29a3dce1-3b8e-4c59-b718-5a8e43971938\") " pod="openstack/nova-cell1-cell-mapping-8kkgm" Jan 31 04:08:07 crc kubenswrapper[4827]: I0131 04:08:07.844484 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/29a3dce1-3b8e-4c59-b718-5a8e43971938-scripts\") pod \"nova-cell1-cell-mapping-8kkgm\" (UID: \"29a3dce1-3b8e-4c59-b718-5a8e43971938\") " pod="openstack/nova-cell1-cell-mapping-8kkgm" Jan 31 04:08:07 crc kubenswrapper[4827]: I0131 04:08:07.844735 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/29a3dce1-3b8e-4c59-b718-5a8e43971938-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-8kkgm\" (UID: \"29a3dce1-3b8e-4c59-b718-5a8e43971938\") " pod="openstack/nova-cell1-cell-mapping-8kkgm" Jan 31 04:08:07 crc kubenswrapper[4827]: I0131 04:08:07.844772 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/29a3dce1-3b8e-4c59-b718-5a8e43971938-config-data\") pod \"nova-cell1-cell-mapping-8kkgm\" (UID: \"29a3dce1-3b8e-4c59-b718-5a8e43971938\") " pod="openstack/nova-cell1-cell-mapping-8kkgm" Jan 31 04:08:07 crc kubenswrapper[4827]: I0131 04:08:07.844801 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8d6qs\" (UniqueName: \"kubernetes.io/projected/29a3dce1-3b8e-4c59-b718-5a8e43971938-kube-api-access-8d6qs\") pod \"nova-cell1-cell-mapping-8kkgm\" (UID: \"29a3dce1-3b8e-4c59-b718-5a8e43971938\") " pod="openstack/nova-cell1-cell-mapping-8kkgm" Jan 31 04:08:07 crc kubenswrapper[4827]: I0131 04:08:07.854396 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/29a3dce1-3b8e-4c59-b718-5a8e43971938-scripts\") pod \"nova-cell1-cell-mapping-8kkgm\" (UID: \"29a3dce1-3b8e-4c59-b718-5a8e43971938\") " pod="openstack/nova-cell1-cell-mapping-8kkgm" Jan 31 04:08:07 crc kubenswrapper[4827]: I0131 04:08:07.855383 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/29a3dce1-3b8e-4c59-b718-5a8e43971938-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-8kkgm\" (UID: \"29a3dce1-3b8e-4c59-b718-5a8e43971938\") " pod="openstack/nova-cell1-cell-mapping-8kkgm" Jan 31 04:08:07 crc kubenswrapper[4827]: I0131 04:08:07.863365 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/29a3dce1-3b8e-4c59-b718-5a8e43971938-config-data\") pod \"nova-cell1-cell-mapping-8kkgm\" (UID: \"29a3dce1-3b8e-4c59-b718-5a8e43971938\") " pod="openstack/nova-cell1-cell-mapping-8kkgm" Jan 31 04:08:07 crc kubenswrapper[4827]: I0131 04:08:07.863973 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8d6qs\" (UniqueName: 
\"kubernetes.io/projected/29a3dce1-3b8e-4c59-b718-5a8e43971938-kube-api-access-8d6qs\") pod \"nova-cell1-cell-mapping-8kkgm\" (UID: \"29a3dce1-3b8e-4c59-b718-5a8e43971938\") " pod="openstack/nova-cell1-cell-mapping-8kkgm" Jan 31 04:08:08 crc kubenswrapper[4827]: I0131 04:08:08.063177 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-cell-mapping-8kkgm" Jan 31 04:08:08 crc kubenswrapper[4827]: I0131 04:08:08.142768 4827 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d85adf9f-9e3f-4432-9190-7d5a8d8b2fc4" path="/var/lib/kubelet/pods/d85adf9f-9e3f-4432-9190-7d5a8d8b2fc4/volumes" Jan 31 04:08:08 crc kubenswrapper[4827]: I0131 04:08:08.175901 4827 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 31 04:08:08 crc kubenswrapper[4827]: I0131 04:08:08.265367 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/96f29ca8-8584-4f9f-9a5a-f026a8772c07-config-data\") pod \"96f29ca8-8584-4f9f-9a5a-f026a8772c07\" (UID: \"96f29ca8-8584-4f9f-9a5a-f026a8772c07\") " Jan 31 04:08:08 crc kubenswrapper[4827]: I0131 04:08:08.265419 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/96f29ca8-8584-4f9f-9a5a-f026a8772c07-log-httpd\") pod \"96f29ca8-8584-4f9f-9a5a-f026a8772c07\" (UID: \"96f29ca8-8584-4f9f-9a5a-f026a8772c07\") " Jan 31 04:08:08 crc kubenswrapper[4827]: I0131 04:08:08.265500 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/96f29ca8-8584-4f9f-9a5a-f026a8772c07-sg-core-conf-yaml\") pod \"96f29ca8-8584-4f9f-9a5a-f026a8772c07\" (UID: \"96f29ca8-8584-4f9f-9a5a-f026a8772c07\") " Jan 31 04:08:08 crc kubenswrapper[4827]: I0131 04:08:08.265561 4827 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/96f29ca8-8584-4f9f-9a5a-f026a8772c07-run-httpd\") pod \"96f29ca8-8584-4f9f-9a5a-f026a8772c07\" (UID: \"96f29ca8-8584-4f9f-9a5a-f026a8772c07\") " Jan 31 04:08:08 crc kubenswrapper[4827]: I0131 04:08:08.265593 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/96f29ca8-8584-4f9f-9a5a-f026a8772c07-ceilometer-tls-certs\") pod \"96f29ca8-8584-4f9f-9a5a-f026a8772c07\" (UID: \"96f29ca8-8584-4f9f-9a5a-f026a8772c07\") " Jan 31 04:08:08 crc kubenswrapper[4827]: I0131 04:08:08.265629 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/96f29ca8-8584-4f9f-9a5a-f026a8772c07-combined-ca-bundle\") pod \"96f29ca8-8584-4f9f-9a5a-f026a8772c07\" (UID: \"96f29ca8-8584-4f9f-9a5a-f026a8772c07\") " Jan 31 04:08:08 crc kubenswrapper[4827]: I0131 04:08:08.265651 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-p88kv\" (UniqueName: \"kubernetes.io/projected/96f29ca8-8584-4f9f-9a5a-f026a8772c07-kube-api-access-p88kv\") pod \"96f29ca8-8584-4f9f-9a5a-f026a8772c07\" (UID: \"96f29ca8-8584-4f9f-9a5a-f026a8772c07\") " Jan 31 04:08:08 crc kubenswrapper[4827]: I0131 04:08:08.265713 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/96f29ca8-8584-4f9f-9a5a-f026a8772c07-scripts\") pod \"96f29ca8-8584-4f9f-9a5a-f026a8772c07\" (UID: \"96f29ca8-8584-4f9f-9a5a-f026a8772c07\") " Jan 31 04:08:08 crc kubenswrapper[4827]: I0131 04:08:08.268268 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/96f29ca8-8584-4f9f-9a5a-f026a8772c07-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "96f29ca8-8584-4f9f-9a5a-f026a8772c07" (UID: 
"96f29ca8-8584-4f9f-9a5a-f026a8772c07"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 04:08:08 crc kubenswrapper[4827]: I0131 04:08:08.268427 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/96f29ca8-8584-4f9f-9a5a-f026a8772c07-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "96f29ca8-8584-4f9f-9a5a-f026a8772c07" (UID: "96f29ca8-8584-4f9f-9a5a-f026a8772c07"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 04:08:08 crc kubenswrapper[4827]: I0131 04:08:08.272773 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/96f29ca8-8584-4f9f-9a5a-f026a8772c07-kube-api-access-p88kv" (OuterVolumeSpecName: "kube-api-access-p88kv") pod "96f29ca8-8584-4f9f-9a5a-f026a8772c07" (UID: "96f29ca8-8584-4f9f-9a5a-f026a8772c07"). InnerVolumeSpecName "kube-api-access-p88kv". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 04:08:08 crc kubenswrapper[4827]: I0131 04:08:08.273453 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/96f29ca8-8584-4f9f-9a5a-f026a8772c07-scripts" (OuterVolumeSpecName: "scripts") pod "96f29ca8-8584-4f9f-9a5a-f026a8772c07" (UID: "96f29ca8-8584-4f9f-9a5a-f026a8772c07"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 04:08:08 crc kubenswrapper[4827]: I0131 04:08:08.315479 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/96f29ca8-8584-4f9f-9a5a-f026a8772c07-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "96f29ca8-8584-4f9f-9a5a-f026a8772c07" (UID: "96f29ca8-8584-4f9f-9a5a-f026a8772c07"). InnerVolumeSpecName "sg-core-conf-yaml". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 04:08:08 crc kubenswrapper[4827]: I0131 04:08:08.336206 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/96f29ca8-8584-4f9f-9a5a-f026a8772c07-ceilometer-tls-certs" (OuterVolumeSpecName: "ceilometer-tls-certs") pod "96f29ca8-8584-4f9f-9a5a-f026a8772c07" (UID: "96f29ca8-8584-4f9f-9a5a-f026a8772c07"). InnerVolumeSpecName "ceilometer-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 04:08:08 crc kubenswrapper[4827]: I0131 04:08:08.339099 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/96f29ca8-8584-4f9f-9a5a-f026a8772c07-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "96f29ca8-8584-4f9f-9a5a-f026a8772c07" (UID: "96f29ca8-8584-4f9f-9a5a-f026a8772c07"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 04:08:08 crc kubenswrapper[4827]: I0131 04:08:08.362027 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/96f29ca8-8584-4f9f-9a5a-f026a8772c07-config-data" (OuterVolumeSpecName: "config-data") pod "96f29ca8-8584-4f9f-9a5a-f026a8772c07" (UID: "96f29ca8-8584-4f9f-9a5a-f026a8772c07"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 04:08:08 crc kubenswrapper[4827]: I0131 04:08:08.368217 4827 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/96f29ca8-8584-4f9f-9a5a-f026a8772c07-scripts\") on node \"crc\" DevicePath \"\"" Jan 31 04:08:08 crc kubenswrapper[4827]: I0131 04:08:08.368247 4827 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/96f29ca8-8584-4f9f-9a5a-f026a8772c07-config-data\") on node \"crc\" DevicePath \"\"" Jan 31 04:08:08 crc kubenswrapper[4827]: I0131 04:08:08.368257 4827 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/96f29ca8-8584-4f9f-9a5a-f026a8772c07-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 31 04:08:08 crc kubenswrapper[4827]: I0131 04:08:08.368265 4827 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/96f29ca8-8584-4f9f-9a5a-f026a8772c07-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Jan 31 04:08:08 crc kubenswrapper[4827]: I0131 04:08:08.368274 4827 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/96f29ca8-8584-4f9f-9a5a-f026a8772c07-run-httpd\") on node \"crc\" DevicePath \"\"" Jan 31 04:08:08 crc kubenswrapper[4827]: I0131 04:08:08.368283 4827 reconciler_common.go:293] "Volume detached for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/96f29ca8-8584-4f9f-9a5a-f026a8772c07-ceilometer-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 31 04:08:08 crc kubenswrapper[4827]: I0131 04:08:08.368292 4827 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/96f29ca8-8584-4f9f-9a5a-f026a8772c07-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 31 04:08:08 crc kubenswrapper[4827]: I0131 04:08:08.368301 4827 reconciler_common.go:293] 
"Volume detached for volume \"kube-api-access-p88kv\" (UniqueName: \"kubernetes.io/projected/96f29ca8-8584-4f9f-9a5a-f026a8772c07-kube-api-access-p88kv\") on node \"crc\" DevicePath \"\"" Jan 31 04:08:08 crc kubenswrapper[4827]: I0131 04:08:08.445487 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"74d2a678-a110-4936-ad93-99b487a52f5b","Type":"ContainerStarted","Data":"9ebf03d2b3575e0493650b32547dc2fffcc2c28cbe91e08bb6514f605e5d1c13"} Jan 31 04:08:08 crc kubenswrapper[4827]: I0131 04:08:08.445545 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"74d2a678-a110-4936-ad93-99b487a52f5b","Type":"ContainerStarted","Data":"e20aad8a466b9745f3865336a3d5c0a21c4f25ab1aced87f5f5f16afcdb3b111"} Jan 31 04:08:08 crc kubenswrapper[4827]: I0131 04:08:08.450024 4827 generic.go:334] "Generic (PLEG): container finished" podID="96f29ca8-8584-4f9f-9a5a-f026a8772c07" containerID="4e9c9584fea5c75ae3ae32c5c512051d545bcfcd435b2105a7f10266e80b05c6" exitCode=0 Jan 31 04:08:08 crc kubenswrapper[4827]: I0131 04:08:08.450524 4827 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 31 04:08:08 crc kubenswrapper[4827]: I0131 04:08:08.453039 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"96f29ca8-8584-4f9f-9a5a-f026a8772c07","Type":"ContainerDied","Data":"4e9c9584fea5c75ae3ae32c5c512051d545bcfcd435b2105a7f10266e80b05c6"} Jan 31 04:08:08 crc kubenswrapper[4827]: I0131 04:08:08.453213 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"96f29ca8-8584-4f9f-9a5a-f026a8772c07","Type":"ContainerDied","Data":"46a949499bcc6660577fc13295ef9381290b9142da52b3753d0f3bb6dc21fade"} Jan 31 04:08:08 crc kubenswrapper[4827]: I0131 04:08:08.453271 4827 scope.go:117] "RemoveContainer" containerID="bb0aff0b9841d4e2ea7b0b0d809302a366b0ac59ef67ef97a79097637c4ee7d8" Jan 31 04:08:08 crc kubenswrapper[4827]: I0131 04:08:08.468676 4827 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.468621229 podStartE2EDuration="2.468621229s" podCreationTimestamp="2026-01-31 04:08:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 04:08:08.462782249 +0000 UTC m=+1281.149862708" watchObservedRunningTime="2026-01-31 04:08:08.468621229 +0000 UTC m=+1281.155701668" Jan 31 04:08:08 crc kubenswrapper[4827]: I0131 04:08:08.488887 4827 scope.go:117] "RemoveContainer" containerID="c3eefca60adf7fb4628988a3ff3f6e8b20a170306170ac72e34b96ac30bbc393" Jan 31 04:08:08 crc kubenswrapper[4827]: I0131 04:08:08.507287 4827 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 31 04:08:08 crc kubenswrapper[4827]: I0131 04:08:08.518148 4827 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Jan 31 04:08:08 crc kubenswrapper[4827]: I0131 04:08:08.526140 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openstack/nova-cell1-cell-mapping-8kkgm"] Jan 31 04:08:08 crc kubenswrapper[4827]: I0131 04:08:08.526921 4827 scope.go:117] "RemoveContainer" containerID="4e9c9584fea5c75ae3ae32c5c512051d545bcfcd435b2105a7f10266e80b05c6" Jan 31 04:08:08 crc kubenswrapper[4827]: W0131 04:08:08.531910 4827 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod29a3dce1_3b8e_4c59_b718_5a8e43971938.slice/crio-f1b31e0bd8c10894415236f569281fbc499f020759e640def1c3f8e1c28dc68c WatchSource:0}: Error finding container f1b31e0bd8c10894415236f569281fbc499f020759e640def1c3f8e1c28dc68c: Status 404 returned error can't find the container with id f1b31e0bd8c10894415236f569281fbc499f020759e640def1c3f8e1c28dc68c Jan 31 04:08:08 crc kubenswrapper[4827]: I0131 04:08:08.536382 4827 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Jan 31 04:08:08 crc kubenswrapper[4827]: E0131 04:08:08.536779 4827 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="96f29ca8-8584-4f9f-9a5a-f026a8772c07" containerName="proxy-httpd" Jan 31 04:08:08 crc kubenswrapper[4827]: I0131 04:08:08.536795 4827 state_mem.go:107] "Deleted CPUSet assignment" podUID="96f29ca8-8584-4f9f-9a5a-f026a8772c07" containerName="proxy-httpd" Jan 31 04:08:08 crc kubenswrapper[4827]: E0131 04:08:08.536816 4827 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="96f29ca8-8584-4f9f-9a5a-f026a8772c07" containerName="sg-core" Jan 31 04:08:08 crc kubenswrapper[4827]: I0131 04:08:08.536824 4827 state_mem.go:107] "Deleted CPUSet assignment" podUID="96f29ca8-8584-4f9f-9a5a-f026a8772c07" containerName="sg-core" Jan 31 04:08:08 crc kubenswrapper[4827]: E0131 04:08:08.536851 4827 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="96f29ca8-8584-4f9f-9a5a-f026a8772c07" containerName="ceilometer-notification-agent" Jan 31 04:08:08 crc kubenswrapper[4827]: I0131 04:08:08.536859 4827 state_mem.go:107] "Deleted 
CPUSet assignment" podUID="96f29ca8-8584-4f9f-9a5a-f026a8772c07" containerName="ceilometer-notification-agent" Jan 31 04:08:08 crc kubenswrapper[4827]: E0131 04:08:08.536896 4827 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="96f29ca8-8584-4f9f-9a5a-f026a8772c07" containerName="ceilometer-central-agent" Jan 31 04:08:08 crc kubenswrapper[4827]: I0131 04:08:08.536905 4827 state_mem.go:107] "Deleted CPUSet assignment" podUID="96f29ca8-8584-4f9f-9a5a-f026a8772c07" containerName="ceilometer-central-agent" Jan 31 04:08:08 crc kubenswrapper[4827]: I0131 04:08:08.537113 4827 memory_manager.go:354] "RemoveStaleState removing state" podUID="96f29ca8-8584-4f9f-9a5a-f026a8772c07" containerName="sg-core" Jan 31 04:08:08 crc kubenswrapper[4827]: I0131 04:08:08.537131 4827 memory_manager.go:354] "RemoveStaleState removing state" podUID="96f29ca8-8584-4f9f-9a5a-f026a8772c07" containerName="ceilometer-notification-agent" Jan 31 04:08:08 crc kubenswrapper[4827]: I0131 04:08:08.537155 4827 memory_manager.go:354] "RemoveStaleState removing state" podUID="96f29ca8-8584-4f9f-9a5a-f026a8772c07" containerName="ceilometer-central-agent" Jan 31 04:08:08 crc kubenswrapper[4827]: I0131 04:08:08.537164 4827 memory_manager.go:354] "RemoveStaleState removing state" podUID="96f29ca8-8584-4f9f-9a5a-f026a8772c07" containerName="proxy-httpd" Jan 31 04:08:08 crc kubenswrapper[4827]: I0131 04:08:08.539422 4827 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 31 04:08:08 crc kubenswrapper[4827]: I0131 04:08:08.542660 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Jan 31 04:08:08 crc kubenswrapper[4827]: I0131 04:08:08.543003 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc" Jan 31 04:08:08 crc kubenswrapper[4827]: I0131 04:08:08.543162 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Jan 31 04:08:08 crc kubenswrapper[4827]: I0131 04:08:08.572696 4827 scope.go:117] "RemoveContainer" containerID="4e47616a5c53b4363f494134ec1d8e0dd17e84ef1b44f91c26bb6c00a1319493" Jan 31 04:08:08 crc kubenswrapper[4827]: I0131 04:08:08.573758 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 31 04:08:08 crc kubenswrapper[4827]: I0131 04:08:08.612310 4827 scope.go:117] "RemoveContainer" containerID="bb0aff0b9841d4e2ea7b0b0d809302a366b0ac59ef67ef97a79097637c4ee7d8" Jan 31 04:08:08 crc kubenswrapper[4827]: E0131 04:08:08.613336 4827 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bb0aff0b9841d4e2ea7b0b0d809302a366b0ac59ef67ef97a79097637c4ee7d8\": container with ID starting with bb0aff0b9841d4e2ea7b0b0d809302a366b0ac59ef67ef97a79097637c4ee7d8 not found: ID does not exist" containerID="bb0aff0b9841d4e2ea7b0b0d809302a366b0ac59ef67ef97a79097637c4ee7d8" Jan 31 04:08:08 crc kubenswrapper[4827]: I0131 04:08:08.613391 4827 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bb0aff0b9841d4e2ea7b0b0d809302a366b0ac59ef67ef97a79097637c4ee7d8"} err="failed to get container status \"bb0aff0b9841d4e2ea7b0b0d809302a366b0ac59ef67ef97a79097637c4ee7d8\": rpc error: code = NotFound desc = could not find container \"bb0aff0b9841d4e2ea7b0b0d809302a366b0ac59ef67ef97a79097637c4ee7d8\": 
container with ID starting with bb0aff0b9841d4e2ea7b0b0d809302a366b0ac59ef67ef97a79097637c4ee7d8 not found: ID does not exist" Jan 31 04:08:08 crc kubenswrapper[4827]: I0131 04:08:08.613422 4827 scope.go:117] "RemoveContainer" containerID="c3eefca60adf7fb4628988a3ff3f6e8b20a170306170ac72e34b96ac30bbc393" Jan 31 04:08:08 crc kubenswrapper[4827]: E0131 04:08:08.613856 4827 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c3eefca60adf7fb4628988a3ff3f6e8b20a170306170ac72e34b96ac30bbc393\": container with ID starting with c3eefca60adf7fb4628988a3ff3f6e8b20a170306170ac72e34b96ac30bbc393 not found: ID does not exist" containerID="c3eefca60adf7fb4628988a3ff3f6e8b20a170306170ac72e34b96ac30bbc393" Jan 31 04:08:08 crc kubenswrapper[4827]: I0131 04:08:08.613918 4827 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c3eefca60adf7fb4628988a3ff3f6e8b20a170306170ac72e34b96ac30bbc393"} err="failed to get container status \"c3eefca60adf7fb4628988a3ff3f6e8b20a170306170ac72e34b96ac30bbc393\": rpc error: code = NotFound desc = could not find container \"c3eefca60adf7fb4628988a3ff3f6e8b20a170306170ac72e34b96ac30bbc393\": container with ID starting with c3eefca60adf7fb4628988a3ff3f6e8b20a170306170ac72e34b96ac30bbc393 not found: ID does not exist" Jan 31 04:08:08 crc kubenswrapper[4827]: I0131 04:08:08.613951 4827 scope.go:117] "RemoveContainer" containerID="4e9c9584fea5c75ae3ae32c5c512051d545bcfcd435b2105a7f10266e80b05c6" Jan 31 04:08:08 crc kubenswrapper[4827]: E0131 04:08:08.614414 4827 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4e9c9584fea5c75ae3ae32c5c512051d545bcfcd435b2105a7f10266e80b05c6\": container with ID starting with 4e9c9584fea5c75ae3ae32c5c512051d545bcfcd435b2105a7f10266e80b05c6 not found: ID does not exist" 
containerID="4e9c9584fea5c75ae3ae32c5c512051d545bcfcd435b2105a7f10266e80b05c6" Jan 31 04:08:08 crc kubenswrapper[4827]: I0131 04:08:08.614436 4827 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4e9c9584fea5c75ae3ae32c5c512051d545bcfcd435b2105a7f10266e80b05c6"} err="failed to get container status \"4e9c9584fea5c75ae3ae32c5c512051d545bcfcd435b2105a7f10266e80b05c6\": rpc error: code = NotFound desc = could not find container \"4e9c9584fea5c75ae3ae32c5c512051d545bcfcd435b2105a7f10266e80b05c6\": container with ID starting with 4e9c9584fea5c75ae3ae32c5c512051d545bcfcd435b2105a7f10266e80b05c6 not found: ID does not exist" Jan 31 04:08:08 crc kubenswrapper[4827]: I0131 04:08:08.614453 4827 scope.go:117] "RemoveContainer" containerID="4e47616a5c53b4363f494134ec1d8e0dd17e84ef1b44f91c26bb6c00a1319493" Jan 31 04:08:08 crc kubenswrapper[4827]: E0131 04:08:08.614781 4827 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4e47616a5c53b4363f494134ec1d8e0dd17e84ef1b44f91c26bb6c00a1319493\": container with ID starting with 4e47616a5c53b4363f494134ec1d8e0dd17e84ef1b44f91c26bb6c00a1319493 not found: ID does not exist" containerID="4e47616a5c53b4363f494134ec1d8e0dd17e84ef1b44f91c26bb6c00a1319493" Jan 31 04:08:08 crc kubenswrapper[4827]: I0131 04:08:08.614803 4827 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4e47616a5c53b4363f494134ec1d8e0dd17e84ef1b44f91c26bb6c00a1319493"} err="failed to get container status \"4e47616a5c53b4363f494134ec1d8e0dd17e84ef1b44f91c26bb6c00a1319493\": rpc error: code = NotFound desc = could not find container \"4e47616a5c53b4363f494134ec1d8e0dd17e84ef1b44f91c26bb6c00a1319493\": container with ID starting with 4e47616a5c53b4363f494134ec1d8e0dd17e84ef1b44f91c26bb6c00a1319493 not found: ID does not exist" Jan 31 04:08:08 crc kubenswrapper[4827]: I0131 04:08:08.673517 4827 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a0cb03cb-1552-4f0e-b99c-2d5188aaf7a6-config-data\") pod \"ceilometer-0\" (UID: \"a0cb03cb-1552-4f0e-b99c-2d5188aaf7a6\") " pod="openstack/ceilometer-0" Jan 31 04:08:08 crc kubenswrapper[4827]: I0131 04:08:08.673588 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a0cb03cb-1552-4f0e-b99c-2d5188aaf7a6-run-httpd\") pod \"ceilometer-0\" (UID: \"a0cb03cb-1552-4f0e-b99c-2d5188aaf7a6\") " pod="openstack/ceilometer-0" Jan 31 04:08:08 crc kubenswrapper[4827]: I0131 04:08:08.673639 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a0cb03cb-1552-4f0e-b99c-2d5188aaf7a6-scripts\") pod \"ceilometer-0\" (UID: \"a0cb03cb-1552-4f0e-b99c-2d5188aaf7a6\") " pod="openstack/ceilometer-0" Jan 31 04:08:08 crc kubenswrapper[4827]: I0131 04:08:08.673681 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/a0cb03cb-1552-4f0e-b99c-2d5188aaf7a6-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"a0cb03cb-1552-4f0e-b99c-2d5188aaf7a6\") " pod="openstack/ceilometer-0" Jan 31 04:08:08 crc kubenswrapper[4827]: I0131 04:08:08.673759 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/a0cb03cb-1552-4f0e-b99c-2d5188aaf7a6-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"a0cb03cb-1552-4f0e-b99c-2d5188aaf7a6\") " pod="openstack/ceilometer-0" Jan 31 04:08:08 crc kubenswrapper[4827]: I0131 04:08:08.673805 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: 
\"kubernetes.io/empty-dir/a0cb03cb-1552-4f0e-b99c-2d5188aaf7a6-log-httpd\") pod \"ceilometer-0\" (UID: \"a0cb03cb-1552-4f0e-b99c-2d5188aaf7a6\") " pod="openstack/ceilometer-0" Jan 31 04:08:08 crc kubenswrapper[4827]: I0131 04:08:08.673849 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sfdvn\" (UniqueName: \"kubernetes.io/projected/a0cb03cb-1552-4f0e-b99c-2d5188aaf7a6-kube-api-access-sfdvn\") pod \"ceilometer-0\" (UID: \"a0cb03cb-1552-4f0e-b99c-2d5188aaf7a6\") " pod="openstack/ceilometer-0" Jan 31 04:08:08 crc kubenswrapper[4827]: I0131 04:08:08.673914 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a0cb03cb-1552-4f0e-b99c-2d5188aaf7a6-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"a0cb03cb-1552-4f0e-b99c-2d5188aaf7a6\") " pod="openstack/ceilometer-0" Jan 31 04:08:08 crc kubenswrapper[4827]: I0131 04:08:08.775388 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a0cb03cb-1552-4f0e-b99c-2d5188aaf7a6-log-httpd\") pod \"ceilometer-0\" (UID: \"a0cb03cb-1552-4f0e-b99c-2d5188aaf7a6\") " pod="openstack/ceilometer-0" Jan 31 04:08:08 crc kubenswrapper[4827]: I0131 04:08:08.775435 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sfdvn\" (UniqueName: \"kubernetes.io/projected/a0cb03cb-1552-4f0e-b99c-2d5188aaf7a6-kube-api-access-sfdvn\") pod \"ceilometer-0\" (UID: \"a0cb03cb-1552-4f0e-b99c-2d5188aaf7a6\") " pod="openstack/ceilometer-0" Jan 31 04:08:08 crc kubenswrapper[4827]: I0131 04:08:08.775479 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a0cb03cb-1552-4f0e-b99c-2d5188aaf7a6-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"a0cb03cb-1552-4f0e-b99c-2d5188aaf7a6\") " 
pod="openstack/ceilometer-0" Jan 31 04:08:08 crc kubenswrapper[4827]: I0131 04:08:08.775791 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a0cb03cb-1552-4f0e-b99c-2d5188aaf7a6-config-data\") pod \"ceilometer-0\" (UID: \"a0cb03cb-1552-4f0e-b99c-2d5188aaf7a6\") " pod="openstack/ceilometer-0" Jan 31 04:08:08 crc kubenswrapper[4827]: I0131 04:08:08.775812 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a0cb03cb-1552-4f0e-b99c-2d5188aaf7a6-log-httpd\") pod \"ceilometer-0\" (UID: \"a0cb03cb-1552-4f0e-b99c-2d5188aaf7a6\") " pod="openstack/ceilometer-0" Jan 31 04:08:08 crc kubenswrapper[4827]: I0131 04:08:08.775831 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a0cb03cb-1552-4f0e-b99c-2d5188aaf7a6-run-httpd\") pod \"ceilometer-0\" (UID: \"a0cb03cb-1552-4f0e-b99c-2d5188aaf7a6\") " pod="openstack/ceilometer-0" Jan 31 04:08:08 crc kubenswrapper[4827]: I0131 04:08:08.775856 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a0cb03cb-1552-4f0e-b99c-2d5188aaf7a6-scripts\") pod \"ceilometer-0\" (UID: \"a0cb03cb-1552-4f0e-b99c-2d5188aaf7a6\") " pod="openstack/ceilometer-0" Jan 31 04:08:08 crc kubenswrapper[4827]: I0131 04:08:08.775901 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/a0cb03cb-1552-4f0e-b99c-2d5188aaf7a6-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"a0cb03cb-1552-4f0e-b99c-2d5188aaf7a6\") " pod="openstack/ceilometer-0" Jan 31 04:08:08 crc kubenswrapper[4827]: I0131 04:08:08.775959 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/a0cb03cb-1552-4f0e-b99c-2d5188aaf7a6-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"a0cb03cb-1552-4f0e-b99c-2d5188aaf7a6\") " pod="openstack/ceilometer-0" Jan 31 04:08:08 crc kubenswrapper[4827]: I0131 04:08:08.776864 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a0cb03cb-1552-4f0e-b99c-2d5188aaf7a6-run-httpd\") pod \"ceilometer-0\" (UID: \"a0cb03cb-1552-4f0e-b99c-2d5188aaf7a6\") " pod="openstack/ceilometer-0" Jan 31 04:08:08 crc kubenswrapper[4827]: I0131 04:08:08.780321 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/a0cb03cb-1552-4f0e-b99c-2d5188aaf7a6-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"a0cb03cb-1552-4f0e-b99c-2d5188aaf7a6\") " pod="openstack/ceilometer-0" Jan 31 04:08:08 crc kubenswrapper[4827]: I0131 04:08:08.780732 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a0cb03cb-1552-4f0e-b99c-2d5188aaf7a6-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"a0cb03cb-1552-4f0e-b99c-2d5188aaf7a6\") " pod="openstack/ceilometer-0" Jan 31 04:08:08 crc kubenswrapper[4827]: I0131 04:08:08.781241 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a0cb03cb-1552-4f0e-b99c-2d5188aaf7a6-config-data\") pod \"ceilometer-0\" (UID: \"a0cb03cb-1552-4f0e-b99c-2d5188aaf7a6\") " pod="openstack/ceilometer-0" Jan 31 04:08:08 crc kubenswrapper[4827]: I0131 04:08:08.781394 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a0cb03cb-1552-4f0e-b99c-2d5188aaf7a6-scripts\") pod \"ceilometer-0\" (UID: \"a0cb03cb-1552-4f0e-b99c-2d5188aaf7a6\") " pod="openstack/ceilometer-0" Jan 31 04:08:08 crc kubenswrapper[4827]: I0131 04:08:08.787016 4827 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/a0cb03cb-1552-4f0e-b99c-2d5188aaf7a6-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"a0cb03cb-1552-4f0e-b99c-2d5188aaf7a6\") " pod="openstack/ceilometer-0" Jan 31 04:08:08 crc kubenswrapper[4827]: I0131 04:08:08.791418 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sfdvn\" (UniqueName: \"kubernetes.io/projected/a0cb03cb-1552-4f0e-b99c-2d5188aaf7a6-kube-api-access-sfdvn\") pod \"ceilometer-0\" (UID: \"a0cb03cb-1552-4f0e-b99c-2d5188aaf7a6\") " pod="openstack/ceilometer-0" Jan 31 04:08:08 crc kubenswrapper[4827]: I0131 04:08:08.857503 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 31 04:08:09 crc kubenswrapper[4827]: W0131 04:08:09.128602 4827 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda0cb03cb_1552_4f0e_b99c_2d5188aaf7a6.slice/crio-d5da420e92bacb178364b7e23ee7fb1b8a059ee68061729a7330940b940ebd60 WatchSource:0}: Error finding container d5da420e92bacb178364b7e23ee7fb1b8a059ee68061729a7330940b940ebd60: Status 404 returned error can't find the container with id d5da420e92bacb178364b7e23ee7fb1b8a059ee68061729a7330940b940ebd60 Jan 31 04:08:09 crc kubenswrapper[4827]: I0131 04:08:09.132237 4827 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 31 04:08:09 crc kubenswrapper[4827]: I0131 04:08:09.132580 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 31 04:08:09 crc kubenswrapper[4827]: I0131 04:08:09.466099 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a0cb03cb-1552-4f0e-b99c-2d5188aaf7a6","Type":"ContainerStarted","Data":"d5da420e92bacb178364b7e23ee7fb1b8a059ee68061729a7330940b940ebd60"} Jan 31 04:08:09 crc kubenswrapper[4827]: I0131 
04:08:09.468750 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-8kkgm" event={"ID":"29a3dce1-3b8e-4c59-b718-5a8e43971938","Type":"ContainerStarted","Data":"d5805197006be6d2bafe7e04d72ed4bbfe45f5595fbd63017fb55eda1f0d817f"} Jan 31 04:08:09 crc kubenswrapper[4827]: I0131 04:08:09.468775 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-8kkgm" event={"ID":"29a3dce1-3b8e-4c59-b718-5a8e43971938","Type":"ContainerStarted","Data":"f1b31e0bd8c10894415236f569281fbc499f020759e640def1c3f8e1c28dc68c"} Jan 31 04:08:09 crc kubenswrapper[4827]: I0131 04:08:09.499501 4827 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-cell-mapping-8kkgm" podStartSLOduration=2.499481924 podStartE2EDuration="2.499481924s" podCreationTimestamp="2026-01-31 04:08:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 04:08:09.491909851 +0000 UTC m=+1282.178990310" watchObservedRunningTime="2026-01-31 04:08:09.499481924 +0000 UTC m=+1282.186562373" Jan 31 04:08:09 crc kubenswrapper[4827]: I0131 04:08:09.861043 4827 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-5b856c5697-h8gmp" Jan 31 04:08:09 crc kubenswrapper[4827]: I0131 04:08:09.945927 4827 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-566b5b7845-24dd9"] Jan 31 04:08:09 crc kubenswrapper[4827]: I0131 04:08:09.946165 4827 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-566b5b7845-24dd9" podUID="477674ec-e729-48e8-801c-497f49e9a6c8" containerName="dnsmasq-dns" containerID="cri-o://7dce2b51248bd789b174a40f27675b78519c1b3ad09660a2bbe51d2525bea123" gracePeriod=10 Jan 31 04:08:10 crc kubenswrapper[4827]: I0131 04:08:10.136365 4827 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="96f29ca8-8584-4f9f-9a5a-f026a8772c07" path="/var/lib/kubelet/pods/96f29ca8-8584-4f9f-9a5a-f026a8772c07/volumes" Jan 31 04:08:10 crc kubenswrapper[4827]: I0131 04:08:10.477850 4827 generic.go:334] "Generic (PLEG): container finished" podID="477674ec-e729-48e8-801c-497f49e9a6c8" containerID="7dce2b51248bd789b174a40f27675b78519c1b3ad09660a2bbe51d2525bea123" exitCode=0 Jan 31 04:08:10 crc kubenswrapper[4827]: I0131 04:08:10.477911 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-566b5b7845-24dd9" event={"ID":"477674ec-e729-48e8-801c-497f49e9a6c8","Type":"ContainerDied","Data":"7dce2b51248bd789b174a40f27675b78519c1b3ad09660a2bbe51d2525bea123"} Jan 31 04:08:10 crc kubenswrapper[4827]: I0131 04:08:10.478172 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-566b5b7845-24dd9" event={"ID":"477674ec-e729-48e8-801c-497f49e9a6c8","Type":"ContainerDied","Data":"c6f09ce769715977ec4a90d80f5cb6d1fed5afa316cb89315b5284b75ab781f1"} Jan 31 04:08:10 crc kubenswrapper[4827]: I0131 04:08:10.478187 4827 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c6f09ce769715977ec4a90d80f5cb6d1fed5afa316cb89315b5284b75ab781f1" Jan 31 04:08:10 crc kubenswrapper[4827]: I0131 04:08:10.478478 4827 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-566b5b7845-24dd9" Jan 31 04:08:10 crc kubenswrapper[4827]: I0131 04:08:10.480214 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a0cb03cb-1552-4f0e-b99c-2d5188aaf7a6","Type":"ContainerStarted","Data":"926577518b189fddd66c487d1e2fa4cdc3275b9114ebb9ac938c0d080136e956"} Jan 31 04:08:10 crc kubenswrapper[4827]: I0131 04:08:10.628965 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/477674ec-e729-48e8-801c-497f49e9a6c8-dns-svc\") pod \"477674ec-e729-48e8-801c-497f49e9a6c8\" (UID: \"477674ec-e729-48e8-801c-497f49e9a6c8\") " Jan 31 04:08:10 crc kubenswrapper[4827]: I0131 04:08:10.629153 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/477674ec-e729-48e8-801c-497f49e9a6c8-config\") pod \"477674ec-e729-48e8-801c-497f49e9a6c8\" (UID: \"477674ec-e729-48e8-801c-497f49e9a6c8\") " Jan 31 04:08:10 crc kubenswrapper[4827]: I0131 04:08:10.629241 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/477674ec-e729-48e8-801c-497f49e9a6c8-ovsdbserver-sb\") pod \"477674ec-e729-48e8-801c-497f49e9a6c8\" (UID: \"477674ec-e729-48e8-801c-497f49e9a6c8\") " Jan 31 04:08:10 crc kubenswrapper[4827]: I0131 04:08:10.629289 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qhhhm\" (UniqueName: \"kubernetes.io/projected/477674ec-e729-48e8-801c-497f49e9a6c8-kube-api-access-qhhhm\") pod \"477674ec-e729-48e8-801c-497f49e9a6c8\" (UID: \"477674ec-e729-48e8-801c-497f49e9a6c8\") " Jan 31 04:08:10 crc kubenswrapper[4827]: I0131 04:08:10.629338 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: 
\"kubernetes.io/configmap/477674ec-e729-48e8-801c-497f49e9a6c8-ovsdbserver-nb\") pod \"477674ec-e729-48e8-801c-497f49e9a6c8\" (UID: \"477674ec-e729-48e8-801c-497f49e9a6c8\") " Jan 31 04:08:10 crc kubenswrapper[4827]: I0131 04:08:10.639104 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/477674ec-e729-48e8-801c-497f49e9a6c8-kube-api-access-qhhhm" (OuterVolumeSpecName: "kube-api-access-qhhhm") pod "477674ec-e729-48e8-801c-497f49e9a6c8" (UID: "477674ec-e729-48e8-801c-497f49e9a6c8"). InnerVolumeSpecName "kube-api-access-qhhhm". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 04:08:10 crc kubenswrapper[4827]: I0131 04:08:10.674966 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/477674ec-e729-48e8-801c-497f49e9a6c8-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "477674ec-e729-48e8-801c-497f49e9a6c8" (UID: "477674ec-e729-48e8-801c-497f49e9a6c8"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 04:08:10 crc kubenswrapper[4827]: I0131 04:08:10.677682 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/477674ec-e729-48e8-801c-497f49e9a6c8-config" (OuterVolumeSpecName: "config") pod "477674ec-e729-48e8-801c-497f49e9a6c8" (UID: "477674ec-e729-48e8-801c-497f49e9a6c8"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 04:08:10 crc kubenswrapper[4827]: I0131 04:08:10.686976 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/477674ec-e729-48e8-801c-497f49e9a6c8-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "477674ec-e729-48e8-801c-497f49e9a6c8" (UID: "477674ec-e729-48e8-801c-497f49e9a6c8"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 04:08:10 crc kubenswrapper[4827]: I0131 04:08:10.690090 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/477674ec-e729-48e8-801c-497f49e9a6c8-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "477674ec-e729-48e8-801c-497f49e9a6c8" (UID: "477674ec-e729-48e8-801c-497f49e9a6c8"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 04:08:10 crc kubenswrapper[4827]: I0131 04:08:10.733148 4827 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qhhhm\" (UniqueName: \"kubernetes.io/projected/477674ec-e729-48e8-801c-497f49e9a6c8-kube-api-access-qhhhm\") on node \"crc\" DevicePath \"\"" Jan 31 04:08:10 crc kubenswrapper[4827]: I0131 04:08:10.733195 4827 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/477674ec-e729-48e8-801c-497f49e9a6c8-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 31 04:08:10 crc kubenswrapper[4827]: I0131 04:08:10.733208 4827 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/477674ec-e729-48e8-801c-497f49e9a6c8-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 31 04:08:10 crc kubenswrapper[4827]: I0131 04:08:10.733220 4827 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/477674ec-e729-48e8-801c-497f49e9a6c8-config\") on node \"crc\" DevicePath \"\"" Jan 31 04:08:10 crc kubenswrapper[4827]: I0131 04:08:10.733231 4827 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/477674ec-e729-48e8-801c-497f49e9a6c8-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 31 04:08:11 crc kubenswrapper[4827]: I0131 04:08:11.490589 4827 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-566b5b7845-24dd9" Jan 31 04:08:11 crc kubenswrapper[4827]: I0131 04:08:11.490582 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a0cb03cb-1552-4f0e-b99c-2d5188aaf7a6","Type":"ContainerStarted","Data":"3b97f953a7dea7bb930a2a1ee9c156ce33276deda51c9ac7dfd6f9693f42f118"} Jan 31 04:08:11 crc kubenswrapper[4827]: I0131 04:08:11.491099 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a0cb03cb-1552-4f0e-b99c-2d5188aaf7a6","Type":"ContainerStarted","Data":"ddb5b391690a456c290c273d92aa052f0fb1d9214d761b0c7cc33464d32dac58"} Jan 31 04:08:11 crc kubenswrapper[4827]: I0131 04:08:11.521559 4827 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-566b5b7845-24dd9"] Jan 31 04:08:11 crc kubenswrapper[4827]: I0131 04:08:11.527818 4827 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-566b5b7845-24dd9"] Jan 31 04:08:12 crc kubenswrapper[4827]: I0131 04:08:12.119963 4827 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="477674ec-e729-48e8-801c-497f49e9a6c8" path="/var/lib/kubelet/pods/477674ec-e729-48e8-801c-497f49e9a6c8/volumes" Jan 31 04:08:13 crc kubenswrapper[4827]: I0131 04:08:13.508176 4827 generic.go:334] "Generic (PLEG): container finished" podID="29a3dce1-3b8e-4c59-b718-5a8e43971938" containerID="d5805197006be6d2bafe7e04d72ed4bbfe45f5595fbd63017fb55eda1f0d817f" exitCode=0 Jan 31 04:08:13 crc kubenswrapper[4827]: I0131 04:08:13.508257 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-8kkgm" event={"ID":"29a3dce1-3b8e-4c59-b718-5a8e43971938","Type":"ContainerDied","Data":"d5805197006be6d2bafe7e04d72ed4bbfe45f5595fbd63017fb55eda1f0d817f"} Jan 31 04:08:14 crc kubenswrapper[4827]: I0131 04:08:14.519140 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"a0cb03cb-1552-4f0e-b99c-2d5188aaf7a6","Type":"ContainerStarted","Data":"5c74f3103ea2f6837aa119c413ff6228a7c4193960aa96ff7c2893aea1baf863"} Jan 31 04:08:14 crc kubenswrapper[4827]: I0131 04:08:14.519198 4827 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Jan 31 04:08:14 crc kubenswrapper[4827]: I0131 04:08:14.550013 4827 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.01778577 podStartE2EDuration="6.549995925s" podCreationTimestamp="2026-01-31 04:08:08 +0000 UTC" firstStartedPulling="2026-01-31 04:08:09.132016285 +0000 UTC m=+1281.819096724" lastFinishedPulling="2026-01-31 04:08:13.6642264 +0000 UTC m=+1286.351306879" observedRunningTime="2026-01-31 04:08:14.543155305 +0000 UTC m=+1287.230235754" watchObservedRunningTime="2026-01-31 04:08:14.549995925 +0000 UTC m=+1287.237076374" Jan 31 04:08:14 crc kubenswrapper[4827]: I0131 04:08:14.856704 4827 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-cell-mapping-8kkgm" Jan 31 04:08:15 crc kubenswrapper[4827]: I0131 04:08:15.002512 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/29a3dce1-3b8e-4c59-b718-5a8e43971938-config-data\") pod \"29a3dce1-3b8e-4c59-b718-5a8e43971938\" (UID: \"29a3dce1-3b8e-4c59-b718-5a8e43971938\") " Jan 31 04:08:15 crc kubenswrapper[4827]: I0131 04:08:15.002596 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8d6qs\" (UniqueName: \"kubernetes.io/projected/29a3dce1-3b8e-4c59-b718-5a8e43971938-kube-api-access-8d6qs\") pod \"29a3dce1-3b8e-4c59-b718-5a8e43971938\" (UID: \"29a3dce1-3b8e-4c59-b718-5a8e43971938\") " Jan 31 04:08:15 crc kubenswrapper[4827]: I0131 04:08:15.002636 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/29a3dce1-3b8e-4c59-b718-5a8e43971938-scripts\") pod \"29a3dce1-3b8e-4c59-b718-5a8e43971938\" (UID: \"29a3dce1-3b8e-4c59-b718-5a8e43971938\") " Jan 31 04:08:15 crc kubenswrapper[4827]: I0131 04:08:15.002681 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/29a3dce1-3b8e-4c59-b718-5a8e43971938-combined-ca-bundle\") pod \"29a3dce1-3b8e-4c59-b718-5a8e43971938\" (UID: \"29a3dce1-3b8e-4c59-b718-5a8e43971938\") " Jan 31 04:08:15 crc kubenswrapper[4827]: I0131 04:08:15.023095 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/29a3dce1-3b8e-4c59-b718-5a8e43971938-scripts" (OuterVolumeSpecName: "scripts") pod "29a3dce1-3b8e-4c59-b718-5a8e43971938" (UID: "29a3dce1-3b8e-4c59-b718-5a8e43971938"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 04:08:15 crc kubenswrapper[4827]: I0131 04:08:15.023808 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/29a3dce1-3b8e-4c59-b718-5a8e43971938-kube-api-access-8d6qs" (OuterVolumeSpecName: "kube-api-access-8d6qs") pod "29a3dce1-3b8e-4c59-b718-5a8e43971938" (UID: "29a3dce1-3b8e-4c59-b718-5a8e43971938"). InnerVolumeSpecName "kube-api-access-8d6qs". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 04:08:15 crc kubenswrapper[4827]: I0131 04:08:15.033257 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/29a3dce1-3b8e-4c59-b718-5a8e43971938-config-data" (OuterVolumeSpecName: "config-data") pod "29a3dce1-3b8e-4c59-b718-5a8e43971938" (UID: "29a3dce1-3b8e-4c59-b718-5a8e43971938"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 04:08:15 crc kubenswrapper[4827]: I0131 04:08:15.046092 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/29a3dce1-3b8e-4c59-b718-5a8e43971938-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "29a3dce1-3b8e-4c59-b718-5a8e43971938" (UID: "29a3dce1-3b8e-4c59-b718-5a8e43971938"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 04:08:15 crc kubenswrapper[4827]: I0131 04:08:15.105430 4827 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/29a3dce1-3b8e-4c59-b718-5a8e43971938-config-data\") on node \"crc\" DevicePath \"\"" Jan 31 04:08:15 crc kubenswrapper[4827]: I0131 04:08:15.105469 4827 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8d6qs\" (UniqueName: \"kubernetes.io/projected/29a3dce1-3b8e-4c59-b718-5a8e43971938-kube-api-access-8d6qs\") on node \"crc\" DevicePath \"\"" Jan 31 04:08:15 crc kubenswrapper[4827]: I0131 04:08:15.105484 4827 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/29a3dce1-3b8e-4c59-b718-5a8e43971938-scripts\") on node \"crc\" DevicePath \"\"" Jan 31 04:08:15 crc kubenswrapper[4827]: I0131 04:08:15.105498 4827 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/29a3dce1-3b8e-4c59-b718-5a8e43971938-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 31 04:08:15 crc kubenswrapper[4827]: I0131 04:08:15.533598 4827 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-cell-mapping-8kkgm" Jan 31 04:08:15 crc kubenswrapper[4827]: I0131 04:08:15.535452 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-8kkgm" event={"ID":"29a3dce1-3b8e-4c59-b718-5a8e43971938","Type":"ContainerDied","Data":"f1b31e0bd8c10894415236f569281fbc499f020759e640def1c3f8e1c28dc68c"} Jan 31 04:08:15 crc kubenswrapper[4827]: I0131 04:08:15.535492 4827 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f1b31e0bd8c10894415236f569281fbc499f020759e640def1c3f8e1c28dc68c" Jan 31 04:08:15 crc kubenswrapper[4827]: I0131 04:08:15.742394 4827 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Jan 31 04:08:15 crc kubenswrapper[4827]: I0131 04:08:15.742659 4827 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-scheduler-0" podUID="4e012783-53bb-44bb-a946-c664a0db3587" containerName="nova-scheduler-scheduler" containerID="cri-o://bcca0921bbffa8547e9dc1ecbf7761aab6925b0c44e5fb8f8b45cfb5db033d4c" gracePeriod=30 Jan 31 04:08:15 crc kubenswrapper[4827]: I0131 04:08:15.760297 4827 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Jan 31 04:08:15 crc kubenswrapper[4827]: I0131 04:08:15.760607 4827 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="74d2a678-a110-4936-ad93-99b487a52f5b" containerName="nova-api-log" containerID="cri-o://e20aad8a466b9745f3865336a3d5c0a21c4f25ab1aced87f5f5f16afcdb3b111" gracePeriod=30 Jan 31 04:08:15 crc kubenswrapper[4827]: I0131 04:08:15.761079 4827 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="74d2a678-a110-4936-ad93-99b487a52f5b" containerName="nova-api-api" containerID="cri-o://9ebf03d2b3575e0493650b32547dc2fffcc2c28cbe91e08bb6514f605e5d1c13" gracePeriod=30 Jan 31 04:08:15 crc kubenswrapper[4827]: I0131 
04:08:15.778314 4827 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Jan 31 04:08:15 crc kubenswrapper[4827]: I0131 04:08:15.778554 4827 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="71e93017-ead3-407d-a5fe-e5459c46e6fb" containerName="nova-metadata-log" containerID="cri-o://a34e071a98c5c5f764478f28a28a1ac75c96015ff85c2a0a75b859da77df50fb" gracePeriod=30 Jan 31 04:08:15 crc kubenswrapper[4827]: I0131 04:08:15.778612 4827 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="71e93017-ead3-407d-a5fe-e5459c46e6fb" containerName="nova-metadata-metadata" containerID="cri-o://3f090c24e952123933a71c45b5b0bb33654e9b9faff2186dce090d2bb13c98f9" gracePeriod=30 Jan 31 04:08:15 crc kubenswrapper[4827]: E0131 04:08:15.805126 4827 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod29a3dce1_3b8e_4c59_b718_5a8e43971938.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod74d2a678_a110_4936_ad93_99b487a52f5b.slice/crio-conmon-e20aad8a466b9745f3865336a3d5c0a21c4f25ab1aced87f5f5f16afcdb3b111.scope\": RecentStats: unable to find data in memory cache]" Jan 31 04:08:16 crc kubenswrapper[4827]: I0131 04:08:16.541451 4827 generic.go:334] "Generic (PLEG): container finished" podID="71e93017-ead3-407d-a5fe-e5459c46e6fb" containerID="a34e071a98c5c5f764478f28a28a1ac75c96015ff85c2a0a75b859da77df50fb" exitCode=143 Jan 31 04:08:16 crc kubenswrapper[4827]: I0131 04:08:16.541653 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"71e93017-ead3-407d-a5fe-e5459c46e6fb","Type":"ContainerDied","Data":"a34e071a98c5c5f764478f28a28a1ac75c96015ff85c2a0a75b859da77df50fb"} Jan 31 04:08:16 crc kubenswrapper[4827]: 
I0131 04:08:16.543004 4827 generic.go:334] "Generic (PLEG): container finished" podID="74d2a678-a110-4936-ad93-99b487a52f5b" containerID="9ebf03d2b3575e0493650b32547dc2fffcc2c28cbe91e08bb6514f605e5d1c13" exitCode=0 Jan 31 04:08:16 crc kubenswrapper[4827]: I0131 04:08:16.543019 4827 generic.go:334] "Generic (PLEG): container finished" podID="74d2a678-a110-4936-ad93-99b487a52f5b" containerID="e20aad8a466b9745f3865336a3d5c0a21c4f25ab1aced87f5f5f16afcdb3b111" exitCode=143 Jan 31 04:08:16 crc kubenswrapper[4827]: I0131 04:08:16.543298 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"74d2a678-a110-4936-ad93-99b487a52f5b","Type":"ContainerDied","Data":"9ebf03d2b3575e0493650b32547dc2fffcc2c28cbe91e08bb6514f605e5d1c13"} Jan 31 04:08:16 crc kubenswrapper[4827]: I0131 04:08:16.543327 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"74d2a678-a110-4936-ad93-99b487a52f5b","Type":"ContainerDied","Data":"e20aad8a466b9745f3865336a3d5c0a21c4f25ab1aced87f5f5f16afcdb3b111"} Jan 31 04:08:16 crc kubenswrapper[4827]: I0131 04:08:16.812966 4827 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Jan 31 04:08:16 crc kubenswrapper[4827]: I0131 04:08:16.936333 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/74d2a678-a110-4936-ad93-99b487a52f5b-config-data\") pod \"74d2a678-a110-4936-ad93-99b487a52f5b\" (UID: \"74d2a678-a110-4936-ad93-99b487a52f5b\") " Jan 31 04:08:16 crc kubenswrapper[4827]: I0131 04:08:16.936482 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9m6fr\" (UniqueName: \"kubernetes.io/projected/74d2a678-a110-4936-ad93-99b487a52f5b-kube-api-access-9m6fr\") pod \"74d2a678-a110-4936-ad93-99b487a52f5b\" (UID: \"74d2a678-a110-4936-ad93-99b487a52f5b\") " Jan 31 04:08:16 crc kubenswrapper[4827]: I0131 04:08:16.941281 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/74d2a678-a110-4936-ad93-99b487a52f5b-internal-tls-certs\") pod \"74d2a678-a110-4936-ad93-99b487a52f5b\" (UID: \"74d2a678-a110-4936-ad93-99b487a52f5b\") " Jan 31 04:08:16 crc kubenswrapper[4827]: I0131 04:08:16.941651 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/74d2a678-a110-4936-ad93-99b487a52f5b-combined-ca-bundle\") pod \"74d2a678-a110-4936-ad93-99b487a52f5b\" (UID: \"74d2a678-a110-4936-ad93-99b487a52f5b\") " Jan 31 04:08:16 crc kubenswrapper[4827]: I0131 04:08:16.941710 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/74d2a678-a110-4936-ad93-99b487a52f5b-logs\") pod \"74d2a678-a110-4936-ad93-99b487a52f5b\" (UID: \"74d2a678-a110-4936-ad93-99b487a52f5b\") " Jan 31 04:08:16 crc kubenswrapper[4827]: I0131 04:08:16.941764 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/74d2a678-a110-4936-ad93-99b487a52f5b-public-tls-certs\") pod \"74d2a678-a110-4936-ad93-99b487a52f5b\" (UID: \"74d2a678-a110-4936-ad93-99b487a52f5b\") " Jan 31 04:08:16 crc kubenswrapper[4827]: I0131 04:08:16.946947 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/74d2a678-a110-4936-ad93-99b487a52f5b-logs" (OuterVolumeSpecName: "logs") pod "74d2a678-a110-4936-ad93-99b487a52f5b" (UID: "74d2a678-a110-4936-ad93-99b487a52f5b"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 04:08:16 crc kubenswrapper[4827]: I0131 04:08:16.947476 4827 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/74d2a678-a110-4936-ad93-99b487a52f5b-logs\") on node \"crc\" DevicePath \"\"" Jan 31 04:08:16 crc kubenswrapper[4827]: I0131 04:08:16.958107 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/74d2a678-a110-4936-ad93-99b487a52f5b-kube-api-access-9m6fr" (OuterVolumeSpecName: "kube-api-access-9m6fr") pod "74d2a678-a110-4936-ad93-99b487a52f5b" (UID: "74d2a678-a110-4936-ad93-99b487a52f5b"). InnerVolumeSpecName "kube-api-access-9m6fr". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 04:08:16 crc kubenswrapper[4827]: I0131 04:08:16.985564 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/74d2a678-a110-4936-ad93-99b487a52f5b-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "74d2a678-a110-4936-ad93-99b487a52f5b" (UID: "74d2a678-a110-4936-ad93-99b487a52f5b"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 04:08:16 crc kubenswrapper[4827]: I0131 04:08:16.988310 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/74d2a678-a110-4936-ad93-99b487a52f5b-config-data" (OuterVolumeSpecName: "config-data") pod "74d2a678-a110-4936-ad93-99b487a52f5b" (UID: "74d2a678-a110-4936-ad93-99b487a52f5b"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 04:08:16 crc kubenswrapper[4827]: I0131 04:08:16.997030 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/74d2a678-a110-4936-ad93-99b487a52f5b-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "74d2a678-a110-4936-ad93-99b487a52f5b" (UID: "74d2a678-a110-4936-ad93-99b487a52f5b"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 04:08:17 crc kubenswrapper[4827]: I0131 04:08:17.003119 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/74d2a678-a110-4936-ad93-99b487a52f5b-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "74d2a678-a110-4936-ad93-99b487a52f5b" (UID: "74d2a678-a110-4936-ad93-99b487a52f5b"). InnerVolumeSpecName "public-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 04:08:17 crc kubenswrapper[4827]: I0131 04:08:17.049589 4827 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/74d2a678-a110-4936-ad93-99b487a52f5b-config-data\") on node \"crc\" DevicePath \"\"" Jan 31 04:08:17 crc kubenswrapper[4827]: I0131 04:08:17.049865 4827 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9m6fr\" (UniqueName: \"kubernetes.io/projected/74d2a678-a110-4936-ad93-99b487a52f5b-kube-api-access-9m6fr\") on node \"crc\" DevicePath \"\"" Jan 31 04:08:17 crc kubenswrapper[4827]: I0131 04:08:17.050025 4827 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/74d2a678-a110-4936-ad93-99b487a52f5b-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 31 04:08:17 crc kubenswrapper[4827]: I0131 04:08:17.050454 4827 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/74d2a678-a110-4936-ad93-99b487a52f5b-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 31 04:08:17 crc kubenswrapper[4827]: I0131 04:08:17.050589 4827 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/74d2a678-a110-4936-ad93-99b487a52f5b-public-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 31 04:08:17 crc kubenswrapper[4827]: I0131 04:08:17.371326 4827 patch_prober.go:28] interesting pod/machine-config-daemon-jxh94 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 31 04:08:17 crc kubenswrapper[4827]: I0131 04:08:17.371395 4827 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-jxh94" podUID="e63dbb73-e1a2-4796-83c5-2a88e55566b5" 
containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 31 04:08:17 crc kubenswrapper[4827]: E0131 04:08:17.517233 4827 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="bcca0921bbffa8547e9dc1ecbf7761aab6925b0c44e5fb8f8b45cfb5db033d4c" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Jan 31 04:08:17 crc kubenswrapper[4827]: E0131 04:08:17.518491 4827 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="bcca0921bbffa8547e9dc1ecbf7761aab6925b0c44e5fb8f8b45cfb5db033d4c" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Jan 31 04:08:17 crc kubenswrapper[4827]: E0131 04:08:17.519717 4827 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="bcca0921bbffa8547e9dc1ecbf7761aab6925b0c44e5fb8f8b45cfb5db033d4c" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Jan 31 04:08:17 crc kubenswrapper[4827]: E0131 04:08:17.519782 4827 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/nova-scheduler-0" podUID="4e012783-53bb-44bb-a946-c664a0db3587" containerName="nova-scheduler-scheduler" Jan 31 04:08:17 crc kubenswrapper[4827]: I0131 04:08:17.557620 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" 
event={"ID":"74d2a678-a110-4936-ad93-99b487a52f5b","Type":"ContainerDied","Data":"b6ddc95f8f0833ba74cc7c7b8bb88f92f756b1fc94be91e4ec1382de654838ef"} Jan 31 04:08:17 crc kubenswrapper[4827]: I0131 04:08:17.557676 4827 scope.go:117] "RemoveContainer" containerID="9ebf03d2b3575e0493650b32547dc2fffcc2c28cbe91e08bb6514f605e5d1c13" Jan 31 04:08:17 crc kubenswrapper[4827]: I0131 04:08:17.557702 4827 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 31 04:08:17 crc kubenswrapper[4827]: I0131 04:08:17.592373 4827 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Jan 31 04:08:17 crc kubenswrapper[4827]: I0131 04:08:17.596454 4827 scope.go:117] "RemoveContainer" containerID="e20aad8a466b9745f3865336a3d5c0a21c4f25ab1aced87f5f5f16afcdb3b111" Jan 31 04:08:17 crc kubenswrapper[4827]: I0131 04:08:17.602865 4827 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Jan 31 04:08:17 crc kubenswrapper[4827]: I0131 04:08:17.619426 4827 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Jan 31 04:08:17 crc kubenswrapper[4827]: E0131 04:08:17.619840 4827 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="74d2a678-a110-4936-ad93-99b487a52f5b" containerName="nova-api-api" Jan 31 04:08:17 crc kubenswrapper[4827]: I0131 04:08:17.619859 4827 state_mem.go:107] "Deleted CPUSet assignment" podUID="74d2a678-a110-4936-ad93-99b487a52f5b" containerName="nova-api-api" Jan 31 04:08:17 crc kubenswrapper[4827]: E0131 04:08:17.619894 4827 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="29a3dce1-3b8e-4c59-b718-5a8e43971938" containerName="nova-manage" Jan 31 04:08:17 crc kubenswrapper[4827]: I0131 04:08:17.619905 4827 state_mem.go:107] "Deleted CPUSet assignment" podUID="29a3dce1-3b8e-4c59-b718-5a8e43971938" containerName="nova-manage" Jan 31 04:08:17 crc kubenswrapper[4827]: E0131 04:08:17.619921 4827 cpu_manager.go:410] 
"RemoveStaleState: removing container" podUID="477674ec-e729-48e8-801c-497f49e9a6c8" containerName="dnsmasq-dns" Jan 31 04:08:17 crc kubenswrapper[4827]: I0131 04:08:17.619929 4827 state_mem.go:107] "Deleted CPUSet assignment" podUID="477674ec-e729-48e8-801c-497f49e9a6c8" containerName="dnsmasq-dns" Jan 31 04:08:17 crc kubenswrapper[4827]: E0131 04:08:17.619945 4827 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="477674ec-e729-48e8-801c-497f49e9a6c8" containerName="init" Jan 31 04:08:17 crc kubenswrapper[4827]: I0131 04:08:17.619953 4827 state_mem.go:107] "Deleted CPUSet assignment" podUID="477674ec-e729-48e8-801c-497f49e9a6c8" containerName="init" Jan 31 04:08:17 crc kubenswrapper[4827]: E0131 04:08:17.619981 4827 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="74d2a678-a110-4936-ad93-99b487a52f5b" containerName="nova-api-log" Jan 31 04:08:17 crc kubenswrapper[4827]: I0131 04:08:17.619989 4827 state_mem.go:107] "Deleted CPUSet assignment" podUID="74d2a678-a110-4936-ad93-99b487a52f5b" containerName="nova-api-log" Jan 31 04:08:17 crc kubenswrapper[4827]: I0131 04:08:17.620180 4827 memory_manager.go:354] "RemoveStaleState removing state" podUID="477674ec-e729-48e8-801c-497f49e9a6c8" containerName="dnsmasq-dns" Jan 31 04:08:17 crc kubenswrapper[4827]: I0131 04:08:17.620202 4827 memory_manager.go:354] "RemoveStaleState removing state" podUID="74d2a678-a110-4936-ad93-99b487a52f5b" containerName="nova-api-log" Jan 31 04:08:17 crc kubenswrapper[4827]: I0131 04:08:17.620216 4827 memory_manager.go:354] "RemoveStaleState removing state" podUID="29a3dce1-3b8e-4c59-b718-5a8e43971938" containerName="nova-manage" Jan 31 04:08:17 crc kubenswrapper[4827]: I0131 04:08:17.620231 4827 memory_manager.go:354] "RemoveStaleState removing state" podUID="74d2a678-a110-4936-ad93-99b487a52f5b" containerName="nova-api-api" Jan 31 04:08:17 crc kubenswrapper[4827]: I0131 04:08:17.621325 4827 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Jan 31 04:08:17 crc kubenswrapper[4827]: I0131 04:08:17.623568 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Jan 31 04:08:17 crc kubenswrapper[4827]: I0131 04:08:17.623863 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-internal-svc" Jan 31 04:08:17 crc kubenswrapper[4827]: I0131 04:08:17.624037 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-public-svc" Jan 31 04:08:17 crc kubenswrapper[4827]: I0131 04:08:17.655309 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 31 04:08:17 crc kubenswrapper[4827]: I0131 04:08:17.768134 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/461e6e42-8412-4ab9-aa9a-02b27965961d-logs\") pod \"nova-api-0\" (UID: \"461e6e42-8412-4ab9-aa9a-02b27965961d\") " pod="openstack/nova-api-0" Jan 31 04:08:17 crc kubenswrapper[4827]: I0131 04:08:17.768208 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/461e6e42-8412-4ab9-aa9a-02b27965961d-internal-tls-certs\") pod \"nova-api-0\" (UID: \"461e6e42-8412-4ab9-aa9a-02b27965961d\") " pod="openstack/nova-api-0" Jan 31 04:08:17 crc kubenswrapper[4827]: I0131 04:08:17.768268 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/461e6e42-8412-4ab9-aa9a-02b27965961d-public-tls-certs\") pod \"nova-api-0\" (UID: \"461e6e42-8412-4ab9-aa9a-02b27965961d\") " pod="openstack/nova-api-0" Jan 31 04:08:17 crc kubenswrapper[4827]: I0131 04:08:17.768910 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r9xtd\" (UniqueName: 
\"kubernetes.io/projected/461e6e42-8412-4ab9-aa9a-02b27965961d-kube-api-access-r9xtd\") pod \"nova-api-0\" (UID: \"461e6e42-8412-4ab9-aa9a-02b27965961d\") " pod="openstack/nova-api-0" Jan 31 04:08:17 crc kubenswrapper[4827]: I0131 04:08:17.768956 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/461e6e42-8412-4ab9-aa9a-02b27965961d-config-data\") pod \"nova-api-0\" (UID: \"461e6e42-8412-4ab9-aa9a-02b27965961d\") " pod="openstack/nova-api-0" Jan 31 04:08:17 crc kubenswrapper[4827]: I0131 04:08:17.768981 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/461e6e42-8412-4ab9-aa9a-02b27965961d-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"461e6e42-8412-4ab9-aa9a-02b27965961d\") " pod="openstack/nova-api-0" Jan 31 04:08:17 crc kubenswrapper[4827]: I0131 04:08:17.870340 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/461e6e42-8412-4ab9-aa9a-02b27965961d-logs\") pod \"nova-api-0\" (UID: \"461e6e42-8412-4ab9-aa9a-02b27965961d\") " pod="openstack/nova-api-0" Jan 31 04:08:17 crc kubenswrapper[4827]: I0131 04:08:17.870406 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/461e6e42-8412-4ab9-aa9a-02b27965961d-internal-tls-certs\") pod \"nova-api-0\" (UID: \"461e6e42-8412-4ab9-aa9a-02b27965961d\") " pod="openstack/nova-api-0" Jan 31 04:08:17 crc kubenswrapper[4827]: I0131 04:08:17.870461 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/461e6e42-8412-4ab9-aa9a-02b27965961d-public-tls-certs\") pod \"nova-api-0\" (UID: \"461e6e42-8412-4ab9-aa9a-02b27965961d\") " pod="openstack/nova-api-0" Jan 31 04:08:17 crc 
kubenswrapper[4827]: I0131 04:08:17.870489 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r9xtd\" (UniqueName: \"kubernetes.io/projected/461e6e42-8412-4ab9-aa9a-02b27965961d-kube-api-access-r9xtd\") pod \"nova-api-0\" (UID: \"461e6e42-8412-4ab9-aa9a-02b27965961d\") " pod="openstack/nova-api-0" Jan 31 04:08:17 crc kubenswrapper[4827]: I0131 04:08:17.870527 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/461e6e42-8412-4ab9-aa9a-02b27965961d-config-data\") pod \"nova-api-0\" (UID: \"461e6e42-8412-4ab9-aa9a-02b27965961d\") " pod="openstack/nova-api-0" Jan 31 04:08:17 crc kubenswrapper[4827]: I0131 04:08:17.870549 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/461e6e42-8412-4ab9-aa9a-02b27965961d-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"461e6e42-8412-4ab9-aa9a-02b27965961d\") " pod="openstack/nova-api-0" Jan 31 04:08:17 crc kubenswrapper[4827]: I0131 04:08:17.870872 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/461e6e42-8412-4ab9-aa9a-02b27965961d-logs\") pod \"nova-api-0\" (UID: \"461e6e42-8412-4ab9-aa9a-02b27965961d\") " pod="openstack/nova-api-0" Jan 31 04:08:17 crc kubenswrapper[4827]: I0131 04:08:17.875537 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/461e6e42-8412-4ab9-aa9a-02b27965961d-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"461e6e42-8412-4ab9-aa9a-02b27965961d\") " pod="openstack/nova-api-0" Jan 31 04:08:17 crc kubenswrapper[4827]: I0131 04:08:17.875563 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/461e6e42-8412-4ab9-aa9a-02b27965961d-internal-tls-certs\") pod \"nova-api-0\" 
(UID: \"461e6e42-8412-4ab9-aa9a-02b27965961d\") " pod="openstack/nova-api-0" Jan 31 04:08:17 crc kubenswrapper[4827]: I0131 04:08:17.876099 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/461e6e42-8412-4ab9-aa9a-02b27965961d-public-tls-certs\") pod \"nova-api-0\" (UID: \"461e6e42-8412-4ab9-aa9a-02b27965961d\") " pod="openstack/nova-api-0" Jan 31 04:08:17 crc kubenswrapper[4827]: I0131 04:08:17.877650 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/461e6e42-8412-4ab9-aa9a-02b27965961d-config-data\") pod \"nova-api-0\" (UID: \"461e6e42-8412-4ab9-aa9a-02b27965961d\") " pod="openstack/nova-api-0" Jan 31 04:08:17 crc kubenswrapper[4827]: I0131 04:08:17.894039 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r9xtd\" (UniqueName: \"kubernetes.io/projected/461e6e42-8412-4ab9-aa9a-02b27965961d-kube-api-access-r9xtd\") pod \"nova-api-0\" (UID: \"461e6e42-8412-4ab9-aa9a-02b27965961d\") " pod="openstack/nova-api-0" Jan 31 04:08:17 crc kubenswrapper[4827]: I0131 04:08:17.945902 4827 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Jan 31 04:08:18 crc kubenswrapper[4827]: I0131 04:08:18.130016 4827 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="74d2a678-a110-4936-ad93-99b487a52f5b" path="/var/lib/kubelet/pods/74d2a678-a110-4936-ad93-99b487a52f5b/volumes" Jan 31 04:08:18 crc kubenswrapper[4827]: I0131 04:08:18.374251 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 31 04:08:18 crc kubenswrapper[4827]: W0131 04:08:18.381610 4827 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod461e6e42_8412_4ab9_aa9a_02b27965961d.slice/crio-fbd2c05ecedb67c5f2c13838ccdfe80041dee352b043751a491a50bd8cfe0faf WatchSource:0}: Error finding container fbd2c05ecedb67c5f2c13838ccdfe80041dee352b043751a491a50bd8cfe0faf: Status 404 returned error can't find the container with id fbd2c05ecedb67c5f2c13838ccdfe80041dee352b043751a491a50bd8cfe0faf Jan 31 04:08:18 crc kubenswrapper[4827]: I0131 04:08:18.569137 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"461e6e42-8412-4ab9-aa9a-02b27965961d","Type":"ContainerStarted","Data":"b4561fda4242fd88098063c806da85377331196eb3ebec22cbd0d992b0c48482"} Jan 31 04:08:18 crc kubenswrapper[4827]: I0131 04:08:18.569183 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"461e6e42-8412-4ab9-aa9a-02b27965961d","Type":"ContainerStarted","Data":"fbd2c05ecedb67c5f2c13838ccdfe80041dee352b043751a491a50bd8cfe0faf"} Jan 31 04:08:19 crc kubenswrapper[4827]: I0131 04:08:19.348897 4827 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Jan 31 04:08:19 crc kubenswrapper[4827]: I0131 04:08:19.497425 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/71e93017-ead3-407d-a5fe-e5459c46e6fb-nova-metadata-tls-certs\") pod \"71e93017-ead3-407d-a5fe-e5459c46e6fb\" (UID: \"71e93017-ead3-407d-a5fe-e5459c46e6fb\") " Jan 31 04:08:19 crc kubenswrapper[4827]: I0131 04:08:19.497824 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/71e93017-ead3-407d-a5fe-e5459c46e6fb-config-data\") pod \"71e93017-ead3-407d-a5fe-e5459c46e6fb\" (UID: \"71e93017-ead3-407d-a5fe-e5459c46e6fb\") " Jan 31 04:08:19 crc kubenswrapper[4827]: I0131 04:08:19.497914 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/71e93017-ead3-407d-a5fe-e5459c46e6fb-logs\") pod \"71e93017-ead3-407d-a5fe-e5459c46e6fb\" (UID: \"71e93017-ead3-407d-a5fe-e5459c46e6fb\") " Jan 31 04:08:19 crc kubenswrapper[4827]: I0131 04:08:19.498017 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-n2vsw\" (UniqueName: \"kubernetes.io/projected/71e93017-ead3-407d-a5fe-e5459c46e6fb-kube-api-access-n2vsw\") pod \"71e93017-ead3-407d-a5fe-e5459c46e6fb\" (UID: \"71e93017-ead3-407d-a5fe-e5459c46e6fb\") " Jan 31 04:08:19 crc kubenswrapper[4827]: I0131 04:08:19.498073 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/71e93017-ead3-407d-a5fe-e5459c46e6fb-combined-ca-bundle\") pod \"71e93017-ead3-407d-a5fe-e5459c46e6fb\" (UID: \"71e93017-ead3-407d-a5fe-e5459c46e6fb\") " Jan 31 04:08:19 crc kubenswrapper[4827]: I0131 04:08:19.498967 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/empty-dir/71e93017-ead3-407d-a5fe-e5459c46e6fb-logs" (OuterVolumeSpecName: "logs") pod "71e93017-ead3-407d-a5fe-e5459c46e6fb" (UID: "71e93017-ead3-407d-a5fe-e5459c46e6fb"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 04:08:19 crc kubenswrapper[4827]: I0131 04:08:19.505003 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/71e93017-ead3-407d-a5fe-e5459c46e6fb-kube-api-access-n2vsw" (OuterVolumeSpecName: "kube-api-access-n2vsw") pod "71e93017-ead3-407d-a5fe-e5459c46e6fb" (UID: "71e93017-ead3-407d-a5fe-e5459c46e6fb"). InnerVolumeSpecName "kube-api-access-n2vsw". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 04:08:19 crc kubenswrapper[4827]: I0131 04:08:19.521782 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/71e93017-ead3-407d-a5fe-e5459c46e6fb-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "71e93017-ead3-407d-a5fe-e5459c46e6fb" (UID: "71e93017-ead3-407d-a5fe-e5459c46e6fb"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 04:08:19 crc kubenswrapper[4827]: I0131 04:08:19.523139 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/71e93017-ead3-407d-a5fe-e5459c46e6fb-config-data" (OuterVolumeSpecName: "config-data") pod "71e93017-ead3-407d-a5fe-e5459c46e6fb" (UID: "71e93017-ead3-407d-a5fe-e5459c46e6fb"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 04:08:19 crc kubenswrapper[4827]: I0131 04:08:19.543641 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/71e93017-ead3-407d-a5fe-e5459c46e6fb-nova-metadata-tls-certs" (OuterVolumeSpecName: "nova-metadata-tls-certs") pod "71e93017-ead3-407d-a5fe-e5459c46e6fb" (UID: "71e93017-ead3-407d-a5fe-e5459c46e6fb"). 
InnerVolumeSpecName "nova-metadata-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 04:08:19 crc kubenswrapper[4827]: I0131 04:08:19.581203 4827 generic.go:334] "Generic (PLEG): container finished" podID="71e93017-ead3-407d-a5fe-e5459c46e6fb" containerID="3f090c24e952123933a71c45b5b0bb33654e9b9faff2186dce090d2bb13c98f9" exitCode=0 Jan 31 04:08:19 crc kubenswrapper[4827]: I0131 04:08:19.581288 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"71e93017-ead3-407d-a5fe-e5459c46e6fb","Type":"ContainerDied","Data":"3f090c24e952123933a71c45b5b0bb33654e9b9faff2186dce090d2bb13c98f9"} Jan 31 04:08:19 crc kubenswrapper[4827]: I0131 04:08:19.581315 4827 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Jan 31 04:08:19 crc kubenswrapper[4827]: I0131 04:08:19.581343 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"71e93017-ead3-407d-a5fe-e5459c46e6fb","Type":"ContainerDied","Data":"b8f48c6c6df2c35bd94f0078b7285888f7899bb56d7037518a486e3b57d84eff"} Jan 31 04:08:19 crc kubenswrapper[4827]: I0131 04:08:19.581366 4827 scope.go:117] "RemoveContainer" containerID="3f090c24e952123933a71c45b5b0bb33654e9b9faff2186dce090d2bb13c98f9" Jan 31 04:08:19 crc kubenswrapper[4827]: I0131 04:08:19.584331 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"461e6e42-8412-4ab9-aa9a-02b27965961d","Type":"ContainerStarted","Data":"86839c2564bb9caa312b7c165091108a4c1b96438d41fe2bd91f5784f95147eb"} Jan 31 04:08:19 crc kubenswrapper[4827]: I0131 04:08:19.604597 4827 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-n2vsw\" (UniqueName: \"kubernetes.io/projected/71e93017-ead3-407d-a5fe-e5459c46e6fb-kube-api-access-n2vsw\") on node \"crc\" DevicePath \"\"" Jan 31 04:08:19 crc kubenswrapper[4827]: I0131 04:08:19.604639 4827 reconciler_common.go:293] "Volume detached 
for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/71e93017-ead3-407d-a5fe-e5459c46e6fb-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 31 04:08:19 crc kubenswrapper[4827]: I0131 04:08:19.604653 4827 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/71e93017-ead3-407d-a5fe-e5459c46e6fb-nova-metadata-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 31 04:08:19 crc kubenswrapper[4827]: I0131 04:08:19.604665 4827 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/71e93017-ead3-407d-a5fe-e5459c46e6fb-config-data\") on node \"crc\" DevicePath \"\"" Jan 31 04:08:19 crc kubenswrapper[4827]: I0131 04:08:19.604676 4827 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/71e93017-ead3-407d-a5fe-e5459c46e6fb-logs\") on node \"crc\" DevicePath \"\"" Jan 31 04:08:19 crc kubenswrapper[4827]: I0131 04:08:19.613078 4827 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.613056864 podStartE2EDuration="2.613056864s" podCreationTimestamp="2026-01-31 04:08:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 04:08:19.600592389 +0000 UTC m=+1292.287672858" watchObservedRunningTime="2026-01-31 04:08:19.613056864 +0000 UTC m=+1292.300137313" Jan 31 04:08:19 crc kubenswrapper[4827]: I0131 04:08:19.621160 4827 scope.go:117] "RemoveContainer" containerID="a34e071a98c5c5f764478f28a28a1ac75c96015ff85c2a0a75b859da77df50fb" Jan 31 04:08:19 crc kubenswrapper[4827]: I0131 04:08:19.636061 4827 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Jan 31 04:08:19 crc kubenswrapper[4827]: I0131 04:08:19.649007 4827 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Jan 31 
04:08:19 crc kubenswrapper[4827]: I0131 04:08:19.655066 4827 scope.go:117] "RemoveContainer" containerID="3f090c24e952123933a71c45b5b0bb33654e9b9faff2186dce090d2bb13c98f9" Jan 31 04:08:19 crc kubenswrapper[4827]: E0131 04:08:19.655568 4827 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3f090c24e952123933a71c45b5b0bb33654e9b9faff2186dce090d2bb13c98f9\": container with ID starting with 3f090c24e952123933a71c45b5b0bb33654e9b9faff2186dce090d2bb13c98f9 not found: ID does not exist" containerID="3f090c24e952123933a71c45b5b0bb33654e9b9faff2186dce090d2bb13c98f9" Jan 31 04:08:19 crc kubenswrapper[4827]: I0131 04:08:19.655654 4827 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3f090c24e952123933a71c45b5b0bb33654e9b9faff2186dce090d2bb13c98f9"} err="failed to get container status \"3f090c24e952123933a71c45b5b0bb33654e9b9faff2186dce090d2bb13c98f9\": rpc error: code = NotFound desc = could not find container \"3f090c24e952123933a71c45b5b0bb33654e9b9faff2186dce090d2bb13c98f9\": container with ID starting with 3f090c24e952123933a71c45b5b0bb33654e9b9faff2186dce090d2bb13c98f9 not found: ID does not exist" Jan 31 04:08:19 crc kubenswrapper[4827]: I0131 04:08:19.655726 4827 scope.go:117] "RemoveContainer" containerID="a34e071a98c5c5f764478f28a28a1ac75c96015ff85c2a0a75b859da77df50fb" Jan 31 04:08:19 crc kubenswrapper[4827]: E0131 04:08:19.656161 4827 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a34e071a98c5c5f764478f28a28a1ac75c96015ff85c2a0a75b859da77df50fb\": container with ID starting with a34e071a98c5c5f764478f28a28a1ac75c96015ff85c2a0a75b859da77df50fb not found: ID does not exist" containerID="a34e071a98c5c5f764478f28a28a1ac75c96015ff85c2a0a75b859da77df50fb" Jan 31 04:08:19 crc kubenswrapper[4827]: I0131 04:08:19.656257 4827 pod_container_deletor.go:53] "DeleteContainer returned 
error" containerID={"Type":"cri-o","ID":"a34e071a98c5c5f764478f28a28a1ac75c96015ff85c2a0a75b859da77df50fb"} err="failed to get container status \"a34e071a98c5c5f764478f28a28a1ac75c96015ff85c2a0a75b859da77df50fb\": rpc error: code = NotFound desc = could not find container \"a34e071a98c5c5f764478f28a28a1ac75c96015ff85c2a0a75b859da77df50fb\": container with ID starting with a34e071a98c5c5f764478f28a28a1ac75c96015ff85c2a0a75b859da77df50fb not found: ID does not exist" Jan 31 04:08:19 crc kubenswrapper[4827]: I0131 04:08:19.656792 4827 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Jan 31 04:08:19 crc kubenswrapper[4827]: E0131 04:08:19.657293 4827 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="71e93017-ead3-407d-a5fe-e5459c46e6fb" containerName="nova-metadata-metadata" Jan 31 04:08:19 crc kubenswrapper[4827]: I0131 04:08:19.657399 4827 state_mem.go:107] "Deleted CPUSet assignment" podUID="71e93017-ead3-407d-a5fe-e5459c46e6fb" containerName="nova-metadata-metadata" Jan 31 04:08:19 crc kubenswrapper[4827]: E0131 04:08:19.657500 4827 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="71e93017-ead3-407d-a5fe-e5459c46e6fb" containerName="nova-metadata-log" Jan 31 04:08:19 crc kubenswrapper[4827]: I0131 04:08:19.657568 4827 state_mem.go:107] "Deleted CPUSet assignment" podUID="71e93017-ead3-407d-a5fe-e5459c46e6fb" containerName="nova-metadata-log" Jan 31 04:08:19 crc kubenswrapper[4827]: I0131 04:08:19.657778 4827 memory_manager.go:354] "RemoveStaleState removing state" podUID="71e93017-ead3-407d-a5fe-e5459c46e6fb" containerName="nova-metadata-log" Jan 31 04:08:19 crc kubenswrapper[4827]: I0131 04:08:19.657856 4827 memory_manager.go:354] "RemoveStaleState removing state" podUID="71e93017-ead3-407d-a5fe-e5459c46e6fb" containerName="nova-metadata-metadata" Jan 31 04:08:19 crc kubenswrapper[4827]: I0131 04:08:19.658816 4827 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Jan 31 04:08:19 crc kubenswrapper[4827]: I0131 04:08:19.661054 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Jan 31 04:08:19 crc kubenswrapper[4827]: I0131 04:08:19.661238 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc" Jan 31 04:08:19 crc kubenswrapper[4827]: I0131 04:08:19.664589 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Jan 31 04:08:19 crc kubenswrapper[4827]: I0131 04:08:19.706533 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nxzk4\" (UniqueName: \"kubernetes.io/projected/2bfcb9f2-5385-4257-8277-45f3c3af8582-kube-api-access-nxzk4\") pod \"nova-metadata-0\" (UID: \"2bfcb9f2-5385-4257-8277-45f3c3af8582\") " pod="openstack/nova-metadata-0" Jan 31 04:08:19 crc kubenswrapper[4827]: I0131 04:08:19.706832 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2bfcb9f2-5385-4257-8277-45f3c3af8582-logs\") pod \"nova-metadata-0\" (UID: \"2bfcb9f2-5385-4257-8277-45f3c3af8582\") " pod="openstack/nova-metadata-0" Jan 31 04:08:19 crc kubenswrapper[4827]: I0131 04:08:19.706930 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2bfcb9f2-5385-4257-8277-45f3c3af8582-config-data\") pod \"nova-metadata-0\" (UID: \"2bfcb9f2-5385-4257-8277-45f3c3af8582\") " pod="openstack/nova-metadata-0" Jan 31 04:08:19 crc kubenswrapper[4827]: I0131 04:08:19.707063 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/2bfcb9f2-5385-4257-8277-45f3c3af8582-nova-metadata-tls-certs\") pod \"nova-metadata-0\" 
(UID: \"2bfcb9f2-5385-4257-8277-45f3c3af8582\") " pod="openstack/nova-metadata-0" Jan 31 04:08:19 crc kubenswrapper[4827]: I0131 04:08:19.707166 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2bfcb9f2-5385-4257-8277-45f3c3af8582-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"2bfcb9f2-5385-4257-8277-45f3c3af8582\") " pod="openstack/nova-metadata-0" Jan 31 04:08:19 crc kubenswrapper[4827]: I0131 04:08:19.809322 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2bfcb9f2-5385-4257-8277-45f3c3af8582-logs\") pod \"nova-metadata-0\" (UID: \"2bfcb9f2-5385-4257-8277-45f3c3af8582\") " pod="openstack/nova-metadata-0" Jan 31 04:08:19 crc kubenswrapper[4827]: I0131 04:08:19.809372 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2bfcb9f2-5385-4257-8277-45f3c3af8582-config-data\") pod \"nova-metadata-0\" (UID: \"2bfcb9f2-5385-4257-8277-45f3c3af8582\") " pod="openstack/nova-metadata-0" Jan 31 04:08:19 crc kubenswrapper[4827]: I0131 04:08:19.809460 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/2bfcb9f2-5385-4257-8277-45f3c3af8582-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"2bfcb9f2-5385-4257-8277-45f3c3af8582\") " pod="openstack/nova-metadata-0" Jan 31 04:08:19 crc kubenswrapper[4827]: I0131 04:08:19.809512 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2bfcb9f2-5385-4257-8277-45f3c3af8582-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"2bfcb9f2-5385-4257-8277-45f3c3af8582\") " pod="openstack/nova-metadata-0" Jan 31 04:08:19 crc kubenswrapper[4827]: I0131 04:08:19.809568 4827 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nxzk4\" (UniqueName: \"kubernetes.io/projected/2bfcb9f2-5385-4257-8277-45f3c3af8582-kube-api-access-nxzk4\") pod \"nova-metadata-0\" (UID: \"2bfcb9f2-5385-4257-8277-45f3c3af8582\") " pod="openstack/nova-metadata-0" Jan 31 04:08:19 crc kubenswrapper[4827]: I0131 04:08:19.810442 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2bfcb9f2-5385-4257-8277-45f3c3af8582-logs\") pod \"nova-metadata-0\" (UID: \"2bfcb9f2-5385-4257-8277-45f3c3af8582\") " pod="openstack/nova-metadata-0" Jan 31 04:08:19 crc kubenswrapper[4827]: I0131 04:08:19.813594 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2bfcb9f2-5385-4257-8277-45f3c3af8582-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"2bfcb9f2-5385-4257-8277-45f3c3af8582\") " pod="openstack/nova-metadata-0" Jan 31 04:08:19 crc kubenswrapper[4827]: I0131 04:08:19.814021 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/2bfcb9f2-5385-4257-8277-45f3c3af8582-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"2bfcb9f2-5385-4257-8277-45f3c3af8582\") " pod="openstack/nova-metadata-0" Jan 31 04:08:19 crc kubenswrapper[4827]: I0131 04:08:19.814531 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2bfcb9f2-5385-4257-8277-45f3c3af8582-config-data\") pod \"nova-metadata-0\" (UID: \"2bfcb9f2-5385-4257-8277-45f3c3af8582\") " pod="openstack/nova-metadata-0" Jan 31 04:08:19 crc kubenswrapper[4827]: I0131 04:08:19.829143 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nxzk4\" (UniqueName: \"kubernetes.io/projected/2bfcb9f2-5385-4257-8277-45f3c3af8582-kube-api-access-nxzk4\") pod 
\"nova-metadata-0\" (UID: \"2bfcb9f2-5385-4257-8277-45f3c3af8582\") " pod="openstack/nova-metadata-0" Jan 31 04:08:19 crc kubenswrapper[4827]: I0131 04:08:19.978434 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Jan 31 04:08:20 crc kubenswrapper[4827]: I0131 04:08:20.136500 4827 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="71e93017-ead3-407d-a5fe-e5459c46e6fb" path="/var/lib/kubelet/pods/71e93017-ead3-407d-a5fe-e5459c46e6fb/volumes" Jan 31 04:08:20 crc kubenswrapper[4827]: I0131 04:08:20.469600 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Jan 31 04:08:20 crc kubenswrapper[4827]: W0131 04:08:20.496812 4827 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2bfcb9f2_5385_4257_8277_45f3c3af8582.slice/crio-08591f5ee1aec30644f0267c4c2cc1c04e56fb86766dd5637a6b08319322d1b1 WatchSource:0}: Error finding container 08591f5ee1aec30644f0267c4c2cc1c04e56fb86766dd5637a6b08319322d1b1: Status 404 returned error can't find the container with id 08591f5ee1aec30644f0267c4c2cc1c04e56fb86766dd5637a6b08319322d1b1 Jan 31 04:08:20 crc kubenswrapper[4827]: I0131 04:08:20.594393 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"2bfcb9f2-5385-4257-8277-45f3c3af8582","Type":"ContainerStarted","Data":"08591f5ee1aec30644f0267c4c2cc1c04e56fb86766dd5637a6b08319322d1b1"} Jan 31 04:08:21 crc kubenswrapper[4827]: I0131 04:08:21.417038 4827 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Jan 31 04:08:21 crc kubenswrapper[4827]: I0131 04:08:21.441137 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4e012783-53bb-44bb-a946-c664a0db3587-config-data\") pod \"4e012783-53bb-44bb-a946-c664a0db3587\" (UID: \"4e012783-53bb-44bb-a946-c664a0db3587\") " Jan 31 04:08:21 crc kubenswrapper[4827]: I0131 04:08:21.441565 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4e012783-53bb-44bb-a946-c664a0db3587-combined-ca-bundle\") pod \"4e012783-53bb-44bb-a946-c664a0db3587\" (UID: \"4e012783-53bb-44bb-a946-c664a0db3587\") " Jan 31 04:08:21 crc kubenswrapper[4827]: I0131 04:08:21.441756 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-z6ths\" (UniqueName: \"kubernetes.io/projected/4e012783-53bb-44bb-a946-c664a0db3587-kube-api-access-z6ths\") pod \"4e012783-53bb-44bb-a946-c664a0db3587\" (UID: \"4e012783-53bb-44bb-a946-c664a0db3587\") " Jan 31 04:08:21 crc kubenswrapper[4827]: I0131 04:08:21.451317 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4e012783-53bb-44bb-a946-c664a0db3587-kube-api-access-z6ths" (OuterVolumeSpecName: "kube-api-access-z6ths") pod "4e012783-53bb-44bb-a946-c664a0db3587" (UID: "4e012783-53bb-44bb-a946-c664a0db3587"). InnerVolumeSpecName "kube-api-access-z6ths". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 04:08:21 crc kubenswrapper[4827]: I0131 04:08:21.496028 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4e012783-53bb-44bb-a946-c664a0db3587-config-data" (OuterVolumeSpecName: "config-data") pod "4e012783-53bb-44bb-a946-c664a0db3587" (UID: "4e012783-53bb-44bb-a946-c664a0db3587"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 04:08:21 crc kubenswrapper[4827]: I0131 04:08:21.511056 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4e012783-53bb-44bb-a946-c664a0db3587-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "4e012783-53bb-44bb-a946-c664a0db3587" (UID: "4e012783-53bb-44bb-a946-c664a0db3587"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 04:08:21 crc kubenswrapper[4827]: I0131 04:08:21.544161 4827 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4e012783-53bb-44bb-a946-c664a0db3587-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 31 04:08:21 crc kubenswrapper[4827]: I0131 04:08:21.544196 4827 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-z6ths\" (UniqueName: \"kubernetes.io/projected/4e012783-53bb-44bb-a946-c664a0db3587-kube-api-access-z6ths\") on node \"crc\" DevicePath \"\"" Jan 31 04:08:21 crc kubenswrapper[4827]: I0131 04:08:21.544206 4827 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4e012783-53bb-44bb-a946-c664a0db3587-config-data\") on node \"crc\" DevicePath \"\"" Jan 31 04:08:21 crc kubenswrapper[4827]: I0131 04:08:21.606813 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"2bfcb9f2-5385-4257-8277-45f3c3af8582","Type":"ContainerStarted","Data":"03118f12ae8769c1f9d7fd134dc8b6625ce85220fc414814a6add1e5c97ba065"} Jan 31 04:08:21 crc kubenswrapper[4827]: I0131 04:08:21.606849 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"2bfcb9f2-5385-4257-8277-45f3c3af8582","Type":"ContainerStarted","Data":"9961a32dac5234ebce17cca7c37d4e272aae93c0dd792967352be4e490903653"} Jan 31 04:08:21 crc kubenswrapper[4827]: I0131 04:08:21.609889 4827 
generic.go:334] "Generic (PLEG): container finished" podID="4e012783-53bb-44bb-a946-c664a0db3587" containerID="bcca0921bbffa8547e9dc1ecbf7761aab6925b0c44e5fb8f8b45cfb5db033d4c" exitCode=0 Jan 31 04:08:21 crc kubenswrapper[4827]: I0131 04:08:21.609933 4827 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Jan 31 04:08:21 crc kubenswrapper[4827]: I0131 04:08:21.609941 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"4e012783-53bb-44bb-a946-c664a0db3587","Type":"ContainerDied","Data":"bcca0921bbffa8547e9dc1ecbf7761aab6925b0c44e5fb8f8b45cfb5db033d4c"} Jan 31 04:08:21 crc kubenswrapper[4827]: I0131 04:08:21.609993 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"4e012783-53bb-44bb-a946-c664a0db3587","Type":"ContainerDied","Data":"73c0f19f6e4d3b41a33508f7f819bb7efc5ec78fcad5d31e76ce968764ba05c5"} Jan 31 04:08:21 crc kubenswrapper[4827]: I0131 04:08:21.610030 4827 scope.go:117] "RemoveContainer" containerID="bcca0921bbffa8547e9dc1ecbf7761aab6925b0c44e5fb8f8b45cfb5db033d4c" Jan 31 04:08:21 crc kubenswrapper[4827]: I0131 04:08:21.637864 4827 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.637842688 podStartE2EDuration="2.637842688s" podCreationTimestamp="2026-01-31 04:08:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 04:08:21.633217735 +0000 UTC m=+1294.320298194" watchObservedRunningTime="2026-01-31 04:08:21.637842688 +0000 UTC m=+1294.324923147" Jan 31 04:08:21 crc kubenswrapper[4827]: I0131 04:08:21.638467 4827 scope.go:117] "RemoveContainer" containerID="bcca0921bbffa8547e9dc1ecbf7761aab6925b0c44e5fb8f8b45cfb5db033d4c" Jan 31 04:08:21 crc kubenswrapper[4827]: E0131 04:08:21.639299 4827 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"bcca0921bbffa8547e9dc1ecbf7761aab6925b0c44e5fb8f8b45cfb5db033d4c\": container with ID starting with bcca0921bbffa8547e9dc1ecbf7761aab6925b0c44e5fb8f8b45cfb5db033d4c not found: ID does not exist" containerID="bcca0921bbffa8547e9dc1ecbf7761aab6925b0c44e5fb8f8b45cfb5db033d4c" Jan 31 04:08:21 crc kubenswrapper[4827]: I0131 04:08:21.639359 4827 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bcca0921bbffa8547e9dc1ecbf7761aab6925b0c44e5fb8f8b45cfb5db033d4c"} err="failed to get container status \"bcca0921bbffa8547e9dc1ecbf7761aab6925b0c44e5fb8f8b45cfb5db033d4c\": rpc error: code = NotFound desc = could not find container \"bcca0921bbffa8547e9dc1ecbf7761aab6925b0c44e5fb8f8b45cfb5db033d4c\": container with ID starting with bcca0921bbffa8547e9dc1ecbf7761aab6925b0c44e5fb8f8b45cfb5db033d4c not found: ID does not exist" Jan 31 04:08:21 crc kubenswrapper[4827]: I0131 04:08:21.656347 4827 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Jan 31 04:08:21 crc kubenswrapper[4827]: I0131 04:08:21.663694 4827 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-scheduler-0"] Jan 31 04:08:21 crc kubenswrapper[4827]: I0131 04:08:21.684041 4827 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Jan 31 04:08:21 crc kubenswrapper[4827]: E0131 04:08:21.684377 4827 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4e012783-53bb-44bb-a946-c664a0db3587" containerName="nova-scheduler-scheduler" Jan 31 04:08:21 crc kubenswrapper[4827]: I0131 04:08:21.684392 4827 state_mem.go:107] "Deleted CPUSet assignment" podUID="4e012783-53bb-44bb-a946-c664a0db3587" containerName="nova-scheduler-scheduler" Jan 31 04:08:21 crc kubenswrapper[4827]: I0131 04:08:21.684563 4827 memory_manager.go:354] "RemoveStaleState removing state" podUID="4e012783-53bb-44bb-a946-c664a0db3587" 
containerName="nova-scheduler-scheduler" Jan 31 04:08:21 crc kubenswrapper[4827]: I0131 04:08:21.685231 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Jan 31 04:08:21 crc kubenswrapper[4827]: I0131 04:08:21.689386 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Jan 31 04:08:21 crc kubenswrapper[4827]: I0131 04:08:21.696841 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Jan 31 04:08:21 crc kubenswrapper[4827]: I0131 04:08:21.750110 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a1d91b03-3afc-4a12-a489-d1b97ec8d5fe-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"a1d91b03-3afc-4a12-a489-d1b97ec8d5fe\") " pod="openstack/nova-scheduler-0" Jan 31 04:08:21 crc kubenswrapper[4827]: I0131 04:08:21.750310 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a1d91b03-3afc-4a12-a489-d1b97ec8d5fe-config-data\") pod \"nova-scheduler-0\" (UID: \"a1d91b03-3afc-4a12-a489-d1b97ec8d5fe\") " pod="openstack/nova-scheduler-0" Jan 31 04:08:21 crc kubenswrapper[4827]: I0131 04:08:21.750408 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b5mx9\" (UniqueName: \"kubernetes.io/projected/a1d91b03-3afc-4a12-a489-d1b97ec8d5fe-kube-api-access-b5mx9\") pod \"nova-scheduler-0\" (UID: \"a1d91b03-3afc-4a12-a489-d1b97ec8d5fe\") " pod="openstack/nova-scheduler-0" Jan 31 04:08:21 crc kubenswrapper[4827]: I0131 04:08:21.852124 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a1d91b03-3afc-4a12-a489-d1b97ec8d5fe-config-data\") pod \"nova-scheduler-0\" (UID: 
\"a1d91b03-3afc-4a12-a489-d1b97ec8d5fe\") " pod="openstack/nova-scheduler-0" Jan 31 04:08:21 crc kubenswrapper[4827]: I0131 04:08:21.852199 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b5mx9\" (UniqueName: \"kubernetes.io/projected/a1d91b03-3afc-4a12-a489-d1b97ec8d5fe-kube-api-access-b5mx9\") pod \"nova-scheduler-0\" (UID: \"a1d91b03-3afc-4a12-a489-d1b97ec8d5fe\") " pod="openstack/nova-scheduler-0" Jan 31 04:08:21 crc kubenswrapper[4827]: I0131 04:08:21.852226 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a1d91b03-3afc-4a12-a489-d1b97ec8d5fe-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"a1d91b03-3afc-4a12-a489-d1b97ec8d5fe\") " pod="openstack/nova-scheduler-0" Jan 31 04:08:21 crc kubenswrapper[4827]: I0131 04:08:21.857266 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a1d91b03-3afc-4a12-a489-d1b97ec8d5fe-config-data\") pod \"nova-scheduler-0\" (UID: \"a1d91b03-3afc-4a12-a489-d1b97ec8d5fe\") " pod="openstack/nova-scheduler-0" Jan 31 04:08:21 crc kubenswrapper[4827]: I0131 04:08:21.857489 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a1d91b03-3afc-4a12-a489-d1b97ec8d5fe-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"a1d91b03-3afc-4a12-a489-d1b97ec8d5fe\") " pod="openstack/nova-scheduler-0" Jan 31 04:08:21 crc kubenswrapper[4827]: I0131 04:08:21.881481 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b5mx9\" (UniqueName: \"kubernetes.io/projected/a1d91b03-3afc-4a12-a489-d1b97ec8d5fe-kube-api-access-b5mx9\") pod \"nova-scheduler-0\" (UID: \"a1d91b03-3afc-4a12-a489-d1b97ec8d5fe\") " pod="openstack/nova-scheduler-0" Jan 31 04:08:22 crc kubenswrapper[4827]: I0131 04:08:22.002399 4827 util.go:30] "No sandbox 
for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Jan 31 04:08:22 crc kubenswrapper[4827]: I0131 04:08:22.132099 4827 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4e012783-53bb-44bb-a946-c664a0db3587" path="/var/lib/kubelet/pods/4e012783-53bb-44bb-a946-c664a0db3587/volumes" Jan 31 04:08:22 crc kubenswrapper[4827]: I0131 04:08:22.500072 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Jan 31 04:08:22 crc kubenswrapper[4827]: I0131 04:08:22.621387 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"a1d91b03-3afc-4a12-a489-d1b97ec8d5fe","Type":"ContainerStarted","Data":"b5648cd8fce5cfb9a27862386ac924f696351c494d29bdfc24478ae2a6f60fec"} Jan 31 04:08:23 crc kubenswrapper[4827]: I0131 04:08:23.632188 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"a1d91b03-3afc-4a12-a489-d1b97ec8d5fe","Type":"ContainerStarted","Data":"77c8f5cc9e296a60ca59fa76aaca086f8b717e56886c036fa5b3d208d6f46c3a"} Jan 31 04:08:23 crc kubenswrapper[4827]: I0131 04:08:23.665412 4827 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=2.665381725 podStartE2EDuration="2.665381725s" podCreationTimestamp="2026-01-31 04:08:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 04:08:23.652239831 +0000 UTC m=+1296.339320300" watchObservedRunningTime="2026-01-31 04:08:23.665381725 +0000 UTC m=+1296.352462194" Jan 31 04:08:24 crc kubenswrapper[4827]: I0131 04:08:24.978965 4827 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Jan 31 04:08:24 crc kubenswrapper[4827]: I0131 04:08:24.979204 4827 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Jan 31 04:08:27 crc 
kubenswrapper[4827]: I0131 04:08:27.003454 4827 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Jan 31 04:08:27 crc kubenswrapper[4827]: I0131 04:08:27.947162 4827 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Jan 31 04:08:27 crc kubenswrapper[4827]: I0131 04:08:27.947252 4827 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Jan 31 04:08:28 crc kubenswrapper[4827]: I0131 04:08:28.961052 4827 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="461e6e42-8412-4ab9-aa9a-02b27965961d" containerName="nova-api-api" probeResult="failure" output="Get \"https://10.217.0.193:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 31 04:08:28 crc kubenswrapper[4827]: I0131 04:08:28.961690 4827 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="461e6e42-8412-4ab9-aa9a-02b27965961d" containerName="nova-api-log" probeResult="failure" output="Get \"https://10.217.0.193:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 31 04:08:29 crc kubenswrapper[4827]: I0131 04:08:29.979820 4827 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Jan 31 04:08:29 crc kubenswrapper[4827]: I0131 04:08:29.979938 4827 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Jan 31 04:08:30 crc kubenswrapper[4827]: I0131 04:08:30.996180 4827 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="2bfcb9f2-5385-4257-8277-45f3c3af8582" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.194:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 31 04:08:30 crc kubenswrapper[4827]: I0131 04:08:30.996196 4827 
prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="2bfcb9f2-5385-4257-8277-45f3c3af8582" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.0.194:8775/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 31 04:08:32 crc kubenswrapper[4827]: I0131 04:08:32.002949 4827 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Jan 31 04:08:32 crc kubenswrapper[4827]: I0131 04:08:32.035019 4827 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Jan 31 04:08:32 crc kubenswrapper[4827]: I0131 04:08:32.781399 4827 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Jan 31 04:08:37 crc kubenswrapper[4827]: I0131 04:08:37.956481 4827 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Jan 31 04:08:37 crc kubenswrapper[4827]: I0131 04:08:37.958160 4827 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Jan 31 04:08:37 crc kubenswrapper[4827]: I0131 04:08:37.966345 4827 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Jan 31 04:08:37 crc kubenswrapper[4827]: I0131 04:08:37.969851 4827 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Jan 31 04:08:38 crc kubenswrapper[4827]: I0131 04:08:38.817931 4827 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Jan 31 04:08:38 crc kubenswrapper[4827]: I0131 04:08:38.825909 4827 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Jan 31 04:08:38 crc kubenswrapper[4827]: I0131 04:08:38.891248 4827 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0" Jan 31 04:08:39 crc kubenswrapper[4827]: 
I0131 04:08:39.989386 4827 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Jan 31 04:08:39 crc kubenswrapper[4827]: I0131 04:08:39.989827 4827 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Jan 31 04:08:39 crc kubenswrapper[4827]: I0131 04:08:39.999673 4827 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Jan 31 04:08:40 crc kubenswrapper[4827]: I0131 04:08:40.004495 4827 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Jan 31 04:08:47 crc kubenswrapper[4827]: I0131 04:08:47.308752 4827 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-server-0"] Jan 31 04:08:47 crc kubenswrapper[4827]: I0131 04:08:47.371043 4827 patch_prober.go:28] interesting pod/machine-config-daemon-jxh94 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 31 04:08:47 crc kubenswrapper[4827]: I0131 04:08:47.371107 4827 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-jxh94" podUID="e63dbb73-e1a2-4796-83c5-2a88e55566b5" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 31 04:08:48 crc kubenswrapper[4827]: I0131 04:08:48.461704 4827 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 31 04:08:51 crc kubenswrapper[4827]: I0131 04:08:51.389856 4827 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/rabbitmq-server-0" podUID="237362ad-03ab-48a0-916d-1b140b4727d5" containerName="rabbitmq" 
containerID="cri-o://3bef3482c6b5a8881586f34defacaf25b3b594be37c355711b648b85efc6694b" gracePeriod=604796 Jan 31 04:08:52 crc kubenswrapper[4827]: I0131 04:08:52.875715 4827 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/rabbitmq-cell1-server-0" podUID="02f954c7-6442-4974-827a-aef4a5690e8c" containerName="rabbitmq" containerID="cri-o://2fcae6a3381f9d20d9bab34e4ab0e634c6a6c572a086dad05b16e7438d43479a" gracePeriod=604796 Jan 31 04:08:58 crc kubenswrapper[4827]: I0131 04:08:58.021849 4827 generic.go:334] "Generic (PLEG): container finished" podID="237362ad-03ab-48a0-916d-1b140b4727d5" containerID="3bef3482c6b5a8881586f34defacaf25b3b594be37c355711b648b85efc6694b" exitCode=0 Jan 31 04:08:58 crc kubenswrapper[4827]: I0131 04:08:58.021963 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"237362ad-03ab-48a0-916d-1b140b4727d5","Type":"ContainerDied","Data":"3bef3482c6b5a8881586f34defacaf25b3b594be37c355711b648b85efc6694b"} Jan 31 04:08:58 crc kubenswrapper[4827]: I0131 04:08:58.022368 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"237362ad-03ab-48a0-916d-1b140b4727d5","Type":"ContainerDied","Data":"6f91a72be3f7fbaf26c7ea0f9d9478c31e909b7fb766cad9cc4b9943489f700e"} Jan 31 04:08:58 crc kubenswrapper[4827]: I0131 04:08:58.022399 4827 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6f91a72be3f7fbaf26c7ea0f9d9478c31e909b7fb766cad9cc4b9943489f700e" Jan 31 04:08:58 crc kubenswrapper[4827]: I0131 04:08:58.055670 4827 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-0" Jan 31 04:08:58 crc kubenswrapper[4827]: I0131 04:08:58.105285 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/237362ad-03ab-48a0-916d-1b140b4727d5-server-conf\") pod \"237362ad-03ab-48a0-916d-1b140b4727d5\" (UID: \"237362ad-03ab-48a0-916d-1b140b4727d5\") " Jan 31 04:08:58 crc kubenswrapper[4827]: I0131 04:08:58.105345 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/237362ad-03ab-48a0-916d-1b140b4727d5-rabbitmq-erlang-cookie\") pod \"237362ad-03ab-48a0-916d-1b140b4727d5\" (UID: \"237362ad-03ab-48a0-916d-1b140b4727d5\") " Jan 31 04:08:58 crc kubenswrapper[4827]: I0131 04:08:58.105407 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/237362ad-03ab-48a0-916d-1b140b4727d5-rabbitmq-tls\") pod \"237362ad-03ab-48a0-916d-1b140b4727d5\" (UID: \"237362ad-03ab-48a0-916d-1b140b4727d5\") " Jan 31 04:08:58 crc kubenswrapper[4827]: I0131 04:08:58.105512 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"persistence\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"237362ad-03ab-48a0-916d-1b140b4727d5\" (UID: \"237362ad-03ab-48a0-916d-1b140b4727d5\") " Jan 31 04:08:58 crc kubenswrapper[4827]: I0131 04:08:58.105602 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/237362ad-03ab-48a0-916d-1b140b4727d5-erlang-cookie-secret\") pod \"237362ad-03ab-48a0-916d-1b140b4727d5\" (UID: \"237362ad-03ab-48a0-916d-1b140b4727d5\") " Jan 31 04:08:58 crc kubenswrapper[4827]: I0131 04:08:58.105770 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-h9fd8\" (UniqueName: 
\"kubernetes.io/projected/237362ad-03ab-48a0-916d-1b140b4727d5-kube-api-access-h9fd8\") pod \"237362ad-03ab-48a0-916d-1b140b4727d5\" (UID: \"237362ad-03ab-48a0-916d-1b140b4727d5\") " Jan 31 04:08:58 crc kubenswrapper[4827]: I0131 04:08:58.106351 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/237362ad-03ab-48a0-916d-1b140b4727d5-rabbitmq-erlang-cookie" (OuterVolumeSpecName: "rabbitmq-erlang-cookie") pod "237362ad-03ab-48a0-916d-1b140b4727d5" (UID: "237362ad-03ab-48a0-916d-1b140b4727d5"). InnerVolumeSpecName "rabbitmq-erlang-cookie". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 04:08:58 crc kubenswrapper[4827]: I0131 04:08:58.106679 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/237362ad-03ab-48a0-916d-1b140b4727d5-rabbitmq-plugins\") pod \"237362ad-03ab-48a0-916d-1b140b4727d5\" (UID: \"237362ad-03ab-48a0-916d-1b140b4727d5\") " Jan 31 04:08:58 crc kubenswrapper[4827]: I0131 04:08:58.106722 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/237362ad-03ab-48a0-916d-1b140b4727d5-config-data\") pod \"237362ad-03ab-48a0-916d-1b140b4727d5\" (UID: \"237362ad-03ab-48a0-916d-1b140b4727d5\") " Jan 31 04:08:58 crc kubenswrapper[4827]: I0131 04:08:58.106742 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/237362ad-03ab-48a0-916d-1b140b4727d5-pod-info\") pod \"237362ad-03ab-48a0-916d-1b140b4727d5\" (UID: \"237362ad-03ab-48a0-916d-1b140b4727d5\") " Jan 31 04:08:58 crc kubenswrapper[4827]: I0131 04:08:58.106772 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/237362ad-03ab-48a0-916d-1b140b4727d5-rabbitmq-confd\") pod 
\"237362ad-03ab-48a0-916d-1b140b4727d5\" (UID: \"237362ad-03ab-48a0-916d-1b140b4727d5\") "
Jan 31 04:08:58 crc kubenswrapper[4827]: I0131 04:08:58.106801 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/237362ad-03ab-48a0-916d-1b140b4727d5-plugins-conf\") pod \"237362ad-03ab-48a0-916d-1b140b4727d5\" (UID: \"237362ad-03ab-48a0-916d-1b140b4727d5\") "
Jan 31 04:08:58 crc kubenswrapper[4827]: I0131 04:08:58.109037 4827 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/237362ad-03ab-48a0-916d-1b140b4727d5-rabbitmq-erlang-cookie\") on node \"crc\" DevicePath \"\""
Jan 31 04:08:58 crc kubenswrapper[4827]: I0131 04:08:58.110004 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/237362ad-03ab-48a0-916d-1b140b4727d5-rabbitmq-plugins" (OuterVolumeSpecName: "rabbitmq-plugins") pod "237362ad-03ab-48a0-916d-1b140b4727d5" (UID: "237362ad-03ab-48a0-916d-1b140b4727d5"). InnerVolumeSpecName "rabbitmq-plugins". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 31 04:08:58 crc kubenswrapper[4827]: I0131 04:08:58.111504 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/237362ad-03ab-48a0-916d-1b140b4727d5-plugins-conf" (OuterVolumeSpecName: "plugins-conf") pod "237362ad-03ab-48a0-916d-1b140b4727d5" (UID: "237362ad-03ab-48a0-916d-1b140b4727d5"). InnerVolumeSpecName "plugins-conf". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 31 04:08:58 crc kubenswrapper[4827]: I0131 04:08:58.112052 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/237362ad-03ab-48a0-916d-1b140b4727d5-kube-api-access-h9fd8" (OuterVolumeSpecName: "kube-api-access-h9fd8") pod "237362ad-03ab-48a0-916d-1b140b4727d5" (UID: "237362ad-03ab-48a0-916d-1b140b4727d5"). InnerVolumeSpecName "kube-api-access-h9fd8". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 31 04:08:58 crc kubenswrapper[4827]: I0131 04:08:58.116072 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/237362ad-03ab-48a0-916d-1b140b4727d5-rabbitmq-tls" (OuterVolumeSpecName: "rabbitmq-tls") pod "237362ad-03ab-48a0-916d-1b140b4727d5" (UID: "237362ad-03ab-48a0-916d-1b140b4727d5"). InnerVolumeSpecName "rabbitmq-tls". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 31 04:08:58 crc kubenswrapper[4827]: I0131 04:08:58.117060 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage06-crc" (OuterVolumeSpecName: "persistence") pod "237362ad-03ab-48a0-916d-1b140b4727d5" (UID: "237362ad-03ab-48a0-916d-1b140b4727d5"). InnerVolumeSpecName "local-storage06-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue ""
Jan 31 04:08:58 crc kubenswrapper[4827]: I0131 04:08:58.117086 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/237362ad-03ab-48a0-916d-1b140b4727d5-erlang-cookie-secret" (OuterVolumeSpecName: "erlang-cookie-secret") pod "237362ad-03ab-48a0-916d-1b140b4727d5" (UID: "237362ad-03ab-48a0-916d-1b140b4727d5"). InnerVolumeSpecName "erlang-cookie-secret". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 31 04:08:58 crc kubenswrapper[4827]: I0131 04:08:58.150535 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/downward-api/237362ad-03ab-48a0-916d-1b140b4727d5-pod-info" (OuterVolumeSpecName: "pod-info") pod "237362ad-03ab-48a0-916d-1b140b4727d5" (UID: "237362ad-03ab-48a0-916d-1b140b4727d5"). InnerVolumeSpecName "pod-info". PluginName "kubernetes.io/downward-api", VolumeGidValue ""
Jan 31 04:08:58 crc kubenswrapper[4827]: I0131 04:08:58.179124 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/237362ad-03ab-48a0-916d-1b140b4727d5-config-data" (OuterVolumeSpecName: "config-data") pod "237362ad-03ab-48a0-916d-1b140b4727d5" (UID: "237362ad-03ab-48a0-916d-1b140b4727d5"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 31 04:08:58 crc kubenswrapper[4827]: I0131 04:08:58.196070 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/237362ad-03ab-48a0-916d-1b140b4727d5-server-conf" (OuterVolumeSpecName: "server-conf") pod "237362ad-03ab-48a0-916d-1b140b4727d5" (UID: "237362ad-03ab-48a0-916d-1b140b4727d5"). InnerVolumeSpecName "server-conf". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 31 04:08:58 crc kubenswrapper[4827]: I0131 04:08:58.211238 4827 reconciler_common.go:293] "Volume detached for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/237362ad-03ab-48a0-916d-1b140b4727d5-erlang-cookie-secret\") on node \"crc\" DevicePath \"\""
Jan 31 04:08:58 crc kubenswrapper[4827]: I0131 04:08:58.211268 4827 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-h9fd8\" (UniqueName: \"kubernetes.io/projected/237362ad-03ab-48a0-916d-1b140b4727d5-kube-api-access-h9fd8\") on node \"crc\" DevicePath \"\""
Jan 31 04:08:58 crc kubenswrapper[4827]: I0131 04:08:58.211279 4827 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/237362ad-03ab-48a0-916d-1b140b4727d5-rabbitmq-plugins\") on node \"crc\" DevicePath \"\""
Jan 31 04:08:58 crc kubenswrapper[4827]: I0131 04:08:58.211288 4827 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/237362ad-03ab-48a0-916d-1b140b4727d5-config-data\") on node \"crc\" DevicePath \"\""
Jan 31 04:08:58 crc kubenswrapper[4827]: I0131 04:08:58.211296 4827 reconciler_common.go:293] "Volume detached for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/237362ad-03ab-48a0-916d-1b140b4727d5-pod-info\") on node \"crc\" DevicePath \"\""
Jan 31 04:08:58 crc kubenswrapper[4827]: I0131 04:08:58.211305 4827 reconciler_common.go:293] "Volume detached for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/237362ad-03ab-48a0-916d-1b140b4727d5-plugins-conf\") on node \"crc\" DevicePath \"\""
Jan 31 04:08:58 crc kubenswrapper[4827]: I0131 04:08:58.211314 4827 reconciler_common.go:293] "Volume detached for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/237362ad-03ab-48a0-916d-1b140b4727d5-server-conf\") on node \"crc\" DevicePath \"\""
Jan 31 04:08:58 crc kubenswrapper[4827]: I0131 04:08:58.211323 4827 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/237362ad-03ab-48a0-916d-1b140b4727d5-rabbitmq-tls\") on node \"crc\" DevicePath \"\""
Jan 31 04:08:58 crc kubenswrapper[4827]: I0131 04:08:58.211345 4827 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") on node \"crc\" "
Jan 31 04:08:58 crc kubenswrapper[4827]: I0131 04:08:58.236600 4827 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage06-crc" (UniqueName: "kubernetes.io/local-volume/local-storage06-crc") on node "crc"
Jan 31 04:08:58 crc kubenswrapper[4827]: I0131 04:08:58.255758 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/237362ad-03ab-48a0-916d-1b140b4727d5-rabbitmq-confd" (OuterVolumeSpecName: "rabbitmq-confd") pod "237362ad-03ab-48a0-916d-1b140b4727d5" (UID: "237362ad-03ab-48a0-916d-1b140b4727d5"). InnerVolumeSpecName "rabbitmq-confd". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 31 04:08:58 crc kubenswrapper[4827]: I0131 04:08:58.312672 4827 reconciler_common.go:293] "Volume detached for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") on node \"crc\" DevicePath \"\""
Jan 31 04:08:58 crc kubenswrapper[4827]: I0131 04:08:58.312698 4827 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/237362ad-03ab-48a0-916d-1b140b4727d5-rabbitmq-confd\") on node \"crc\" DevicePath \"\""
Jan 31 04:08:59 crc kubenswrapper[4827]: I0131 04:08:59.029308 4827 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0"
Jan 31 04:08:59 crc kubenswrapper[4827]: I0131 04:08:59.100026 4827 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-server-0"]
Jan 31 04:08:59 crc kubenswrapper[4827]: I0131 04:08:59.109157 4827 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/rabbitmq-server-0"]
Jan 31 04:08:59 crc kubenswrapper[4827]: I0131 04:08:59.134370 4827 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-server-0"]
Jan 31 04:08:59 crc kubenswrapper[4827]: E0131 04:08:59.134759 4827 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="237362ad-03ab-48a0-916d-1b140b4727d5" containerName="rabbitmq"
Jan 31 04:08:59 crc kubenswrapper[4827]: I0131 04:08:59.134778 4827 state_mem.go:107] "Deleted CPUSet assignment" podUID="237362ad-03ab-48a0-916d-1b140b4727d5" containerName="rabbitmq"
Jan 31 04:08:59 crc kubenswrapper[4827]: E0131 04:08:59.135676 4827 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="237362ad-03ab-48a0-916d-1b140b4727d5" containerName="setup-container"
Jan 31 04:08:59 crc kubenswrapper[4827]: I0131 04:08:59.135691 4827 state_mem.go:107] "Deleted CPUSet assignment" podUID="237362ad-03ab-48a0-916d-1b140b4727d5" containerName="setup-container"
Jan 31 04:08:59 crc kubenswrapper[4827]: I0131 04:08:59.135865 4827 memory_manager.go:354] "RemoveStaleState removing state" podUID="237362ad-03ab-48a0-916d-1b140b4727d5" containerName="rabbitmq"
Jan 31 04:08:59 crc kubenswrapper[4827]: I0131 04:08:59.136737 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0"
Jan 31 04:08:59 crc kubenswrapper[4827]: I0131 04:08:59.140264 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-default-user"
Jan 31 04:08:59 crc kubenswrapper[4827]: I0131 04:08:59.140343 4827 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-config-data"
Jan 31 04:08:59 crc kubenswrapper[4827]: I0131 04:08:59.141070 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-svc"
Jan 31 04:08:59 crc kubenswrapper[4827]: I0131 04:08:59.141308 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-server-dockercfg-l95g2"
Jan 31 04:08:59 crc kubenswrapper[4827]: I0131 04:08:59.141760 4827 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-server-conf"
Jan 31 04:08:59 crc kubenswrapper[4827]: I0131 04:08:59.142021 4827 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-plugins-conf"
Jan 31 04:08:59 crc kubenswrapper[4827]: I0131 04:08:59.142085 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-erlang-cookie"
Jan 31 04:08:59 crc kubenswrapper[4827]: I0131 04:08:59.156087 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"]
Jan 31 04:08:59 crc kubenswrapper[4827]: I0131 04:08:59.327901 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bznd2\" (UniqueName: \"kubernetes.io/projected/bd61984d-518c-44f2-8a18-8bda81bb6af3-kube-api-access-bznd2\") pod \"rabbitmq-server-0\" (UID: \"bd61984d-518c-44f2-8a18-8bda81bb6af3\") " pod="openstack/rabbitmq-server-0"
Jan 31 04:08:59 crc kubenswrapper[4827]: I0131 04:08:59.327951 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/bd61984d-518c-44f2-8a18-8bda81bb6af3-pod-info\") pod \"rabbitmq-server-0\" (UID: \"bd61984d-518c-44f2-8a18-8bda81bb6af3\") " pod="openstack/rabbitmq-server-0"
Jan 31 04:08:59 crc kubenswrapper[4827]: I0131 04:08:59.327978 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/bd61984d-518c-44f2-8a18-8bda81bb6af3-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"bd61984d-518c-44f2-8a18-8bda81bb6af3\") " pod="openstack/rabbitmq-server-0"
Jan 31 04:08:59 crc kubenswrapper[4827]: I0131 04:08:59.328005 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/bd61984d-518c-44f2-8a18-8bda81bb6af3-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"bd61984d-518c-44f2-8a18-8bda81bb6af3\") " pod="openstack/rabbitmq-server-0"
Jan 31 04:08:59 crc kubenswrapper[4827]: I0131 04:08:59.328110 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/bd61984d-518c-44f2-8a18-8bda81bb6af3-config-data\") pod \"rabbitmq-server-0\" (UID: \"bd61984d-518c-44f2-8a18-8bda81bb6af3\") " pod="openstack/rabbitmq-server-0"
Jan 31 04:08:59 crc kubenswrapper[4827]: I0131 04:08:59.328229 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/bd61984d-518c-44f2-8a18-8bda81bb6af3-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"bd61984d-518c-44f2-8a18-8bda81bb6af3\") " pod="openstack/rabbitmq-server-0"
Jan 31 04:08:59 crc kubenswrapper[4827]: I0131 04:08:59.328277 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/bd61984d-518c-44f2-8a18-8bda81bb6af3-server-conf\") pod \"rabbitmq-server-0\" (UID: \"bd61984d-518c-44f2-8a18-8bda81bb6af3\") " pod="openstack/rabbitmq-server-0"
Jan 31 04:08:59 crc kubenswrapper[4827]: I0131 04:08:59.328366 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"rabbitmq-server-0\" (UID: \"bd61984d-518c-44f2-8a18-8bda81bb6af3\") " pod="openstack/rabbitmq-server-0"
Jan 31 04:08:59 crc kubenswrapper[4827]: I0131 04:08:59.328553 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/bd61984d-518c-44f2-8a18-8bda81bb6af3-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"bd61984d-518c-44f2-8a18-8bda81bb6af3\") " pod="openstack/rabbitmq-server-0"
Jan 31 04:08:59 crc kubenswrapper[4827]: I0131 04:08:59.328598 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/bd61984d-518c-44f2-8a18-8bda81bb6af3-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"bd61984d-518c-44f2-8a18-8bda81bb6af3\") " pod="openstack/rabbitmq-server-0"
Jan 31 04:08:59 crc kubenswrapper[4827]: I0131 04:08:59.328632 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/bd61984d-518c-44f2-8a18-8bda81bb6af3-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"bd61984d-518c-44f2-8a18-8bda81bb6af3\") " pod="openstack/rabbitmq-server-0"
Jan 31 04:08:59 crc kubenswrapper[4827]: I0131 04:08:59.429836 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/bd61984d-518c-44f2-8a18-8bda81bb6af3-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"bd61984d-518c-44f2-8a18-8bda81bb6af3\") " pod="openstack/rabbitmq-server-0"
Jan 31 04:08:59 crc kubenswrapper[4827]: I0131 04:08:59.429906 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/bd61984d-518c-44f2-8a18-8bda81bb6af3-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"bd61984d-518c-44f2-8a18-8bda81bb6af3\") " pod="openstack/rabbitmq-server-0"
Jan 31 04:08:59 crc kubenswrapper[4827]: I0131 04:08:59.429925 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/bd61984d-518c-44f2-8a18-8bda81bb6af3-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"bd61984d-518c-44f2-8a18-8bda81bb6af3\") " pod="openstack/rabbitmq-server-0"
Jan 31 04:08:59 crc kubenswrapper[4827]: I0131 04:08:59.429967 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bznd2\" (UniqueName: \"kubernetes.io/projected/bd61984d-518c-44f2-8a18-8bda81bb6af3-kube-api-access-bznd2\") pod \"rabbitmq-server-0\" (UID: \"bd61984d-518c-44f2-8a18-8bda81bb6af3\") " pod="openstack/rabbitmq-server-0"
Jan 31 04:08:59 crc kubenswrapper[4827]: I0131 04:08:59.429991 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/bd61984d-518c-44f2-8a18-8bda81bb6af3-pod-info\") pod \"rabbitmq-server-0\" (UID: \"bd61984d-518c-44f2-8a18-8bda81bb6af3\") " pod="openstack/rabbitmq-server-0"
Jan 31 04:08:59 crc kubenswrapper[4827]: I0131 04:08:59.430016 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/bd61984d-518c-44f2-8a18-8bda81bb6af3-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"bd61984d-518c-44f2-8a18-8bda81bb6af3\") " pod="openstack/rabbitmq-server-0"
Jan 31 04:08:59 crc kubenswrapper[4827]: I0131 04:08:59.430041 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/bd61984d-518c-44f2-8a18-8bda81bb6af3-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"bd61984d-518c-44f2-8a18-8bda81bb6af3\") " pod="openstack/rabbitmq-server-0"
Jan 31 04:08:59 crc kubenswrapper[4827]: I0131 04:08:59.430061 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/bd61984d-518c-44f2-8a18-8bda81bb6af3-config-data\") pod \"rabbitmq-server-0\" (UID: \"bd61984d-518c-44f2-8a18-8bda81bb6af3\") " pod="openstack/rabbitmq-server-0"
Jan 31 04:08:59 crc kubenswrapper[4827]: I0131 04:08:59.430096 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/bd61984d-518c-44f2-8a18-8bda81bb6af3-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"bd61984d-518c-44f2-8a18-8bda81bb6af3\") " pod="openstack/rabbitmq-server-0"
Jan 31 04:08:59 crc kubenswrapper[4827]: I0131 04:08:59.430120 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/bd61984d-518c-44f2-8a18-8bda81bb6af3-server-conf\") pod \"rabbitmq-server-0\" (UID: \"bd61984d-518c-44f2-8a18-8bda81bb6af3\") " pod="openstack/rabbitmq-server-0"
Jan 31 04:08:59 crc kubenswrapper[4827]: I0131 04:08:59.430152 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"rabbitmq-server-0\" (UID: \"bd61984d-518c-44f2-8a18-8bda81bb6af3\") " pod="openstack/rabbitmq-server-0"
Jan 31 04:08:59 crc kubenswrapper[4827]: I0131 04:08:59.430351 4827 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"rabbitmq-server-0\" (UID: \"bd61984d-518c-44f2-8a18-8bda81bb6af3\") device mount path \"/mnt/openstack/pv06\"" pod="openstack/rabbitmq-server-0"
Jan 31 04:08:59 crc kubenswrapper[4827]: I0131 04:08:59.430776 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/bd61984d-518c-44f2-8a18-8bda81bb6af3-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"bd61984d-518c-44f2-8a18-8bda81bb6af3\") " pod="openstack/rabbitmq-server-0"
Jan 31 04:08:59 crc kubenswrapper[4827]: I0131 04:08:59.433748 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/bd61984d-518c-44f2-8a18-8bda81bb6af3-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"bd61984d-518c-44f2-8a18-8bda81bb6af3\") " pod="openstack/rabbitmq-server-0"
Jan 31 04:08:59 crc kubenswrapper[4827]: I0131 04:08:59.434626 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/bd61984d-518c-44f2-8a18-8bda81bb6af3-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"bd61984d-518c-44f2-8a18-8bda81bb6af3\") " pod="openstack/rabbitmq-server-0"
Jan 31 04:08:59 crc kubenswrapper[4827]: I0131 04:08:59.436217 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/bd61984d-518c-44f2-8a18-8bda81bb6af3-config-data\") pod \"rabbitmq-server-0\" (UID: \"bd61984d-518c-44f2-8a18-8bda81bb6af3\") " pod="openstack/rabbitmq-server-0"
Jan 31 04:08:59 crc kubenswrapper[4827]: I0131 04:08:59.437453 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/bd61984d-518c-44f2-8a18-8bda81bb6af3-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"bd61984d-518c-44f2-8a18-8bda81bb6af3\") " pod="openstack/rabbitmq-server-0"
Jan 31 04:08:59 crc kubenswrapper[4827]: I0131 04:08:59.437556 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/bd61984d-518c-44f2-8a18-8bda81bb6af3-server-conf\") pod \"rabbitmq-server-0\" (UID: \"bd61984d-518c-44f2-8a18-8bda81bb6af3\") " pod="openstack/rabbitmq-server-0"
Jan 31 04:08:59 crc kubenswrapper[4827]: I0131 04:08:59.439155 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/bd61984d-518c-44f2-8a18-8bda81bb6af3-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"bd61984d-518c-44f2-8a18-8bda81bb6af3\") " pod="openstack/rabbitmq-server-0"
Jan 31 04:08:59 crc kubenswrapper[4827]: I0131 04:08:59.449648 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/bd61984d-518c-44f2-8a18-8bda81bb6af3-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"bd61984d-518c-44f2-8a18-8bda81bb6af3\") " pod="openstack/rabbitmq-server-0"
Jan 31 04:08:59 crc kubenswrapper[4827]: I0131 04:08:59.450738 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/bd61984d-518c-44f2-8a18-8bda81bb6af3-pod-info\") pod \"rabbitmq-server-0\" (UID: \"bd61984d-518c-44f2-8a18-8bda81bb6af3\") " pod="openstack/rabbitmq-server-0"
Jan 31 04:08:59 crc kubenswrapper[4827]: I0131 04:08:59.452820 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bznd2\" (UniqueName: \"kubernetes.io/projected/bd61984d-518c-44f2-8a18-8bda81bb6af3-kube-api-access-bznd2\") pod \"rabbitmq-server-0\" (UID: \"bd61984d-518c-44f2-8a18-8bda81bb6af3\") " pod="openstack/rabbitmq-server-0"
Jan 31 04:08:59 crc kubenswrapper[4827]: I0131 04:08:59.477254 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"rabbitmq-server-0\" (UID: \"bd61984d-518c-44f2-8a18-8bda81bb6af3\") " pod="openstack/rabbitmq-server-0"
Jan 31 04:08:59 crc kubenswrapper[4827]: I0131 04:08:59.503361 4827 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0"
Jan 31 04:08:59 crc kubenswrapper[4827]: I0131 04:08:59.632349 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/02f954c7-6442-4974-827a-aef4a5690e8c-pod-info\") pod \"02f954c7-6442-4974-827a-aef4a5690e8c\" (UID: \"02f954c7-6442-4974-827a-aef4a5690e8c\") "
Jan 31 04:08:59 crc kubenswrapper[4827]: I0131 04:08:59.632453 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/02f954c7-6442-4974-827a-aef4a5690e8c-server-conf\") pod \"02f954c7-6442-4974-827a-aef4a5690e8c\" (UID: \"02f954c7-6442-4974-827a-aef4a5690e8c\") "
Jan 31 04:08:59 crc kubenswrapper[4827]: I0131 04:08:59.632517 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/02f954c7-6442-4974-827a-aef4a5690e8c-plugins-conf\") pod \"02f954c7-6442-4974-827a-aef4a5690e8c\" (UID: \"02f954c7-6442-4974-827a-aef4a5690e8c\") "
Jan 31 04:08:59 crc kubenswrapper[4827]: I0131 04:08:59.633128 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/02f954c7-6442-4974-827a-aef4a5690e8c-plugins-conf" (OuterVolumeSpecName: "plugins-conf") pod "02f954c7-6442-4974-827a-aef4a5690e8c" (UID: "02f954c7-6442-4974-827a-aef4a5690e8c"). InnerVolumeSpecName "plugins-conf". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 31 04:08:59 crc kubenswrapper[4827]: I0131 04:08:59.633221 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/02f954c7-6442-4974-827a-aef4a5690e8c-erlang-cookie-secret\") pod \"02f954c7-6442-4974-827a-aef4a5690e8c\" (UID: \"02f954c7-6442-4974-827a-aef4a5690e8c\") "
Jan 31 04:08:59 crc kubenswrapper[4827]: I0131 04:08:59.633586 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/02f954c7-6442-4974-827a-aef4a5690e8c-rabbitmq-confd\") pod \"02f954c7-6442-4974-827a-aef4a5690e8c\" (UID: \"02f954c7-6442-4974-827a-aef4a5690e8c\") "
Jan 31 04:08:59 crc kubenswrapper[4827]: I0131 04:08:59.633666 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/02f954c7-6442-4974-827a-aef4a5690e8c-rabbitmq-tls\") pod \"02f954c7-6442-4974-827a-aef4a5690e8c\" (UID: \"02f954c7-6442-4974-827a-aef4a5690e8c\") "
Jan 31 04:08:59 crc kubenswrapper[4827]: I0131 04:08:59.633702 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/02f954c7-6442-4974-827a-aef4a5690e8c-rabbitmq-plugins\") pod \"02f954c7-6442-4974-827a-aef4a5690e8c\" (UID: \"02f954c7-6442-4974-827a-aef4a5690e8c\") "
Jan 31 04:08:59 crc kubenswrapper[4827]: I0131 04:08:59.633760 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/02f954c7-6442-4974-827a-aef4a5690e8c-config-data\") pod \"02f954c7-6442-4974-827a-aef4a5690e8c\" (UID: \"02f954c7-6442-4974-827a-aef4a5690e8c\") "
Jan 31 04:08:59 crc kubenswrapper[4827]: I0131 04:08:59.633786 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/02f954c7-6442-4974-827a-aef4a5690e8c-rabbitmq-erlang-cookie\") pod \"02f954c7-6442-4974-827a-aef4a5690e8c\" (UID: \"02f954c7-6442-4974-827a-aef4a5690e8c\") "
Jan 31 04:08:59 crc kubenswrapper[4827]: I0131 04:08:59.633804 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"persistence\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"02f954c7-6442-4974-827a-aef4a5690e8c\" (UID: \"02f954c7-6442-4974-827a-aef4a5690e8c\") "
Jan 31 04:08:59 crc kubenswrapper[4827]: I0131 04:08:59.633845 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vpjh7\" (UniqueName: \"kubernetes.io/projected/02f954c7-6442-4974-827a-aef4a5690e8c-kube-api-access-vpjh7\") pod \"02f954c7-6442-4974-827a-aef4a5690e8c\" (UID: \"02f954c7-6442-4974-827a-aef4a5690e8c\") "
Jan 31 04:08:59 crc kubenswrapper[4827]: I0131 04:08:59.634379 4827 reconciler_common.go:293] "Volume detached for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/02f954c7-6442-4974-827a-aef4a5690e8c-plugins-conf\") on node \"crc\" DevicePath \"\""
Jan 31 04:08:59 crc kubenswrapper[4827]: I0131 04:08:59.635247 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/02f954c7-6442-4974-827a-aef4a5690e8c-rabbitmq-plugins" (OuterVolumeSpecName: "rabbitmq-plugins") pod "02f954c7-6442-4974-827a-aef4a5690e8c" (UID: "02f954c7-6442-4974-827a-aef4a5690e8c"). InnerVolumeSpecName "rabbitmq-plugins". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 31 04:08:59 crc kubenswrapper[4827]: I0131 04:08:59.636757 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/02f954c7-6442-4974-827a-aef4a5690e8c-rabbitmq-erlang-cookie" (OuterVolumeSpecName: "rabbitmq-erlang-cookie") pod "02f954c7-6442-4974-827a-aef4a5690e8c" (UID: "02f954c7-6442-4974-827a-aef4a5690e8c"). InnerVolumeSpecName "rabbitmq-erlang-cookie". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 31 04:08:59 crc kubenswrapper[4827]: I0131 04:08:59.636797 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/downward-api/02f954c7-6442-4974-827a-aef4a5690e8c-pod-info" (OuterVolumeSpecName: "pod-info") pod "02f954c7-6442-4974-827a-aef4a5690e8c" (UID: "02f954c7-6442-4974-827a-aef4a5690e8c"). InnerVolumeSpecName "pod-info". PluginName "kubernetes.io/downward-api", VolumeGidValue ""
Jan 31 04:08:59 crc kubenswrapper[4827]: I0131 04:08:59.637817 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/02f954c7-6442-4974-827a-aef4a5690e8c-kube-api-access-vpjh7" (OuterVolumeSpecName: "kube-api-access-vpjh7") pod "02f954c7-6442-4974-827a-aef4a5690e8c" (UID: "02f954c7-6442-4974-827a-aef4a5690e8c"). InnerVolumeSpecName "kube-api-access-vpjh7". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 31 04:08:59 crc kubenswrapper[4827]: I0131 04:08:59.638125 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/02f954c7-6442-4974-827a-aef4a5690e8c-erlang-cookie-secret" (OuterVolumeSpecName: "erlang-cookie-secret") pod "02f954c7-6442-4974-827a-aef4a5690e8c" (UID: "02f954c7-6442-4974-827a-aef4a5690e8c"). InnerVolumeSpecName "erlang-cookie-secret". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 31 04:08:59 crc kubenswrapper[4827]: I0131 04:08:59.638749 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage07-crc" (OuterVolumeSpecName: "persistence") pod "02f954c7-6442-4974-827a-aef4a5690e8c" (UID: "02f954c7-6442-4974-827a-aef4a5690e8c"). InnerVolumeSpecName "local-storage07-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue ""
Jan 31 04:08:59 crc kubenswrapper[4827]: I0131 04:08:59.639755 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/02f954c7-6442-4974-827a-aef4a5690e8c-rabbitmq-tls" (OuterVolumeSpecName: "rabbitmq-tls") pod "02f954c7-6442-4974-827a-aef4a5690e8c" (UID: "02f954c7-6442-4974-827a-aef4a5690e8c"). InnerVolumeSpecName "rabbitmq-tls". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 31 04:08:59 crc kubenswrapper[4827]: I0131 04:08:59.659618 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/02f954c7-6442-4974-827a-aef4a5690e8c-config-data" (OuterVolumeSpecName: "config-data") pod "02f954c7-6442-4974-827a-aef4a5690e8c" (UID: "02f954c7-6442-4974-827a-aef4a5690e8c"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 31 04:08:59 crc kubenswrapper[4827]: I0131 04:08:59.684430 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/02f954c7-6442-4974-827a-aef4a5690e8c-server-conf" (OuterVolumeSpecName: "server-conf") pod "02f954c7-6442-4974-827a-aef4a5690e8c" (UID: "02f954c7-6442-4974-827a-aef4a5690e8c"). InnerVolumeSpecName "server-conf". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 31 04:08:59 crc kubenswrapper[4827]: I0131 04:08:59.749185 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/02f954c7-6442-4974-827a-aef4a5690e8c-rabbitmq-confd" (OuterVolumeSpecName: "rabbitmq-confd") pod "02f954c7-6442-4974-827a-aef4a5690e8c" (UID: "02f954c7-6442-4974-827a-aef4a5690e8c"). InnerVolumeSpecName "rabbitmq-confd". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 31 04:08:59 crc kubenswrapper[4827]: I0131 04:08:59.750730 4827 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vpjh7\" (UniqueName: \"kubernetes.io/projected/02f954c7-6442-4974-827a-aef4a5690e8c-kube-api-access-vpjh7\") on node \"crc\" DevicePath \"\""
Jan 31 04:08:59 crc kubenswrapper[4827]: I0131 04:08:59.750759 4827 reconciler_common.go:293] "Volume detached for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/02f954c7-6442-4974-827a-aef4a5690e8c-pod-info\") on node \"crc\" DevicePath \"\""
Jan 31 04:08:59 crc kubenswrapper[4827]: I0131 04:08:59.750772 4827 reconciler_common.go:293] "Volume detached for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/02f954c7-6442-4974-827a-aef4a5690e8c-server-conf\") on node \"crc\" DevicePath \"\""
Jan 31 04:08:59 crc kubenswrapper[4827]: I0131 04:08:59.750785 4827 reconciler_common.go:293] "Volume detached for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/02f954c7-6442-4974-827a-aef4a5690e8c-erlang-cookie-secret\") on node \"crc\" DevicePath \"\""
Jan 31 04:08:59 crc kubenswrapper[4827]: I0131 04:08:59.750797 4827 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/02f954c7-6442-4974-827a-aef4a5690e8c-rabbitmq-confd\") on node \"crc\" DevicePath \"\""
Jan 31 04:08:59 crc kubenswrapper[4827]: I0131 04:08:59.750808 4827 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/02f954c7-6442-4974-827a-aef4a5690e8c-rabbitmq-tls\") on node \"crc\" DevicePath \"\""
Jan 31 04:08:59 crc kubenswrapper[4827]: I0131 04:08:59.750819 4827 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/02f954c7-6442-4974-827a-aef4a5690e8c-rabbitmq-plugins\") on node \"crc\" DevicePath \"\""
Jan 31 04:08:59 crc kubenswrapper[4827]: I0131 04:08:59.750834 4827 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/02f954c7-6442-4974-827a-aef4a5690e8c-config-data\") on node \"crc\" DevicePath \"\""
Jan 31 04:08:59 crc kubenswrapper[4827]: I0131 04:08:59.750845 4827 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/02f954c7-6442-4974-827a-aef4a5690e8c-rabbitmq-erlang-cookie\") on node \"crc\" DevicePath \"\""
Jan 31 04:08:59 crc kubenswrapper[4827]: I0131 04:08:59.750940 4827 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") on node \"crc\" "
Jan 31 04:08:59 crc kubenswrapper[4827]: I0131 04:08:59.771324 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0"
Jan 31 04:08:59 crc kubenswrapper[4827]: I0131 04:08:59.775078 4827 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage07-crc" (UniqueName: "kubernetes.io/local-volume/local-storage07-crc") on node "crc"
Jan 31 04:08:59 crc kubenswrapper[4827]: I0131 04:08:59.852472 4827 reconciler_common.go:293] "Volume detached for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") on node \"crc\" DevicePath \"\""
Jan 31 04:09:00 crc kubenswrapper[4827]: I0131 04:09:00.048937 4827 generic.go:334] "Generic (PLEG): container finished" podID="02f954c7-6442-4974-827a-aef4a5690e8c" containerID="2fcae6a3381f9d20d9bab34e4ab0e634c6a6c572a086dad05b16e7438d43479a" exitCode=0
Jan 31 04:09:00 crc kubenswrapper[4827]: I0131 04:09:00.049015 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"02f954c7-6442-4974-827a-aef4a5690e8c","Type":"ContainerDied","Data":"2fcae6a3381f9d20d9bab34e4ab0e634c6a6c572a086dad05b16e7438d43479a"}
Jan 31 04:09:00 crc kubenswrapper[4827]: I0131 04:09:00.049397 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"02f954c7-6442-4974-827a-aef4a5690e8c","Type":"ContainerDied","Data":"a27ba579b7604f3893f963d970d878261c506965e45bbd75c68c7adfd36c31f0"}
Jan 31 04:09:00 crc kubenswrapper[4827]: I0131 04:09:00.049448 4827 scope.go:117] "RemoveContainer" containerID="2fcae6a3381f9d20d9bab34e4ab0e634c6a6c572a086dad05b16e7438d43479a"
Jan 31 04:09:00 crc kubenswrapper[4827]: I0131 04:09:00.049045 4827 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0"
Jan 31 04:09:00 crc kubenswrapper[4827]: I0131 04:09:00.105727 4827 scope.go:117] "RemoveContainer" containerID="a3c94b6b53a56855eba7c9871d6fbc9d197f581c259d27c512976ebe225e2f9f"
Jan 31 04:09:00 crc kubenswrapper[4827]: I0131 04:09:00.106214 4827 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-cell1-server-0"]
Jan 31 04:09:00 crc kubenswrapper[4827]: I0131 04:09:00.126372 4827 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="237362ad-03ab-48a0-916d-1b140b4727d5" path="/var/lib/kubelet/pods/237362ad-03ab-48a0-916d-1b140b4727d5/volumes"
Jan 31 04:09:00 crc kubenswrapper[4827]: I0131 04:09:00.132675 4827 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/rabbitmq-cell1-server-0"]
Jan 31 04:09:00 crc kubenswrapper[4827]: I0131 04:09:00.140247 4827 scope.go:117] "RemoveContainer" containerID="2fcae6a3381f9d20d9bab34e4ab0e634c6a6c572a086dad05b16e7438d43479a"
Jan 31 04:09:00 crc kubenswrapper[4827]: E0131 04:09:00.140639 4827 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2fcae6a3381f9d20d9bab34e4ab0e634c6a6c572a086dad05b16e7438d43479a\": container with ID starting with 2fcae6a3381f9d20d9bab34e4ab0e634c6a6c572a086dad05b16e7438d43479a not found: ID does not exist" containerID="2fcae6a3381f9d20d9bab34e4ab0e634c6a6c572a086dad05b16e7438d43479a"
Jan 31 04:09:00 crc kubenswrapper[4827]: I0131 04:09:00.140676 4827 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2fcae6a3381f9d20d9bab34e4ab0e634c6a6c572a086dad05b16e7438d43479a"} err="failed to get container status \"2fcae6a3381f9d20d9bab34e4ab0e634c6a6c572a086dad05b16e7438d43479a\": rpc error: code = NotFound desc = could not find container \"2fcae6a3381f9d20d9bab34e4ab0e634c6a6c572a086dad05b16e7438d43479a\": container with ID starting with 2fcae6a3381f9d20d9bab34e4ab0e634c6a6c572a086dad05b16e7438d43479a not found: ID does not exist"
Jan 31 04:09:00 crc kubenswrapper[4827]: I0131 04:09:00.140702 4827 scope.go:117] "RemoveContainer" containerID="a3c94b6b53a56855eba7c9871d6fbc9d197f581c259d27c512976ebe225e2f9f"
Jan 31 04:09:00 crc kubenswrapper[4827]: E0131 04:09:00.141015 4827 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a3c94b6b53a56855eba7c9871d6fbc9d197f581c259d27c512976ebe225e2f9f\": container with ID starting with a3c94b6b53a56855eba7c9871d6fbc9d197f581c259d27c512976ebe225e2f9f not found: ID does not exist" containerID="a3c94b6b53a56855eba7c9871d6fbc9d197f581c259d27c512976ebe225e2f9f"
Jan 31 04:09:00 crc kubenswrapper[4827]: I0131 04:09:00.141043 4827 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a3c94b6b53a56855eba7c9871d6fbc9d197f581c259d27c512976ebe225e2f9f"} err="failed to get container status \"a3c94b6b53a56855eba7c9871d6fbc9d197f581c259d27c512976ebe225e2f9f\": rpc error: code = NotFound desc = could not find container \"a3c94b6b53a56855eba7c9871d6fbc9d197f581c259d27c512976ebe225e2f9f\": container with ID starting with a3c94b6b53a56855eba7c9871d6fbc9d197f581c259d27c512976ebe225e2f9f not found: ID does not exist"
Jan 31 04:09:00 crc kubenswrapper[4827]: I0131 04:09:00.143302 4827 kubelet.go:2421]
"SyncLoop ADD" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 31 04:09:00 crc kubenswrapper[4827]: E0131 04:09:00.143697 4827 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="02f954c7-6442-4974-827a-aef4a5690e8c" containerName="setup-container" Jan 31 04:09:00 crc kubenswrapper[4827]: I0131 04:09:00.143718 4827 state_mem.go:107] "Deleted CPUSet assignment" podUID="02f954c7-6442-4974-827a-aef4a5690e8c" containerName="setup-container" Jan 31 04:09:00 crc kubenswrapper[4827]: E0131 04:09:00.143732 4827 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="02f954c7-6442-4974-827a-aef4a5690e8c" containerName="rabbitmq" Jan 31 04:09:00 crc kubenswrapper[4827]: I0131 04:09:00.143740 4827 state_mem.go:107] "Deleted CPUSet assignment" podUID="02f954c7-6442-4974-827a-aef4a5690e8c" containerName="rabbitmq" Jan 31 04:09:00 crc kubenswrapper[4827]: I0131 04:09:00.143963 4827 memory_manager.go:354] "RemoveStaleState removing state" podUID="02f954c7-6442-4974-827a-aef4a5690e8c" containerName="rabbitmq" Jan 31 04:09:00 crc kubenswrapper[4827]: I0131 04:09:00.145007 4827 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Jan 31 04:09:00 crc kubenswrapper[4827]: I0131 04:09:00.147654 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-default-user" Jan 31 04:09:00 crc kubenswrapper[4827]: I0131 04:09:00.147913 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-server-dockercfg-thvnx" Jan 31 04:09:00 crc kubenswrapper[4827]: I0131 04:09:00.148032 4827 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-plugins-conf" Jan 31 04:09:00 crc kubenswrapper[4827]: I0131 04:09:00.148165 4827 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-config-data" Jan 31 04:09:00 crc kubenswrapper[4827]: I0131 04:09:00.148346 4827 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-server-conf" Jan 31 04:09:00 crc kubenswrapper[4827]: I0131 04:09:00.148495 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-erlang-cookie" Jan 31 04:09:00 crc kubenswrapper[4827]: I0131 04:09:00.148778 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-cell1-svc" Jan 31 04:09:00 crc kubenswrapper[4827]: I0131 04:09:00.151138 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 31 04:09:00 crc kubenswrapper[4827]: I0131 04:09:00.276014 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/92323497-4fa1-43f6-98b0-08fa31c47d3a-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"92323497-4fa1-43f6-98b0-08fa31c47d3a\") " pod="openstack/rabbitmq-cell1-server-0" Jan 31 04:09:00 crc kubenswrapper[4827]: I0131 04:09:00.276063 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/92323497-4fa1-43f6-98b0-08fa31c47d3a-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"92323497-4fa1-43f6-98b0-08fa31c47d3a\") " pod="openstack/rabbitmq-cell1-server-0" Jan 31 04:09:00 crc kubenswrapper[4827]: I0131 04:09:00.276270 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w72ct\" (UniqueName: \"kubernetes.io/projected/92323497-4fa1-43f6-98b0-08fa31c47d3a-kube-api-access-w72ct\") pod \"rabbitmq-cell1-server-0\" (UID: \"92323497-4fa1-43f6-98b0-08fa31c47d3a\") " pod="openstack/rabbitmq-cell1-server-0" Jan 31 04:09:00 crc kubenswrapper[4827]: I0131 04:09:00.276530 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/92323497-4fa1-43f6-98b0-08fa31c47d3a-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"92323497-4fa1-43f6-98b0-08fa31c47d3a\") " pod="openstack/rabbitmq-cell1-server-0" Jan 31 04:09:00 crc kubenswrapper[4827]: I0131 04:09:00.276611 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"92323497-4fa1-43f6-98b0-08fa31c47d3a\") " pod="openstack/rabbitmq-cell1-server-0" Jan 31 04:09:00 crc kubenswrapper[4827]: I0131 04:09:00.276756 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/92323497-4fa1-43f6-98b0-08fa31c47d3a-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"92323497-4fa1-43f6-98b0-08fa31c47d3a\") " pod="openstack/rabbitmq-cell1-server-0" Jan 31 04:09:00 crc kubenswrapper[4827]: I0131 04:09:00.276800 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" 
(UniqueName: \"kubernetes.io/empty-dir/92323497-4fa1-43f6-98b0-08fa31c47d3a-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"92323497-4fa1-43f6-98b0-08fa31c47d3a\") " pod="openstack/rabbitmq-cell1-server-0" Jan 31 04:09:00 crc kubenswrapper[4827]: I0131 04:09:00.276822 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/92323497-4fa1-43f6-98b0-08fa31c47d3a-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"92323497-4fa1-43f6-98b0-08fa31c47d3a\") " pod="openstack/rabbitmq-cell1-server-0" Jan 31 04:09:00 crc kubenswrapper[4827]: I0131 04:09:00.276851 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/92323497-4fa1-43f6-98b0-08fa31c47d3a-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"92323497-4fa1-43f6-98b0-08fa31c47d3a\") " pod="openstack/rabbitmq-cell1-server-0" Jan 31 04:09:00 crc kubenswrapper[4827]: I0131 04:09:00.276899 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/92323497-4fa1-43f6-98b0-08fa31c47d3a-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"92323497-4fa1-43f6-98b0-08fa31c47d3a\") " pod="openstack/rabbitmq-cell1-server-0" Jan 31 04:09:00 crc kubenswrapper[4827]: I0131 04:09:00.276944 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/92323497-4fa1-43f6-98b0-08fa31c47d3a-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"92323497-4fa1-43f6-98b0-08fa31c47d3a\") " pod="openstack/rabbitmq-cell1-server-0" Jan 31 04:09:00 crc kubenswrapper[4827]: I0131 04:09:00.353788 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Jan 31 04:09:00 crc 
kubenswrapper[4827]: I0131 04:09:00.378267 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/92323497-4fa1-43f6-98b0-08fa31c47d3a-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"92323497-4fa1-43f6-98b0-08fa31c47d3a\") " pod="openstack/rabbitmq-cell1-server-0" Jan 31 04:09:00 crc kubenswrapper[4827]: I0131 04:09:00.378586 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/92323497-4fa1-43f6-98b0-08fa31c47d3a-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"92323497-4fa1-43f6-98b0-08fa31c47d3a\") " pod="openstack/rabbitmq-cell1-server-0" Jan 31 04:09:00 crc kubenswrapper[4827]: I0131 04:09:00.378638 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/92323497-4fa1-43f6-98b0-08fa31c47d3a-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"92323497-4fa1-43f6-98b0-08fa31c47d3a\") " pod="openstack/rabbitmq-cell1-server-0" Jan 31 04:09:00 crc kubenswrapper[4827]: I0131 04:09:00.378665 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/92323497-4fa1-43f6-98b0-08fa31c47d3a-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"92323497-4fa1-43f6-98b0-08fa31c47d3a\") " pod="openstack/rabbitmq-cell1-server-0" Jan 31 04:09:00 crc kubenswrapper[4827]: I0131 04:09:00.378693 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/92323497-4fa1-43f6-98b0-08fa31c47d3a-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"92323497-4fa1-43f6-98b0-08fa31c47d3a\") " pod="openstack/rabbitmq-cell1-server-0" Jan 31 04:09:00 crc kubenswrapper[4827]: I0131 04:09:00.378807 4827 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/92323497-4fa1-43f6-98b0-08fa31c47d3a-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"92323497-4fa1-43f6-98b0-08fa31c47d3a\") " pod="openstack/rabbitmq-cell1-server-0" Jan 31 04:09:00 crc kubenswrapper[4827]: I0131 04:09:00.378834 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/92323497-4fa1-43f6-98b0-08fa31c47d3a-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"92323497-4fa1-43f6-98b0-08fa31c47d3a\") " pod="openstack/rabbitmq-cell1-server-0" Jan 31 04:09:00 crc kubenswrapper[4827]: I0131 04:09:00.378962 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w72ct\" (UniqueName: \"kubernetes.io/projected/92323497-4fa1-43f6-98b0-08fa31c47d3a-kube-api-access-w72ct\") pod \"rabbitmq-cell1-server-0\" (UID: \"92323497-4fa1-43f6-98b0-08fa31c47d3a\") " pod="openstack/rabbitmq-cell1-server-0" Jan 31 04:09:00 crc kubenswrapper[4827]: I0131 04:09:00.379002 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/92323497-4fa1-43f6-98b0-08fa31c47d3a-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"92323497-4fa1-43f6-98b0-08fa31c47d3a\") " pod="openstack/rabbitmq-cell1-server-0" Jan 31 04:09:00 crc kubenswrapper[4827]: I0131 04:09:00.379079 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/92323497-4fa1-43f6-98b0-08fa31c47d3a-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"92323497-4fa1-43f6-98b0-08fa31c47d3a\") " pod="openstack/rabbitmq-cell1-server-0" Jan 31 04:09:00 crc kubenswrapper[4827]: I0131 04:09:00.380056 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: 
\"kubernetes.io/configmap/92323497-4fa1-43f6-98b0-08fa31c47d3a-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"92323497-4fa1-43f6-98b0-08fa31c47d3a\") " pod="openstack/rabbitmq-cell1-server-0" Jan 31 04:09:00 crc kubenswrapper[4827]: I0131 04:09:00.380209 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/92323497-4fa1-43f6-98b0-08fa31c47d3a-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"92323497-4fa1-43f6-98b0-08fa31c47d3a\") " pod="openstack/rabbitmq-cell1-server-0" Jan 31 04:09:00 crc kubenswrapper[4827]: I0131 04:09:00.380238 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"92323497-4fa1-43f6-98b0-08fa31c47d3a\") " pod="openstack/rabbitmq-cell1-server-0" Jan 31 04:09:00 crc kubenswrapper[4827]: I0131 04:09:00.380276 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/92323497-4fa1-43f6-98b0-08fa31c47d3a-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"92323497-4fa1-43f6-98b0-08fa31c47d3a\") " pod="openstack/rabbitmq-cell1-server-0" Jan 31 04:09:00 crc kubenswrapper[4827]: I0131 04:09:00.380610 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/92323497-4fa1-43f6-98b0-08fa31c47d3a-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"92323497-4fa1-43f6-98b0-08fa31c47d3a\") " pod="openstack/rabbitmq-cell1-server-0" Jan 31 04:09:00 crc kubenswrapper[4827]: I0131 04:09:00.381085 4827 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"92323497-4fa1-43f6-98b0-08fa31c47d3a\") device mount 
path \"/mnt/openstack/pv07\"" pod="openstack/rabbitmq-cell1-server-0" Jan 31 04:09:00 crc kubenswrapper[4827]: I0131 04:09:00.381373 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/92323497-4fa1-43f6-98b0-08fa31c47d3a-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"92323497-4fa1-43f6-98b0-08fa31c47d3a\") " pod="openstack/rabbitmq-cell1-server-0" Jan 31 04:09:00 crc kubenswrapper[4827]: I0131 04:09:00.381824 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/92323497-4fa1-43f6-98b0-08fa31c47d3a-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"92323497-4fa1-43f6-98b0-08fa31c47d3a\") " pod="openstack/rabbitmq-cell1-server-0" Jan 31 04:09:00 crc kubenswrapper[4827]: I0131 04:09:00.383764 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/92323497-4fa1-43f6-98b0-08fa31c47d3a-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"92323497-4fa1-43f6-98b0-08fa31c47d3a\") " pod="openstack/rabbitmq-cell1-server-0" Jan 31 04:09:00 crc kubenswrapper[4827]: I0131 04:09:00.383872 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/92323497-4fa1-43f6-98b0-08fa31c47d3a-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"92323497-4fa1-43f6-98b0-08fa31c47d3a\") " pod="openstack/rabbitmq-cell1-server-0" Jan 31 04:09:00 crc kubenswrapper[4827]: I0131 04:09:00.384456 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/92323497-4fa1-43f6-98b0-08fa31c47d3a-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"92323497-4fa1-43f6-98b0-08fa31c47d3a\") " pod="openstack/rabbitmq-cell1-server-0" Jan 31 04:09:00 crc kubenswrapper[4827]: I0131 04:09:00.402824 4827 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w72ct\" (UniqueName: \"kubernetes.io/projected/92323497-4fa1-43f6-98b0-08fa31c47d3a-kube-api-access-w72ct\") pod \"rabbitmq-cell1-server-0\" (UID: \"92323497-4fa1-43f6-98b0-08fa31c47d3a\") " pod="openstack/rabbitmq-cell1-server-0" Jan 31 04:09:00 crc kubenswrapper[4827]: I0131 04:09:00.417812 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"92323497-4fa1-43f6-98b0-08fa31c47d3a\") " pod="openstack/rabbitmq-cell1-server-0" Jan 31 04:09:00 crc kubenswrapper[4827]: I0131 04:09:00.464017 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Jan 31 04:09:00 crc kubenswrapper[4827]: I0131 04:09:00.646030 4827 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-6447ccbd8f-x7j6w"] Jan 31 04:09:00 crc kubenswrapper[4827]: I0131 04:09:00.647585 4827 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6447ccbd8f-x7j6w" Jan 31 04:09:00 crc kubenswrapper[4827]: I0131 04:09:00.651000 4827 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-edpm-ipam" Jan 31 04:09:00 crc kubenswrapper[4827]: I0131 04:09:00.665625 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6447ccbd8f-x7j6w"] Jan 31 04:09:00 crc kubenswrapper[4827]: I0131 04:09:00.685897 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/abeb8ccf-500c-4d34-80c6-6bb9a689447a-ovsdbserver-nb\") pod \"dnsmasq-dns-6447ccbd8f-x7j6w\" (UID: \"abeb8ccf-500c-4d34-80c6-6bb9a689447a\") " pod="openstack/dnsmasq-dns-6447ccbd8f-x7j6w" Jan 31 04:09:00 crc kubenswrapper[4827]: I0131 04:09:00.686237 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/abeb8ccf-500c-4d34-80c6-6bb9a689447a-dns-svc\") pod \"dnsmasq-dns-6447ccbd8f-x7j6w\" (UID: \"abeb8ccf-500c-4d34-80c6-6bb9a689447a\") " pod="openstack/dnsmasq-dns-6447ccbd8f-x7j6w" Jan 31 04:09:00 crc kubenswrapper[4827]: I0131 04:09:00.686269 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/abeb8ccf-500c-4d34-80c6-6bb9a689447a-config\") pod \"dnsmasq-dns-6447ccbd8f-x7j6w\" (UID: \"abeb8ccf-500c-4d34-80c6-6bb9a689447a\") " pod="openstack/dnsmasq-dns-6447ccbd8f-x7j6w" Jan 31 04:09:00 crc kubenswrapper[4827]: I0131 04:09:00.686293 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/abeb8ccf-500c-4d34-80c6-6bb9a689447a-openstack-edpm-ipam\") pod \"dnsmasq-dns-6447ccbd8f-x7j6w\" (UID: \"abeb8ccf-500c-4d34-80c6-6bb9a689447a\") " 
pod="openstack/dnsmasq-dns-6447ccbd8f-x7j6w" Jan 31 04:09:00 crc kubenswrapper[4827]: I0131 04:09:00.686315 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/abeb8ccf-500c-4d34-80c6-6bb9a689447a-ovsdbserver-sb\") pod \"dnsmasq-dns-6447ccbd8f-x7j6w\" (UID: \"abeb8ccf-500c-4d34-80c6-6bb9a689447a\") " pod="openstack/dnsmasq-dns-6447ccbd8f-x7j6w" Jan 31 04:09:00 crc kubenswrapper[4827]: I0131 04:09:00.686339 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xlr5c\" (UniqueName: \"kubernetes.io/projected/abeb8ccf-500c-4d34-80c6-6bb9a689447a-kube-api-access-xlr5c\") pod \"dnsmasq-dns-6447ccbd8f-x7j6w\" (UID: \"abeb8ccf-500c-4d34-80c6-6bb9a689447a\") " pod="openstack/dnsmasq-dns-6447ccbd8f-x7j6w" Jan 31 04:09:00 crc kubenswrapper[4827]: I0131 04:09:00.712349 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 31 04:09:00 crc kubenswrapper[4827]: W0131 04:09:00.719047 4827 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod92323497_4fa1_43f6_98b0_08fa31c47d3a.slice/crio-9d4c620dd6ac2b1a7208c93036071cc8dfa0f035af1bf7c220f1c90b4b65adb3 WatchSource:0}: Error finding container 9d4c620dd6ac2b1a7208c93036071cc8dfa0f035af1bf7c220f1c90b4b65adb3: Status 404 returned error can't find the container with id 9d4c620dd6ac2b1a7208c93036071cc8dfa0f035af1bf7c220f1c90b4b65adb3 Jan 31 04:09:00 crc kubenswrapper[4827]: I0131 04:09:00.787676 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/abeb8ccf-500c-4d34-80c6-6bb9a689447a-dns-svc\") pod \"dnsmasq-dns-6447ccbd8f-x7j6w\" (UID: \"abeb8ccf-500c-4d34-80c6-6bb9a689447a\") " pod="openstack/dnsmasq-dns-6447ccbd8f-x7j6w" Jan 31 04:09:00 crc kubenswrapper[4827]: I0131 
04:09:00.787782 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/abeb8ccf-500c-4d34-80c6-6bb9a689447a-config\") pod \"dnsmasq-dns-6447ccbd8f-x7j6w\" (UID: \"abeb8ccf-500c-4d34-80c6-6bb9a689447a\") " pod="openstack/dnsmasq-dns-6447ccbd8f-x7j6w" Jan 31 04:09:00 crc kubenswrapper[4827]: I0131 04:09:00.787808 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/abeb8ccf-500c-4d34-80c6-6bb9a689447a-openstack-edpm-ipam\") pod \"dnsmasq-dns-6447ccbd8f-x7j6w\" (UID: \"abeb8ccf-500c-4d34-80c6-6bb9a689447a\") " pod="openstack/dnsmasq-dns-6447ccbd8f-x7j6w" Jan 31 04:09:00 crc kubenswrapper[4827]: I0131 04:09:00.787830 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/abeb8ccf-500c-4d34-80c6-6bb9a689447a-ovsdbserver-sb\") pod \"dnsmasq-dns-6447ccbd8f-x7j6w\" (UID: \"abeb8ccf-500c-4d34-80c6-6bb9a689447a\") " pod="openstack/dnsmasq-dns-6447ccbd8f-x7j6w" Jan 31 04:09:00 crc kubenswrapper[4827]: I0131 04:09:00.787858 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xlr5c\" (UniqueName: \"kubernetes.io/projected/abeb8ccf-500c-4d34-80c6-6bb9a689447a-kube-api-access-xlr5c\") pod \"dnsmasq-dns-6447ccbd8f-x7j6w\" (UID: \"abeb8ccf-500c-4d34-80c6-6bb9a689447a\") " pod="openstack/dnsmasq-dns-6447ccbd8f-x7j6w" Jan 31 04:09:00 crc kubenswrapper[4827]: I0131 04:09:00.787953 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/abeb8ccf-500c-4d34-80c6-6bb9a689447a-ovsdbserver-nb\") pod \"dnsmasq-dns-6447ccbd8f-x7j6w\" (UID: \"abeb8ccf-500c-4d34-80c6-6bb9a689447a\") " pod="openstack/dnsmasq-dns-6447ccbd8f-x7j6w" Jan 31 04:09:00 crc kubenswrapper[4827]: I0131 04:09:00.788558 4827 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/abeb8ccf-500c-4d34-80c6-6bb9a689447a-dns-svc\") pod \"dnsmasq-dns-6447ccbd8f-x7j6w\" (UID: \"abeb8ccf-500c-4d34-80c6-6bb9a689447a\") " pod="openstack/dnsmasq-dns-6447ccbd8f-x7j6w" Jan 31 04:09:00 crc kubenswrapper[4827]: I0131 04:09:00.788619 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/abeb8ccf-500c-4d34-80c6-6bb9a689447a-openstack-edpm-ipam\") pod \"dnsmasq-dns-6447ccbd8f-x7j6w\" (UID: \"abeb8ccf-500c-4d34-80c6-6bb9a689447a\") " pod="openstack/dnsmasq-dns-6447ccbd8f-x7j6w" Jan 31 04:09:00 crc kubenswrapper[4827]: I0131 04:09:00.788869 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/abeb8ccf-500c-4d34-80c6-6bb9a689447a-config\") pod \"dnsmasq-dns-6447ccbd8f-x7j6w\" (UID: \"abeb8ccf-500c-4d34-80c6-6bb9a689447a\") " pod="openstack/dnsmasq-dns-6447ccbd8f-x7j6w" Jan 31 04:09:00 crc kubenswrapper[4827]: I0131 04:09:00.788901 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/abeb8ccf-500c-4d34-80c6-6bb9a689447a-ovsdbserver-sb\") pod \"dnsmasq-dns-6447ccbd8f-x7j6w\" (UID: \"abeb8ccf-500c-4d34-80c6-6bb9a689447a\") " pod="openstack/dnsmasq-dns-6447ccbd8f-x7j6w" Jan 31 04:09:00 crc kubenswrapper[4827]: I0131 04:09:00.789174 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/abeb8ccf-500c-4d34-80c6-6bb9a689447a-ovsdbserver-nb\") pod \"dnsmasq-dns-6447ccbd8f-x7j6w\" (UID: \"abeb8ccf-500c-4d34-80c6-6bb9a689447a\") " pod="openstack/dnsmasq-dns-6447ccbd8f-x7j6w" Jan 31 04:09:00 crc kubenswrapper[4827]: I0131 04:09:00.803762 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xlr5c\" (UniqueName: 
\"kubernetes.io/projected/abeb8ccf-500c-4d34-80c6-6bb9a689447a-kube-api-access-xlr5c\") pod \"dnsmasq-dns-6447ccbd8f-x7j6w\" (UID: \"abeb8ccf-500c-4d34-80c6-6bb9a689447a\") " pod="openstack/dnsmasq-dns-6447ccbd8f-x7j6w" Jan 31 04:09:01 crc kubenswrapper[4827]: I0131 04:09:01.009302 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6447ccbd8f-x7j6w" Jan 31 04:09:01 crc kubenswrapper[4827]: I0131 04:09:01.057516 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"bd61984d-518c-44f2-8a18-8bda81bb6af3","Type":"ContainerStarted","Data":"af157b1cbc5c7df59b7eb8b600874f4ba6b6eda7cfdcaf27e670035bc3028799"} Jan 31 04:09:01 crc kubenswrapper[4827]: I0131 04:09:01.061349 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"92323497-4fa1-43f6-98b0-08fa31c47d3a","Type":"ContainerStarted","Data":"9d4c620dd6ac2b1a7208c93036071cc8dfa0f035af1bf7c220f1c90b4b65adb3"} Jan 31 04:09:01 crc kubenswrapper[4827]: I0131 04:09:01.264439 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6447ccbd8f-x7j6w"] Jan 31 04:09:01 crc kubenswrapper[4827]: W0131 04:09:01.391069 4827 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podabeb8ccf_500c_4d34_80c6_6bb9a689447a.slice/crio-ea90f732d80c2bc5405028fd74b2815424198438116399a772e4f4b26b787ee5 WatchSource:0}: Error finding container ea90f732d80c2bc5405028fd74b2815424198438116399a772e4f4b26b787ee5: Status 404 returned error can't find the container with id ea90f732d80c2bc5405028fd74b2815424198438116399a772e4f4b26b787ee5 Jan 31 04:09:02 crc kubenswrapper[4827]: I0131 04:09:02.073552 4827 generic.go:334] "Generic (PLEG): container finished" podID="abeb8ccf-500c-4d34-80c6-6bb9a689447a" containerID="9c335a79a6b44321cc742f85060f6ecdf52115e67df5a5bf7f5acfe2b0b02bc8" exitCode=0 Jan 31 04:09:02 crc 
kubenswrapper[4827]: I0131 04:09:02.073610 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6447ccbd8f-x7j6w" event={"ID":"abeb8ccf-500c-4d34-80c6-6bb9a689447a","Type":"ContainerDied","Data":"9c335a79a6b44321cc742f85060f6ecdf52115e67df5a5bf7f5acfe2b0b02bc8"} Jan 31 04:09:02 crc kubenswrapper[4827]: I0131 04:09:02.074057 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6447ccbd8f-x7j6w" event={"ID":"abeb8ccf-500c-4d34-80c6-6bb9a689447a","Type":"ContainerStarted","Data":"ea90f732d80c2bc5405028fd74b2815424198438116399a772e4f4b26b787ee5"} Jan 31 04:09:02 crc kubenswrapper[4827]: I0131 04:09:02.077487 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"bd61984d-518c-44f2-8a18-8bda81bb6af3","Type":"ContainerStarted","Data":"aec4e3d5ec5b9f0eccef39565fa957e9d6f665d7fe2d92f04d29b6a8ee28942f"} Jan 31 04:09:02 crc kubenswrapper[4827]: I0131 04:09:02.143373 4827 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="02f954c7-6442-4974-827a-aef4a5690e8c" path="/var/lib/kubelet/pods/02f954c7-6442-4974-827a-aef4a5690e8c/volumes" Jan 31 04:09:03 crc kubenswrapper[4827]: I0131 04:09:03.086453 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"92323497-4fa1-43f6-98b0-08fa31c47d3a","Type":"ContainerStarted","Data":"3569e7d736e69075c54a333c464402a1810657893dedfa13dcc620b6b03a5a71"} Jan 31 04:09:03 crc kubenswrapper[4827]: I0131 04:09:03.090028 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6447ccbd8f-x7j6w" event={"ID":"abeb8ccf-500c-4d34-80c6-6bb9a689447a","Type":"ContainerStarted","Data":"2d79a917801a8cc76e739ce82775bf1c96e26add575049ecba89981db8a576df"} Jan 31 04:09:03 crc kubenswrapper[4827]: I0131 04:09:03.090059 4827 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-6447ccbd8f-x7j6w" Jan 31 04:09:03 crc 
kubenswrapper[4827]: I0131 04:09:03.130432 4827 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-6447ccbd8f-x7j6w" podStartSLOduration=3.13040764 podStartE2EDuration="3.13040764s" podCreationTimestamp="2026-01-31 04:09:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 04:09:03.128521201 +0000 UTC m=+1335.815601650" watchObservedRunningTime="2026-01-31 04:09:03.13040764 +0000 UTC m=+1335.817488089" Jan 31 04:09:11 crc kubenswrapper[4827]: I0131 04:09:11.012092 4827 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-6447ccbd8f-x7j6w" Jan 31 04:09:11 crc kubenswrapper[4827]: I0131 04:09:11.107206 4827 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5b856c5697-h8gmp"] Jan 31 04:09:11 crc kubenswrapper[4827]: I0131 04:09:11.107804 4827 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-5b856c5697-h8gmp" podUID="ab1251de-0ef5-48f1-b9db-0a68965651cd" containerName="dnsmasq-dns" containerID="cri-o://80344761c75942bb693cd7fd41e8e9a1493d06e3e5e228e09bfa552823754ddd" gracePeriod=10 Jan 31 04:09:11 crc kubenswrapper[4827]: I0131 04:09:11.281645 4827 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-864d5fc68c-pvgtd"] Jan 31 04:09:11 crc kubenswrapper[4827]: I0131 04:09:11.293101 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-864d5fc68c-pvgtd"] Jan 31 04:09:11 crc kubenswrapper[4827]: I0131 04:09:11.293267 4827 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-864d5fc68c-pvgtd" Jan 31 04:09:11 crc kubenswrapper[4827]: I0131 04:09:11.430113 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bxk7q\" (UniqueName: \"kubernetes.io/projected/70e687cb-d396-46d3-890a-cd3cbe51186f-kube-api-access-bxk7q\") pod \"dnsmasq-dns-864d5fc68c-pvgtd\" (UID: \"70e687cb-d396-46d3-890a-cd3cbe51186f\") " pod="openstack/dnsmasq-dns-864d5fc68c-pvgtd" Jan 31 04:09:11 crc kubenswrapper[4827]: I0131 04:09:11.430368 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/70e687cb-d396-46d3-890a-cd3cbe51186f-config\") pod \"dnsmasq-dns-864d5fc68c-pvgtd\" (UID: \"70e687cb-d396-46d3-890a-cd3cbe51186f\") " pod="openstack/dnsmasq-dns-864d5fc68c-pvgtd" Jan 31 04:09:11 crc kubenswrapper[4827]: I0131 04:09:11.430600 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/70e687cb-d396-46d3-890a-cd3cbe51186f-openstack-edpm-ipam\") pod \"dnsmasq-dns-864d5fc68c-pvgtd\" (UID: \"70e687cb-d396-46d3-890a-cd3cbe51186f\") " pod="openstack/dnsmasq-dns-864d5fc68c-pvgtd" Jan 31 04:09:11 crc kubenswrapper[4827]: I0131 04:09:11.430724 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/70e687cb-d396-46d3-890a-cd3cbe51186f-ovsdbserver-sb\") pod \"dnsmasq-dns-864d5fc68c-pvgtd\" (UID: \"70e687cb-d396-46d3-890a-cd3cbe51186f\") " pod="openstack/dnsmasq-dns-864d5fc68c-pvgtd" Jan 31 04:09:11 crc kubenswrapper[4827]: I0131 04:09:11.430757 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/70e687cb-d396-46d3-890a-cd3cbe51186f-dns-svc\") pod 
\"dnsmasq-dns-864d5fc68c-pvgtd\" (UID: \"70e687cb-d396-46d3-890a-cd3cbe51186f\") " pod="openstack/dnsmasq-dns-864d5fc68c-pvgtd" Jan 31 04:09:11 crc kubenswrapper[4827]: I0131 04:09:11.430959 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/70e687cb-d396-46d3-890a-cd3cbe51186f-ovsdbserver-nb\") pod \"dnsmasq-dns-864d5fc68c-pvgtd\" (UID: \"70e687cb-d396-46d3-890a-cd3cbe51186f\") " pod="openstack/dnsmasq-dns-864d5fc68c-pvgtd" Jan 31 04:09:11 crc kubenswrapper[4827]: I0131 04:09:11.535156 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/70e687cb-d396-46d3-890a-cd3cbe51186f-openstack-edpm-ipam\") pod \"dnsmasq-dns-864d5fc68c-pvgtd\" (UID: \"70e687cb-d396-46d3-890a-cd3cbe51186f\") " pod="openstack/dnsmasq-dns-864d5fc68c-pvgtd" Jan 31 04:09:11 crc kubenswrapper[4827]: I0131 04:09:11.535489 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/70e687cb-d396-46d3-890a-cd3cbe51186f-ovsdbserver-sb\") pod \"dnsmasq-dns-864d5fc68c-pvgtd\" (UID: \"70e687cb-d396-46d3-890a-cd3cbe51186f\") " pod="openstack/dnsmasq-dns-864d5fc68c-pvgtd" Jan 31 04:09:11 crc kubenswrapper[4827]: I0131 04:09:11.535512 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/70e687cb-d396-46d3-890a-cd3cbe51186f-dns-svc\") pod \"dnsmasq-dns-864d5fc68c-pvgtd\" (UID: \"70e687cb-d396-46d3-890a-cd3cbe51186f\") " pod="openstack/dnsmasq-dns-864d5fc68c-pvgtd" Jan 31 04:09:11 crc kubenswrapper[4827]: I0131 04:09:11.535556 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/70e687cb-d396-46d3-890a-cd3cbe51186f-ovsdbserver-nb\") pod \"dnsmasq-dns-864d5fc68c-pvgtd\" (UID: 
\"70e687cb-d396-46d3-890a-cd3cbe51186f\") " pod="openstack/dnsmasq-dns-864d5fc68c-pvgtd" Jan 31 04:09:11 crc kubenswrapper[4827]: I0131 04:09:11.535577 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bxk7q\" (UniqueName: \"kubernetes.io/projected/70e687cb-d396-46d3-890a-cd3cbe51186f-kube-api-access-bxk7q\") pod \"dnsmasq-dns-864d5fc68c-pvgtd\" (UID: \"70e687cb-d396-46d3-890a-cd3cbe51186f\") " pod="openstack/dnsmasq-dns-864d5fc68c-pvgtd" Jan 31 04:09:11 crc kubenswrapper[4827]: I0131 04:09:11.535627 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/70e687cb-d396-46d3-890a-cd3cbe51186f-config\") pod \"dnsmasq-dns-864d5fc68c-pvgtd\" (UID: \"70e687cb-d396-46d3-890a-cd3cbe51186f\") " pod="openstack/dnsmasq-dns-864d5fc68c-pvgtd" Jan 31 04:09:11 crc kubenswrapper[4827]: I0131 04:09:11.538339 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/70e687cb-d396-46d3-890a-cd3cbe51186f-config\") pod \"dnsmasq-dns-864d5fc68c-pvgtd\" (UID: \"70e687cb-d396-46d3-890a-cd3cbe51186f\") " pod="openstack/dnsmasq-dns-864d5fc68c-pvgtd" Jan 31 04:09:11 crc kubenswrapper[4827]: I0131 04:09:11.538556 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/70e687cb-d396-46d3-890a-cd3cbe51186f-dns-svc\") pod \"dnsmasq-dns-864d5fc68c-pvgtd\" (UID: \"70e687cb-d396-46d3-890a-cd3cbe51186f\") " pod="openstack/dnsmasq-dns-864d5fc68c-pvgtd" Jan 31 04:09:11 crc kubenswrapper[4827]: I0131 04:09:11.539336 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/70e687cb-d396-46d3-890a-cd3cbe51186f-ovsdbserver-sb\") pod \"dnsmasq-dns-864d5fc68c-pvgtd\" (UID: \"70e687cb-d396-46d3-890a-cd3cbe51186f\") " pod="openstack/dnsmasq-dns-864d5fc68c-pvgtd" Jan 31 04:09:11 crc 
kubenswrapper[4827]: I0131 04:09:11.539612 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/70e687cb-d396-46d3-890a-cd3cbe51186f-openstack-edpm-ipam\") pod \"dnsmasq-dns-864d5fc68c-pvgtd\" (UID: \"70e687cb-d396-46d3-890a-cd3cbe51186f\") " pod="openstack/dnsmasq-dns-864d5fc68c-pvgtd" Jan 31 04:09:11 crc kubenswrapper[4827]: I0131 04:09:11.540418 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/70e687cb-d396-46d3-890a-cd3cbe51186f-ovsdbserver-nb\") pod \"dnsmasq-dns-864d5fc68c-pvgtd\" (UID: \"70e687cb-d396-46d3-890a-cd3cbe51186f\") " pod="openstack/dnsmasq-dns-864d5fc68c-pvgtd" Jan 31 04:09:11 crc kubenswrapper[4827]: I0131 04:09:11.560350 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bxk7q\" (UniqueName: \"kubernetes.io/projected/70e687cb-d396-46d3-890a-cd3cbe51186f-kube-api-access-bxk7q\") pod \"dnsmasq-dns-864d5fc68c-pvgtd\" (UID: \"70e687cb-d396-46d3-890a-cd3cbe51186f\") " pod="openstack/dnsmasq-dns-864d5fc68c-pvgtd" Jan 31 04:09:11 crc kubenswrapper[4827]: I0131 04:09:11.615414 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-864d5fc68c-pvgtd" Jan 31 04:09:11 crc kubenswrapper[4827]: I0131 04:09:11.681501 4827 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5b856c5697-h8gmp" Jan 31 04:09:11 crc kubenswrapper[4827]: I0131 04:09:11.840944 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ab1251de-0ef5-48f1-b9db-0a68965651cd-dns-svc\") pod \"ab1251de-0ef5-48f1-b9db-0a68965651cd\" (UID: \"ab1251de-0ef5-48f1-b9db-0a68965651cd\") " Jan 31 04:09:11 crc kubenswrapper[4827]: I0131 04:09:11.840994 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-b9smz\" (UniqueName: \"kubernetes.io/projected/ab1251de-0ef5-48f1-b9db-0a68965651cd-kube-api-access-b9smz\") pod \"ab1251de-0ef5-48f1-b9db-0a68965651cd\" (UID: \"ab1251de-0ef5-48f1-b9db-0a68965651cd\") " Jan 31 04:09:11 crc kubenswrapper[4827]: I0131 04:09:11.841035 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ab1251de-0ef5-48f1-b9db-0a68965651cd-ovsdbserver-sb\") pod \"ab1251de-0ef5-48f1-b9db-0a68965651cd\" (UID: \"ab1251de-0ef5-48f1-b9db-0a68965651cd\") " Jan 31 04:09:11 crc kubenswrapper[4827]: I0131 04:09:11.841115 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ab1251de-0ef5-48f1-b9db-0a68965651cd-config\") pod \"ab1251de-0ef5-48f1-b9db-0a68965651cd\" (UID: \"ab1251de-0ef5-48f1-b9db-0a68965651cd\") " Jan 31 04:09:11 crc kubenswrapper[4827]: I0131 04:09:11.841191 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ab1251de-0ef5-48f1-b9db-0a68965651cd-ovsdbserver-nb\") pod \"ab1251de-0ef5-48f1-b9db-0a68965651cd\" (UID: \"ab1251de-0ef5-48f1-b9db-0a68965651cd\") " Jan 31 04:09:11 crc kubenswrapper[4827]: I0131 04:09:11.846030 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/projected/ab1251de-0ef5-48f1-b9db-0a68965651cd-kube-api-access-b9smz" (OuterVolumeSpecName: "kube-api-access-b9smz") pod "ab1251de-0ef5-48f1-b9db-0a68965651cd" (UID: "ab1251de-0ef5-48f1-b9db-0a68965651cd"). InnerVolumeSpecName "kube-api-access-b9smz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 04:09:11 crc kubenswrapper[4827]: I0131 04:09:11.884614 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ab1251de-0ef5-48f1-b9db-0a68965651cd-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "ab1251de-0ef5-48f1-b9db-0a68965651cd" (UID: "ab1251de-0ef5-48f1-b9db-0a68965651cd"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 04:09:11 crc kubenswrapper[4827]: I0131 04:09:11.884643 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ab1251de-0ef5-48f1-b9db-0a68965651cd-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "ab1251de-0ef5-48f1-b9db-0a68965651cd" (UID: "ab1251de-0ef5-48f1-b9db-0a68965651cd"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 04:09:11 crc kubenswrapper[4827]: I0131 04:09:11.887117 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ab1251de-0ef5-48f1-b9db-0a68965651cd-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "ab1251de-0ef5-48f1-b9db-0a68965651cd" (UID: "ab1251de-0ef5-48f1-b9db-0a68965651cd"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 04:09:11 crc kubenswrapper[4827]: I0131 04:09:11.887450 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ab1251de-0ef5-48f1-b9db-0a68965651cd-config" (OuterVolumeSpecName: "config") pod "ab1251de-0ef5-48f1-b9db-0a68965651cd" (UID: "ab1251de-0ef5-48f1-b9db-0a68965651cd"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 04:09:11 crc kubenswrapper[4827]: I0131 04:09:11.943231 4827 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ab1251de-0ef5-48f1-b9db-0a68965651cd-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 31 04:09:11 crc kubenswrapper[4827]: I0131 04:09:11.943265 4827 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ab1251de-0ef5-48f1-b9db-0a68965651cd-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 31 04:09:11 crc kubenswrapper[4827]: I0131 04:09:11.943276 4827 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-b9smz\" (UniqueName: \"kubernetes.io/projected/ab1251de-0ef5-48f1-b9db-0a68965651cd-kube-api-access-b9smz\") on node \"crc\" DevicePath \"\"" Jan 31 04:09:11 crc kubenswrapper[4827]: I0131 04:09:11.943285 4827 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ab1251de-0ef5-48f1-b9db-0a68965651cd-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 31 04:09:11 crc kubenswrapper[4827]: I0131 04:09:11.943300 4827 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ab1251de-0ef5-48f1-b9db-0a68965651cd-config\") on node \"crc\" DevicePath \"\"" Jan 31 04:09:12 crc kubenswrapper[4827]: I0131 04:09:12.084914 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-864d5fc68c-pvgtd"] Jan 31 04:09:12 crc kubenswrapper[4827]: I0131 04:09:12.174415 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-864d5fc68c-pvgtd" event={"ID":"70e687cb-d396-46d3-890a-cd3cbe51186f","Type":"ContainerStarted","Data":"43f357d4074c7a9f64a21d17e0aa29a63651631ddf5b0605d65f1351e0fbc26b"} Jan 31 04:09:12 crc kubenswrapper[4827]: I0131 04:09:12.176431 4827 generic.go:334] "Generic (PLEG): container finished" 
podID="ab1251de-0ef5-48f1-b9db-0a68965651cd" containerID="80344761c75942bb693cd7fd41e8e9a1493d06e3e5e228e09bfa552823754ddd" exitCode=0 Jan 31 04:09:12 crc kubenswrapper[4827]: I0131 04:09:12.176478 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5b856c5697-h8gmp" event={"ID":"ab1251de-0ef5-48f1-b9db-0a68965651cd","Type":"ContainerDied","Data":"80344761c75942bb693cd7fd41e8e9a1493d06e3e5e228e09bfa552823754ddd"} Jan 31 04:09:12 crc kubenswrapper[4827]: I0131 04:09:12.176490 4827 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5b856c5697-h8gmp" Jan 31 04:09:12 crc kubenswrapper[4827]: I0131 04:09:12.176515 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5b856c5697-h8gmp" event={"ID":"ab1251de-0ef5-48f1-b9db-0a68965651cd","Type":"ContainerDied","Data":"b31f352abdbe939f281df6bfabadfc18e4983d94758316aef0884ceb7652169f"} Jan 31 04:09:12 crc kubenswrapper[4827]: I0131 04:09:12.176533 4827 scope.go:117] "RemoveContainer" containerID="80344761c75942bb693cd7fd41e8e9a1493d06e3e5e228e09bfa552823754ddd" Jan 31 04:09:12 crc kubenswrapper[4827]: I0131 04:09:12.212295 4827 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5b856c5697-h8gmp"] Jan 31 04:09:12 crc kubenswrapper[4827]: I0131 04:09:12.218739 4827 scope.go:117] "RemoveContainer" containerID="e9695ee2732e95e9c05e12f45cd96a45fd620fb76456e30e6211294ed3348d01" Jan 31 04:09:12 crc kubenswrapper[4827]: I0131 04:09:12.219727 4827 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-5b856c5697-h8gmp"] Jan 31 04:09:12 crc kubenswrapper[4827]: I0131 04:09:12.241000 4827 scope.go:117] "RemoveContainer" containerID="80344761c75942bb693cd7fd41e8e9a1493d06e3e5e228e09bfa552823754ddd" Jan 31 04:09:12 crc kubenswrapper[4827]: E0131 04:09:12.241359 4827 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find 
container \"80344761c75942bb693cd7fd41e8e9a1493d06e3e5e228e09bfa552823754ddd\": container with ID starting with 80344761c75942bb693cd7fd41e8e9a1493d06e3e5e228e09bfa552823754ddd not found: ID does not exist" containerID="80344761c75942bb693cd7fd41e8e9a1493d06e3e5e228e09bfa552823754ddd" Jan 31 04:09:12 crc kubenswrapper[4827]: I0131 04:09:12.241393 4827 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"80344761c75942bb693cd7fd41e8e9a1493d06e3e5e228e09bfa552823754ddd"} err="failed to get container status \"80344761c75942bb693cd7fd41e8e9a1493d06e3e5e228e09bfa552823754ddd\": rpc error: code = NotFound desc = could not find container \"80344761c75942bb693cd7fd41e8e9a1493d06e3e5e228e09bfa552823754ddd\": container with ID starting with 80344761c75942bb693cd7fd41e8e9a1493d06e3e5e228e09bfa552823754ddd not found: ID does not exist" Jan 31 04:09:12 crc kubenswrapper[4827]: I0131 04:09:12.241417 4827 scope.go:117] "RemoveContainer" containerID="e9695ee2732e95e9c05e12f45cd96a45fd620fb76456e30e6211294ed3348d01" Jan 31 04:09:12 crc kubenswrapper[4827]: E0131 04:09:12.241772 4827 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e9695ee2732e95e9c05e12f45cd96a45fd620fb76456e30e6211294ed3348d01\": container with ID starting with e9695ee2732e95e9c05e12f45cd96a45fd620fb76456e30e6211294ed3348d01 not found: ID does not exist" containerID="e9695ee2732e95e9c05e12f45cd96a45fd620fb76456e30e6211294ed3348d01" Jan 31 04:09:12 crc kubenswrapper[4827]: I0131 04:09:12.241789 4827 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e9695ee2732e95e9c05e12f45cd96a45fd620fb76456e30e6211294ed3348d01"} err="failed to get container status \"e9695ee2732e95e9c05e12f45cd96a45fd620fb76456e30e6211294ed3348d01\": rpc error: code = NotFound desc = could not find container \"e9695ee2732e95e9c05e12f45cd96a45fd620fb76456e30e6211294ed3348d01\": container 
with ID starting with e9695ee2732e95e9c05e12f45cd96a45fd620fb76456e30e6211294ed3348d01 not found: ID does not exist" Jan 31 04:09:13 crc kubenswrapper[4827]: I0131 04:09:13.191919 4827 generic.go:334] "Generic (PLEG): container finished" podID="70e687cb-d396-46d3-890a-cd3cbe51186f" containerID="8f7e582c0a2017cb35bcae076e01f7eb52d3df0e80785708e5f57fc2038d853f" exitCode=0 Jan 31 04:09:13 crc kubenswrapper[4827]: I0131 04:09:13.191981 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-864d5fc68c-pvgtd" event={"ID":"70e687cb-d396-46d3-890a-cd3cbe51186f","Type":"ContainerDied","Data":"8f7e582c0a2017cb35bcae076e01f7eb52d3df0e80785708e5f57fc2038d853f"} Jan 31 04:09:14 crc kubenswrapper[4827]: I0131 04:09:14.138527 4827 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ab1251de-0ef5-48f1-b9db-0a68965651cd" path="/var/lib/kubelet/pods/ab1251de-0ef5-48f1-b9db-0a68965651cd/volumes" Jan 31 04:09:14 crc kubenswrapper[4827]: I0131 04:09:14.207162 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-864d5fc68c-pvgtd" event={"ID":"70e687cb-d396-46d3-890a-cd3cbe51186f","Type":"ContainerStarted","Data":"61482ac2645ba084bdb4df98d0152eea8035ed84ca37867a075defa3823e4b46"} Jan 31 04:09:14 crc kubenswrapper[4827]: I0131 04:09:14.207444 4827 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-864d5fc68c-pvgtd" Jan 31 04:09:14 crc kubenswrapper[4827]: I0131 04:09:14.237184 4827 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-864d5fc68c-pvgtd" podStartSLOduration=3.237154094 podStartE2EDuration="3.237154094s" podCreationTimestamp="2026-01-31 04:09:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 04:09:14.224412762 +0000 UTC m=+1346.911493211" watchObservedRunningTime="2026-01-31 04:09:14.237154094 +0000 UTC m=+1346.924234583" 
Jan 31 04:09:17 crc kubenswrapper[4827]: I0131 04:09:17.371332 4827 patch_prober.go:28] interesting pod/machine-config-daemon-jxh94 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 31 04:09:17 crc kubenswrapper[4827]: I0131 04:09:17.372017 4827 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-jxh94" podUID="e63dbb73-e1a2-4796-83c5-2a88e55566b5" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 31 04:09:17 crc kubenswrapper[4827]: I0131 04:09:17.372083 4827 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-jxh94" Jan 31 04:09:17 crc kubenswrapper[4827]: I0131 04:09:17.373099 4827 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"047ff0edcff47ab439ecf6139d8ba1839619a9b3e0c1bd807d83661af77614a9"} pod="openshift-machine-config-operator/machine-config-daemon-jxh94" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 31 04:09:17 crc kubenswrapper[4827]: I0131 04:09:17.373188 4827 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-jxh94" podUID="e63dbb73-e1a2-4796-83c5-2a88e55566b5" containerName="machine-config-daemon" containerID="cri-o://047ff0edcff47ab439ecf6139d8ba1839619a9b3e0c1bd807d83661af77614a9" gracePeriod=600 Jan 31 04:09:18 crc kubenswrapper[4827]: I0131 04:09:18.247624 4827 generic.go:334] "Generic (PLEG): container finished" podID="e63dbb73-e1a2-4796-83c5-2a88e55566b5" 
containerID="047ff0edcff47ab439ecf6139d8ba1839619a9b3e0c1bd807d83661af77614a9" exitCode=0 Jan 31 04:09:18 crc kubenswrapper[4827]: I0131 04:09:18.247705 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-jxh94" event={"ID":"e63dbb73-e1a2-4796-83c5-2a88e55566b5","Type":"ContainerDied","Data":"047ff0edcff47ab439ecf6139d8ba1839619a9b3e0c1bd807d83661af77614a9"} Jan 31 04:09:18 crc kubenswrapper[4827]: I0131 04:09:18.248197 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-jxh94" event={"ID":"e63dbb73-e1a2-4796-83c5-2a88e55566b5","Type":"ContainerStarted","Data":"90034e5c5b64aa9dbc546a2986ac24abb14947ccc2f7663f502fb74fbe06de7f"} Jan 31 04:09:18 crc kubenswrapper[4827]: I0131 04:09:18.248226 4827 scope.go:117] "RemoveContainer" containerID="1261fe4f40f38bda861655c66e0801cf569b3be5862d7375b02489d6f6686b06" Jan 31 04:09:21 crc kubenswrapper[4827]: I0131 04:09:21.617419 4827 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-864d5fc68c-pvgtd" Jan 31 04:09:21 crc kubenswrapper[4827]: I0131 04:09:21.714194 4827 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6447ccbd8f-x7j6w"] Jan 31 04:09:21 crc kubenswrapper[4827]: I0131 04:09:21.714484 4827 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-6447ccbd8f-x7j6w" podUID="abeb8ccf-500c-4d34-80c6-6bb9a689447a" containerName="dnsmasq-dns" containerID="cri-o://2d79a917801a8cc76e739ce82775bf1c96e26add575049ecba89981db8a576df" gracePeriod=10 Jan 31 04:09:22 crc kubenswrapper[4827]: I0131 04:09:22.172783 4827 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6447ccbd8f-x7j6w" Jan 31 04:09:22 crc kubenswrapper[4827]: I0131 04:09:22.234927 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/abeb8ccf-500c-4d34-80c6-6bb9a689447a-dns-svc\") pod \"abeb8ccf-500c-4d34-80c6-6bb9a689447a\" (UID: \"abeb8ccf-500c-4d34-80c6-6bb9a689447a\") " Jan 31 04:09:22 crc kubenswrapper[4827]: I0131 04:09:22.235021 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/abeb8ccf-500c-4d34-80c6-6bb9a689447a-openstack-edpm-ipam\") pod \"abeb8ccf-500c-4d34-80c6-6bb9a689447a\" (UID: \"abeb8ccf-500c-4d34-80c6-6bb9a689447a\") " Jan 31 04:09:22 crc kubenswrapper[4827]: I0131 04:09:22.235089 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/abeb8ccf-500c-4d34-80c6-6bb9a689447a-config\") pod \"abeb8ccf-500c-4d34-80c6-6bb9a689447a\" (UID: \"abeb8ccf-500c-4d34-80c6-6bb9a689447a\") " Jan 31 04:09:22 crc kubenswrapper[4827]: I0131 04:09:22.235180 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xlr5c\" (UniqueName: \"kubernetes.io/projected/abeb8ccf-500c-4d34-80c6-6bb9a689447a-kube-api-access-xlr5c\") pod \"abeb8ccf-500c-4d34-80c6-6bb9a689447a\" (UID: \"abeb8ccf-500c-4d34-80c6-6bb9a689447a\") " Jan 31 04:09:22 crc kubenswrapper[4827]: I0131 04:09:22.235322 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/abeb8ccf-500c-4d34-80c6-6bb9a689447a-ovsdbserver-nb\") pod \"abeb8ccf-500c-4d34-80c6-6bb9a689447a\" (UID: \"abeb8ccf-500c-4d34-80c6-6bb9a689447a\") " Jan 31 04:09:22 crc kubenswrapper[4827]: I0131 04:09:22.235375 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" 
(UniqueName: \"kubernetes.io/configmap/abeb8ccf-500c-4d34-80c6-6bb9a689447a-ovsdbserver-sb\") pod \"abeb8ccf-500c-4d34-80c6-6bb9a689447a\" (UID: \"abeb8ccf-500c-4d34-80c6-6bb9a689447a\") " Jan 31 04:09:22 crc kubenswrapper[4827]: I0131 04:09:22.241075 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/abeb8ccf-500c-4d34-80c6-6bb9a689447a-kube-api-access-xlr5c" (OuterVolumeSpecName: "kube-api-access-xlr5c") pod "abeb8ccf-500c-4d34-80c6-6bb9a689447a" (UID: "abeb8ccf-500c-4d34-80c6-6bb9a689447a"). InnerVolumeSpecName "kube-api-access-xlr5c". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 04:09:22 crc kubenswrapper[4827]: I0131 04:09:22.281659 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/abeb8ccf-500c-4d34-80c6-6bb9a689447a-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "abeb8ccf-500c-4d34-80c6-6bb9a689447a" (UID: "abeb8ccf-500c-4d34-80c6-6bb9a689447a"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 04:09:22 crc kubenswrapper[4827]: I0131 04:09:22.282142 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/abeb8ccf-500c-4d34-80c6-6bb9a689447a-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "abeb8ccf-500c-4d34-80c6-6bb9a689447a" (UID: "abeb8ccf-500c-4d34-80c6-6bb9a689447a"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 04:09:22 crc kubenswrapper[4827]: I0131 04:09:22.285595 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/abeb8ccf-500c-4d34-80c6-6bb9a689447a-config" (OuterVolumeSpecName: "config") pod "abeb8ccf-500c-4d34-80c6-6bb9a689447a" (UID: "abeb8ccf-500c-4d34-80c6-6bb9a689447a"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 04:09:22 crc kubenswrapper[4827]: I0131 04:09:22.293457 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/abeb8ccf-500c-4d34-80c6-6bb9a689447a-openstack-edpm-ipam" (OuterVolumeSpecName: "openstack-edpm-ipam") pod "abeb8ccf-500c-4d34-80c6-6bb9a689447a" (UID: "abeb8ccf-500c-4d34-80c6-6bb9a689447a"). InnerVolumeSpecName "openstack-edpm-ipam". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 04:09:22 crc kubenswrapper[4827]: I0131 04:09:22.294821 4827 generic.go:334] "Generic (PLEG): container finished" podID="abeb8ccf-500c-4d34-80c6-6bb9a689447a" containerID="2d79a917801a8cc76e739ce82775bf1c96e26add575049ecba89981db8a576df" exitCode=0 Jan 31 04:09:22 crc kubenswrapper[4827]: I0131 04:09:22.294860 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6447ccbd8f-x7j6w" event={"ID":"abeb8ccf-500c-4d34-80c6-6bb9a689447a","Type":"ContainerDied","Data":"2d79a917801a8cc76e739ce82775bf1c96e26add575049ecba89981db8a576df"} Jan 31 04:09:22 crc kubenswrapper[4827]: I0131 04:09:22.294930 4827 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6447ccbd8f-x7j6w" Jan 31 04:09:22 crc kubenswrapper[4827]: I0131 04:09:22.295013 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6447ccbd8f-x7j6w" event={"ID":"abeb8ccf-500c-4d34-80c6-6bb9a689447a","Type":"ContainerDied","Data":"ea90f732d80c2bc5405028fd74b2815424198438116399a772e4f4b26b787ee5"} Jan 31 04:09:22 crc kubenswrapper[4827]: I0131 04:09:22.295053 4827 scope.go:117] "RemoveContainer" containerID="2d79a917801a8cc76e739ce82775bf1c96e26add575049ecba89981db8a576df" Jan 31 04:09:22 crc kubenswrapper[4827]: I0131 04:09:22.296063 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/abeb8ccf-500c-4d34-80c6-6bb9a689447a-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "abeb8ccf-500c-4d34-80c6-6bb9a689447a" (UID: "abeb8ccf-500c-4d34-80c6-6bb9a689447a"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 04:09:22 crc kubenswrapper[4827]: I0131 04:09:22.358516 4827 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/abeb8ccf-500c-4d34-80c6-6bb9a689447a-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 31 04:09:22 crc kubenswrapper[4827]: I0131 04:09:22.358559 4827 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/abeb8ccf-500c-4d34-80c6-6bb9a689447a-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 31 04:09:22 crc kubenswrapper[4827]: I0131 04:09:22.358572 4827 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/abeb8ccf-500c-4d34-80c6-6bb9a689447a-config\") on node \"crc\" DevicePath \"\"" Jan 31 04:09:22 crc kubenswrapper[4827]: I0131 04:09:22.358582 4827 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xlr5c\" (UniqueName: 
\"kubernetes.io/projected/abeb8ccf-500c-4d34-80c6-6bb9a689447a-kube-api-access-xlr5c\") on node \"crc\" DevicePath \"\"" Jan 31 04:09:22 crc kubenswrapper[4827]: I0131 04:09:22.358594 4827 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/abeb8ccf-500c-4d34-80c6-6bb9a689447a-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 31 04:09:22 crc kubenswrapper[4827]: I0131 04:09:22.358606 4827 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/abeb8ccf-500c-4d34-80c6-6bb9a689447a-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 31 04:09:22 crc kubenswrapper[4827]: I0131 04:09:22.390108 4827 scope.go:117] "RemoveContainer" containerID="9c335a79a6b44321cc742f85060f6ecdf52115e67df5a5bf7f5acfe2b0b02bc8" Jan 31 04:09:22 crc kubenswrapper[4827]: I0131 04:09:22.417804 4827 scope.go:117] "RemoveContainer" containerID="2d79a917801a8cc76e739ce82775bf1c96e26add575049ecba89981db8a576df" Jan 31 04:09:22 crc kubenswrapper[4827]: E0131 04:09:22.418278 4827 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2d79a917801a8cc76e739ce82775bf1c96e26add575049ecba89981db8a576df\": container with ID starting with 2d79a917801a8cc76e739ce82775bf1c96e26add575049ecba89981db8a576df not found: ID does not exist" containerID="2d79a917801a8cc76e739ce82775bf1c96e26add575049ecba89981db8a576df" Jan 31 04:09:22 crc kubenswrapper[4827]: I0131 04:09:22.418326 4827 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2d79a917801a8cc76e739ce82775bf1c96e26add575049ecba89981db8a576df"} err="failed to get container status \"2d79a917801a8cc76e739ce82775bf1c96e26add575049ecba89981db8a576df\": rpc error: code = NotFound desc = could not find container \"2d79a917801a8cc76e739ce82775bf1c96e26add575049ecba89981db8a576df\": container with ID starting with 
2d79a917801a8cc76e739ce82775bf1c96e26add575049ecba89981db8a576df not found: ID does not exist" Jan 31 04:09:22 crc kubenswrapper[4827]: I0131 04:09:22.418429 4827 scope.go:117] "RemoveContainer" containerID="9c335a79a6b44321cc742f85060f6ecdf52115e67df5a5bf7f5acfe2b0b02bc8" Jan 31 04:09:22 crc kubenswrapper[4827]: E0131 04:09:22.418720 4827 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9c335a79a6b44321cc742f85060f6ecdf52115e67df5a5bf7f5acfe2b0b02bc8\": container with ID starting with 9c335a79a6b44321cc742f85060f6ecdf52115e67df5a5bf7f5acfe2b0b02bc8 not found: ID does not exist" containerID="9c335a79a6b44321cc742f85060f6ecdf52115e67df5a5bf7f5acfe2b0b02bc8" Jan 31 04:09:22 crc kubenswrapper[4827]: I0131 04:09:22.418753 4827 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9c335a79a6b44321cc742f85060f6ecdf52115e67df5a5bf7f5acfe2b0b02bc8"} err="failed to get container status \"9c335a79a6b44321cc742f85060f6ecdf52115e67df5a5bf7f5acfe2b0b02bc8\": rpc error: code = NotFound desc = could not find container \"9c335a79a6b44321cc742f85060f6ecdf52115e67df5a5bf7f5acfe2b0b02bc8\": container with ID starting with 9c335a79a6b44321cc742f85060f6ecdf52115e67df5a5bf7f5acfe2b0b02bc8 not found: ID does not exist" Jan 31 04:09:22 crc kubenswrapper[4827]: I0131 04:09:22.625561 4827 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6447ccbd8f-x7j6w"] Jan 31 04:09:22 crc kubenswrapper[4827]: I0131 04:09:22.632393 4827 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-6447ccbd8f-x7j6w"] Jan 31 04:09:24 crc kubenswrapper[4827]: I0131 04:09:24.126402 4827 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="abeb8ccf-500c-4d34-80c6-6bb9a689447a" path="/var/lib/kubelet/pods/abeb8ccf-500c-4d34-80c6-6bb9a689447a/volumes" Jan 31 04:09:31 crc kubenswrapper[4827]: I0131 04:09:31.787756 4827 kubelet.go:2421] 
"SyncLoop ADD" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-f4vtw"] Jan 31 04:09:31 crc kubenswrapper[4827]: E0131 04:09:31.789228 4827 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="abeb8ccf-500c-4d34-80c6-6bb9a689447a" containerName="init" Jan 31 04:09:31 crc kubenswrapper[4827]: I0131 04:09:31.789258 4827 state_mem.go:107] "Deleted CPUSet assignment" podUID="abeb8ccf-500c-4d34-80c6-6bb9a689447a" containerName="init" Jan 31 04:09:31 crc kubenswrapper[4827]: E0131 04:09:31.789292 4827 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ab1251de-0ef5-48f1-b9db-0a68965651cd" containerName="dnsmasq-dns" Jan 31 04:09:31 crc kubenswrapper[4827]: I0131 04:09:31.789310 4827 state_mem.go:107] "Deleted CPUSet assignment" podUID="ab1251de-0ef5-48f1-b9db-0a68965651cd" containerName="dnsmasq-dns" Jan 31 04:09:31 crc kubenswrapper[4827]: E0131 04:09:31.789355 4827 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="abeb8ccf-500c-4d34-80c6-6bb9a689447a" containerName="dnsmasq-dns" Jan 31 04:09:31 crc kubenswrapper[4827]: I0131 04:09:31.789368 4827 state_mem.go:107] "Deleted CPUSet assignment" podUID="abeb8ccf-500c-4d34-80c6-6bb9a689447a" containerName="dnsmasq-dns" Jan 31 04:09:31 crc kubenswrapper[4827]: E0131 04:09:31.789382 4827 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ab1251de-0ef5-48f1-b9db-0a68965651cd" containerName="init" Jan 31 04:09:31 crc kubenswrapper[4827]: I0131 04:09:31.789394 4827 state_mem.go:107] "Deleted CPUSet assignment" podUID="ab1251de-0ef5-48f1-b9db-0a68965651cd" containerName="init" Jan 31 04:09:31 crc kubenswrapper[4827]: I0131 04:09:31.789698 4827 memory_manager.go:354] "RemoveStaleState removing state" podUID="abeb8ccf-500c-4d34-80c6-6bb9a689447a" containerName="dnsmasq-dns" Jan 31 04:09:31 crc kubenswrapper[4827]: I0131 04:09:31.789727 4827 memory_manager.go:354] "RemoveStaleState removing state" podUID="ab1251de-0ef5-48f1-b9db-0a68965651cd" 
containerName="dnsmasq-dns" Jan 31 04:09:31 crc kubenswrapper[4827]: I0131 04:09:31.800580 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-f4vtw" Jan 31 04:09:31 crc kubenswrapper[4827]: I0131 04:09:31.806452 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 31 04:09:31 crc kubenswrapper[4827]: I0131 04:09:31.806448 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-f4vtw"] Jan 31 04:09:31 crc kubenswrapper[4827]: I0131 04:09:31.806866 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-dklwg" Jan 31 04:09:31 crc kubenswrapper[4827]: I0131 04:09:31.808940 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 31 04:09:31 crc kubenswrapper[4827]: I0131 04:09:31.812684 4827 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 31 04:09:31 crc kubenswrapper[4827]: I0131 04:09:31.964956 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d0b3fbf5-33c0-49e9-9464-d21a13727047-repo-setup-combined-ca-bundle\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-f4vtw\" (UID: \"d0b3fbf5-33c0-49e9-9464-d21a13727047\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-f4vtw" Jan 31 04:09:31 crc kubenswrapper[4827]: I0131 04:09:31.965015 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zkp2x\" (UniqueName: \"kubernetes.io/projected/d0b3fbf5-33c0-49e9-9464-d21a13727047-kube-api-access-zkp2x\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-f4vtw\" (UID: 
\"d0b3fbf5-33c0-49e9-9464-d21a13727047\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-f4vtw" Jan 31 04:09:31 crc kubenswrapper[4827]: I0131 04:09:31.965052 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/d0b3fbf5-33c0-49e9-9464-d21a13727047-ssh-key-openstack-edpm-ipam\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-f4vtw\" (UID: \"d0b3fbf5-33c0-49e9-9464-d21a13727047\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-f4vtw" Jan 31 04:09:31 crc kubenswrapper[4827]: I0131 04:09:31.965131 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/d0b3fbf5-33c0-49e9-9464-d21a13727047-inventory\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-f4vtw\" (UID: \"d0b3fbf5-33c0-49e9-9464-d21a13727047\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-f4vtw" Jan 31 04:09:32 crc kubenswrapper[4827]: I0131 04:09:32.066616 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zkp2x\" (UniqueName: \"kubernetes.io/projected/d0b3fbf5-33c0-49e9-9464-d21a13727047-kube-api-access-zkp2x\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-f4vtw\" (UID: \"d0b3fbf5-33c0-49e9-9464-d21a13727047\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-f4vtw" Jan 31 04:09:32 crc kubenswrapper[4827]: I0131 04:09:32.066697 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/d0b3fbf5-33c0-49e9-9464-d21a13727047-ssh-key-openstack-edpm-ipam\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-f4vtw\" (UID: \"d0b3fbf5-33c0-49e9-9464-d21a13727047\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-f4vtw" Jan 31 04:09:32 crc kubenswrapper[4827]: I0131 
04:09:32.066795 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/d0b3fbf5-33c0-49e9-9464-d21a13727047-inventory\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-f4vtw\" (UID: \"d0b3fbf5-33c0-49e9-9464-d21a13727047\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-f4vtw" Jan 31 04:09:32 crc kubenswrapper[4827]: I0131 04:09:32.066915 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d0b3fbf5-33c0-49e9-9464-d21a13727047-repo-setup-combined-ca-bundle\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-f4vtw\" (UID: \"d0b3fbf5-33c0-49e9-9464-d21a13727047\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-f4vtw" Jan 31 04:09:32 crc kubenswrapper[4827]: I0131 04:09:32.073363 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/d0b3fbf5-33c0-49e9-9464-d21a13727047-ssh-key-openstack-edpm-ipam\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-f4vtw\" (UID: \"d0b3fbf5-33c0-49e9-9464-d21a13727047\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-f4vtw" Jan 31 04:09:32 crc kubenswrapper[4827]: I0131 04:09:32.074513 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/d0b3fbf5-33c0-49e9-9464-d21a13727047-inventory\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-f4vtw\" (UID: \"d0b3fbf5-33c0-49e9-9464-d21a13727047\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-f4vtw" Jan 31 04:09:32 crc kubenswrapper[4827]: I0131 04:09:32.075566 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d0b3fbf5-33c0-49e9-9464-d21a13727047-repo-setup-combined-ca-bundle\") pod 
\"repo-setup-edpm-deployment-openstack-edpm-ipam-f4vtw\" (UID: \"d0b3fbf5-33c0-49e9-9464-d21a13727047\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-f4vtw" Jan 31 04:09:32 crc kubenswrapper[4827]: I0131 04:09:32.085163 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zkp2x\" (UniqueName: \"kubernetes.io/projected/d0b3fbf5-33c0-49e9-9464-d21a13727047-kube-api-access-zkp2x\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-f4vtw\" (UID: \"d0b3fbf5-33c0-49e9-9464-d21a13727047\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-f4vtw" Jan 31 04:09:32 crc kubenswrapper[4827]: I0131 04:09:32.122464 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-f4vtw" Jan 31 04:09:32 crc kubenswrapper[4827]: I0131 04:09:32.610861 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-f4vtw"] Jan 31 04:09:33 crc kubenswrapper[4827]: I0131 04:09:33.417783 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-f4vtw" event={"ID":"d0b3fbf5-33c0-49e9-9464-d21a13727047","Type":"ContainerStarted","Data":"0e95eb15621659936ab464b1ac12fd2987110abcc5242941b6c06dbfe46ecf84"} Jan 31 04:09:34 crc kubenswrapper[4827]: I0131 04:09:34.430983 4827 generic.go:334] "Generic (PLEG): container finished" podID="bd61984d-518c-44f2-8a18-8bda81bb6af3" containerID="aec4e3d5ec5b9f0eccef39565fa957e9d6f665d7fe2d92f04d29b6a8ee28942f" exitCode=0 Jan 31 04:09:34 crc kubenswrapper[4827]: I0131 04:09:34.431063 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"bd61984d-518c-44f2-8a18-8bda81bb6af3","Type":"ContainerDied","Data":"aec4e3d5ec5b9f0eccef39565fa957e9d6f665d7fe2d92f04d29b6a8ee28942f"} Jan 31 04:09:35 crc kubenswrapper[4827]: I0131 04:09:35.444305 4827 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"bd61984d-518c-44f2-8a18-8bda81bb6af3","Type":"ContainerStarted","Data":"1812f1adce27205c0ef3114816b8c02482440d0f9509927497dbfea3b890cef2"} Jan 31 04:09:35 crc kubenswrapper[4827]: I0131 04:09:35.444942 4827 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-server-0" Jan 31 04:09:35 crc kubenswrapper[4827]: I0131 04:09:35.447584 4827 generic.go:334] "Generic (PLEG): container finished" podID="92323497-4fa1-43f6-98b0-08fa31c47d3a" containerID="3569e7d736e69075c54a333c464402a1810657893dedfa13dcc620b6b03a5a71" exitCode=0 Jan 31 04:09:35 crc kubenswrapper[4827]: I0131 04:09:35.447621 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"92323497-4fa1-43f6-98b0-08fa31c47d3a","Type":"ContainerDied","Data":"3569e7d736e69075c54a333c464402a1810657893dedfa13dcc620b6b03a5a71"} Jan 31 04:09:35 crc kubenswrapper[4827]: I0131 04:09:35.476408 4827 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-server-0" podStartSLOduration=36.476390977 podStartE2EDuration="36.476390977s" podCreationTimestamp="2026-01-31 04:08:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 04:09:35.47033845 +0000 UTC m=+1368.157418989" watchObservedRunningTime="2026-01-31 04:09:35.476390977 +0000 UTC m=+1368.163471416" Jan 31 04:09:36 crc kubenswrapper[4827]: I0131 04:09:36.458762 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"92323497-4fa1-43f6-98b0-08fa31c47d3a","Type":"ContainerStarted","Data":"e8e202ff0327c4a4dcc0216bcdcefef24fd280f851ee0435c15cf0240917cec8"} Jan 31 04:09:36 crc kubenswrapper[4827]: I0131 04:09:36.459495 4827 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-cell1-server-0" Jan 31 
04:09:36 crc kubenswrapper[4827]: I0131 04:09:36.485174 4827 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-cell1-server-0" podStartSLOduration=36.485155769 podStartE2EDuration="36.485155769s" podCreationTimestamp="2026-01-31 04:09:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 04:09:36.479829035 +0000 UTC m=+1369.166909484" watchObservedRunningTime="2026-01-31 04:09:36.485155769 +0000 UTC m=+1369.172236208" Jan 31 04:09:41 crc kubenswrapper[4827]: I0131 04:09:41.502049 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-f4vtw" event={"ID":"d0b3fbf5-33c0-49e9-9464-d21a13727047","Type":"ContainerStarted","Data":"b00ccd405aad1863b5ca1fd98b7053522b0abaa6e00a4c59460fff53423f1bd1"} Jan 31 04:09:41 crc kubenswrapper[4827]: I0131 04:09:41.521433 4827 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-f4vtw" podStartSLOduration=2.027844481 podStartE2EDuration="10.5214085s" podCreationTimestamp="2026-01-31 04:09:31 +0000 UTC" firstStartedPulling="2026-01-31 04:09:32.614435985 +0000 UTC m=+1365.301516434" lastFinishedPulling="2026-01-31 04:09:41.108000004 +0000 UTC m=+1373.795080453" observedRunningTime="2026-01-31 04:09:41.518213162 +0000 UTC m=+1374.205293611" watchObservedRunningTime="2026-01-31 04:09:41.5214085 +0000 UTC m=+1374.208489019" Jan 31 04:09:49 crc kubenswrapper[4827]: I0131 04:09:49.776787 4827 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-server-0" Jan 31 04:09:50 crc kubenswrapper[4827]: I0131 04:09:50.468734 4827 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-cell1-server-0" Jan 31 04:09:52 crc kubenswrapper[4827]: I0131 04:09:52.611851 4827 generic.go:334] "Generic (PLEG): 
container finished" podID="d0b3fbf5-33c0-49e9-9464-d21a13727047" containerID="b00ccd405aad1863b5ca1fd98b7053522b0abaa6e00a4c59460fff53423f1bd1" exitCode=0 Jan 31 04:09:52 crc kubenswrapper[4827]: I0131 04:09:52.611992 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-f4vtw" event={"ID":"d0b3fbf5-33c0-49e9-9464-d21a13727047","Type":"ContainerDied","Data":"b00ccd405aad1863b5ca1fd98b7053522b0abaa6e00a4c59460fff53423f1bd1"} Jan 31 04:09:54 crc kubenswrapper[4827]: I0131 04:09:54.105170 4827 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-f4vtw" Jan 31 04:09:54 crc kubenswrapper[4827]: I0131 04:09:54.214813 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/d0b3fbf5-33c0-49e9-9464-d21a13727047-inventory\") pod \"d0b3fbf5-33c0-49e9-9464-d21a13727047\" (UID: \"d0b3fbf5-33c0-49e9-9464-d21a13727047\") " Jan 31 04:09:54 crc kubenswrapper[4827]: I0131 04:09:54.214910 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d0b3fbf5-33c0-49e9-9464-d21a13727047-repo-setup-combined-ca-bundle\") pod \"d0b3fbf5-33c0-49e9-9464-d21a13727047\" (UID: \"d0b3fbf5-33c0-49e9-9464-d21a13727047\") " Jan 31 04:09:54 crc kubenswrapper[4827]: I0131 04:09:54.214944 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/d0b3fbf5-33c0-49e9-9464-d21a13727047-ssh-key-openstack-edpm-ipam\") pod \"d0b3fbf5-33c0-49e9-9464-d21a13727047\" (UID: \"d0b3fbf5-33c0-49e9-9464-d21a13727047\") " Jan 31 04:09:54 crc kubenswrapper[4827]: I0131 04:09:54.215001 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zkp2x\" (UniqueName: 
\"kubernetes.io/projected/d0b3fbf5-33c0-49e9-9464-d21a13727047-kube-api-access-zkp2x\") pod \"d0b3fbf5-33c0-49e9-9464-d21a13727047\" (UID: \"d0b3fbf5-33c0-49e9-9464-d21a13727047\") " Jan 31 04:09:54 crc kubenswrapper[4827]: I0131 04:09:54.220570 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d0b3fbf5-33c0-49e9-9464-d21a13727047-repo-setup-combined-ca-bundle" (OuterVolumeSpecName: "repo-setup-combined-ca-bundle") pod "d0b3fbf5-33c0-49e9-9464-d21a13727047" (UID: "d0b3fbf5-33c0-49e9-9464-d21a13727047"). InnerVolumeSpecName "repo-setup-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 04:09:54 crc kubenswrapper[4827]: I0131 04:09:54.222058 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d0b3fbf5-33c0-49e9-9464-d21a13727047-kube-api-access-zkp2x" (OuterVolumeSpecName: "kube-api-access-zkp2x") pod "d0b3fbf5-33c0-49e9-9464-d21a13727047" (UID: "d0b3fbf5-33c0-49e9-9464-d21a13727047"). InnerVolumeSpecName "kube-api-access-zkp2x". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 04:09:54 crc kubenswrapper[4827]: I0131 04:09:54.240586 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d0b3fbf5-33c0-49e9-9464-d21a13727047-inventory" (OuterVolumeSpecName: "inventory") pod "d0b3fbf5-33c0-49e9-9464-d21a13727047" (UID: "d0b3fbf5-33c0-49e9-9464-d21a13727047"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 04:09:54 crc kubenswrapper[4827]: I0131 04:09:54.254692 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d0b3fbf5-33c0-49e9-9464-d21a13727047-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "d0b3fbf5-33c0-49e9-9464-d21a13727047" (UID: "d0b3fbf5-33c0-49e9-9464-d21a13727047"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 04:09:54 crc kubenswrapper[4827]: I0131 04:09:54.316784 4827 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/d0b3fbf5-33c0-49e9-9464-d21a13727047-inventory\") on node \"crc\" DevicePath \"\"" Jan 31 04:09:54 crc kubenswrapper[4827]: I0131 04:09:54.316818 4827 reconciler_common.go:293] "Volume detached for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d0b3fbf5-33c0-49e9-9464-d21a13727047-repo-setup-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 31 04:09:54 crc kubenswrapper[4827]: I0131 04:09:54.316828 4827 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/d0b3fbf5-33c0-49e9-9464-d21a13727047-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 31 04:09:54 crc kubenswrapper[4827]: I0131 04:09:54.316841 4827 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zkp2x\" (UniqueName: \"kubernetes.io/projected/d0b3fbf5-33c0-49e9-9464-d21a13727047-kube-api-access-zkp2x\") on node \"crc\" DevicePath \"\"" Jan 31 04:09:54 crc kubenswrapper[4827]: I0131 04:09:54.637406 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-f4vtw" event={"ID":"d0b3fbf5-33c0-49e9-9464-d21a13727047","Type":"ContainerDied","Data":"0e95eb15621659936ab464b1ac12fd2987110abcc5242941b6c06dbfe46ecf84"} Jan 31 04:09:54 crc kubenswrapper[4827]: I0131 04:09:54.637754 4827 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0e95eb15621659936ab464b1ac12fd2987110abcc5242941b6c06dbfe46ecf84" Jan 31 04:09:54 crc kubenswrapper[4827]: I0131 04:09:54.637451 4827 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-f4vtw" Jan 31 04:09:54 crc kubenswrapper[4827]: I0131 04:09:54.696560 4827 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-2899m"] Jan 31 04:09:54 crc kubenswrapper[4827]: E0131 04:09:54.697031 4827 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d0b3fbf5-33c0-49e9-9464-d21a13727047" containerName="repo-setup-edpm-deployment-openstack-edpm-ipam" Jan 31 04:09:54 crc kubenswrapper[4827]: I0131 04:09:54.697053 4827 state_mem.go:107] "Deleted CPUSet assignment" podUID="d0b3fbf5-33c0-49e9-9464-d21a13727047" containerName="repo-setup-edpm-deployment-openstack-edpm-ipam" Jan 31 04:09:54 crc kubenswrapper[4827]: I0131 04:09:54.697259 4827 memory_manager.go:354] "RemoveStaleState removing state" podUID="d0b3fbf5-33c0-49e9-9464-d21a13727047" containerName="repo-setup-edpm-deployment-openstack-edpm-ipam" Jan 31 04:09:54 crc kubenswrapper[4827]: I0131 04:09:54.700375 4827 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-2899m" Jan 31 04:09:54 crc kubenswrapper[4827]: I0131 04:09:54.702565 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 31 04:09:54 crc kubenswrapper[4827]: I0131 04:09:54.702801 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-dklwg" Jan 31 04:09:54 crc kubenswrapper[4827]: I0131 04:09:54.702978 4827 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 31 04:09:54 crc kubenswrapper[4827]: I0131 04:09:54.706324 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 31 04:09:54 crc kubenswrapper[4827]: I0131 04:09:54.712857 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-2899m"] Jan 31 04:09:54 crc kubenswrapper[4827]: I0131 04:09:54.824504 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j2cpx\" (UniqueName: \"kubernetes.io/projected/e7919b7b-6239-444c-9da1-8dedcee8ca3e-kube-api-access-j2cpx\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-2899m\" (UID: \"e7919b7b-6239-444c-9da1-8dedcee8ca3e\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-2899m" Jan 31 04:09:54 crc kubenswrapper[4827]: I0131 04:09:54.824589 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e7919b7b-6239-444c-9da1-8dedcee8ca3e-bootstrap-combined-ca-bundle\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-2899m\" (UID: \"e7919b7b-6239-444c-9da1-8dedcee8ca3e\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-2899m" Jan 31 04:09:54 crc kubenswrapper[4827]: I0131 
04:09:54.824622 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/e7919b7b-6239-444c-9da1-8dedcee8ca3e-ssh-key-openstack-edpm-ipam\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-2899m\" (UID: \"e7919b7b-6239-444c-9da1-8dedcee8ca3e\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-2899m" Jan 31 04:09:54 crc kubenswrapper[4827]: I0131 04:09:54.824646 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e7919b7b-6239-444c-9da1-8dedcee8ca3e-inventory\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-2899m\" (UID: \"e7919b7b-6239-444c-9da1-8dedcee8ca3e\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-2899m" Jan 31 04:09:54 crc kubenswrapper[4827]: I0131 04:09:54.926607 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j2cpx\" (UniqueName: \"kubernetes.io/projected/e7919b7b-6239-444c-9da1-8dedcee8ca3e-kube-api-access-j2cpx\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-2899m\" (UID: \"e7919b7b-6239-444c-9da1-8dedcee8ca3e\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-2899m" Jan 31 04:09:54 crc kubenswrapper[4827]: I0131 04:09:54.926675 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e7919b7b-6239-444c-9da1-8dedcee8ca3e-bootstrap-combined-ca-bundle\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-2899m\" (UID: \"e7919b7b-6239-444c-9da1-8dedcee8ca3e\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-2899m" Jan 31 04:09:54 crc kubenswrapper[4827]: I0131 04:09:54.926704 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: 
\"kubernetes.io/secret/e7919b7b-6239-444c-9da1-8dedcee8ca3e-ssh-key-openstack-edpm-ipam\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-2899m\" (UID: \"e7919b7b-6239-444c-9da1-8dedcee8ca3e\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-2899m" Jan 31 04:09:54 crc kubenswrapper[4827]: I0131 04:09:54.926729 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e7919b7b-6239-444c-9da1-8dedcee8ca3e-inventory\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-2899m\" (UID: \"e7919b7b-6239-444c-9da1-8dedcee8ca3e\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-2899m" Jan 31 04:09:54 crc kubenswrapper[4827]: I0131 04:09:54.933067 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/e7919b7b-6239-444c-9da1-8dedcee8ca3e-ssh-key-openstack-edpm-ipam\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-2899m\" (UID: \"e7919b7b-6239-444c-9da1-8dedcee8ca3e\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-2899m" Jan 31 04:09:54 crc kubenswrapper[4827]: I0131 04:09:54.933605 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e7919b7b-6239-444c-9da1-8dedcee8ca3e-bootstrap-combined-ca-bundle\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-2899m\" (UID: \"e7919b7b-6239-444c-9da1-8dedcee8ca3e\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-2899m" Jan 31 04:09:54 crc kubenswrapper[4827]: I0131 04:09:54.935165 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e7919b7b-6239-444c-9da1-8dedcee8ca3e-inventory\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-2899m\" (UID: \"e7919b7b-6239-444c-9da1-8dedcee8ca3e\") " 
pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-2899m" Jan 31 04:09:54 crc kubenswrapper[4827]: I0131 04:09:54.950798 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j2cpx\" (UniqueName: \"kubernetes.io/projected/e7919b7b-6239-444c-9da1-8dedcee8ca3e-kube-api-access-j2cpx\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-2899m\" (UID: \"e7919b7b-6239-444c-9da1-8dedcee8ca3e\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-2899m" Jan 31 04:09:55 crc kubenswrapper[4827]: I0131 04:09:55.030959 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-2899m" Jan 31 04:09:55 crc kubenswrapper[4827]: I0131 04:09:55.591549 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-2899m"] Jan 31 04:09:55 crc kubenswrapper[4827]: I0131 04:09:55.652039 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-2899m" event={"ID":"e7919b7b-6239-444c-9da1-8dedcee8ca3e","Type":"ContainerStarted","Data":"adb5c4695e9a1a21a73fba448e9e19657c9cc2f83420a3770489c8806450f3f4"} Jan 31 04:09:56 crc kubenswrapper[4827]: I0131 04:09:56.663071 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-2899m" event={"ID":"e7919b7b-6239-444c-9da1-8dedcee8ca3e","Type":"ContainerStarted","Data":"08e0116fbf3a38e5da7bc6a5299c294add6dec9fa7bc6b56cfad318252f0235a"} Jan 31 04:09:58 crc kubenswrapper[4827]: I0131 04:09:58.333278 4827 scope.go:117] "RemoveContainer" containerID="12fe2b54071f9726320953b5647ef853505a4030977779bf309ca7bddd85c632" Jan 31 04:10:35 crc kubenswrapper[4827]: I0131 04:10:35.205325 4827 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-2899m" podStartSLOduration=40.816068267 
podStartE2EDuration="41.205295438s" podCreationTimestamp="2026-01-31 04:09:54 +0000 UTC" firstStartedPulling="2026-01-31 04:09:55.588836317 +0000 UTC m=+1388.275916766" lastFinishedPulling="2026-01-31 04:09:55.978063498 +0000 UTC m=+1388.665143937" observedRunningTime="2026-01-31 04:09:56.684831619 +0000 UTC m=+1389.371912068" watchObservedRunningTime="2026-01-31 04:10:35.205295438 +0000 UTC m=+1427.892375967" Jan 31 04:10:35 crc kubenswrapper[4827]: I0131 04:10:35.230436 4827 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-5wbhc"] Jan 31 04:10:35 crc kubenswrapper[4827]: I0131 04:10:35.234661 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-5wbhc" Jan 31 04:10:35 crc kubenswrapper[4827]: I0131 04:10:35.259884 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-5wbhc"] Jan 31 04:10:35 crc kubenswrapper[4827]: I0131 04:10:35.316973 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-szhnl\" (UniqueName: \"kubernetes.io/projected/ba8a9812-004c-4d41-989d-3a460f7966d5-kube-api-access-szhnl\") pod \"redhat-operators-5wbhc\" (UID: \"ba8a9812-004c-4d41-989d-3a460f7966d5\") " pod="openshift-marketplace/redhat-operators-5wbhc" Jan 31 04:10:35 crc kubenswrapper[4827]: I0131 04:10:35.317091 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ba8a9812-004c-4d41-989d-3a460f7966d5-utilities\") pod \"redhat-operators-5wbhc\" (UID: \"ba8a9812-004c-4d41-989d-3a460f7966d5\") " pod="openshift-marketplace/redhat-operators-5wbhc" Jan 31 04:10:35 crc kubenswrapper[4827]: I0131 04:10:35.317155 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/ba8a9812-004c-4d41-989d-3a460f7966d5-catalog-content\") pod \"redhat-operators-5wbhc\" (UID: \"ba8a9812-004c-4d41-989d-3a460f7966d5\") " pod="openshift-marketplace/redhat-operators-5wbhc" Jan 31 04:10:35 crc kubenswrapper[4827]: I0131 04:10:35.418809 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ba8a9812-004c-4d41-989d-3a460f7966d5-utilities\") pod \"redhat-operators-5wbhc\" (UID: \"ba8a9812-004c-4d41-989d-3a460f7966d5\") " pod="openshift-marketplace/redhat-operators-5wbhc" Jan 31 04:10:35 crc kubenswrapper[4827]: I0131 04:10:35.418868 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ba8a9812-004c-4d41-989d-3a460f7966d5-catalog-content\") pod \"redhat-operators-5wbhc\" (UID: \"ba8a9812-004c-4d41-989d-3a460f7966d5\") " pod="openshift-marketplace/redhat-operators-5wbhc" Jan 31 04:10:35 crc kubenswrapper[4827]: I0131 04:10:35.418982 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-szhnl\" (UniqueName: \"kubernetes.io/projected/ba8a9812-004c-4d41-989d-3a460f7966d5-kube-api-access-szhnl\") pod \"redhat-operators-5wbhc\" (UID: \"ba8a9812-004c-4d41-989d-3a460f7966d5\") " pod="openshift-marketplace/redhat-operators-5wbhc" Jan 31 04:10:35 crc kubenswrapper[4827]: I0131 04:10:35.419567 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ba8a9812-004c-4d41-989d-3a460f7966d5-catalog-content\") pod \"redhat-operators-5wbhc\" (UID: \"ba8a9812-004c-4d41-989d-3a460f7966d5\") " pod="openshift-marketplace/redhat-operators-5wbhc" Jan 31 04:10:35 crc kubenswrapper[4827]: I0131 04:10:35.419949 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/ba8a9812-004c-4d41-989d-3a460f7966d5-utilities\") pod \"redhat-operators-5wbhc\" (UID: \"ba8a9812-004c-4d41-989d-3a460f7966d5\") " pod="openshift-marketplace/redhat-operators-5wbhc" Jan 31 04:10:35 crc kubenswrapper[4827]: I0131 04:10:35.452695 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-szhnl\" (UniqueName: \"kubernetes.io/projected/ba8a9812-004c-4d41-989d-3a460f7966d5-kube-api-access-szhnl\") pod \"redhat-operators-5wbhc\" (UID: \"ba8a9812-004c-4d41-989d-3a460f7966d5\") " pod="openshift-marketplace/redhat-operators-5wbhc" Jan 31 04:10:35 crc kubenswrapper[4827]: I0131 04:10:35.568589 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-5wbhc" Jan 31 04:10:36 crc kubenswrapper[4827]: I0131 04:10:36.036345 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-5wbhc"] Jan 31 04:10:36 crc kubenswrapper[4827]: I0131 04:10:36.088279 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-5wbhc" event={"ID":"ba8a9812-004c-4d41-989d-3a460f7966d5","Type":"ContainerStarted","Data":"b1f46303a64a2745b5fe65c766585e4c5a84aad8f1f67dcd88f2fc41480fde22"} Jan 31 04:10:37 crc kubenswrapper[4827]: I0131 04:10:37.101192 4827 generic.go:334] "Generic (PLEG): container finished" podID="ba8a9812-004c-4d41-989d-3a460f7966d5" containerID="aac6895f16e6ca3a76306dcc7dde16d387ed09a2e7b59034d24ed753e09bb2d6" exitCode=0 Jan 31 04:10:37 crc kubenswrapper[4827]: I0131 04:10:37.101270 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-5wbhc" event={"ID":"ba8a9812-004c-4d41-989d-3a460f7966d5","Type":"ContainerDied","Data":"aac6895f16e6ca3a76306dcc7dde16d387ed09a2e7b59034d24ed753e09bb2d6"} Jan 31 04:10:38 crc kubenswrapper[4827]: I0131 04:10:38.126803 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/redhat-operators-5wbhc" event={"ID":"ba8a9812-004c-4d41-989d-3a460f7966d5","Type":"ContainerStarted","Data":"a31f84c5dad8bc5613c65c8cd40b8171ea82139370284130ca9f6c49854d9d5a"} Jan 31 04:10:40 crc kubenswrapper[4827]: I0131 04:10:40.133949 4827 generic.go:334] "Generic (PLEG): container finished" podID="ba8a9812-004c-4d41-989d-3a460f7966d5" containerID="a31f84c5dad8bc5613c65c8cd40b8171ea82139370284130ca9f6c49854d9d5a" exitCode=0 Jan 31 04:10:40 crc kubenswrapper[4827]: I0131 04:10:40.134053 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-5wbhc" event={"ID":"ba8a9812-004c-4d41-989d-3a460f7966d5","Type":"ContainerDied","Data":"a31f84c5dad8bc5613c65c8cd40b8171ea82139370284130ca9f6c49854d9d5a"} Jan 31 04:10:41 crc kubenswrapper[4827]: I0131 04:10:41.146908 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-5wbhc" event={"ID":"ba8a9812-004c-4d41-989d-3a460f7966d5","Type":"ContainerStarted","Data":"5082fc180971a904997298a1310825765e30c350eab1f273970267c823cda901"} Jan 31 04:10:41 crc kubenswrapper[4827]: I0131 04:10:41.179463 4827 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-5wbhc" podStartSLOduration=2.766165655 podStartE2EDuration="6.179434796s" podCreationTimestamp="2026-01-31 04:10:35 +0000 UTC" firstStartedPulling="2026-01-31 04:10:37.102789452 +0000 UTC m=+1429.789869901" lastFinishedPulling="2026-01-31 04:10:40.516058593 +0000 UTC m=+1433.203139042" observedRunningTime="2026-01-31 04:10:41.171545663 +0000 UTC m=+1433.858626122" watchObservedRunningTime="2026-01-31 04:10:41.179434796 +0000 UTC m=+1433.866515255" Jan 31 04:10:45 crc kubenswrapper[4827]: I0131 04:10:45.569614 4827 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-5wbhc" Jan 31 04:10:45 crc kubenswrapper[4827]: I0131 04:10:45.569675 4827 
kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-5wbhc" Jan 31 04:10:46 crc kubenswrapper[4827]: I0131 04:10:46.613338 4827 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-5wbhc" podUID="ba8a9812-004c-4d41-989d-3a460f7966d5" containerName="registry-server" probeResult="failure" output=< Jan 31 04:10:46 crc kubenswrapper[4827]: timeout: failed to connect service ":50051" within 1s Jan 31 04:10:46 crc kubenswrapper[4827]: > Jan 31 04:10:55 crc kubenswrapper[4827]: I0131 04:10:55.663827 4827 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-5wbhc" Jan 31 04:10:55 crc kubenswrapper[4827]: I0131 04:10:55.741166 4827 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-5wbhc" Jan 31 04:10:55 crc kubenswrapper[4827]: I0131 04:10:55.924721 4827 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-5wbhc"] Jan 31 04:10:57 crc kubenswrapper[4827]: I0131 04:10:57.330162 4827 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-5wbhc" podUID="ba8a9812-004c-4d41-989d-3a460f7966d5" containerName="registry-server" containerID="cri-o://5082fc180971a904997298a1310825765e30c350eab1f273970267c823cda901" gracePeriod=2 Jan 31 04:10:57 crc kubenswrapper[4827]: I0131 04:10:57.737469 4827 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-5wbhc" Jan 31 04:10:57 crc kubenswrapper[4827]: I0131 04:10:57.871043 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ba8a9812-004c-4d41-989d-3a460f7966d5-utilities\") pod \"ba8a9812-004c-4d41-989d-3a460f7966d5\" (UID: \"ba8a9812-004c-4d41-989d-3a460f7966d5\") " Jan 31 04:10:57 crc kubenswrapper[4827]: I0131 04:10:57.872157 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-szhnl\" (UniqueName: \"kubernetes.io/projected/ba8a9812-004c-4d41-989d-3a460f7966d5-kube-api-access-szhnl\") pod \"ba8a9812-004c-4d41-989d-3a460f7966d5\" (UID: \"ba8a9812-004c-4d41-989d-3a460f7966d5\") " Jan 31 04:10:57 crc kubenswrapper[4827]: I0131 04:10:57.872375 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ba8a9812-004c-4d41-989d-3a460f7966d5-utilities" (OuterVolumeSpecName: "utilities") pod "ba8a9812-004c-4d41-989d-3a460f7966d5" (UID: "ba8a9812-004c-4d41-989d-3a460f7966d5"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 04:10:57 crc kubenswrapper[4827]: I0131 04:10:57.872436 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ba8a9812-004c-4d41-989d-3a460f7966d5-catalog-content\") pod \"ba8a9812-004c-4d41-989d-3a460f7966d5\" (UID: \"ba8a9812-004c-4d41-989d-3a460f7966d5\") " Jan 31 04:10:57 crc kubenswrapper[4827]: I0131 04:10:57.873281 4827 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ba8a9812-004c-4d41-989d-3a460f7966d5-utilities\") on node \"crc\" DevicePath \"\"" Jan 31 04:10:57 crc kubenswrapper[4827]: I0131 04:10:57.885090 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ba8a9812-004c-4d41-989d-3a460f7966d5-kube-api-access-szhnl" (OuterVolumeSpecName: "kube-api-access-szhnl") pod "ba8a9812-004c-4d41-989d-3a460f7966d5" (UID: "ba8a9812-004c-4d41-989d-3a460f7966d5"). InnerVolumeSpecName "kube-api-access-szhnl". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 04:10:57 crc kubenswrapper[4827]: I0131 04:10:57.976058 4827 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-szhnl\" (UniqueName: \"kubernetes.io/projected/ba8a9812-004c-4d41-989d-3a460f7966d5-kube-api-access-szhnl\") on node \"crc\" DevicePath \"\"" Jan 31 04:10:57 crc kubenswrapper[4827]: I0131 04:10:57.993157 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ba8a9812-004c-4d41-989d-3a460f7966d5-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "ba8a9812-004c-4d41-989d-3a460f7966d5" (UID: "ba8a9812-004c-4d41-989d-3a460f7966d5"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 04:10:58 crc kubenswrapper[4827]: I0131 04:10:58.077698 4827 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ba8a9812-004c-4d41-989d-3a460f7966d5-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 31 04:10:58 crc kubenswrapper[4827]: I0131 04:10:58.341206 4827 generic.go:334] "Generic (PLEG): container finished" podID="ba8a9812-004c-4d41-989d-3a460f7966d5" containerID="5082fc180971a904997298a1310825765e30c350eab1f273970267c823cda901" exitCode=0 Jan 31 04:10:58 crc kubenswrapper[4827]: I0131 04:10:58.341270 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-5wbhc" event={"ID":"ba8a9812-004c-4d41-989d-3a460f7966d5","Type":"ContainerDied","Data":"5082fc180971a904997298a1310825765e30c350eab1f273970267c823cda901"} Jan 31 04:10:58 crc kubenswrapper[4827]: I0131 04:10:58.341308 4827 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-5wbhc" Jan 31 04:10:58 crc kubenswrapper[4827]: I0131 04:10:58.341512 4827 scope.go:117] "RemoveContainer" containerID="5082fc180971a904997298a1310825765e30c350eab1f273970267c823cda901" Jan 31 04:10:58 crc kubenswrapper[4827]: I0131 04:10:58.341497 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-5wbhc" event={"ID":"ba8a9812-004c-4d41-989d-3a460f7966d5","Type":"ContainerDied","Data":"b1f46303a64a2745b5fe65c766585e4c5a84aad8f1f67dcd88f2fc41480fde22"} Jan 31 04:10:58 crc kubenswrapper[4827]: I0131 04:10:58.367531 4827 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-5wbhc"] Jan 31 04:10:58 crc kubenswrapper[4827]: I0131 04:10:58.369139 4827 scope.go:117] "RemoveContainer" containerID="a31f84c5dad8bc5613c65c8cd40b8171ea82139370284130ca9f6c49854d9d5a" Jan 31 04:10:58 crc kubenswrapper[4827]: I0131 04:10:58.375964 4827 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-5wbhc"] Jan 31 04:10:58 crc kubenswrapper[4827]: I0131 04:10:58.391074 4827 scope.go:117] "RemoveContainer" containerID="aac6895f16e6ca3a76306dcc7dde16d387ed09a2e7b59034d24ed753e09bb2d6" Jan 31 04:10:58 crc kubenswrapper[4827]: I0131 04:10:58.424354 4827 scope.go:117] "RemoveContainer" containerID="5082fc180971a904997298a1310825765e30c350eab1f273970267c823cda901" Jan 31 04:10:58 crc kubenswrapper[4827]: E0131 04:10:58.424807 4827 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5082fc180971a904997298a1310825765e30c350eab1f273970267c823cda901\": container with ID starting with 5082fc180971a904997298a1310825765e30c350eab1f273970267c823cda901 not found: ID does not exist" containerID="5082fc180971a904997298a1310825765e30c350eab1f273970267c823cda901" Jan 31 04:10:58 crc kubenswrapper[4827]: I0131 04:10:58.424855 4827 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5082fc180971a904997298a1310825765e30c350eab1f273970267c823cda901"} err="failed to get container status \"5082fc180971a904997298a1310825765e30c350eab1f273970267c823cda901\": rpc error: code = NotFound desc = could not find container \"5082fc180971a904997298a1310825765e30c350eab1f273970267c823cda901\": container with ID starting with 5082fc180971a904997298a1310825765e30c350eab1f273970267c823cda901 not found: ID does not exist" Jan 31 04:10:58 crc kubenswrapper[4827]: I0131 04:10:58.424909 4827 scope.go:117] "RemoveContainer" containerID="a31f84c5dad8bc5613c65c8cd40b8171ea82139370284130ca9f6c49854d9d5a" Jan 31 04:10:58 crc kubenswrapper[4827]: E0131 04:10:58.425339 4827 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a31f84c5dad8bc5613c65c8cd40b8171ea82139370284130ca9f6c49854d9d5a\": container with ID starting with a31f84c5dad8bc5613c65c8cd40b8171ea82139370284130ca9f6c49854d9d5a not found: ID does not exist" containerID="a31f84c5dad8bc5613c65c8cd40b8171ea82139370284130ca9f6c49854d9d5a" Jan 31 04:10:58 crc kubenswrapper[4827]: I0131 04:10:58.425388 4827 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a31f84c5dad8bc5613c65c8cd40b8171ea82139370284130ca9f6c49854d9d5a"} err="failed to get container status \"a31f84c5dad8bc5613c65c8cd40b8171ea82139370284130ca9f6c49854d9d5a\": rpc error: code = NotFound desc = could not find container \"a31f84c5dad8bc5613c65c8cd40b8171ea82139370284130ca9f6c49854d9d5a\": container with ID starting with a31f84c5dad8bc5613c65c8cd40b8171ea82139370284130ca9f6c49854d9d5a not found: ID does not exist" Jan 31 04:10:58 crc kubenswrapper[4827]: I0131 04:10:58.425422 4827 scope.go:117] "RemoveContainer" containerID="aac6895f16e6ca3a76306dcc7dde16d387ed09a2e7b59034d24ed753e09bb2d6" Jan 31 04:10:58 crc kubenswrapper[4827]: E0131 
04:10:58.425725 4827 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"aac6895f16e6ca3a76306dcc7dde16d387ed09a2e7b59034d24ed753e09bb2d6\": container with ID starting with aac6895f16e6ca3a76306dcc7dde16d387ed09a2e7b59034d24ed753e09bb2d6 not found: ID does not exist" containerID="aac6895f16e6ca3a76306dcc7dde16d387ed09a2e7b59034d24ed753e09bb2d6" Jan 31 04:10:58 crc kubenswrapper[4827]: I0131 04:10:58.425765 4827 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"aac6895f16e6ca3a76306dcc7dde16d387ed09a2e7b59034d24ed753e09bb2d6"} err="failed to get container status \"aac6895f16e6ca3a76306dcc7dde16d387ed09a2e7b59034d24ed753e09bb2d6\": rpc error: code = NotFound desc = could not find container \"aac6895f16e6ca3a76306dcc7dde16d387ed09a2e7b59034d24ed753e09bb2d6\": container with ID starting with aac6895f16e6ca3a76306dcc7dde16d387ed09a2e7b59034d24ed753e09bb2d6 not found: ID does not exist" Jan 31 04:10:58 crc kubenswrapper[4827]: I0131 04:10:58.442970 4827 scope.go:117] "RemoveContainer" containerID="9c8a06745330142ba4cebfe3b7c45ec95048a1917a3d60fc36d35421468a23f6" Jan 31 04:10:58 crc kubenswrapper[4827]: I0131 04:10:58.490536 4827 scope.go:117] "RemoveContainer" containerID="3bef3482c6b5a8881586f34defacaf25b3b594be37c355711b648b85efc6694b" Jan 31 04:10:58 crc kubenswrapper[4827]: I0131 04:10:58.527023 4827 scope.go:117] "RemoveContainer" containerID="eaca7c5fe002a871bef4d3e604059e1cd3aea64dd58015175cd887eb1497b35c" Jan 31 04:10:58 crc kubenswrapper[4827]: I0131 04:10:58.549188 4827 scope.go:117] "RemoveContainer" containerID="e9e050e91be2e39f24268cfdaccc9695df83bb4d89156713e783315b16fb5419" Jan 31 04:10:58 crc kubenswrapper[4827]: I0131 04:10:58.595917 4827 scope.go:117] "RemoveContainer" containerID="d27d8c56ccddb6fefc46a276a6c005c968705931bad7da365a2488c039067002" Jan 31 04:11:00 crc kubenswrapper[4827]: I0131 04:11:00.128553 4827 kubelet_volumes.go:163] 
"Cleaned up orphaned pod volumes dir" podUID="ba8a9812-004c-4d41-989d-3a460f7966d5" path="/var/lib/kubelet/pods/ba8a9812-004c-4d41-989d-3a460f7966d5/volumes" Jan 31 04:11:17 crc kubenswrapper[4827]: I0131 04:11:17.371226 4827 patch_prober.go:28] interesting pod/machine-config-daemon-jxh94 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 31 04:11:17 crc kubenswrapper[4827]: I0131 04:11:17.373630 4827 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-jxh94" podUID="e63dbb73-e1a2-4796-83c5-2a88e55566b5" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 31 04:11:47 crc kubenswrapper[4827]: I0131 04:11:47.375239 4827 patch_prober.go:28] interesting pod/machine-config-daemon-jxh94 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 31 04:11:47 crc kubenswrapper[4827]: I0131 04:11:47.376232 4827 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-jxh94" podUID="e63dbb73-e1a2-4796-83c5-2a88e55566b5" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 31 04:11:58 crc kubenswrapper[4827]: I0131 04:11:58.736409 4827 scope.go:117] "RemoveContainer" containerID="77b72291392212ba7d5663aba4e274c6c3a6c60ccc03e61bc94f0eb194070372" Jan 31 04:11:58 crc kubenswrapper[4827]: I0131 04:11:58.770129 4827 scope.go:117] "RemoveContainer" 
containerID="7a281abc16213495565a99d8a9ab5e3e6460c7de2d03dcc68200d40b4559bbfa" Jan 31 04:11:58 crc kubenswrapper[4827]: I0131 04:11:58.789483 4827 scope.go:117] "RemoveContainer" containerID="874eef342144c57c19df2425fb3922dd25e57f3cb0fe1328ddf7e019117b8980" Jan 31 04:12:17 crc kubenswrapper[4827]: I0131 04:12:17.371704 4827 patch_prober.go:28] interesting pod/machine-config-daemon-jxh94 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 31 04:12:17 crc kubenswrapper[4827]: I0131 04:12:17.372463 4827 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-jxh94" podUID="e63dbb73-e1a2-4796-83c5-2a88e55566b5" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 31 04:12:17 crc kubenswrapper[4827]: I0131 04:12:17.372530 4827 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-jxh94" Jan 31 04:12:17 crc kubenswrapper[4827]: I0131 04:12:17.373570 4827 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"90034e5c5b64aa9dbc546a2986ac24abb14947ccc2f7663f502fb74fbe06de7f"} pod="openshift-machine-config-operator/machine-config-daemon-jxh94" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 31 04:12:17 crc kubenswrapper[4827]: I0131 04:12:17.373656 4827 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-jxh94" podUID="e63dbb73-e1a2-4796-83c5-2a88e55566b5" containerName="machine-config-daemon" 
containerID="cri-o://90034e5c5b64aa9dbc546a2986ac24abb14947ccc2f7663f502fb74fbe06de7f" gracePeriod=600 Jan 31 04:12:17 crc kubenswrapper[4827]: E0131 04:12:17.496278 4827 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jxh94_openshift-machine-config-operator(e63dbb73-e1a2-4796-83c5-2a88e55566b5)\"" pod="openshift-machine-config-operator/machine-config-daemon-jxh94" podUID="e63dbb73-e1a2-4796-83c5-2a88e55566b5" Jan 31 04:12:18 crc kubenswrapper[4827]: I0131 04:12:18.184296 4827 generic.go:334] "Generic (PLEG): container finished" podID="e63dbb73-e1a2-4796-83c5-2a88e55566b5" containerID="90034e5c5b64aa9dbc546a2986ac24abb14947ccc2f7663f502fb74fbe06de7f" exitCode=0 Jan 31 04:12:18 crc kubenswrapper[4827]: I0131 04:12:18.184370 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-jxh94" event={"ID":"e63dbb73-e1a2-4796-83c5-2a88e55566b5","Type":"ContainerDied","Data":"90034e5c5b64aa9dbc546a2986ac24abb14947ccc2f7663f502fb74fbe06de7f"} Jan 31 04:12:18 crc kubenswrapper[4827]: I0131 04:12:18.184461 4827 scope.go:117] "RemoveContainer" containerID="047ff0edcff47ab439ecf6139d8ba1839619a9b3e0c1bd807d83661af77614a9" Jan 31 04:12:18 crc kubenswrapper[4827]: I0131 04:12:18.185304 4827 scope.go:117] "RemoveContainer" containerID="90034e5c5b64aa9dbc546a2986ac24abb14947ccc2f7663f502fb74fbe06de7f" Jan 31 04:12:18 crc kubenswrapper[4827]: E0131 04:12:18.186007 4827 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jxh94_openshift-machine-config-operator(e63dbb73-e1a2-4796-83c5-2a88e55566b5)\"" pod="openshift-machine-config-operator/machine-config-daemon-jxh94" 
podUID="e63dbb73-e1a2-4796-83c5-2a88e55566b5" Jan 31 04:12:33 crc kubenswrapper[4827]: I0131 04:12:33.110736 4827 scope.go:117] "RemoveContainer" containerID="90034e5c5b64aa9dbc546a2986ac24abb14947ccc2f7663f502fb74fbe06de7f" Jan 31 04:12:33 crc kubenswrapper[4827]: E0131 04:12:33.111906 4827 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jxh94_openshift-machine-config-operator(e63dbb73-e1a2-4796-83c5-2a88e55566b5)\"" pod="openshift-machine-config-operator/machine-config-daemon-jxh94" podUID="e63dbb73-e1a2-4796-83c5-2a88e55566b5" Jan 31 04:12:44 crc kubenswrapper[4827]: I0131 04:12:44.110289 4827 scope.go:117] "RemoveContainer" containerID="90034e5c5b64aa9dbc546a2986ac24abb14947ccc2f7663f502fb74fbe06de7f" Jan 31 04:12:44 crc kubenswrapper[4827]: E0131 04:12:44.111027 4827 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jxh94_openshift-machine-config-operator(e63dbb73-e1a2-4796-83c5-2a88e55566b5)\"" pod="openshift-machine-config-operator/machine-config-daemon-jxh94" podUID="e63dbb73-e1a2-4796-83c5-2a88e55566b5" Jan 31 04:12:56 crc kubenswrapper[4827]: I0131 04:12:56.725797 4827 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-bzjxk"] Jan 31 04:12:56 crc kubenswrapper[4827]: E0131 04:12:56.726694 4827 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ba8a9812-004c-4d41-989d-3a460f7966d5" containerName="extract-utilities" Jan 31 04:12:56 crc kubenswrapper[4827]: I0131 04:12:56.726709 4827 state_mem.go:107] "Deleted CPUSet assignment" podUID="ba8a9812-004c-4d41-989d-3a460f7966d5" containerName="extract-utilities" Jan 31 04:12:56 crc 
kubenswrapper[4827]: E0131 04:12:56.726720 4827 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ba8a9812-004c-4d41-989d-3a460f7966d5" containerName="extract-content" Jan 31 04:12:56 crc kubenswrapper[4827]: I0131 04:12:56.726727 4827 state_mem.go:107] "Deleted CPUSet assignment" podUID="ba8a9812-004c-4d41-989d-3a460f7966d5" containerName="extract-content" Jan 31 04:12:56 crc kubenswrapper[4827]: E0131 04:12:56.726736 4827 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ba8a9812-004c-4d41-989d-3a460f7966d5" containerName="registry-server" Jan 31 04:12:56 crc kubenswrapper[4827]: I0131 04:12:56.726743 4827 state_mem.go:107] "Deleted CPUSet assignment" podUID="ba8a9812-004c-4d41-989d-3a460f7966d5" containerName="registry-server" Jan 31 04:12:56 crc kubenswrapper[4827]: I0131 04:12:56.726948 4827 memory_manager.go:354] "RemoveStaleState removing state" podUID="ba8a9812-004c-4d41-989d-3a460f7966d5" containerName="registry-server" Jan 31 04:12:56 crc kubenswrapper[4827]: I0131 04:12:56.728454 4827 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-bzjxk" Jan 31 04:12:56 crc kubenswrapper[4827]: I0131 04:12:56.766685 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-bzjxk"] Jan 31 04:12:56 crc kubenswrapper[4827]: I0131 04:12:56.816596 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ca439434-1fb9-4a99-8b1a-4f72135ba956-utilities\") pod \"redhat-marketplace-bzjxk\" (UID: \"ca439434-1fb9-4a99-8b1a-4f72135ba956\") " pod="openshift-marketplace/redhat-marketplace-bzjxk" Jan 31 04:12:56 crc kubenswrapper[4827]: I0131 04:12:56.817043 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ca439434-1fb9-4a99-8b1a-4f72135ba956-catalog-content\") pod \"redhat-marketplace-bzjxk\" (UID: \"ca439434-1fb9-4a99-8b1a-4f72135ba956\") " pod="openshift-marketplace/redhat-marketplace-bzjxk" Jan 31 04:12:56 crc kubenswrapper[4827]: I0131 04:12:56.817225 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-86bpg\" (UniqueName: \"kubernetes.io/projected/ca439434-1fb9-4a99-8b1a-4f72135ba956-kube-api-access-86bpg\") pod \"redhat-marketplace-bzjxk\" (UID: \"ca439434-1fb9-4a99-8b1a-4f72135ba956\") " pod="openshift-marketplace/redhat-marketplace-bzjxk" Jan 31 04:12:56 crc kubenswrapper[4827]: I0131 04:12:56.918993 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ca439434-1fb9-4a99-8b1a-4f72135ba956-catalog-content\") pod \"redhat-marketplace-bzjxk\" (UID: \"ca439434-1fb9-4a99-8b1a-4f72135ba956\") " pod="openshift-marketplace/redhat-marketplace-bzjxk" Jan 31 04:12:56 crc kubenswrapper[4827]: I0131 04:12:56.919073 4827 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-86bpg\" (UniqueName: \"kubernetes.io/projected/ca439434-1fb9-4a99-8b1a-4f72135ba956-kube-api-access-86bpg\") pod \"redhat-marketplace-bzjxk\" (UID: \"ca439434-1fb9-4a99-8b1a-4f72135ba956\") " pod="openshift-marketplace/redhat-marketplace-bzjxk"
Jan 31 04:12:56 crc kubenswrapper[4827]: I0131 04:12:56.919124 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ca439434-1fb9-4a99-8b1a-4f72135ba956-utilities\") pod \"redhat-marketplace-bzjxk\" (UID: \"ca439434-1fb9-4a99-8b1a-4f72135ba956\") " pod="openshift-marketplace/redhat-marketplace-bzjxk"
Jan 31 04:12:56 crc kubenswrapper[4827]: I0131 04:12:56.919526 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ca439434-1fb9-4a99-8b1a-4f72135ba956-catalog-content\") pod \"redhat-marketplace-bzjxk\" (UID: \"ca439434-1fb9-4a99-8b1a-4f72135ba956\") " pod="openshift-marketplace/redhat-marketplace-bzjxk"
Jan 31 04:12:56 crc kubenswrapper[4827]: I0131 04:12:56.919577 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ca439434-1fb9-4a99-8b1a-4f72135ba956-utilities\") pod \"redhat-marketplace-bzjxk\" (UID: \"ca439434-1fb9-4a99-8b1a-4f72135ba956\") " pod="openshift-marketplace/redhat-marketplace-bzjxk"
Jan 31 04:12:56 crc kubenswrapper[4827]: I0131 04:12:56.937720 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-86bpg\" (UniqueName: \"kubernetes.io/projected/ca439434-1fb9-4a99-8b1a-4f72135ba956-kube-api-access-86bpg\") pod \"redhat-marketplace-bzjxk\" (UID: \"ca439434-1fb9-4a99-8b1a-4f72135ba956\") " pod="openshift-marketplace/redhat-marketplace-bzjxk"
Jan 31 04:12:57 crc kubenswrapper[4827]: I0131 04:12:57.081372 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-bzjxk"
Jan 31 04:12:57 crc kubenswrapper[4827]: I0131 04:12:57.574902 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-bzjxk"]
Jan 31 04:12:57 crc kubenswrapper[4827]: I0131 04:12:57.587727 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-bzjxk" event={"ID":"ca439434-1fb9-4a99-8b1a-4f72135ba956","Type":"ContainerStarted","Data":"081c94c0b6f5ccb0f9ab0c0febfc1630799fef899657e986cfb9838bdabcbeb0"}
Jan 31 04:12:58 crc kubenswrapper[4827]: I0131 04:12:58.124190 4827 scope.go:117] "RemoveContainer" containerID="90034e5c5b64aa9dbc546a2986ac24abb14947ccc2f7663f502fb74fbe06de7f"
Jan 31 04:12:58 crc kubenswrapper[4827]: E0131 04:12:58.124824 4827 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jxh94_openshift-machine-config-operator(e63dbb73-e1a2-4796-83c5-2a88e55566b5)\"" pod="openshift-machine-config-operator/machine-config-daemon-jxh94" podUID="e63dbb73-e1a2-4796-83c5-2a88e55566b5"
Jan 31 04:12:58 crc kubenswrapper[4827]: I0131 04:12:58.598779 4827 generic.go:334] "Generic (PLEG): container finished" podID="ca439434-1fb9-4a99-8b1a-4f72135ba956" containerID="f0338daf9623e128d329db5ba852816a6973e495e70426a9d254a7f446d1abfc" exitCode=0
Jan 31 04:12:58 crc kubenswrapper[4827]: I0131 04:12:58.598825 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-bzjxk" event={"ID":"ca439434-1fb9-4a99-8b1a-4f72135ba956","Type":"ContainerDied","Data":"f0338daf9623e128d329db5ba852816a6973e495e70426a9d254a7f446d1abfc"}
Jan 31 04:12:59 crc kubenswrapper[4827]: I0131 04:12:59.608743 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-bzjxk" event={"ID":"ca439434-1fb9-4a99-8b1a-4f72135ba956","Type":"ContainerStarted","Data":"677a231a87c7b2ed90bbf3aef5c2c1b48d12e63e1fb8617aeb042aed1b0bfd71"}
Jan 31 04:13:00 crc kubenswrapper[4827]: I0131 04:13:00.618911 4827 generic.go:334] "Generic (PLEG): container finished" podID="ca439434-1fb9-4a99-8b1a-4f72135ba956" containerID="677a231a87c7b2ed90bbf3aef5c2c1b48d12e63e1fb8617aeb042aed1b0bfd71" exitCode=0
Jan 31 04:13:00 crc kubenswrapper[4827]: I0131 04:13:00.618985 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-bzjxk" event={"ID":"ca439434-1fb9-4a99-8b1a-4f72135ba956","Type":"ContainerDied","Data":"677a231a87c7b2ed90bbf3aef5c2c1b48d12e63e1fb8617aeb042aed1b0bfd71"}
Jan 31 04:13:01 crc kubenswrapper[4827]: I0131 04:13:01.631245 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-bzjxk" event={"ID":"ca439434-1fb9-4a99-8b1a-4f72135ba956","Type":"ContainerStarted","Data":"a1a53dba87eb40aac8f7a0299b4c274890bd064f2e51fd86bc5e1cd073965915"}
Jan 31 04:13:01 crc kubenswrapper[4827]: I0131 04:13:01.659271 4827 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-bzjxk" podStartSLOduration=3.245997146 podStartE2EDuration="5.659252879s" podCreationTimestamp="2026-01-31 04:12:56 +0000 UTC" firstStartedPulling="2026-01-31 04:12:58.601161248 +0000 UTC m=+1571.288241717" lastFinishedPulling="2026-01-31 04:13:01.014417001 +0000 UTC m=+1573.701497450" observedRunningTime="2026-01-31 04:13:01.646803838 +0000 UTC m=+1574.333884287" watchObservedRunningTime="2026-01-31 04:13:01.659252879 +0000 UTC m=+1574.346333328"
Jan 31 04:13:02 crc kubenswrapper[4827]: I0131 04:13:02.638941 4827 generic.go:334] "Generic (PLEG): container finished" podID="e7919b7b-6239-444c-9da1-8dedcee8ca3e" containerID="08e0116fbf3a38e5da7bc6a5299c294add6dec9fa7bc6b56cfad318252f0235a" exitCode=0
Jan 31 04:13:02 crc kubenswrapper[4827]: I0131 04:13:02.639018 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-2899m" event={"ID":"e7919b7b-6239-444c-9da1-8dedcee8ca3e","Type":"ContainerDied","Data":"08e0116fbf3a38e5da7bc6a5299c294add6dec9fa7bc6b56cfad318252f0235a"}
Jan 31 04:13:04 crc kubenswrapper[4827]: I0131 04:13:04.108081 4827 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-q64nk"]
Jan 31 04:13:04 crc kubenswrapper[4827]: I0131 04:13:04.110700 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-q64nk"
Jan 31 04:13:04 crc kubenswrapper[4827]: I0131 04:13:04.124518 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-q64nk"]
Jan 31 04:13:04 crc kubenswrapper[4827]: I0131 04:13:04.142295 4827 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-2899m"
Jan 31 04:13:04 crc kubenswrapper[4827]: I0131 04:13:04.264979 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e7919b7b-6239-444c-9da1-8dedcee8ca3e-bootstrap-combined-ca-bundle\") pod \"e7919b7b-6239-444c-9da1-8dedcee8ca3e\" (UID: \"e7919b7b-6239-444c-9da1-8dedcee8ca3e\") "
Jan 31 04:13:04 crc kubenswrapper[4827]: I0131 04:13:04.265034 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-j2cpx\" (UniqueName: \"kubernetes.io/projected/e7919b7b-6239-444c-9da1-8dedcee8ca3e-kube-api-access-j2cpx\") pod \"e7919b7b-6239-444c-9da1-8dedcee8ca3e\" (UID: \"e7919b7b-6239-444c-9da1-8dedcee8ca3e\") "
Jan 31 04:13:04 crc kubenswrapper[4827]: I0131 04:13:04.265080 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e7919b7b-6239-444c-9da1-8dedcee8ca3e-inventory\") pod \"e7919b7b-6239-444c-9da1-8dedcee8ca3e\" (UID: \"e7919b7b-6239-444c-9da1-8dedcee8ca3e\") "
Jan 31 04:13:04 crc kubenswrapper[4827]: I0131 04:13:04.265113 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/e7919b7b-6239-444c-9da1-8dedcee8ca3e-ssh-key-openstack-edpm-ipam\") pod \"e7919b7b-6239-444c-9da1-8dedcee8ca3e\" (UID: \"e7919b7b-6239-444c-9da1-8dedcee8ca3e\") "
Jan 31 04:13:04 crc kubenswrapper[4827]: I0131 04:13:04.265454 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8e92acd7-0ac1-4182-8d98-1a540e144fa1-utilities\") pod \"community-operators-q64nk\" (UID: \"8e92acd7-0ac1-4182-8d98-1a540e144fa1\") " pod="openshift-marketplace/community-operators-q64nk"
Jan 31 04:13:04 crc kubenswrapper[4827]: I0131 04:13:04.265526 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fkxtz\" (UniqueName: \"kubernetes.io/projected/8e92acd7-0ac1-4182-8d98-1a540e144fa1-kube-api-access-fkxtz\") pod \"community-operators-q64nk\" (UID: \"8e92acd7-0ac1-4182-8d98-1a540e144fa1\") " pod="openshift-marketplace/community-operators-q64nk"
Jan 31 04:13:04 crc kubenswrapper[4827]: I0131 04:13:04.265578 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8e92acd7-0ac1-4182-8d98-1a540e144fa1-catalog-content\") pod \"community-operators-q64nk\" (UID: \"8e92acd7-0ac1-4182-8d98-1a540e144fa1\") " pod="openshift-marketplace/community-operators-q64nk"
Jan 31 04:13:04 crc kubenswrapper[4827]: I0131 04:13:04.271104 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e7919b7b-6239-444c-9da1-8dedcee8ca3e-kube-api-access-j2cpx" (OuterVolumeSpecName: "kube-api-access-j2cpx") pod "e7919b7b-6239-444c-9da1-8dedcee8ca3e" (UID: "e7919b7b-6239-444c-9da1-8dedcee8ca3e"). InnerVolumeSpecName "kube-api-access-j2cpx". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 31 04:13:04 crc kubenswrapper[4827]: I0131 04:13:04.272949 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e7919b7b-6239-444c-9da1-8dedcee8ca3e-bootstrap-combined-ca-bundle" (OuterVolumeSpecName: "bootstrap-combined-ca-bundle") pod "e7919b7b-6239-444c-9da1-8dedcee8ca3e" (UID: "e7919b7b-6239-444c-9da1-8dedcee8ca3e"). InnerVolumeSpecName "bootstrap-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 31 04:13:04 crc kubenswrapper[4827]: I0131 04:13:04.303052 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e7919b7b-6239-444c-9da1-8dedcee8ca3e-inventory" (OuterVolumeSpecName: "inventory") pod "e7919b7b-6239-444c-9da1-8dedcee8ca3e" (UID: "e7919b7b-6239-444c-9da1-8dedcee8ca3e"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 31 04:13:04 crc kubenswrapper[4827]: I0131 04:13:04.311433 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e7919b7b-6239-444c-9da1-8dedcee8ca3e-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "e7919b7b-6239-444c-9da1-8dedcee8ca3e" (UID: "e7919b7b-6239-444c-9da1-8dedcee8ca3e"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 31 04:13:04 crc kubenswrapper[4827]: I0131 04:13:04.368123 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fkxtz\" (UniqueName: \"kubernetes.io/projected/8e92acd7-0ac1-4182-8d98-1a540e144fa1-kube-api-access-fkxtz\") pod \"community-operators-q64nk\" (UID: \"8e92acd7-0ac1-4182-8d98-1a540e144fa1\") " pod="openshift-marketplace/community-operators-q64nk"
Jan 31 04:13:04 crc kubenswrapper[4827]: I0131 04:13:04.368194 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8e92acd7-0ac1-4182-8d98-1a540e144fa1-catalog-content\") pod \"community-operators-q64nk\" (UID: \"8e92acd7-0ac1-4182-8d98-1a540e144fa1\") " pod="openshift-marketplace/community-operators-q64nk"
Jan 31 04:13:04 crc kubenswrapper[4827]: I0131 04:13:04.368265 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8e92acd7-0ac1-4182-8d98-1a540e144fa1-utilities\") pod \"community-operators-q64nk\" (UID: \"8e92acd7-0ac1-4182-8d98-1a540e144fa1\") " pod="openshift-marketplace/community-operators-q64nk"
Jan 31 04:13:04 crc kubenswrapper[4827]: I0131 04:13:04.368309 4827 reconciler_common.go:293] "Volume detached for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e7919b7b-6239-444c-9da1-8dedcee8ca3e-bootstrap-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 31 04:13:04 crc kubenswrapper[4827]: I0131 04:13:04.368321 4827 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-j2cpx\" (UniqueName: \"kubernetes.io/projected/e7919b7b-6239-444c-9da1-8dedcee8ca3e-kube-api-access-j2cpx\") on node \"crc\" DevicePath \"\""
Jan 31 04:13:04 crc kubenswrapper[4827]: I0131 04:13:04.368331 4827 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e7919b7b-6239-444c-9da1-8dedcee8ca3e-inventory\") on node \"crc\" DevicePath \"\""
Jan 31 04:13:04 crc kubenswrapper[4827]: I0131 04:13:04.368339 4827 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/e7919b7b-6239-444c-9da1-8dedcee8ca3e-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\""
Jan 31 04:13:04 crc kubenswrapper[4827]: I0131 04:13:04.368760 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8e92acd7-0ac1-4182-8d98-1a540e144fa1-catalog-content\") pod \"community-operators-q64nk\" (UID: \"8e92acd7-0ac1-4182-8d98-1a540e144fa1\") " pod="openshift-marketplace/community-operators-q64nk"
Jan 31 04:13:04 crc kubenswrapper[4827]: I0131 04:13:04.368796 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8e92acd7-0ac1-4182-8d98-1a540e144fa1-utilities\") pod \"community-operators-q64nk\" (UID: \"8e92acd7-0ac1-4182-8d98-1a540e144fa1\") " pod="openshift-marketplace/community-operators-q64nk"
Jan 31 04:13:04 crc kubenswrapper[4827]: I0131 04:13:04.385337 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fkxtz\" (UniqueName: \"kubernetes.io/projected/8e92acd7-0ac1-4182-8d98-1a540e144fa1-kube-api-access-fkxtz\") pod \"community-operators-q64nk\" (UID: \"8e92acd7-0ac1-4182-8d98-1a540e144fa1\") " pod="openshift-marketplace/community-operators-q64nk"
Jan 31 04:13:04 crc kubenswrapper[4827]: I0131 04:13:04.468626 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-q64nk"
Jan 31 04:13:04 crc kubenswrapper[4827]: I0131 04:13:04.672390 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-2899m" event={"ID":"e7919b7b-6239-444c-9da1-8dedcee8ca3e","Type":"ContainerDied","Data":"adb5c4695e9a1a21a73fba448e9e19657c9cc2f83420a3770489c8806450f3f4"}
Jan 31 04:13:04 crc kubenswrapper[4827]: I0131 04:13:04.672707 4827 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="adb5c4695e9a1a21a73fba448e9e19657c9cc2f83420a3770489c8806450f3f4"
Jan 31 04:13:04 crc kubenswrapper[4827]: I0131 04:13:04.672806 4827 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-2899m"
Jan 31 04:13:04 crc kubenswrapper[4827]: I0131 04:13:04.772160 4827 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/configure-network-edpm-deployment-openstack-edpm-ipam-fs8lt"]
Jan 31 04:13:04 crc kubenswrapper[4827]: E0131 04:13:04.772592 4827 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e7919b7b-6239-444c-9da1-8dedcee8ca3e" containerName="bootstrap-edpm-deployment-openstack-edpm-ipam"
Jan 31 04:13:04 crc kubenswrapper[4827]: I0131 04:13:04.772613 4827 state_mem.go:107] "Deleted CPUSet assignment" podUID="e7919b7b-6239-444c-9da1-8dedcee8ca3e" containerName="bootstrap-edpm-deployment-openstack-edpm-ipam"
Jan 31 04:13:04 crc kubenswrapper[4827]: I0131 04:13:04.772810 4827 memory_manager.go:354] "RemoveStaleState removing state" podUID="e7919b7b-6239-444c-9da1-8dedcee8ca3e" containerName="bootstrap-edpm-deployment-openstack-edpm-ipam"
Jan 31 04:13:04 crc kubenswrapper[4827]: I0131 04:13:04.773708 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-fs8lt"
Jan 31 04:13:04 crc kubenswrapper[4827]: I0131 04:13:04.776861 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-dklwg"
Jan 31 04:13:04 crc kubenswrapper[4827]: I0131 04:13:04.777107 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam"
Jan 31 04:13:04 crc kubenswrapper[4827]: I0131 04:13:04.777189 4827 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env"
Jan 31 04:13:04 crc kubenswrapper[4827]: I0131 04:13:04.777207 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret"
Jan 31 04:13:04 crc kubenswrapper[4827]: I0131 04:13:04.790834 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-network-edpm-deployment-openstack-edpm-ipam-fs8lt"]
Jan 31 04:13:04 crc kubenswrapper[4827]: I0131 04:13:04.879563 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/f21965f4-36e7-4c6b-9377-1da6c40e9b02-ssh-key-openstack-edpm-ipam\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-fs8lt\" (UID: \"f21965f4-36e7-4c6b-9377-1da6c40e9b02\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-fs8lt"
Jan 31 04:13:04 crc kubenswrapper[4827]: I0131 04:13:04.879673 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/f21965f4-36e7-4c6b-9377-1da6c40e9b02-inventory\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-fs8lt\" (UID: \"f21965f4-36e7-4c6b-9377-1da6c40e9b02\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-fs8lt"
Jan 31 04:13:04 crc kubenswrapper[4827]: I0131 04:13:04.879719 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l8mvr\" (UniqueName: \"kubernetes.io/projected/f21965f4-36e7-4c6b-9377-1da6c40e9b02-kube-api-access-l8mvr\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-fs8lt\" (UID: \"f21965f4-36e7-4c6b-9377-1da6c40e9b02\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-fs8lt"
Jan 31 04:13:04 crc kubenswrapper[4827]: I0131 04:13:04.981055 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l8mvr\" (UniqueName: \"kubernetes.io/projected/f21965f4-36e7-4c6b-9377-1da6c40e9b02-kube-api-access-l8mvr\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-fs8lt\" (UID: \"f21965f4-36e7-4c6b-9377-1da6c40e9b02\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-fs8lt"
Jan 31 04:13:04 crc kubenswrapper[4827]: I0131 04:13:04.981262 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/f21965f4-36e7-4c6b-9377-1da6c40e9b02-ssh-key-openstack-edpm-ipam\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-fs8lt\" (UID: \"f21965f4-36e7-4c6b-9377-1da6c40e9b02\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-fs8lt"
Jan 31 04:13:04 crc kubenswrapper[4827]: I0131 04:13:04.981437 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/f21965f4-36e7-4c6b-9377-1da6c40e9b02-inventory\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-fs8lt\" (UID: \"f21965f4-36e7-4c6b-9377-1da6c40e9b02\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-fs8lt"
Jan 31 04:13:04 crc kubenswrapper[4827]: I0131 04:13:04.986974 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/f21965f4-36e7-4c6b-9377-1da6c40e9b02-inventory\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-fs8lt\" (UID: \"f21965f4-36e7-4c6b-9377-1da6c40e9b02\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-fs8lt"
Jan 31 04:13:04 crc kubenswrapper[4827]: I0131 04:13:04.989705 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/f21965f4-36e7-4c6b-9377-1da6c40e9b02-ssh-key-openstack-edpm-ipam\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-fs8lt\" (UID: \"f21965f4-36e7-4c6b-9377-1da6c40e9b02\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-fs8lt"
Jan 31 04:13:05 crc kubenswrapper[4827]: I0131 04:13:05.004029 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l8mvr\" (UniqueName: \"kubernetes.io/projected/f21965f4-36e7-4c6b-9377-1da6c40e9b02-kube-api-access-l8mvr\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-fs8lt\" (UID: \"f21965f4-36e7-4c6b-9377-1da6c40e9b02\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-fs8lt"
Jan 31 04:13:05 crc kubenswrapper[4827]: I0131 04:13:05.086954 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-q64nk"]
Jan 31 04:13:05 crc kubenswrapper[4827]: I0131 04:13:05.092844 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-fs8lt"
Jan 31 04:13:05 crc kubenswrapper[4827]: I0131 04:13:05.679995 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-network-edpm-deployment-openstack-edpm-ipam-fs8lt"]
Jan 31 04:13:05 crc kubenswrapper[4827]: I0131 04:13:05.682171 4827 generic.go:334] "Generic (PLEG): container finished" podID="8e92acd7-0ac1-4182-8d98-1a540e144fa1" containerID="1352839fbb8e05b37a2e20671241559de33d6b3b9876934418cd0450036c3487" exitCode=0
Jan 31 04:13:05 crc kubenswrapper[4827]: I0131 04:13:05.682221 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-q64nk" event={"ID":"8e92acd7-0ac1-4182-8d98-1a540e144fa1","Type":"ContainerDied","Data":"1352839fbb8e05b37a2e20671241559de33d6b3b9876934418cd0450036c3487"}
Jan 31 04:13:05 crc kubenswrapper[4827]: I0131 04:13:05.682261 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-q64nk" event={"ID":"8e92acd7-0ac1-4182-8d98-1a540e144fa1","Type":"ContainerStarted","Data":"b903de051e750380f5d36f579758af778f241ed133427c4cb0f3114370408f15"}
Jan 31 04:13:06 crc kubenswrapper[4827]: I0131 04:13:06.694252 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-fs8lt" event={"ID":"f21965f4-36e7-4c6b-9377-1da6c40e9b02","Type":"ContainerStarted","Data":"942d5457a8b3e345b742d4e0acf06045c9338756c7cf5d7f38960fd1ee91adfa"}
Jan 31 04:13:06 crc kubenswrapper[4827]: I0131 04:13:06.696109 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-fs8lt" event={"ID":"f21965f4-36e7-4c6b-9377-1da6c40e9b02","Type":"ContainerStarted","Data":"03f840da5926741fbd11139a28b392629eba1046a8b9f628eceebbafd9e7eaa0"}
Jan 31 04:13:06 crc kubenswrapper[4827]: I0131 04:13:06.697670 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-q64nk" event={"ID":"8e92acd7-0ac1-4182-8d98-1a540e144fa1","Type":"ContainerStarted","Data":"0ba59c06be6a990fd33154f558be490b18742c21c2651753e584498c3653ed15"}
Jan 31 04:13:06 crc kubenswrapper[4827]: I0131 04:13:06.719966 4827 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-fs8lt" podStartSLOduration=2.317939393 podStartE2EDuration="2.719949051s" podCreationTimestamp="2026-01-31 04:13:04 +0000 UTC" firstStartedPulling="2026-01-31 04:13:05.695195142 +0000 UTC m=+1578.382275601" lastFinishedPulling="2026-01-31 04:13:06.09720477 +0000 UTC m=+1578.784285259" observedRunningTime="2026-01-31 04:13:06.715454264 +0000 UTC m=+1579.402534753" watchObservedRunningTime="2026-01-31 04:13:06.719949051 +0000 UTC m=+1579.407029500"
Jan 31 04:13:07 crc kubenswrapper[4827]: I0131 04:13:07.081456 4827 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-bzjxk"
Jan 31 04:13:07 crc kubenswrapper[4827]: I0131 04:13:07.081821 4827 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-bzjxk"
Jan 31 04:13:07 crc kubenswrapper[4827]: I0131 04:13:07.130070 4827 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-bzjxk"
Jan 31 04:13:07 crc kubenswrapper[4827]: I0131 04:13:07.708241 4827 generic.go:334] "Generic (PLEG): container finished" podID="8e92acd7-0ac1-4182-8d98-1a540e144fa1" containerID="0ba59c06be6a990fd33154f558be490b18742c21c2651753e584498c3653ed15" exitCode=0
Jan 31 04:13:07 crc kubenswrapper[4827]: I0131 04:13:07.709155 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-q64nk" event={"ID":"8e92acd7-0ac1-4182-8d98-1a540e144fa1","Type":"ContainerDied","Data":"0ba59c06be6a990fd33154f558be490b18742c21c2651753e584498c3653ed15"}
Jan 31 04:13:07 crc kubenswrapper[4827]: I0131 04:13:07.771269 4827 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-bzjxk"
Jan 31 04:13:08 crc kubenswrapper[4827]: I0131 04:13:08.717795 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-q64nk" event={"ID":"8e92acd7-0ac1-4182-8d98-1a540e144fa1","Type":"ContainerStarted","Data":"0f1db947456e47b6f948fad3379f772d9d1b22bca06ea7fe56f8dc896a91b626"}
Jan 31 04:13:08 crc kubenswrapper[4827]: I0131 04:13:08.739828 4827 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-q64nk" podStartSLOduration=2.027674199 podStartE2EDuration="4.739746889s" podCreationTimestamp="2026-01-31 04:13:04 +0000 UTC" firstStartedPulling="2026-01-31 04:13:05.686066852 +0000 UTC m=+1578.373147311" lastFinishedPulling="2026-01-31 04:13:08.398139552 +0000 UTC m=+1581.085220001" observedRunningTime="2026-01-31 04:13:08.733224729 +0000 UTC m=+1581.420305188" watchObservedRunningTime="2026-01-31 04:13:08.739746889 +0000 UTC m=+1581.426827358"
Jan 31 04:13:09 crc kubenswrapper[4827]: I0131 04:13:09.482442 4827 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-bzjxk"]
Jan 31 04:13:09 crc kubenswrapper[4827]: I0131 04:13:09.726999 4827 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-bzjxk" podUID="ca439434-1fb9-4a99-8b1a-4f72135ba956" containerName="registry-server" containerID="cri-o://a1a53dba87eb40aac8f7a0299b4c274890bd064f2e51fd86bc5e1cd073965915" gracePeriod=2
Jan 31 04:13:10 crc kubenswrapper[4827]: I0131 04:13:10.202555 4827 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-bzjxk"
Jan 31 04:13:10 crc kubenswrapper[4827]: I0131 04:13:10.389849 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ca439434-1fb9-4a99-8b1a-4f72135ba956-catalog-content\") pod \"ca439434-1fb9-4a99-8b1a-4f72135ba956\" (UID: \"ca439434-1fb9-4a99-8b1a-4f72135ba956\") "
Jan 31 04:13:10 crc kubenswrapper[4827]: I0131 04:13:10.389900 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ca439434-1fb9-4a99-8b1a-4f72135ba956-utilities\") pod \"ca439434-1fb9-4a99-8b1a-4f72135ba956\" (UID: \"ca439434-1fb9-4a99-8b1a-4f72135ba956\") "
Jan 31 04:13:10 crc kubenswrapper[4827]: I0131 04:13:10.389969 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-86bpg\" (UniqueName: \"kubernetes.io/projected/ca439434-1fb9-4a99-8b1a-4f72135ba956-kube-api-access-86bpg\") pod \"ca439434-1fb9-4a99-8b1a-4f72135ba956\" (UID: \"ca439434-1fb9-4a99-8b1a-4f72135ba956\") "
Jan 31 04:13:10 crc kubenswrapper[4827]: I0131 04:13:10.390902 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ca439434-1fb9-4a99-8b1a-4f72135ba956-utilities" (OuterVolumeSpecName: "utilities") pod "ca439434-1fb9-4a99-8b1a-4f72135ba956" (UID: "ca439434-1fb9-4a99-8b1a-4f72135ba956"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 31 04:13:10 crc kubenswrapper[4827]: I0131 04:13:10.396089 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ca439434-1fb9-4a99-8b1a-4f72135ba956-kube-api-access-86bpg" (OuterVolumeSpecName: "kube-api-access-86bpg") pod "ca439434-1fb9-4a99-8b1a-4f72135ba956" (UID: "ca439434-1fb9-4a99-8b1a-4f72135ba956"). InnerVolumeSpecName "kube-api-access-86bpg". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 31 04:13:10 crc kubenswrapper[4827]: I0131 04:13:10.414962 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ca439434-1fb9-4a99-8b1a-4f72135ba956-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "ca439434-1fb9-4a99-8b1a-4f72135ba956" (UID: "ca439434-1fb9-4a99-8b1a-4f72135ba956"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 31 04:13:10 crc kubenswrapper[4827]: I0131 04:13:10.492205 4827 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-86bpg\" (UniqueName: \"kubernetes.io/projected/ca439434-1fb9-4a99-8b1a-4f72135ba956-kube-api-access-86bpg\") on node \"crc\" DevicePath \"\""
Jan 31 04:13:10 crc kubenswrapper[4827]: I0131 04:13:10.492576 4827 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ca439434-1fb9-4a99-8b1a-4f72135ba956-catalog-content\") on node \"crc\" DevicePath \"\""
Jan 31 04:13:10 crc kubenswrapper[4827]: I0131 04:13:10.492590 4827 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ca439434-1fb9-4a99-8b1a-4f72135ba956-utilities\") on node \"crc\" DevicePath \"\""
Jan 31 04:13:10 crc kubenswrapper[4827]: I0131 04:13:10.740839 4827 generic.go:334] "Generic (PLEG): container finished" podID="ca439434-1fb9-4a99-8b1a-4f72135ba956" containerID="a1a53dba87eb40aac8f7a0299b4c274890bd064f2e51fd86bc5e1cd073965915" exitCode=0
Jan 31 04:13:10 crc kubenswrapper[4827]: I0131 04:13:10.740918 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-bzjxk" event={"ID":"ca439434-1fb9-4a99-8b1a-4f72135ba956","Type":"ContainerDied","Data":"a1a53dba87eb40aac8f7a0299b4c274890bd064f2e51fd86bc5e1cd073965915"}
Jan 31 04:13:10 crc kubenswrapper[4827]: I0131 04:13:10.740980 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-bzjxk" event={"ID":"ca439434-1fb9-4a99-8b1a-4f72135ba956","Type":"ContainerDied","Data":"081c94c0b6f5ccb0f9ab0c0febfc1630799fef899657e986cfb9838bdabcbeb0"}
Jan 31 04:13:10 crc kubenswrapper[4827]: I0131 04:13:10.741006 4827 scope.go:117] "RemoveContainer" containerID="a1a53dba87eb40aac8f7a0299b4c274890bd064f2e51fd86bc5e1cd073965915"
Jan 31 04:13:10 crc kubenswrapper[4827]: I0131 04:13:10.742478 4827 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-bzjxk"
Jan 31 04:13:10 crc kubenswrapper[4827]: I0131 04:13:10.763021 4827 scope.go:117] "RemoveContainer" containerID="677a231a87c7b2ed90bbf3aef5c2c1b48d12e63e1fb8617aeb042aed1b0bfd71"
Jan 31 04:13:10 crc kubenswrapper[4827]: I0131 04:13:10.792050 4827 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-bzjxk"]
Jan 31 04:13:10 crc kubenswrapper[4827]: I0131 04:13:10.805671 4827 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-bzjxk"]
Jan 31 04:13:10 crc kubenswrapper[4827]: I0131 04:13:10.806349 4827 scope.go:117] "RemoveContainer" containerID="f0338daf9623e128d329db5ba852816a6973e495e70426a9d254a7f446d1abfc"
Jan 31 04:13:10 crc kubenswrapper[4827]: I0131 04:13:10.846726 4827 scope.go:117] "RemoveContainer" containerID="a1a53dba87eb40aac8f7a0299b4c274890bd064f2e51fd86bc5e1cd073965915"
Jan 31 04:13:10 crc kubenswrapper[4827]: E0131 04:13:10.847328 4827 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a1a53dba87eb40aac8f7a0299b4c274890bd064f2e51fd86bc5e1cd073965915\": container with ID starting with a1a53dba87eb40aac8f7a0299b4c274890bd064f2e51fd86bc5e1cd073965915 not found: ID does not exist" containerID="a1a53dba87eb40aac8f7a0299b4c274890bd064f2e51fd86bc5e1cd073965915"
Jan 31 04:13:10 crc kubenswrapper[4827]: I0131 04:13:10.847360 4827 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a1a53dba87eb40aac8f7a0299b4c274890bd064f2e51fd86bc5e1cd073965915"} err="failed to get container status \"a1a53dba87eb40aac8f7a0299b4c274890bd064f2e51fd86bc5e1cd073965915\": rpc error: code = NotFound desc = could not find container \"a1a53dba87eb40aac8f7a0299b4c274890bd064f2e51fd86bc5e1cd073965915\": container with ID starting with a1a53dba87eb40aac8f7a0299b4c274890bd064f2e51fd86bc5e1cd073965915 not found: ID does not exist"
Jan 31 04:13:10 crc kubenswrapper[4827]: I0131 04:13:10.847379 4827 scope.go:117] "RemoveContainer" containerID="677a231a87c7b2ed90bbf3aef5c2c1b48d12e63e1fb8617aeb042aed1b0bfd71"
Jan 31 04:13:10 crc kubenswrapper[4827]: E0131 04:13:10.847624 4827 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"677a231a87c7b2ed90bbf3aef5c2c1b48d12e63e1fb8617aeb042aed1b0bfd71\": container with ID starting with 677a231a87c7b2ed90bbf3aef5c2c1b48d12e63e1fb8617aeb042aed1b0bfd71 not found: ID does not exist" containerID="677a231a87c7b2ed90bbf3aef5c2c1b48d12e63e1fb8617aeb042aed1b0bfd71"
Jan 31 04:13:10 crc kubenswrapper[4827]: I0131 04:13:10.847661 4827 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"677a231a87c7b2ed90bbf3aef5c2c1b48d12e63e1fb8617aeb042aed1b0bfd71"} err="failed to get container status \"677a231a87c7b2ed90bbf3aef5c2c1b48d12e63e1fb8617aeb042aed1b0bfd71\": rpc error: code = NotFound desc = could not find container \"677a231a87c7b2ed90bbf3aef5c2c1b48d12e63e1fb8617aeb042aed1b0bfd71\": container with ID starting with 677a231a87c7b2ed90bbf3aef5c2c1b48d12e63e1fb8617aeb042aed1b0bfd71 not found: ID does not exist"
Jan 31 04:13:10 crc kubenswrapper[4827]: I0131 04:13:10.847681 4827 scope.go:117] "RemoveContainer" containerID="f0338daf9623e128d329db5ba852816a6973e495e70426a9d254a7f446d1abfc"
Jan 31 04:13:10 crc kubenswrapper[4827]: E0131 04:13:10.848227 4827 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f0338daf9623e128d329db5ba852816a6973e495e70426a9d254a7f446d1abfc\": container with ID starting with f0338daf9623e128d329db5ba852816a6973e495e70426a9d254a7f446d1abfc not found: ID does not exist" containerID="f0338daf9623e128d329db5ba852816a6973e495e70426a9d254a7f446d1abfc"
Jan 31 04:13:10 crc kubenswrapper[4827]: I0131 04:13:10.848308 4827 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f0338daf9623e128d329db5ba852816a6973e495e70426a9d254a7f446d1abfc"} err="failed to get container status \"f0338daf9623e128d329db5ba852816a6973e495e70426a9d254a7f446d1abfc\": rpc error: code = NotFound desc = could not find container \"f0338daf9623e128d329db5ba852816a6973e495e70426a9d254a7f446d1abfc\": container with ID starting with f0338daf9623e128d329db5ba852816a6973e495e70426a9d254a7f446d1abfc not found: ID does not exist"
Jan 31 04:13:11 crc kubenswrapper[4827]: I0131 04:13:11.110633 4827 scope.go:117] "RemoveContainer" containerID="90034e5c5b64aa9dbc546a2986ac24abb14947ccc2f7663f502fb74fbe06de7f"
Jan 31 04:13:11 crc kubenswrapper[4827]: E0131 04:13:11.111003 4827 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jxh94_openshift-machine-config-operator(e63dbb73-e1a2-4796-83c5-2a88e55566b5)\"" pod="openshift-machine-config-operator/machine-config-daemon-jxh94" podUID="e63dbb73-e1a2-4796-83c5-2a88e55566b5"
Jan 31 04:13:12 crc kubenswrapper[4827]: I0131 04:13:12.122007 4827 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ca439434-1fb9-4a99-8b1a-4f72135ba956" path="/var/lib/kubelet/pods/ca439434-1fb9-4a99-8b1a-4f72135ba956/volumes"
Jan 31 04:13:14 crc kubenswrapper[4827]: I0131 04:13:14.468924 4827 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-q64nk"
Jan 31 04:13:14 crc kubenswrapper[4827]: I0131 04:13:14.469521 4827 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-q64nk"
Jan 31 04:13:14 crc kubenswrapper[4827]: I0131 04:13:14.527796 4827 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-q64nk"
Jan 31 04:13:14 crc kubenswrapper[4827]: I0131 04:13:14.828172 4827 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-q64nk"
Jan 31 04:13:14 crc kubenswrapper[4827]: I0131 04:13:14.892354 4827 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-q64nk"]
Jan 31 04:13:16 crc kubenswrapper[4827]: I0131 04:13:16.803383 4827 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-q64nk" podUID="8e92acd7-0ac1-4182-8d98-1a540e144fa1" containerName="registry-server" containerID="cri-o://0f1db947456e47b6f948fad3379f772d9d1b22bca06ea7fe56f8dc896a91b626" gracePeriod=2
Jan 31 04:13:17 crc kubenswrapper[4827]: I0131 04:13:17.192783 4827 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-p9mnx"]
Jan 31 04:13:17 crc kubenswrapper[4827]: E0131 04:13:17.193287 4827 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ca439434-1fb9-4a99-8b1a-4f72135ba956" containerName="registry-server"
Jan 31 04:13:17 crc kubenswrapper[4827]: I0131 04:13:17.193311 4827 state_mem.go:107] "Deleted CPUSet assignment" podUID="ca439434-1fb9-4a99-8b1a-4f72135ba956" containerName="registry-server"
Jan 31 04:13:17 crc kubenswrapper[4827]: E0131 04:13:17.193326 4827 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ca439434-1fb9-4a99-8b1a-4f72135ba956"
containerName="extract-content" Jan 31 04:13:17 crc kubenswrapper[4827]: I0131 04:13:17.193335 4827 state_mem.go:107] "Deleted CPUSet assignment" podUID="ca439434-1fb9-4a99-8b1a-4f72135ba956" containerName="extract-content" Jan 31 04:13:17 crc kubenswrapper[4827]: E0131 04:13:17.193352 4827 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ca439434-1fb9-4a99-8b1a-4f72135ba956" containerName="extract-utilities" Jan 31 04:13:17 crc kubenswrapper[4827]: I0131 04:13:17.193359 4827 state_mem.go:107] "Deleted CPUSet assignment" podUID="ca439434-1fb9-4a99-8b1a-4f72135ba956" containerName="extract-utilities" Jan 31 04:13:17 crc kubenswrapper[4827]: I0131 04:13:17.193586 4827 memory_manager.go:354] "RemoveStaleState removing state" podUID="ca439434-1fb9-4a99-8b1a-4f72135ba956" containerName="registry-server" Jan 31 04:13:17 crc kubenswrapper[4827]: I0131 04:13:17.195161 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-p9mnx" Jan 31 04:13:17 crc kubenswrapper[4827]: I0131 04:13:17.215128 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-p9mnx"] Jan 31 04:13:17 crc kubenswrapper[4827]: I0131 04:13:17.318330 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9bh84\" (UniqueName: \"kubernetes.io/projected/a985eee9-b75b-499b-bbdb-fb1f3437ff77-kube-api-access-9bh84\") pod \"certified-operators-p9mnx\" (UID: \"a985eee9-b75b-499b-bbdb-fb1f3437ff77\") " pod="openshift-marketplace/certified-operators-p9mnx" Jan 31 04:13:17 crc kubenswrapper[4827]: I0131 04:13:17.318468 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a985eee9-b75b-499b-bbdb-fb1f3437ff77-utilities\") pod \"certified-operators-p9mnx\" (UID: \"a985eee9-b75b-499b-bbdb-fb1f3437ff77\") " 
pod="openshift-marketplace/certified-operators-p9mnx" Jan 31 04:13:17 crc kubenswrapper[4827]: I0131 04:13:17.318578 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a985eee9-b75b-499b-bbdb-fb1f3437ff77-catalog-content\") pod \"certified-operators-p9mnx\" (UID: \"a985eee9-b75b-499b-bbdb-fb1f3437ff77\") " pod="openshift-marketplace/certified-operators-p9mnx" Jan 31 04:13:17 crc kubenswrapper[4827]: I0131 04:13:17.420379 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a985eee9-b75b-499b-bbdb-fb1f3437ff77-catalog-content\") pod \"certified-operators-p9mnx\" (UID: \"a985eee9-b75b-499b-bbdb-fb1f3437ff77\") " pod="openshift-marketplace/certified-operators-p9mnx" Jan 31 04:13:17 crc kubenswrapper[4827]: I0131 04:13:17.420505 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9bh84\" (UniqueName: \"kubernetes.io/projected/a985eee9-b75b-499b-bbdb-fb1f3437ff77-kube-api-access-9bh84\") pod \"certified-operators-p9mnx\" (UID: \"a985eee9-b75b-499b-bbdb-fb1f3437ff77\") " pod="openshift-marketplace/certified-operators-p9mnx" Jan 31 04:13:17 crc kubenswrapper[4827]: I0131 04:13:17.420590 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a985eee9-b75b-499b-bbdb-fb1f3437ff77-utilities\") pod \"certified-operators-p9mnx\" (UID: \"a985eee9-b75b-499b-bbdb-fb1f3437ff77\") " pod="openshift-marketplace/certified-operators-p9mnx" Jan 31 04:13:17 crc kubenswrapper[4827]: I0131 04:13:17.421124 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a985eee9-b75b-499b-bbdb-fb1f3437ff77-utilities\") pod \"certified-operators-p9mnx\" (UID: \"a985eee9-b75b-499b-bbdb-fb1f3437ff77\") " 
pod="openshift-marketplace/certified-operators-p9mnx" Jan 31 04:13:17 crc kubenswrapper[4827]: I0131 04:13:17.421387 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a985eee9-b75b-499b-bbdb-fb1f3437ff77-catalog-content\") pod \"certified-operators-p9mnx\" (UID: \"a985eee9-b75b-499b-bbdb-fb1f3437ff77\") " pod="openshift-marketplace/certified-operators-p9mnx" Jan 31 04:13:17 crc kubenswrapper[4827]: I0131 04:13:17.454184 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9bh84\" (UniqueName: \"kubernetes.io/projected/a985eee9-b75b-499b-bbdb-fb1f3437ff77-kube-api-access-9bh84\") pod \"certified-operators-p9mnx\" (UID: \"a985eee9-b75b-499b-bbdb-fb1f3437ff77\") " pod="openshift-marketplace/certified-operators-p9mnx" Jan 31 04:13:17 crc kubenswrapper[4827]: I0131 04:13:17.552094 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-p9mnx" Jan 31 04:13:17 crc kubenswrapper[4827]: I0131 04:13:17.752033 4827 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-q64nk" Jan 31 04:13:17 crc kubenswrapper[4827]: I0131 04:13:17.814714 4827 generic.go:334] "Generic (PLEG): container finished" podID="8e92acd7-0ac1-4182-8d98-1a540e144fa1" containerID="0f1db947456e47b6f948fad3379f772d9d1b22bca06ea7fe56f8dc896a91b626" exitCode=0 Jan 31 04:13:17 crc kubenswrapper[4827]: I0131 04:13:17.814756 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-q64nk" event={"ID":"8e92acd7-0ac1-4182-8d98-1a540e144fa1","Type":"ContainerDied","Data":"0f1db947456e47b6f948fad3379f772d9d1b22bca06ea7fe56f8dc896a91b626"} Jan 31 04:13:17 crc kubenswrapper[4827]: I0131 04:13:17.814783 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-q64nk" event={"ID":"8e92acd7-0ac1-4182-8d98-1a540e144fa1","Type":"ContainerDied","Data":"b903de051e750380f5d36f579758af778f241ed133427c4cb0f3114370408f15"} Jan 31 04:13:17 crc kubenswrapper[4827]: I0131 04:13:17.814819 4827 scope.go:117] "RemoveContainer" containerID="0f1db947456e47b6f948fad3379f772d9d1b22bca06ea7fe56f8dc896a91b626" Jan 31 04:13:17 crc kubenswrapper[4827]: I0131 04:13:17.814953 4827 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-q64nk" Jan 31 04:13:17 crc kubenswrapper[4827]: I0131 04:13:17.833196 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fkxtz\" (UniqueName: \"kubernetes.io/projected/8e92acd7-0ac1-4182-8d98-1a540e144fa1-kube-api-access-fkxtz\") pod \"8e92acd7-0ac1-4182-8d98-1a540e144fa1\" (UID: \"8e92acd7-0ac1-4182-8d98-1a540e144fa1\") " Jan 31 04:13:17 crc kubenswrapper[4827]: I0131 04:13:17.833301 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8e92acd7-0ac1-4182-8d98-1a540e144fa1-catalog-content\") pod \"8e92acd7-0ac1-4182-8d98-1a540e144fa1\" (UID: \"8e92acd7-0ac1-4182-8d98-1a540e144fa1\") " Jan 31 04:13:17 crc kubenswrapper[4827]: I0131 04:13:17.833417 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8e92acd7-0ac1-4182-8d98-1a540e144fa1-utilities\") pod \"8e92acd7-0ac1-4182-8d98-1a540e144fa1\" (UID: \"8e92acd7-0ac1-4182-8d98-1a540e144fa1\") " Jan 31 04:13:17 crc kubenswrapper[4827]: I0131 04:13:17.835044 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8e92acd7-0ac1-4182-8d98-1a540e144fa1-utilities" (OuterVolumeSpecName: "utilities") pod "8e92acd7-0ac1-4182-8d98-1a540e144fa1" (UID: "8e92acd7-0ac1-4182-8d98-1a540e144fa1"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 04:13:17 crc kubenswrapper[4827]: I0131 04:13:17.843167 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8e92acd7-0ac1-4182-8d98-1a540e144fa1-kube-api-access-fkxtz" (OuterVolumeSpecName: "kube-api-access-fkxtz") pod "8e92acd7-0ac1-4182-8d98-1a540e144fa1" (UID: "8e92acd7-0ac1-4182-8d98-1a540e144fa1"). InnerVolumeSpecName "kube-api-access-fkxtz". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 04:13:17 crc kubenswrapper[4827]: I0131 04:13:17.846816 4827 scope.go:117] "RemoveContainer" containerID="0ba59c06be6a990fd33154f558be490b18742c21c2651753e584498c3653ed15" Jan 31 04:13:17 crc kubenswrapper[4827]: I0131 04:13:17.914359 4827 scope.go:117] "RemoveContainer" containerID="1352839fbb8e05b37a2e20671241559de33d6b3b9876934418cd0450036c3487" Jan 31 04:13:17 crc kubenswrapper[4827]: I0131 04:13:17.933508 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8e92acd7-0ac1-4182-8d98-1a540e144fa1-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "8e92acd7-0ac1-4182-8d98-1a540e144fa1" (UID: "8e92acd7-0ac1-4182-8d98-1a540e144fa1"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 04:13:17 crc kubenswrapper[4827]: I0131 04:13:17.935152 4827 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8e92acd7-0ac1-4182-8d98-1a540e144fa1-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 31 04:13:17 crc kubenswrapper[4827]: I0131 04:13:17.935176 4827 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8e92acd7-0ac1-4182-8d98-1a540e144fa1-utilities\") on node \"crc\" DevicePath \"\"" Jan 31 04:13:17 crc kubenswrapper[4827]: I0131 04:13:17.935186 4827 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fkxtz\" (UniqueName: \"kubernetes.io/projected/8e92acd7-0ac1-4182-8d98-1a540e144fa1-kube-api-access-fkxtz\") on node \"crc\" DevicePath \"\"" Jan 31 04:13:17 crc kubenswrapper[4827]: I0131 04:13:17.993182 4827 scope.go:117] "RemoveContainer" containerID="0f1db947456e47b6f948fad3379f772d9d1b22bca06ea7fe56f8dc896a91b626" Jan 31 04:13:18 crc kubenswrapper[4827]: E0131 04:13:18.004264 4827 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = 
NotFound desc = could not find container \"0f1db947456e47b6f948fad3379f772d9d1b22bca06ea7fe56f8dc896a91b626\": container with ID starting with 0f1db947456e47b6f948fad3379f772d9d1b22bca06ea7fe56f8dc896a91b626 not found: ID does not exist" containerID="0f1db947456e47b6f948fad3379f772d9d1b22bca06ea7fe56f8dc896a91b626" Jan 31 04:13:18 crc kubenswrapper[4827]: I0131 04:13:18.004310 4827 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0f1db947456e47b6f948fad3379f772d9d1b22bca06ea7fe56f8dc896a91b626"} err="failed to get container status \"0f1db947456e47b6f948fad3379f772d9d1b22bca06ea7fe56f8dc896a91b626\": rpc error: code = NotFound desc = could not find container \"0f1db947456e47b6f948fad3379f772d9d1b22bca06ea7fe56f8dc896a91b626\": container with ID starting with 0f1db947456e47b6f948fad3379f772d9d1b22bca06ea7fe56f8dc896a91b626 not found: ID does not exist" Jan 31 04:13:18 crc kubenswrapper[4827]: I0131 04:13:18.004334 4827 scope.go:117] "RemoveContainer" containerID="0ba59c06be6a990fd33154f558be490b18742c21c2651753e584498c3653ed15" Jan 31 04:13:18 crc kubenswrapper[4827]: E0131 04:13:18.007293 4827 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0ba59c06be6a990fd33154f558be490b18742c21c2651753e584498c3653ed15\": container with ID starting with 0ba59c06be6a990fd33154f558be490b18742c21c2651753e584498c3653ed15 not found: ID does not exist" containerID="0ba59c06be6a990fd33154f558be490b18742c21c2651753e584498c3653ed15" Jan 31 04:13:18 crc kubenswrapper[4827]: I0131 04:13:18.007330 4827 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0ba59c06be6a990fd33154f558be490b18742c21c2651753e584498c3653ed15"} err="failed to get container status \"0ba59c06be6a990fd33154f558be490b18742c21c2651753e584498c3653ed15\": rpc error: code = NotFound desc = could not find container 
\"0ba59c06be6a990fd33154f558be490b18742c21c2651753e584498c3653ed15\": container with ID starting with 0ba59c06be6a990fd33154f558be490b18742c21c2651753e584498c3653ed15 not found: ID does not exist" Jan 31 04:13:18 crc kubenswrapper[4827]: I0131 04:13:18.007356 4827 scope.go:117] "RemoveContainer" containerID="1352839fbb8e05b37a2e20671241559de33d6b3b9876934418cd0450036c3487" Jan 31 04:13:18 crc kubenswrapper[4827]: E0131 04:13:18.007815 4827 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1352839fbb8e05b37a2e20671241559de33d6b3b9876934418cd0450036c3487\": container with ID starting with 1352839fbb8e05b37a2e20671241559de33d6b3b9876934418cd0450036c3487 not found: ID does not exist" containerID="1352839fbb8e05b37a2e20671241559de33d6b3b9876934418cd0450036c3487" Jan 31 04:13:18 crc kubenswrapper[4827]: I0131 04:13:18.007845 4827 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1352839fbb8e05b37a2e20671241559de33d6b3b9876934418cd0450036c3487"} err="failed to get container status \"1352839fbb8e05b37a2e20671241559de33d6b3b9876934418cd0450036c3487\": rpc error: code = NotFound desc = could not find container \"1352839fbb8e05b37a2e20671241559de33d6b3b9876934418cd0450036c3487\": container with ID starting with 1352839fbb8e05b37a2e20671241559de33d6b3b9876934418cd0450036c3487 not found: ID does not exist" Jan 31 04:13:18 crc kubenswrapper[4827]: I0131 04:13:18.098075 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-p9mnx"] Jan 31 04:13:18 crc kubenswrapper[4827]: I0131 04:13:18.171858 4827 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-q64nk"] Jan 31 04:13:18 crc kubenswrapper[4827]: I0131 04:13:18.179566 4827 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-q64nk"] Jan 31 04:13:18 crc kubenswrapper[4827]: I0131 
04:13:18.828123 4827 generic.go:334] "Generic (PLEG): container finished" podID="a985eee9-b75b-499b-bbdb-fb1f3437ff77" containerID="e4cc5f5467a2064ca899c8ddb2bf7dde2fbf9e622ef5576495e163eea624af92" exitCode=0 Jan 31 04:13:18 crc kubenswrapper[4827]: I0131 04:13:18.828272 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-p9mnx" event={"ID":"a985eee9-b75b-499b-bbdb-fb1f3437ff77","Type":"ContainerDied","Data":"e4cc5f5467a2064ca899c8ddb2bf7dde2fbf9e622ef5576495e163eea624af92"} Jan 31 04:13:18 crc kubenswrapper[4827]: I0131 04:13:18.828325 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-p9mnx" event={"ID":"a985eee9-b75b-499b-bbdb-fb1f3437ff77","Type":"ContainerStarted","Data":"2052feeddf2a89da6fce49ec90392379d3320b573355985af0b43976638f0db8"} Jan 31 04:13:18 crc kubenswrapper[4827]: I0131 04:13:18.831067 4827 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 31 04:13:20 crc kubenswrapper[4827]: I0131 04:13:20.122011 4827 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8e92acd7-0ac1-4182-8d98-1a540e144fa1" path="/var/lib/kubelet/pods/8e92acd7-0ac1-4182-8d98-1a540e144fa1/volumes" Jan 31 04:13:22 crc kubenswrapper[4827]: I0131 04:13:22.110614 4827 scope.go:117] "RemoveContainer" containerID="90034e5c5b64aa9dbc546a2986ac24abb14947ccc2f7663f502fb74fbe06de7f" Jan 31 04:13:22 crc kubenswrapper[4827]: E0131 04:13:22.111200 4827 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jxh94_openshift-machine-config-operator(e63dbb73-e1a2-4796-83c5-2a88e55566b5)\"" pod="openshift-machine-config-operator/machine-config-daemon-jxh94" podUID="e63dbb73-e1a2-4796-83c5-2a88e55566b5" Jan 31 04:13:23 crc kubenswrapper[4827]: I0131 
04:13:23.871050 4827 generic.go:334] "Generic (PLEG): container finished" podID="a985eee9-b75b-499b-bbdb-fb1f3437ff77" containerID="eb06ef031ed1d8a88b89d1e49b2588e2d00bfb6fb0eb8f84bd9902539289d0cd" exitCode=0 Jan 31 04:13:23 crc kubenswrapper[4827]: I0131 04:13:23.871086 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-p9mnx" event={"ID":"a985eee9-b75b-499b-bbdb-fb1f3437ff77","Type":"ContainerDied","Data":"eb06ef031ed1d8a88b89d1e49b2588e2d00bfb6fb0eb8f84bd9902539289d0cd"} Jan 31 04:13:24 crc kubenswrapper[4827]: I0131 04:13:24.883216 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-p9mnx" event={"ID":"a985eee9-b75b-499b-bbdb-fb1f3437ff77","Type":"ContainerStarted","Data":"8ca107cfae822c0a7069024cfd6366d84885bab139063e1041916bdabb508a74"} Jan 31 04:13:24 crc kubenswrapper[4827]: I0131 04:13:24.901874 4827 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-p9mnx" podStartSLOduration=2.241233468 podStartE2EDuration="7.901854232s" podCreationTimestamp="2026-01-31 04:13:17 +0000 UTC" firstStartedPulling="2026-01-31 04:13:18.830663259 +0000 UTC m=+1591.517743718" lastFinishedPulling="2026-01-31 04:13:24.491284033 +0000 UTC m=+1597.178364482" observedRunningTime="2026-01-31 04:13:24.899610304 +0000 UTC m=+1597.586690763" watchObservedRunningTime="2026-01-31 04:13:24.901854232 +0000 UTC m=+1597.588934701" Jan 31 04:13:27 crc kubenswrapper[4827]: I0131 04:13:27.552261 4827 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-p9mnx" Jan 31 04:13:27 crc kubenswrapper[4827]: I0131 04:13:27.552599 4827 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-p9mnx" Jan 31 04:13:27 crc kubenswrapper[4827]: I0131 04:13:27.606034 4827 kubelet.go:2542] "SyncLoop (probe)" probe="startup" 
status="started" pod="openshift-marketplace/certified-operators-p9mnx" Jan 31 04:13:33 crc kubenswrapper[4827]: I0131 04:13:33.110282 4827 scope.go:117] "RemoveContainer" containerID="90034e5c5b64aa9dbc546a2986ac24abb14947ccc2f7663f502fb74fbe06de7f" Jan 31 04:13:33 crc kubenswrapper[4827]: E0131 04:13:33.112310 4827 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jxh94_openshift-machine-config-operator(e63dbb73-e1a2-4796-83c5-2a88e55566b5)\"" pod="openshift-machine-config-operator/machine-config-daemon-jxh94" podUID="e63dbb73-e1a2-4796-83c5-2a88e55566b5" Jan 31 04:13:37 crc kubenswrapper[4827]: I0131 04:13:37.602250 4827 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-p9mnx" Jan 31 04:13:37 crc kubenswrapper[4827]: I0131 04:13:37.692489 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-p9mnx"] Jan 31 04:13:37 crc kubenswrapper[4827]: I0131 04:13:37.752062 4827 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-7krl2"] Jan 31 04:13:37 crc kubenswrapper[4827]: I0131 04:13:37.752348 4827 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-7krl2" podUID="c69e48aa-b820-4027-a322-cca18339d441" containerName="registry-server" containerID="cri-o://695894ed778b91528d1c5a08dc133882bc388a190e2514a0449c11591ecdcf87" gracePeriod=2 Jan 31 04:13:38 crc kubenswrapper[4827]: I0131 04:13:38.012094 4827 generic.go:334] "Generic (PLEG): container finished" podID="c69e48aa-b820-4027-a322-cca18339d441" containerID="695894ed778b91528d1c5a08dc133882bc388a190e2514a0449c11591ecdcf87" exitCode=0 Jan 31 04:13:38 crc kubenswrapper[4827]: I0131 04:13:38.012193 4827 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-7krl2" event={"ID":"c69e48aa-b820-4027-a322-cca18339d441","Type":"ContainerDied","Data":"695894ed778b91528d1c5a08dc133882bc388a190e2514a0449c11591ecdcf87"} Jan 31 04:13:38 crc kubenswrapper[4827]: I0131 04:13:38.209736 4827 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-7krl2" Jan 31 04:13:38 crc kubenswrapper[4827]: I0131 04:13:38.351554 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c69e48aa-b820-4027-a322-cca18339d441-catalog-content\") pod \"c69e48aa-b820-4027-a322-cca18339d441\" (UID: \"c69e48aa-b820-4027-a322-cca18339d441\") " Jan 31 04:13:38 crc kubenswrapper[4827]: I0131 04:13:38.351642 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mg8lc\" (UniqueName: \"kubernetes.io/projected/c69e48aa-b820-4027-a322-cca18339d441-kube-api-access-mg8lc\") pod \"c69e48aa-b820-4027-a322-cca18339d441\" (UID: \"c69e48aa-b820-4027-a322-cca18339d441\") " Jan 31 04:13:38 crc kubenswrapper[4827]: I0131 04:13:38.351745 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c69e48aa-b820-4027-a322-cca18339d441-utilities\") pod \"c69e48aa-b820-4027-a322-cca18339d441\" (UID: \"c69e48aa-b820-4027-a322-cca18339d441\") " Jan 31 04:13:38 crc kubenswrapper[4827]: I0131 04:13:38.352202 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c69e48aa-b820-4027-a322-cca18339d441-utilities" (OuterVolumeSpecName: "utilities") pod "c69e48aa-b820-4027-a322-cca18339d441" (UID: "c69e48aa-b820-4027-a322-cca18339d441"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 04:13:38 crc kubenswrapper[4827]: I0131 04:13:38.375076 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c69e48aa-b820-4027-a322-cca18339d441-kube-api-access-mg8lc" (OuterVolumeSpecName: "kube-api-access-mg8lc") pod "c69e48aa-b820-4027-a322-cca18339d441" (UID: "c69e48aa-b820-4027-a322-cca18339d441"). InnerVolumeSpecName "kube-api-access-mg8lc". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 04:13:38 crc kubenswrapper[4827]: I0131 04:13:38.399783 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c69e48aa-b820-4027-a322-cca18339d441-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "c69e48aa-b820-4027-a322-cca18339d441" (UID: "c69e48aa-b820-4027-a322-cca18339d441"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 04:13:38 crc kubenswrapper[4827]: I0131 04:13:38.453911 4827 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mg8lc\" (UniqueName: \"kubernetes.io/projected/c69e48aa-b820-4027-a322-cca18339d441-kube-api-access-mg8lc\") on node \"crc\" DevicePath \"\"" Jan 31 04:13:38 crc kubenswrapper[4827]: I0131 04:13:38.453959 4827 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c69e48aa-b820-4027-a322-cca18339d441-utilities\") on node \"crc\" DevicePath \"\"" Jan 31 04:13:38 crc kubenswrapper[4827]: I0131 04:13:38.453969 4827 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c69e48aa-b820-4027-a322-cca18339d441-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 31 04:13:39 crc kubenswrapper[4827]: I0131 04:13:39.025313 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-7krl2" 
event={"ID":"c69e48aa-b820-4027-a322-cca18339d441","Type":"ContainerDied","Data":"13818c96adc9fd19a3cb1411d4e58256d45c4840aa578263ce0aa36dc87b8888"} Jan 31 04:13:39 crc kubenswrapper[4827]: I0131 04:13:39.025366 4827 scope.go:117] "RemoveContainer" containerID="695894ed778b91528d1c5a08dc133882bc388a190e2514a0449c11591ecdcf87" Jan 31 04:13:39 crc kubenswrapper[4827]: I0131 04:13:39.025388 4827 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-7krl2" Jan 31 04:13:39 crc kubenswrapper[4827]: I0131 04:13:39.046719 4827 scope.go:117] "RemoveContainer" containerID="c7d63976c9c9acd00b00c3522e5299a0620f10abfebbccdb1ab754915e5efd4b" Jan 31 04:13:39 crc kubenswrapper[4827]: I0131 04:13:39.061775 4827 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-7krl2"] Jan 31 04:13:39 crc kubenswrapper[4827]: I0131 04:13:39.072440 4827 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-7krl2"] Jan 31 04:13:39 crc kubenswrapper[4827]: I0131 04:13:39.078261 4827 scope.go:117] "RemoveContainer" containerID="e14c298674c6b09dcf1354c109f8af56f600a5f54d459b67f9c7cd7808365298" Jan 31 04:13:40 crc kubenswrapper[4827]: I0131 04:13:40.120813 4827 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c69e48aa-b820-4027-a322-cca18339d441" path="/var/lib/kubelet/pods/c69e48aa-b820-4027-a322-cca18339d441/volumes" Jan 31 04:13:48 crc kubenswrapper[4827]: I0131 04:13:48.128186 4827 scope.go:117] "RemoveContainer" containerID="90034e5c5b64aa9dbc546a2986ac24abb14947ccc2f7663f502fb74fbe06de7f" Jan 31 04:13:48 crc kubenswrapper[4827]: E0131 04:13:48.129230 4827 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-jxh94_openshift-machine-config-operator(e63dbb73-e1a2-4796-83c5-2a88e55566b5)\"" pod="openshift-machine-config-operator/machine-config-daemon-jxh94" podUID="e63dbb73-e1a2-4796-83c5-2a88e55566b5" Jan 31 04:13:58 crc kubenswrapper[4827]: I0131 04:13:58.890365 4827 scope.go:117] "RemoveContainer" containerID="41842c647403bf3daf7553ad05bbea2c6e6089ce25427a39e08cce7703e1182b" Jan 31 04:13:58 crc kubenswrapper[4827]: I0131 04:13:58.932030 4827 scope.go:117] "RemoveContainer" containerID="0000d2a727f5c592650e72974494ea3f13ed2562b2db4b38735bce7eab44e396" Jan 31 04:13:59 crc kubenswrapper[4827]: I0131 04:13:59.003184 4827 scope.go:117] "RemoveContainer" containerID="7dce2b51248bd789b174a40f27675b78519c1b3ad09660a2bbe51d2525bea123" Jan 31 04:13:59 crc kubenswrapper[4827]: I0131 04:13:59.034906 4827 scope.go:117] "RemoveContainer" containerID="ed3bc2156673f4df424ac5ea4cea74bb4d691cb3bc4a146b0e7d40f6c072e277" Jan 31 04:13:59 crc kubenswrapper[4827]: I0131 04:13:59.112441 4827 scope.go:117] "RemoveContainer" containerID="90034e5c5b64aa9dbc546a2986ac24abb14947ccc2f7663f502fb74fbe06de7f" Jan 31 04:13:59 crc kubenswrapper[4827]: E0131 04:13:59.112954 4827 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jxh94_openshift-machine-config-operator(e63dbb73-e1a2-4796-83c5-2a88e55566b5)\"" pod="openshift-machine-config-operator/machine-config-daemon-jxh94" podUID="e63dbb73-e1a2-4796-83c5-2a88e55566b5" Jan 31 04:14:12 crc kubenswrapper[4827]: I0131 04:14:12.110950 4827 scope.go:117] "RemoveContainer" containerID="90034e5c5b64aa9dbc546a2986ac24abb14947ccc2f7663f502fb74fbe06de7f" Jan 31 04:14:12 crc kubenswrapper[4827]: E0131 04:14:12.112352 4827 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: 
\"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jxh94_openshift-machine-config-operator(e63dbb73-e1a2-4796-83c5-2a88e55566b5)\"" pod="openshift-machine-config-operator/machine-config-daemon-jxh94" podUID="e63dbb73-e1a2-4796-83c5-2a88e55566b5" Jan 31 04:14:14 crc kubenswrapper[4827]: I0131 04:14:14.061613 4827 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-ff35-account-create-update-bcgxx"] Jan 31 04:14:14 crc kubenswrapper[4827]: I0131 04:14:14.070513 4827 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-db-create-st2vd"] Jan 31 04:14:14 crc kubenswrapper[4827]: I0131 04:14:14.076914 4827 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-5b1a-account-create-update-2dfbx"] Jan 31 04:14:14 crc kubenswrapper[4827]: I0131 04:14:14.103199 4827 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-ff35-account-create-update-bcgxx"] Jan 31 04:14:14 crc kubenswrapper[4827]: I0131 04:14:14.124626 4827 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b95b46e8-a0b5-4916-a724-41f7f25f0cd3" path="/var/lib/kubelet/pods/b95b46e8-a0b5-4916-a724-41f7f25f0cd3/volumes" Jan 31 04:14:14 crc kubenswrapper[4827]: I0131 04:14:14.125191 4827 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-5b1a-account-create-update-2dfbx"] Jan 31 04:14:14 crc kubenswrapper[4827]: I0131 04:14:14.125220 4827 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-db-create-st2vd"] Jan 31 04:14:14 crc kubenswrapper[4827]: I0131 04:14:14.128653 4827 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-d367-account-create-update-9kflk"] Jan 31 04:14:14 crc kubenswrapper[4827]: I0131 04:14:14.135801 4827 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-db-create-k79vw"] Jan 31 04:14:14 crc kubenswrapper[4827]: I0131 04:14:14.142332 4827 kubelet.go:2431] "SyncLoop 
REMOVE" source="api" pods=["openstack/keystone-db-create-k79vw"] Jan 31 04:14:14 crc kubenswrapper[4827]: I0131 04:14:14.148946 4827 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-d367-account-create-update-9kflk"] Jan 31 04:14:14 crc kubenswrapper[4827]: I0131 04:14:14.155208 4827 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-db-create-787wv"] Jan 31 04:14:14 crc kubenswrapper[4827]: I0131 04:14:14.162402 4827 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-db-create-787wv"] Jan 31 04:14:14 crc kubenswrapper[4827]: I0131 04:14:14.383138 4827 generic.go:334] "Generic (PLEG): container finished" podID="f21965f4-36e7-4c6b-9377-1da6c40e9b02" containerID="942d5457a8b3e345b742d4e0acf06045c9338756c7cf5d7f38960fd1ee91adfa" exitCode=0 Jan 31 04:14:14 crc kubenswrapper[4827]: I0131 04:14:14.383238 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-fs8lt" event={"ID":"f21965f4-36e7-4c6b-9377-1da6c40e9b02","Type":"ContainerDied","Data":"942d5457a8b3e345b742d4e0acf06045c9338756c7cf5d7f38960fd1ee91adfa"} Jan 31 04:14:15 crc kubenswrapper[4827]: I0131 04:14:15.776932 4827 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-fs8lt" Jan 31 04:14:15 crc kubenswrapper[4827]: I0131 04:14:15.917464 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/f21965f4-36e7-4c6b-9377-1da6c40e9b02-ssh-key-openstack-edpm-ipam\") pod \"f21965f4-36e7-4c6b-9377-1da6c40e9b02\" (UID: \"f21965f4-36e7-4c6b-9377-1da6c40e9b02\") " Jan 31 04:14:15 crc kubenswrapper[4827]: I0131 04:14:15.917732 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/f21965f4-36e7-4c6b-9377-1da6c40e9b02-inventory\") pod \"f21965f4-36e7-4c6b-9377-1da6c40e9b02\" (UID: \"f21965f4-36e7-4c6b-9377-1da6c40e9b02\") " Jan 31 04:14:15 crc kubenswrapper[4827]: I0131 04:14:15.917784 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-l8mvr\" (UniqueName: \"kubernetes.io/projected/f21965f4-36e7-4c6b-9377-1da6c40e9b02-kube-api-access-l8mvr\") pod \"f21965f4-36e7-4c6b-9377-1da6c40e9b02\" (UID: \"f21965f4-36e7-4c6b-9377-1da6c40e9b02\") " Jan 31 04:14:15 crc kubenswrapper[4827]: I0131 04:14:15.925491 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f21965f4-36e7-4c6b-9377-1da6c40e9b02-kube-api-access-l8mvr" (OuterVolumeSpecName: "kube-api-access-l8mvr") pod "f21965f4-36e7-4c6b-9377-1da6c40e9b02" (UID: "f21965f4-36e7-4c6b-9377-1da6c40e9b02"). InnerVolumeSpecName "kube-api-access-l8mvr". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 04:14:15 crc kubenswrapper[4827]: I0131 04:14:15.948842 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f21965f4-36e7-4c6b-9377-1da6c40e9b02-inventory" (OuterVolumeSpecName: "inventory") pod "f21965f4-36e7-4c6b-9377-1da6c40e9b02" (UID: "f21965f4-36e7-4c6b-9377-1da6c40e9b02"). 
InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 04:14:15 crc kubenswrapper[4827]: I0131 04:14:15.949764 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f21965f4-36e7-4c6b-9377-1da6c40e9b02-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "f21965f4-36e7-4c6b-9377-1da6c40e9b02" (UID: "f21965f4-36e7-4c6b-9377-1da6c40e9b02"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 04:14:16 crc kubenswrapper[4827]: I0131 04:14:16.019621 4827 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/f21965f4-36e7-4c6b-9377-1da6c40e9b02-inventory\") on node \"crc\" DevicePath \"\"" Jan 31 04:14:16 crc kubenswrapper[4827]: I0131 04:14:16.019866 4827 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-l8mvr\" (UniqueName: \"kubernetes.io/projected/f21965f4-36e7-4c6b-9377-1da6c40e9b02-kube-api-access-l8mvr\") on node \"crc\" DevicePath \"\"" Jan 31 04:14:16 crc kubenswrapper[4827]: I0131 04:14:16.019995 4827 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/f21965f4-36e7-4c6b-9377-1da6c40e9b02-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 31 04:14:16 crc kubenswrapper[4827]: I0131 04:14:16.120931 4827 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="08a102ee-af73-49b8-8c30-094871ea6ae8" path="/var/lib/kubelet/pods/08a102ee-af73-49b8-8c30-094871ea6ae8/volumes" Jan 31 04:14:16 crc kubenswrapper[4827]: I0131 04:14:16.121689 4827 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8e05cffc-7368-4346-9b36-fbe0c99c2397" path="/var/lib/kubelet/pods/8e05cffc-7368-4346-9b36-fbe0c99c2397/volumes" Jan 31 04:14:16 crc kubenswrapper[4827]: I0131 04:14:16.122286 4827 kubelet_volumes.go:163] "Cleaned up 
orphaned pod volumes dir" podUID="d641d15b-7085-430a-9adb-69c0e94d52e3" path="/var/lib/kubelet/pods/d641d15b-7085-430a-9adb-69c0e94d52e3/volumes" Jan 31 04:14:16 crc kubenswrapper[4827]: I0131 04:14:16.122793 4827 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e737ec5b-5af1-4082-86b7-f6571ce8bd36" path="/var/lib/kubelet/pods/e737ec5b-5af1-4082-86b7-f6571ce8bd36/volumes" Jan 31 04:14:16 crc kubenswrapper[4827]: I0131 04:14:16.123808 4827 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f2d19d1f-3276-4409-8071-ccdac4eb4e6c" path="/var/lib/kubelet/pods/f2d19d1f-3276-4409-8071-ccdac4eb4e6c/volumes" Jan 31 04:14:16 crc kubenswrapper[4827]: I0131 04:14:16.414030 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-fs8lt" event={"ID":"f21965f4-36e7-4c6b-9377-1da6c40e9b02","Type":"ContainerDied","Data":"03f840da5926741fbd11139a28b392629eba1046a8b9f628eceebbafd9e7eaa0"} Jan 31 04:14:16 crc kubenswrapper[4827]: I0131 04:14:16.414449 4827 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="03f840da5926741fbd11139a28b392629eba1046a8b9f628eceebbafd9e7eaa0" Jan 31 04:14:16 crc kubenswrapper[4827]: I0131 04:14:16.414091 4827 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-fs8lt" Jan 31 04:14:16 crc kubenswrapper[4827]: I0131 04:14:16.477263 4827 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/validate-network-edpm-deployment-openstack-edpm-ipam-rvfg8"] Jan 31 04:14:16 crc kubenswrapper[4827]: E0131 04:14:16.477586 4827 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f21965f4-36e7-4c6b-9377-1da6c40e9b02" containerName="configure-network-edpm-deployment-openstack-edpm-ipam" Jan 31 04:14:16 crc kubenswrapper[4827]: I0131 04:14:16.477602 4827 state_mem.go:107] "Deleted CPUSet assignment" podUID="f21965f4-36e7-4c6b-9377-1da6c40e9b02" containerName="configure-network-edpm-deployment-openstack-edpm-ipam" Jan 31 04:14:16 crc kubenswrapper[4827]: E0131 04:14:16.477616 4827 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8e92acd7-0ac1-4182-8d98-1a540e144fa1" containerName="extract-utilities" Jan 31 04:14:16 crc kubenswrapper[4827]: I0131 04:14:16.477622 4827 state_mem.go:107] "Deleted CPUSet assignment" podUID="8e92acd7-0ac1-4182-8d98-1a540e144fa1" containerName="extract-utilities" Jan 31 04:14:16 crc kubenswrapper[4827]: E0131 04:14:16.477630 4827 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8e92acd7-0ac1-4182-8d98-1a540e144fa1" containerName="extract-content" Jan 31 04:14:16 crc kubenswrapper[4827]: I0131 04:14:16.477637 4827 state_mem.go:107] "Deleted CPUSet assignment" podUID="8e92acd7-0ac1-4182-8d98-1a540e144fa1" containerName="extract-content" Jan 31 04:14:16 crc kubenswrapper[4827]: E0131 04:14:16.477649 4827 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8e92acd7-0ac1-4182-8d98-1a540e144fa1" containerName="registry-server" Jan 31 04:14:16 crc kubenswrapper[4827]: I0131 04:14:16.477655 4827 state_mem.go:107] "Deleted CPUSet assignment" podUID="8e92acd7-0ac1-4182-8d98-1a540e144fa1" containerName="registry-server" Jan 31 04:14:16 crc 
kubenswrapper[4827]: E0131 04:14:16.477665 4827 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c69e48aa-b820-4027-a322-cca18339d441" containerName="registry-server" Jan 31 04:14:16 crc kubenswrapper[4827]: I0131 04:14:16.477672 4827 state_mem.go:107] "Deleted CPUSet assignment" podUID="c69e48aa-b820-4027-a322-cca18339d441" containerName="registry-server" Jan 31 04:14:16 crc kubenswrapper[4827]: E0131 04:14:16.477685 4827 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c69e48aa-b820-4027-a322-cca18339d441" containerName="extract-content" Jan 31 04:14:16 crc kubenswrapper[4827]: I0131 04:14:16.477691 4827 state_mem.go:107] "Deleted CPUSet assignment" podUID="c69e48aa-b820-4027-a322-cca18339d441" containerName="extract-content" Jan 31 04:14:16 crc kubenswrapper[4827]: E0131 04:14:16.477702 4827 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c69e48aa-b820-4027-a322-cca18339d441" containerName="extract-utilities" Jan 31 04:14:16 crc kubenswrapper[4827]: I0131 04:14:16.477708 4827 state_mem.go:107] "Deleted CPUSet assignment" podUID="c69e48aa-b820-4027-a322-cca18339d441" containerName="extract-utilities" Jan 31 04:14:16 crc kubenswrapper[4827]: I0131 04:14:16.477842 4827 memory_manager.go:354] "RemoveStaleState removing state" podUID="c69e48aa-b820-4027-a322-cca18339d441" containerName="registry-server" Jan 31 04:14:16 crc kubenswrapper[4827]: I0131 04:14:16.477858 4827 memory_manager.go:354] "RemoveStaleState removing state" podUID="8e92acd7-0ac1-4182-8d98-1a540e144fa1" containerName="registry-server" Jan 31 04:14:16 crc kubenswrapper[4827]: I0131 04:14:16.477892 4827 memory_manager.go:354] "RemoveStaleState removing state" podUID="f21965f4-36e7-4c6b-9377-1da6c40e9b02" containerName="configure-network-edpm-deployment-openstack-edpm-ipam" Jan 31 04:14:16 crc kubenswrapper[4827]: I0131 04:14:16.478379 4827 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-rvfg8" Jan 31 04:14:16 crc kubenswrapper[4827]: I0131 04:14:16.481297 4827 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 31 04:14:16 crc kubenswrapper[4827]: I0131 04:14:16.481482 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 31 04:14:16 crc kubenswrapper[4827]: I0131 04:14:16.481706 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-dklwg" Jan 31 04:14:16 crc kubenswrapper[4827]: I0131 04:14:16.481975 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 31 04:14:16 crc kubenswrapper[4827]: I0131 04:14:16.501262 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/validate-network-edpm-deployment-openstack-edpm-ipam-rvfg8"] Jan 31 04:14:16 crc kubenswrapper[4827]: I0131 04:14:16.632931 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/b51cb513-8fdc-411c-a9c0-f6065b19ef8d-ssh-key-openstack-edpm-ipam\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-rvfg8\" (UID: \"b51cb513-8fdc-411c-a9c0-f6065b19ef8d\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-rvfg8" Jan 31 04:14:16 crc kubenswrapper[4827]: I0131 04:14:16.633019 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b51cb513-8fdc-411c-a9c0-f6065b19ef8d-inventory\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-rvfg8\" (UID: \"b51cb513-8fdc-411c-a9c0-f6065b19ef8d\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-rvfg8" Jan 31 04:14:16 crc kubenswrapper[4827]: 
I0131 04:14:16.633100 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9tlc7\" (UniqueName: \"kubernetes.io/projected/b51cb513-8fdc-411c-a9c0-f6065b19ef8d-kube-api-access-9tlc7\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-rvfg8\" (UID: \"b51cb513-8fdc-411c-a9c0-f6065b19ef8d\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-rvfg8" Jan 31 04:14:16 crc kubenswrapper[4827]: I0131 04:14:16.735021 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/b51cb513-8fdc-411c-a9c0-f6065b19ef8d-ssh-key-openstack-edpm-ipam\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-rvfg8\" (UID: \"b51cb513-8fdc-411c-a9c0-f6065b19ef8d\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-rvfg8" Jan 31 04:14:16 crc kubenswrapper[4827]: I0131 04:14:16.735088 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b51cb513-8fdc-411c-a9c0-f6065b19ef8d-inventory\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-rvfg8\" (UID: \"b51cb513-8fdc-411c-a9c0-f6065b19ef8d\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-rvfg8" Jan 31 04:14:16 crc kubenswrapper[4827]: I0131 04:14:16.735146 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9tlc7\" (UniqueName: \"kubernetes.io/projected/b51cb513-8fdc-411c-a9c0-f6065b19ef8d-kube-api-access-9tlc7\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-rvfg8\" (UID: \"b51cb513-8fdc-411c-a9c0-f6065b19ef8d\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-rvfg8" Jan 31 04:14:16 crc kubenswrapper[4827]: I0131 04:14:16.750550 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: 
\"kubernetes.io/secret/b51cb513-8fdc-411c-a9c0-f6065b19ef8d-ssh-key-openstack-edpm-ipam\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-rvfg8\" (UID: \"b51cb513-8fdc-411c-a9c0-f6065b19ef8d\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-rvfg8" Jan 31 04:14:16 crc kubenswrapper[4827]: I0131 04:14:16.750560 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b51cb513-8fdc-411c-a9c0-f6065b19ef8d-inventory\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-rvfg8\" (UID: \"b51cb513-8fdc-411c-a9c0-f6065b19ef8d\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-rvfg8" Jan 31 04:14:16 crc kubenswrapper[4827]: I0131 04:14:16.757536 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9tlc7\" (UniqueName: \"kubernetes.io/projected/b51cb513-8fdc-411c-a9c0-f6065b19ef8d-kube-api-access-9tlc7\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-rvfg8\" (UID: \"b51cb513-8fdc-411c-a9c0-f6065b19ef8d\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-rvfg8" Jan 31 04:14:16 crc kubenswrapper[4827]: I0131 04:14:16.805423 4827 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-rvfg8" Jan 31 04:14:17 crc kubenswrapper[4827]: I0131 04:14:17.343737 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/validate-network-edpm-deployment-openstack-edpm-ipam-rvfg8"] Jan 31 04:14:17 crc kubenswrapper[4827]: I0131 04:14:17.422431 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-rvfg8" event={"ID":"b51cb513-8fdc-411c-a9c0-f6065b19ef8d","Type":"ContainerStarted","Data":"0956610a1bad9e62fa5af357c011162b10380b4bc853d7202d9044b8136f2519"} Jan 31 04:14:18 crc kubenswrapper[4827]: I0131 04:14:18.430983 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-rvfg8" event={"ID":"b51cb513-8fdc-411c-a9c0-f6065b19ef8d","Type":"ContainerStarted","Data":"7e00cc5d9e0bdb9ec6999521d593cf739eff05f3086722432c0dd62a12b8210a"} Jan 31 04:14:18 crc kubenswrapper[4827]: I0131 04:14:18.458636 4827 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-rvfg8" podStartSLOduration=1.859136025 podStartE2EDuration="2.458614985s" podCreationTimestamp="2026-01-31 04:14:16 +0000 UTC" firstStartedPulling="2026-01-31 04:14:17.344772976 +0000 UTC m=+1650.031853435" lastFinishedPulling="2026-01-31 04:14:17.944251916 +0000 UTC m=+1650.631332395" observedRunningTime="2026-01-31 04:14:18.450841937 +0000 UTC m=+1651.137922406" watchObservedRunningTime="2026-01-31 04:14:18.458614985 +0000 UTC m=+1651.145695434" Jan 31 04:14:23 crc kubenswrapper[4827]: I0131 04:14:23.480764 4827 generic.go:334] "Generic (PLEG): container finished" podID="b51cb513-8fdc-411c-a9c0-f6065b19ef8d" containerID="7e00cc5d9e0bdb9ec6999521d593cf739eff05f3086722432c0dd62a12b8210a" exitCode=0 Jan 31 04:14:23 crc kubenswrapper[4827]: I0131 04:14:23.480923 4827 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-rvfg8" event={"ID":"b51cb513-8fdc-411c-a9c0-f6065b19ef8d","Type":"ContainerDied","Data":"7e00cc5d9e0bdb9ec6999521d593cf739eff05f3086722432c0dd62a12b8210a"} Jan 31 04:14:24 crc kubenswrapper[4827]: I0131 04:14:24.941200 4827 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-rvfg8" Jan 31 04:14:25 crc kubenswrapper[4827]: I0131 04:14:25.004906 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b51cb513-8fdc-411c-a9c0-f6065b19ef8d-inventory\") pod \"b51cb513-8fdc-411c-a9c0-f6065b19ef8d\" (UID: \"b51cb513-8fdc-411c-a9c0-f6065b19ef8d\") " Jan 31 04:14:25 crc kubenswrapper[4827]: I0131 04:14:25.005008 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/b51cb513-8fdc-411c-a9c0-f6065b19ef8d-ssh-key-openstack-edpm-ipam\") pod \"b51cb513-8fdc-411c-a9c0-f6065b19ef8d\" (UID: \"b51cb513-8fdc-411c-a9c0-f6065b19ef8d\") " Jan 31 04:14:25 crc kubenswrapper[4827]: I0131 04:14:25.005088 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9tlc7\" (UniqueName: \"kubernetes.io/projected/b51cb513-8fdc-411c-a9c0-f6065b19ef8d-kube-api-access-9tlc7\") pod \"b51cb513-8fdc-411c-a9c0-f6065b19ef8d\" (UID: \"b51cb513-8fdc-411c-a9c0-f6065b19ef8d\") " Jan 31 04:14:25 crc kubenswrapper[4827]: I0131 04:14:25.019162 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b51cb513-8fdc-411c-a9c0-f6065b19ef8d-kube-api-access-9tlc7" (OuterVolumeSpecName: "kube-api-access-9tlc7") pod "b51cb513-8fdc-411c-a9c0-f6065b19ef8d" (UID: "b51cb513-8fdc-411c-a9c0-f6065b19ef8d"). InnerVolumeSpecName "kube-api-access-9tlc7". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 04:14:25 crc kubenswrapper[4827]: I0131 04:14:25.029580 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b51cb513-8fdc-411c-a9c0-f6065b19ef8d-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "b51cb513-8fdc-411c-a9c0-f6065b19ef8d" (UID: "b51cb513-8fdc-411c-a9c0-f6065b19ef8d"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 04:14:25 crc kubenswrapper[4827]: I0131 04:14:25.045066 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b51cb513-8fdc-411c-a9c0-f6065b19ef8d-inventory" (OuterVolumeSpecName: "inventory") pod "b51cb513-8fdc-411c-a9c0-f6065b19ef8d" (UID: "b51cb513-8fdc-411c-a9c0-f6065b19ef8d"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 04:14:25 crc kubenswrapper[4827]: I0131 04:14:25.106807 4827 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b51cb513-8fdc-411c-a9c0-f6065b19ef8d-inventory\") on node \"crc\" DevicePath \"\"" Jan 31 04:14:25 crc kubenswrapper[4827]: I0131 04:14:25.106849 4827 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/b51cb513-8fdc-411c-a9c0-f6065b19ef8d-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 31 04:14:25 crc kubenswrapper[4827]: I0131 04:14:25.106861 4827 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9tlc7\" (UniqueName: \"kubernetes.io/projected/b51cb513-8fdc-411c-a9c0-f6065b19ef8d-kube-api-access-9tlc7\") on node \"crc\" DevicePath \"\"" Jan 31 04:14:25 crc kubenswrapper[4827]: I0131 04:14:25.109730 4827 scope.go:117] "RemoveContainer" containerID="90034e5c5b64aa9dbc546a2986ac24abb14947ccc2f7663f502fb74fbe06de7f" Jan 31 04:14:25 crc kubenswrapper[4827]: 
E0131 04:14:25.109993 4827 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jxh94_openshift-machine-config-operator(e63dbb73-e1a2-4796-83c5-2a88e55566b5)\"" pod="openshift-machine-config-operator/machine-config-daemon-jxh94" podUID="e63dbb73-e1a2-4796-83c5-2a88e55566b5" Jan 31 04:14:25 crc kubenswrapper[4827]: I0131 04:14:25.502875 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-rvfg8" event={"ID":"b51cb513-8fdc-411c-a9c0-f6065b19ef8d","Type":"ContainerDied","Data":"0956610a1bad9e62fa5af357c011162b10380b4bc853d7202d9044b8136f2519"} Jan 31 04:14:25 crc kubenswrapper[4827]: I0131 04:14:25.502932 4827 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0956610a1bad9e62fa5af357c011162b10380b4bc853d7202d9044b8136f2519" Jan 31 04:14:25 crc kubenswrapper[4827]: I0131 04:14:25.502994 4827 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-rvfg8" Jan 31 04:14:25 crc kubenswrapper[4827]: I0131 04:14:25.579725 4827 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/install-os-edpm-deployment-openstack-edpm-ipam-pht2j"] Jan 31 04:14:25 crc kubenswrapper[4827]: E0131 04:14:25.580151 4827 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b51cb513-8fdc-411c-a9c0-f6065b19ef8d" containerName="validate-network-edpm-deployment-openstack-edpm-ipam" Jan 31 04:14:25 crc kubenswrapper[4827]: I0131 04:14:25.580166 4827 state_mem.go:107] "Deleted CPUSet assignment" podUID="b51cb513-8fdc-411c-a9c0-f6065b19ef8d" containerName="validate-network-edpm-deployment-openstack-edpm-ipam" Jan 31 04:14:25 crc kubenswrapper[4827]: I0131 04:14:25.580366 4827 memory_manager.go:354] "RemoveStaleState removing state" podUID="b51cb513-8fdc-411c-a9c0-f6065b19ef8d" containerName="validate-network-edpm-deployment-openstack-edpm-ipam" Jan 31 04:14:25 crc kubenswrapper[4827]: I0131 04:14:25.581037 4827 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-pht2j" Jan 31 04:14:25 crc kubenswrapper[4827]: I0131 04:14:25.583187 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 31 04:14:25 crc kubenswrapper[4827]: I0131 04:14:25.584713 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 31 04:14:25 crc kubenswrapper[4827]: I0131 04:14:25.584897 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-dklwg" Jan 31 04:14:25 crc kubenswrapper[4827]: I0131 04:14:25.591170 4827 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 31 04:14:25 crc kubenswrapper[4827]: I0131 04:14:25.615955 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-os-edpm-deployment-openstack-edpm-ipam-pht2j"] Jan 31 04:14:25 crc kubenswrapper[4827]: I0131 04:14:25.720565 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qqlrt\" (UniqueName: \"kubernetes.io/projected/13ca4fc0-b1d0-439d-917c-ceac9179e614-kube-api-access-qqlrt\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-pht2j\" (UID: \"13ca4fc0-b1d0-439d-917c-ceac9179e614\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-pht2j" Jan 31 04:14:25 crc kubenswrapper[4827]: I0131 04:14:25.721186 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/13ca4fc0-b1d0-439d-917c-ceac9179e614-ssh-key-openstack-edpm-ipam\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-pht2j\" (UID: \"13ca4fc0-b1d0-439d-917c-ceac9179e614\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-pht2j" Jan 31 04:14:25 crc kubenswrapper[4827]: I0131 
04:14:25.721376 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/13ca4fc0-b1d0-439d-917c-ceac9179e614-inventory\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-pht2j\" (UID: \"13ca4fc0-b1d0-439d-917c-ceac9179e614\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-pht2j" Jan 31 04:14:25 crc kubenswrapper[4827]: I0131 04:14:25.823813 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qqlrt\" (UniqueName: \"kubernetes.io/projected/13ca4fc0-b1d0-439d-917c-ceac9179e614-kube-api-access-qqlrt\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-pht2j\" (UID: \"13ca4fc0-b1d0-439d-917c-ceac9179e614\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-pht2j" Jan 31 04:14:25 crc kubenswrapper[4827]: I0131 04:14:25.824053 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/13ca4fc0-b1d0-439d-917c-ceac9179e614-ssh-key-openstack-edpm-ipam\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-pht2j\" (UID: \"13ca4fc0-b1d0-439d-917c-ceac9179e614\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-pht2j" Jan 31 04:14:25 crc kubenswrapper[4827]: I0131 04:14:25.824105 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/13ca4fc0-b1d0-439d-917c-ceac9179e614-inventory\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-pht2j\" (UID: \"13ca4fc0-b1d0-439d-917c-ceac9179e614\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-pht2j" Jan 31 04:14:25 crc kubenswrapper[4827]: I0131 04:14:25.828239 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/13ca4fc0-b1d0-439d-917c-ceac9179e614-inventory\") pod 
\"install-os-edpm-deployment-openstack-edpm-ipam-pht2j\" (UID: \"13ca4fc0-b1d0-439d-917c-ceac9179e614\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-pht2j" Jan 31 04:14:25 crc kubenswrapper[4827]: I0131 04:14:25.830066 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/13ca4fc0-b1d0-439d-917c-ceac9179e614-ssh-key-openstack-edpm-ipam\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-pht2j\" (UID: \"13ca4fc0-b1d0-439d-917c-ceac9179e614\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-pht2j" Jan 31 04:14:25 crc kubenswrapper[4827]: I0131 04:14:25.851610 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qqlrt\" (UniqueName: \"kubernetes.io/projected/13ca4fc0-b1d0-439d-917c-ceac9179e614-kube-api-access-qqlrt\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-pht2j\" (UID: \"13ca4fc0-b1d0-439d-917c-ceac9179e614\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-pht2j" Jan 31 04:14:25 crc kubenswrapper[4827]: I0131 04:14:25.909387 4827 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-pht2j" Jan 31 04:14:26 crc kubenswrapper[4827]: I0131 04:14:26.038765 4827 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/root-account-create-update-jdjsc"] Jan 31 04:14:26 crc kubenswrapper[4827]: I0131 04:14:26.046050 4827 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/root-account-create-update-jdjsc"] Jan 31 04:14:26 crc kubenswrapper[4827]: I0131 04:14:26.119570 4827 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4fbae680-6791-4843-a38d-dce4d7531d9a" path="/var/lib/kubelet/pods/4fbae680-6791-4843-a38d-dce4d7531d9a/volumes" Jan 31 04:14:26 crc kubenswrapper[4827]: I0131 04:14:26.895369 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-os-edpm-deployment-openstack-edpm-ipam-pht2j"] Jan 31 04:14:27 crc kubenswrapper[4827]: I0131 04:14:27.523404 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-pht2j" event={"ID":"13ca4fc0-b1d0-439d-917c-ceac9179e614","Type":"ContainerStarted","Data":"221269fc8567500ca90c39982dd33d1a20cbe8e33aa4f4825af99dadb9066a4e"} Jan 31 04:14:28 crc kubenswrapper[4827]: I0131 04:14:28.531605 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-pht2j" event={"ID":"13ca4fc0-b1d0-439d-917c-ceac9179e614","Type":"ContainerStarted","Data":"07b106240658fe167ec56c5ef19617568b2c60f29b56621ad79efb986d92b7de"} Jan 31 04:14:28 crc kubenswrapper[4827]: I0131 04:14:28.552993 4827 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-pht2j" podStartSLOduration=3.132377174 podStartE2EDuration="3.552975017s" podCreationTimestamp="2026-01-31 04:14:25 +0000 UTC" firstStartedPulling="2026-01-31 04:14:26.90417706 +0000 UTC m=+1659.591257509" lastFinishedPulling="2026-01-31 
04:14:27.324774873 +0000 UTC m=+1660.011855352" observedRunningTime="2026-01-31 04:14:28.543600401 +0000 UTC m=+1661.230680860" watchObservedRunningTime="2026-01-31 04:14:28.552975017 +0000 UTC m=+1661.240055476" Jan 31 04:14:39 crc kubenswrapper[4827]: I0131 04:14:39.111090 4827 scope.go:117] "RemoveContainer" containerID="90034e5c5b64aa9dbc546a2986ac24abb14947ccc2f7663f502fb74fbe06de7f" Jan 31 04:14:39 crc kubenswrapper[4827]: E0131 04:14:39.112164 4827 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jxh94_openshift-machine-config-operator(e63dbb73-e1a2-4796-83c5-2a88e55566b5)\"" pod="openshift-machine-config-operator/machine-config-daemon-jxh94" podUID="e63dbb73-e1a2-4796-83c5-2a88e55566b5" Jan 31 04:14:42 crc kubenswrapper[4827]: I0131 04:14:42.057935 4827 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-db-sync-k45zm"] Jan 31 04:14:42 crc kubenswrapper[4827]: I0131 04:14:42.067017 4827 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-db-sync-k45zm"] Jan 31 04:14:42 crc kubenswrapper[4827]: I0131 04:14:42.126448 4827 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2566a364-c569-475e-b757-81be89061c81" path="/var/lib/kubelet/pods/2566a364-c569-475e-b757-81be89061c81/volumes" Jan 31 04:14:50 crc kubenswrapper[4827]: I0131 04:14:50.110654 4827 scope.go:117] "RemoveContainer" containerID="90034e5c5b64aa9dbc546a2986ac24abb14947ccc2f7663f502fb74fbe06de7f" Jan 31 04:14:50 crc kubenswrapper[4827]: E0131 04:14:50.111703 4827 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-jxh94_openshift-machine-config-operator(e63dbb73-e1a2-4796-83c5-2a88e55566b5)\"" pod="openshift-machine-config-operator/machine-config-daemon-jxh94" podUID="e63dbb73-e1a2-4796-83c5-2a88e55566b5" Jan 31 04:14:55 crc kubenswrapper[4827]: I0131 04:14:55.044322 4827 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-db-create-c5fwt"] Jan 31 04:14:55 crc kubenswrapper[4827]: I0131 04:14:55.055172 4827 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-e795-account-create-update-6vlcb"] Jan 31 04:14:55 crc kubenswrapper[4827]: I0131 04:14:55.061690 4827 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-db-create-c5fwt"] Jan 31 04:14:55 crc kubenswrapper[4827]: I0131 04:14:55.068494 4827 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-db-create-sgdlb"] Jan 31 04:14:55 crc kubenswrapper[4827]: I0131 04:14:55.075235 4827 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-e795-account-create-update-6vlcb"] Jan 31 04:14:55 crc kubenswrapper[4827]: I0131 04:14:55.081866 4827 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-db-create-sgdlb"] Jan 31 04:14:56 crc kubenswrapper[4827]: I0131 04:14:56.123810 4827 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8cba650c-b1cb-43e0-b831-d2289e50036f" path="/var/lib/kubelet/pods/8cba650c-b1cb-43e0-b831-d2289e50036f/volumes" Jan 31 04:14:56 crc kubenswrapper[4827]: I0131 04:14:56.125155 4827 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="afb22e97-f599-49e5-8cde-ddb7bb682dd4" path="/var/lib/kubelet/pods/afb22e97-f599-49e5-8cde-ddb7bb682dd4/volumes" Jan 31 04:14:56 crc kubenswrapper[4827]: I0131 04:14:56.125839 4827 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b9330fdf-710f-4fff-b064-d5ab07f73cb2" path="/var/lib/kubelet/pods/b9330fdf-710f-4fff-b064-d5ab07f73cb2/volumes" Jan 31 04:14:58 crc kubenswrapper[4827]: 
I0131 04:14:58.035868 4827 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-d351-account-create-update-jnlqc"] Jan 31 04:14:58 crc kubenswrapper[4827]: I0131 04:14:58.045399 4827 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-d351-account-create-update-jnlqc"] Jan 31 04:14:58 crc kubenswrapper[4827]: I0131 04:14:58.135066 4827 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1c08b19d-c1e1-4d21-ad8a-03207b8ac8c2" path="/var/lib/kubelet/pods/1c08b19d-c1e1-4d21-ad8a-03207b8ac8c2/volumes" Jan 31 04:14:59 crc kubenswrapper[4827]: I0131 04:14:59.034416 4827 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-db-create-5kr5x"] Jan 31 04:14:59 crc kubenswrapper[4827]: I0131 04:14:59.043013 4827 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-3471-account-create-update-g5xhk"] Jan 31 04:14:59 crc kubenswrapper[4827]: I0131 04:14:59.050067 4827 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-3471-account-create-update-g5xhk"] Jan 31 04:14:59 crc kubenswrapper[4827]: I0131 04:14:59.056963 4827 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-db-create-5kr5x"] Jan 31 04:14:59 crc kubenswrapper[4827]: I0131 04:14:59.185688 4827 scope.go:117] "RemoveContainer" containerID="e0b19c9fa7ed0479190b4ce86c1c9fbafd19b330aed0fdca7cc4835ac2a362d5" Jan 31 04:14:59 crc kubenswrapper[4827]: I0131 04:14:59.224501 4827 scope.go:117] "RemoveContainer" containerID="c1a8c1dfd49df110b86e5b5f437cb6c87b475cdd130fa713a2e2b1edcb64dfb6" Jan 31 04:14:59 crc kubenswrapper[4827]: I0131 04:14:59.258540 4827 scope.go:117] "RemoveContainer" containerID="230da5d0e72eb6429b9cc8e891da0be6ab3c52cf56690b6e4548caa8c9748259" Jan 31 04:14:59 crc kubenswrapper[4827]: I0131 04:14:59.297598 4827 scope.go:117] "RemoveContainer" containerID="020c483397b073b236c826b3b98d8cb8fabdc81507992175547b9368210475c1" Jan 31 04:14:59 crc kubenswrapper[4827]: I0131 
04:14:59.331163 4827 scope.go:117] "RemoveContainer" containerID="a48bebd4f582bea4e2f90c202c9b2cd36a43f399bb577e4ed5e03a3125500f04" Jan 31 04:14:59 crc kubenswrapper[4827]: I0131 04:14:59.368371 4827 scope.go:117] "RemoveContainer" containerID="01277b2c363bd9580fdd4a81bb6cd090fd028e4a20da528f6792ae0f63637624" Jan 31 04:14:59 crc kubenswrapper[4827]: I0131 04:14:59.409908 4827 scope.go:117] "RemoveContainer" containerID="d610e0ad6beb400271025f49fff576e77fa1ce0ac6b5d971fe67ee33e8800f43" Jan 31 04:14:59 crc kubenswrapper[4827]: I0131 04:14:59.430347 4827 scope.go:117] "RemoveContainer" containerID="0a79feeb727ae99032b7fc3b9e16586a3ef594f1b2389c6e0cd409f292a5c7f4" Jan 31 04:14:59 crc kubenswrapper[4827]: I0131 04:14:59.465571 4827 scope.go:117] "RemoveContainer" containerID="61b91764218d0f1084637f991df0fcccda7b250ca8fa610e4b470dfec48d4775" Jan 31 04:14:59 crc kubenswrapper[4827]: I0131 04:14:59.503366 4827 scope.go:117] "RemoveContainer" containerID="bb3f3f878a6228d0abf9de95bac3285472ae598fceb10d256bbb1004ef2408f8" Jan 31 04:14:59 crc kubenswrapper[4827]: I0131 04:14:59.521171 4827 scope.go:117] "RemoveContainer" containerID="aa0eb20ba16e9776433901ea2f6918809c86f50adc9a419017677f88edcd976c" Jan 31 04:14:59 crc kubenswrapper[4827]: I0131 04:14:59.540207 4827 scope.go:117] "RemoveContainer" containerID="3425b4ad9f4096a237dad07d34c50fc0d9a61f16f35854be754b868fa0cdee28" Jan 31 04:15:00 crc kubenswrapper[4827]: I0131 04:15:00.127516 4827 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1f0110cb-d427-4d1b-a2d1-551270a63093" path="/var/lib/kubelet/pods/1f0110cb-d427-4d1b-a2d1-551270a63093/volumes" Jan 31 04:15:00 crc kubenswrapper[4827]: I0131 04:15:00.128097 4827 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="470ad1f2-aae7-4a4c-8258-648066d14ec9" path="/var/lib/kubelet/pods/470ad1f2-aae7-4a4c-8258-648066d14ec9/volumes" Jan 31 04:15:00 crc kubenswrapper[4827]: I0131 04:15:00.166627 4827 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openshift-operator-lifecycle-manager/collect-profiles-29497215-fsq2k"] Jan 31 04:15:00 crc kubenswrapper[4827]: I0131 04:15:00.168309 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29497215-fsq2k" Jan 31 04:15:00 crc kubenswrapper[4827]: I0131 04:15:00.170641 4827 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Jan 31 04:15:00 crc kubenswrapper[4827]: I0131 04:15:00.171211 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vdzj9\" (UniqueName: \"kubernetes.io/projected/b91d70fa-004b-4553-9973-e7a67f721e9f-kube-api-access-vdzj9\") pod \"collect-profiles-29497215-fsq2k\" (UID: \"b91d70fa-004b-4553-9973-e7a67f721e9f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29497215-fsq2k" Jan 31 04:15:00 crc kubenswrapper[4827]: I0131 04:15:00.171440 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Jan 31 04:15:00 crc kubenswrapper[4827]: I0131 04:15:00.171629 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/b91d70fa-004b-4553-9973-e7a67f721e9f-secret-volume\") pod \"collect-profiles-29497215-fsq2k\" (UID: \"b91d70fa-004b-4553-9973-e7a67f721e9f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29497215-fsq2k" Jan 31 04:15:00 crc kubenswrapper[4827]: I0131 04:15:00.171752 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b91d70fa-004b-4553-9973-e7a67f721e9f-config-volume\") pod \"collect-profiles-29497215-fsq2k\" (UID: \"b91d70fa-004b-4553-9973-e7a67f721e9f\") " 
pod="openshift-operator-lifecycle-manager/collect-profiles-29497215-fsq2k" Jan 31 04:15:00 crc kubenswrapper[4827]: I0131 04:15:00.177531 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29497215-fsq2k"] Jan 31 04:15:00 crc kubenswrapper[4827]: I0131 04:15:00.273963 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vdzj9\" (UniqueName: \"kubernetes.io/projected/b91d70fa-004b-4553-9973-e7a67f721e9f-kube-api-access-vdzj9\") pod \"collect-profiles-29497215-fsq2k\" (UID: \"b91d70fa-004b-4553-9973-e7a67f721e9f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29497215-fsq2k" Jan 31 04:15:00 crc kubenswrapper[4827]: I0131 04:15:00.274664 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/b91d70fa-004b-4553-9973-e7a67f721e9f-secret-volume\") pod \"collect-profiles-29497215-fsq2k\" (UID: \"b91d70fa-004b-4553-9973-e7a67f721e9f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29497215-fsq2k" Jan 31 04:15:00 crc kubenswrapper[4827]: I0131 04:15:00.275510 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b91d70fa-004b-4553-9973-e7a67f721e9f-config-volume\") pod \"collect-profiles-29497215-fsq2k\" (UID: \"b91d70fa-004b-4553-9973-e7a67f721e9f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29497215-fsq2k" Jan 31 04:15:00 crc kubenswrapper[4827]: I0131 04:15:00.276369 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b91d70fa-004b-4553-9973-e7a67f721e9f-config-volume\") pod \"collect-profiles-29497215-fsq2k\" (UID: \"b91d70fa-004b-4553-9973-e7a67f721e9f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29497215-fsq2k" Jan 31 04:15:00 crc kubenswrapper[4827]: 
I0131 04:15:00.290590 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/b91d70fa-004b-4553-9973-e7a67f721e9f-secret-volume\") pod \"collect-profiles-29497215-fsq2k\" (UID: \"b91d70fa-004b-4553-9973-e7a67f721e9f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29497215-fsq2k" Jan 31 04:15:00 crc kubenswrapper[4827]: I0131 04:15:00.291152 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vdzj9\" (UniqueName: \"kubernetes.io/projected/b91d70fa-004b-4553-9973-e7a67f721e9f-kube-api-access-vdzj9\") pod \"collect-profiles-29497215-fsq2k\" (UID: \"b91d70fa-004b-4553-9973-e7a67f721e9f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29497215-fsq2k" Jan 31 04:15:00 crc kubenswrapper[4827]: I0131 04:15:00.569138 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29497215-fsq2k" Jan 31 04:15:01 crc kubenswrapper[4827]: I0131 04:15:01.018814 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29497215-fsq2k"] Jan 31 04:15:01 crc kubenswrapper[4827]: I0131 04:15:01.878514 4827 generic.go:334] "Generic (PLEG): container finished" podID="b91d70fa-004b-4553-9973-e7a67f721e9f" containerID="fc627a06af1e44e13fac77f76ecf4bb630356dc9fd3df5f1ab8c154d5f3538c7" exitCode=0 Jan 31 04:15:01 crc kubenswrapper[4827]: I0131 04:15:01.878556 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29497215-fsq2k" event={"ID":"b91d70fa-004b-4553-9973-e7a67f721e9f","Type":"ContainerDied","Data":"fc627a06af1e44e13fac77f76ecf4bb630356dc9fd3df5f1ab8c154d5f3538c7"} Jan 31 04:15:01 crc kubenswrapper[4827]: I0131 04:15:01.878582 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29497215-fsq2k" 
event={"ID":"b91d70fa-004b-4553-9973-e7a67f721e9f","Type":"ContainerStarted","Data":"340a41e1a5cfacee776110f41129b1b371c954f22d025113bd5febdaa99e1783"} Jan 31 04:15:02 crc kubenswrapper[4827]: I0131 04:15:02.111284 4827 scope.go:117] "RemoveContainer" containerID="90034e5c5b64aa9dbc546a2986ac24abb14947ccc2f7663f502fb74fbe06de7f" Jan 31 04:15:02 crc kubenswrapper[4827]: E0131 04:15:02.111872 4827 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jxh94_openshift-machine-config-operator(e63dbb73-e1a2-4796-83c5-2a88e55566b5)\"" pod="openshift-machine-config-operator/machine-config-daemon-jxh94" podUID="e63dbb73-e1a2-4796-83c5-2a88e55566b5" Jan 31 04:15:02 crc kubenswrapper[4827]: I0131 04:15:02.887808 4827 generic.go:334] "Generic (PLEG): container finished" podID="13ca4fc0-b1d0-439d-917c-ceac9179e614" containerID="07b106240658fe167ec56c5ef19617568b2c60f29b56621ad79efb986d92b7de" exitCode=0 Jan 31 04:15:02 crc kubenswrapper[4827]: I0131 04:15:02.887936 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-pht2j" event={"ID":"13ca4fc0-b1d0-439d-917c-ceac9179e614","Type":"ContainerDied","Data":"07b106240658fe167ec56c5ef19617568b2c60f29b56621ad79efb986d92b7de"} Jan 31 04:15:03 crc kubenswrapper[4827]: I0131 04:15:03.188978 4827 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29497215-fsq2k" Jan 31 04:15:03 crc kubenswrapper[4827]: I0131 04:15:03.328295 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vdzj9\" (UniqueName: \"kubernetes.io/projected/b91d70fa-004b-4553-9973-e7a67f721e9f-kube-api-access-vdzj9\") pod \"b91d70fa-004b-4553-9973-e7a67f721e9f\" (UID: \"b91d70fa-004b-4553-9973-e7a67f721e9f\") " Jan 31 04:15:03 crc kubenswrapper[4827]: I0131 04:15:03.328524 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/b91d70fa-004b-4553-9973-e7a67f721e9f-secret-volume\") pod \"b91d70fa-004b-4553-9973-e7a67f721e9f\" (UID: \"b91d70fa-004b-4553-9973-e7a67f721e9f\") " Jan 31 04:15:03 crc kubenswrapper[4827]: I0131 04:15:03.328591 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b91d70fa-004b-4553-9973-e7a67f721e9f-config-volume\") pod \"b91d70fa-004b-4553-9973-e7a67f721e9f\" (UID: \"b91d70fa-004b-4553-9973-e7a67f721e9f\") " Jan 31 04:15:03 crc kubenswrapper[4827]: I0131 04:15:03.329167 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b91d70fa-004b-4553-9973-e7a67f721e9f-config-volume" (OuterVolumeSpecName: "config-volume") pod "b91d70fa-004b-4553-9973-e7a67f721e9f" (UID: "b91d70fa-004b-4553-9973-e7a67f721e9f"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 04:15:03 crc kubenswrapper[4827]: I0131 04:15:03.334414 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b91d70fa-004b-4553-9973-e7a67f721e9f-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "b91d70fa-004b-4553-9973-e7a67f721e9f" (UID: "b91d70fa-004b-4553-9973-e7a67f721e9f"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 04:15:03 crc kubenswrapper[4827]: I0131 04:15:03.334479 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b91d70fa-004b-4553-9973-e7a67f721e9f-kube-api-access-vdzj9" (OuterVolumeSpecName: "kube-api-access-vdzj9") pod "b91d70fa-004b-4553-9973-e7a67f721e9f" (UID: "b91d70fa-004b-4553-9973-e7a67f721e9f"). InnerVolumeSpecName "kube-api-access-vdzj9". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 04:15:03 crc kubenswrapper[4827]: I0131 04:15:03.430585 4827 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/b91d70fa-004b-4553-9973-e7a67f721e9f-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 31 04:15:03 crc kubenswrapper[4827]: I0131 04:15:03.430621 4827 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b91d70fa-004b-4553-9973-e7a67f721e9f-config-volume\") on node \"crc\" DevicePath \"\"" Jan 31 04:15:03 crc kubenswrapper[4827]: I0131 04:15:03.430632 4827 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vdzj9\" (UniqueName: \"kubernetes.io/projected/b91d70fa-004b-4553-9973-e7a67f721e9f-kube-api-access-vdzj9\") on node \"crc\" DevicePath \"\"" Jan 31 04:15:03 crc kubenswrapper[4827]: I0131 04:15:03.900825 4827 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29497215-fsq2k" Jan 31 04:15:03 crc kubenswrapper[4827]: I0131 04:15:03.900860 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29497215-fsq2k" event={"ID":"b91d70fa-004b-4553-9973-e7a67f721e9f","Type":"ContainerDied","Data":"340a41e1a5cfacee776110f41129b1b371c954f22d025113bd5febdaa99e1783"} Jan 31 04:15:03 crc kubenswrapper[4827]: I0131 04:15:03.900944 4827 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="340a41e1a5cfacee776110f41129b1b371c954f22d025113bd5febdaa99e1783" Jan 31 04:15:04 crc kubenswrapper[4827]: I0131 04:15:04.313626 4827 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-pht2j" Jan 31 04:15:04 crc kubenswrapper[4827]: I0131 04:15:04.345864 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/13ca4fc0-b1d0-439d-917c-ceac9179e614-inventory\") pod \"13ca4fc0-b1d0-439d-917c-ceac9179e614\" (UID: \"13ca4fc0-b1d0-439d-917c-ceac9179e614\") " Jan 31 04:15:04 crc kubenswrapper[4827]: I0131 04:15:04.345977 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qqlrt\" (UniqueName: \"kubernetes.io/projected/13ca4fc0-b1d0-439d-917c-ceac9179e614-kube-api-access-qqlrt\") pod \"13ca4fc0-b1d0-439d-917c-ceac9179e614\" (UID: \"13ca4fc0-b1d0-439d-917c-ceac9179e614\") " Jan 31 04:15:04 crc kubenswrapper[4827]: I0131 04:15:04.346028 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/13ca4fc0-b1d0-439d-917c-ceac9179e614-ssh-key-openstack-edpm-ipam\") pod \"13ca4fc0-b1d0-439d-917c-ceac9179e614\" (UID: \"13ca4fc0-b1d0-439d-917c-ceac9179e614\") " Jan 31 04:15:04 crc 
kubenswrapper[4827]: I0131 04:15:04.361143 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/13ca4fc0-b1d0-439d-917c-ceac9179e614-kube-api-access-qqlrt" (OuterVolumeSpecName: "kube-api-access-qqlrt") pod "13ca4fc0-b1d0-439d-917c-ceac9179e614" (UID: "13ca4fc0-b1d0-439d-917c-ceac9179e614"). InnerVolumeSpecName "kube-api-access-qqlrt". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 04:15:04 crc kubenswrapper[4827]: I0131 04:15:04.375264 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/13ca4fc0-b1d0-439d-917c-ceac9179e614-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "13ca4fc0-b1d0-439d-917c-ceac9179e614" (UID: "13ca4fc0-b1d0-439d-917c-ceac9179e614"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 04:15:04 crc kubenswrapper[4827]: I0131 04:15:04.378769 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/13ca4fc0-b1d0-439d-917c-ceac9179e614-inventory" (OuterVolumeSpecName: "inventory") pod "13ca4fc0-b1d0-439d-917c-ceac9179e614" (UID: "13ca4fc0-b1d0-439d-917c-ceac9179e614"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 04:15:04 crc kubenswrapper[4827]: I0131 04:15:04.447491 4827 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qqlrt\" (UniqueName: \"kubernetes.io/projected/13ca4fc0-b1d0-439d-917c-ceac9179e614-kube-api-access-qqlrt\") on node \"crc\" DevicePath \"\"" Jan 31 04:15:04 crc kubenswrapper[4827]: I0131 04:15:04.447531 4827 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/13ca4fc0-b1d0-439d-917c-ceac9179e614-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 31 04:15:04 crc kubenswrapper[4827]: I0131 04:15:04.447547 4827 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/13ca4fc0-b1d0-439d-917c-ceac9179e614-inventory\") on node \"crc\" DevicePath \"\"" Jan 31 04:15:04 crc kubenswrapper[4827]: I0131 04:15:04.909582 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-pht2j" event={"ID":"13ca4fc0-b1d0-439d-917c-ceac9179e614","Type":"ContainerDied","Data":"221269fc8567500ca90c39982dd33d1a20cbe8e33aa4f4825af99dadb9066a4e"} Jan 31 04:15:04 crc kubenswrapper[4827]: I0131 04:15:04.911079 4827 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="221269fc8567500ca90c39982dd33d1a20cbe8e33aa4f4825af99dadb9066a4e" Jan 31 04:15:04 crc kubenswrapper[4827]: I0131 04:15:04.909616 4827 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-pht2j" Jan 31 04:15:05 crc kubenswrapper[4827]: I0131 04:15:05.038721 4827 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-65xhl"] Jan 31 04:15:05 crc kubenswrapper[4827]: E0131 04:15:05.039322 4827 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b91d70fa-004b-4553-9973-e7a67f721e9f" containerName="collect-profiles" Jan 31 04:15:05 crc kubenswrapper[4827]: I0131 04:15:05.039347 4827 state_mem.go:107] "Deleted CPUSet assignment" podUID="b91d70fa-004b-4553-9973-e7a67f721e9f" containerName="collect-profiles" Jan 31 04:15:05 crc kubenswrapper[4827]: E0131 04:15:05.039373 4827 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="13ca4fc0-b1d0-439d-917c-ceac9179e614" containerName="install-os-edpm-deployment-openstack-edpm-ipam" Jan 31 04:15:05 crc kubenswrapper[4827]: I0131 04:15:05.039385 4827 state_mem.go:107] "Deleted CPUSet assignment" podUID="13ca4fc0-b1d0-439d-917c-ceac9179e614" containerName="install-os-edpm-deployment-openstack-edpm-ipam" Jan 31 04:15:05 crc kubenswrapper[4827]: I0131 04:15:05.039626 4827 memory_manager.go:354] "RemoveStaleState removing state" podUID="b91d70fa-004b-4553-9973-e7a67f721e9f" containerName="collect-profiles" Jan 31 04:15:05 crc kubenswrapper[4827]: I0131 04:15:05.039655 4827 memory_manager.go:354] "RemoveStaleState removing state" podUID="13ca4fc0-b1d0-439d-917c-ceac9179e614" containerName="install-os-edpm-deployment-openstack-edpm-ipam" Jan 31 04:15:05 crc kubenswrapper[4827]: I0131 04:15:05.040430 4827 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-65xhl" Jan 31 04:15:05 crc kubenswrapper[4827]: I0131 04:15:05.043199 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-dklwg" Jan 31 04:15:05 crc kubenswrapper[4827]: I0131 04:15:05.043548 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 31 04:15:05 crc kubenswrapper[4827]: I0131 04:15:05.043920 4827 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 31 04:15:05 crc kubenswrapper[4827]: I0131 04:15:05.045253 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 31 04:15:05 crc kubenswrapper[4827]: I0131 04:15:05.051810 4827 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-db-sync-xmrgk"] Jan 31 04:15:05 crc kubenswrapper[4827]: I0131 04:15:05.058384 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/9a5bd016-448a-48cc-bf05-11fe0b0040bc-inventory\") pod \"ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-65xhl\" (UID: \"9a5bd016-448a-48cc-bf05-11fe0b0040bc\") " pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-65xhl" Jan 31 04:15:05 crc kubenswrapper[4827]: I0131 04:15:05.058619 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vt9br\" (UniqueName: \"kubernetes.io/projected/9a5bd016-448a-48cc-bf05-11fe0b0040bc-kube-api-access-vt9br\") pod \"ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-65xhl\" (UID: \"9a5bd016-448a-48cc-bf05-11fe0b0040bc\") " pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-65xhl" Jan 31 04:15:05 crc kubenswrapper[4827]: I0131 04:15:05.058737 4827 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/9a5bd016-448a-48cc-bf05-11fe0b0040bc-ssh-key-openstack-edpm-ipam\") pod \"ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-65xhl\" (UID: \"9a5bd016-448a-48cc-bf05-11fe0b0040bc\") " pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-65xhl" Jan 31 04:15:05 crc kubenswrapper[4827]: I0131 04:15:05.065615 4827 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-db-sync-xmrgk"] Jan 31 04:15:05 crc kubenswrapper[4827]: I0131 04:15:05.089845 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-65xhl"] Jan 31 04:15:05 crc kubenswrapper[4827]: I0131 04:15:05.161068 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/9a5bd016-448a-48cc-bf05-11fe0b0040bc-ssh-key-openstack-edpm-ipam\") pod \"ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-65xhl\" (UID: \"9a5bd016-448a-48cc-bf05-11fe0b0040bc\") " pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-65xhl" Jan 31 04:15:05 crc kubenswrapper[4827]: I0131 04:15:05.161338 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/9a5bd016-448a-48cc-bf05-11fe0b0040bc-inventory\") pod \"ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-65xhl\" (UID: \"9a5bd016-448a-48cc-bf05-11fe0b0040bc\") " pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-65xhl" Jan 31 04:15:05 crc kubenswrapper[4827]: I0131 04:15:05.161456 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vt9br\" (UniqueName: \"kubernetes.io/projected/9a5bd016-448a-48cc-bf05-11fe0b0040bc-kube-api-access-vt9br\") pod \"ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-65xhl\" (UID: 
\"9a5bd016-448a-48cc-bf05-11fe0b0040bc\") " pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-65xhl" Jan 31 04:15:05 crc kubenswrapper[4827]: I0131 04:15:05.168589 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/9a5bd016-448a-48cc-bf05-11fe0b0040bc-inventory\") pod \"ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-65xhl\" (UID: \"9a5bd016-448a-48cc-bf05-11fe0b0040bc\") " pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-65xhl" Jan 31 04:15:05 crc kubenswrapper[4827]: I0131 04:15:05.173317 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/9a5bd016-448a-48cc-bf05-11fe0b0040bc-ssh-key-openstack-edpm-ipam\") pod \"ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-65xhl\" (UID: \"9a5bd016-448a-48cc-bf05-11fe0b0040bc\") " pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-65xhl" Jan 31 04:15:05 crc kubenswrapper[4827]: I0131 04:15:05.180598 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vt9br\" (UniqueName: \"kubernetes.io/projected/9a5bd016-448a-48cc-bf05-11fe0b0040bc-kube-api-access-vt9br\") pod \"ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-65xhl\" (UID: \"9a5bd016-448a-48cc-bf05-11fe0b0040bc\") " pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-65xhl" Jan 31 04:15:05 crc kubenswrapper[4827]: I0131 04:15:05.368060 4827 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-65xhl" Jan 31 04:15:05 crc kubenswrapper[4827]: W0131 04:15:05.866073 4827 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9a5bd016_448a_48cc_bf05_11fe0b0040bc.slice/crio-174188ca51f8b00e7b64f34d124f446c92785656be2d2cc93c2cad4a5da782a3 WatchSource:0}: Error finding container 174188ca51f8b00e7b64f34d124f446c92785656be2d2cc93c2cad4a5da782a3: Status 404 returned error can't find the container with id 174188ca51f8b00e7b64f34d124f446c92785656be2d2cc93c2cad4a5da782a3 Jan 31 04:15:05 crc kubenswrapper[4827]: I0131 04:15:05.866864 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-65xhl"] Jan 31 04:15:05 crc kubenswrapper[4827]: I0131 04:15:05.920989 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-65xhl" event={"ID":"9a5bd016-448a-48cc-bf05-11fe0b0040bc","Type":"ContainerStarted","Data":"174188ca51f8b00e7b64f34d124f446c92785656be2d2cc93c2cad4a5da782a3"} Jan 31 04:15:06 crc kubenswrapper[4827]: I0131 04:15:06.120745 4827 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="75ab4449-6cdb-4b40-a0f0-432667f4ca97" path="/var/lib/kubelet/pods/75ab4449-6cdb-4b40-a0f0-432667f4ca97/volumes" Jan 31 04:15:06 crc kubenswrapper[4827]: I0131 04:15:06.931747 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-65xhl" event={"ID":"9a5bd016-448a-48cc-bf05-11fe0b0040bc","Type":"ContainerStarted","Data":"36a8b2041fafff34f60182cd2e917f2ad57711fc0ca2c3993b673c22ae502b01"} Jan 31 04:15:10 crc kubenswrapper[4827]: I0131 04:15:10.980810 4827 generic.go:334] "Generic (PLEG): container finished" podID="9a5bd016-448a-48cc-bf05-11fe0b0040bc" 
containerID="36a8b2041fafff34f60182cd2e917f2ad57711fc0ca2c3993b673c22ae502b01" exitCode=0 Jan 31 04:15:10 crc kubenswrapper[4827]: I0131 04:15:10.980874 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-65xhl" event={"ID":"9a5bd016-448a-48cc-bf05-11fe0b0040bc","Type":"ContainerDied","Data":"36a8b2041fafff34f60182cd2e917f2ad57711fc0ca2c3993b673c22ae502b01"} Jan 31 04:15:12 crc kubenswrapper[4827]: I0131 04:15:12.422950 4827 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-65xhl" Jan 31 04:15:12 crc kubenswrapper[4827]: I0131 04:15:12.586868 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/9a5bd016-448a-48cc-bf05-11fe0b0040bc-inventory\") pod \"9a5bd016-448a-48cc-bf05-11fe0b0040bc\" (UID: \"9a5bd016-448a-48cc-bf05-11fe0b0040bc\") " Jan 31 04:15:12 crc kubenswrapper[4827]: I0131 04:15:12.587053 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/9a5bd016-448a-48cc-bf05-11fe0b0040bc-ssh-key-openstack-edpm-ipam\") pod \"9a5bd016-448a-48cc-bf05-11fe0b0040bc\" (UID: \"9a5bd016-448a-48cc-bf05-11fe0b0040bc\") " Jan 31 04:15:12 crc kubenswrapper[4827]: I0131 04:15:12.587108 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vt9br\" (UniqueName: \"kubernetes.io/projected/9a5bd016-448a-48cc-bf05-11fe0b0040bc-kube-api-access-vt9br\") pod \"9a5bd016-448a-48cc-bf05-11fe0b0040bc\" (UID: \"9a5bd016-448a-48cc-bf05-11fe0b0040bc\") " Jan 31 04:15:12 crc kubenswrapper[4827]: I0131 04:15:12.611109 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9a5bd016-448a-48cc-bf05-11fe0b0040bc-kube-api-access-vt9br" (OuterVolumeSpecName: 
"kube-api-access-vt9br") pod "9a5bd016-448a-48cc-bf05-11fe0b0040bc" (UID: "9a5bd016-448a-48cc-bf05-11fe0b0040bc"). InnerVolumeSpecName "kube-api-access-vt9br". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 04:15:12 crc kubenswrapper[4827]: I0131 04:15:12.619123 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9a5bd016-448a-48cc-bf05-11fe0b0040bc-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "9a5bd016-448a-48cc-bf05-11fe0b0040bc" (UID: "9a5bd016-448a-48cc-bf05-11fe0b0040bc"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 04:15:12 crc kubenswrapper[4827]: I0131 04:15:12.625067 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9a5bd016-448a-48cc-bf05-11fe0b0040bc-inventory" (OuterVolumeSpecName: "inventory") pod "9a5bd016-448a-48cc-bf05-11fe0b0040bc" (UID: "9a5bd016-448a-48cc-bf05-11fe0b0040bc"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 04:15:12 crc kubenswrapper[4827]: I0131 04:15:12.688787 4827 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vt9br\" (UniqueName: \"kubernetes.io/projected/9a5bd016-448a-48cc-bf05-11fe0b0040bc-kube-api-access-vt9br\") on node \"crc\" DevicePath \"\"" Jan 31 04:15:12 crc kubenswrapper[4827]: I0131 04:15:12.688820 4827 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/9a5bd016-448a-48cc-bf05-11fe0b0040bc-inventory\") on node \"crc\" DevicePath \"\"" Jan 31 04:15:12 crc kubenswrapper[4827]: I0131 04:15:12.688835 4827 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/9a5bd016-448a-48cc-bf05-11fe0b0040bc-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 31 04:15:13 crc kubenswrapper[4827]: I0131 04:15:13.000322 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-65xhl" event={"ID":"9a5bd016-448a-48cc-bf05-11fe0b0040bc","Type":"ContainerDied","Data":"174188ca51f8b00e7b64f34d124f446c92785656be2d2cc93c2cad4a5da782a3"} Jan 31 04:15:13 crc kubenswrapper[4827]: I0131 04:15:13.000365 4827 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="174188ca51f8b00e7b64f34d124f446c92785656be2d2cc93c2cad4a5da782a3" Jan 31 04:15:13 crc kubenswrapper[4827]: I0131 04:15:13.000422 4827 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-65xhl" Jan 31 04:15:13 crc kubenswrapper[4827]: I0131 04:15:13.070137 4827 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-92hbv"] Jan 31 04:15:13 crc kubenswrapper[4827]: E0131 04:15:13.070640 4827 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9a5bd016-448a-48cc-bf05-11fe0b0040bc" containerName="ceph-hci-pre-edpm-deployment-openstack-edpm-ipam" Jan 31 04:15:13 crc kubenswrapper[4827]: I0131 04:15:13.070667 4827 state_mem.go:107] "Deleted CPUSet assignment" podUID="9a5bd016-448a-48cc-bf05-11fe0b0040bc" containerName="ceph-hci-pre-edpm-deployment-openstack-edpm-ipam" Jan 31 04:15:13 crc kubenswrapper[4827]: I0131 04:15:13.070929 4827 memory_manager.go:354] "RemoveStaleState removing state" podUID="9a5bd016-448a-48cc-bf05-11fe0b0040bc" containerName="ceph-hci-pre-edpm-deployment-openstack-edpm-ipam" Jan 31 04:15:13 crc kubenswrapper[4827]: I0131 04:15:13.071730 4827 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-92hbv" Jan 31 04:15:13 crc kubenswrapper[4827]: I0131 04:15:13.074331 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-dklwg" Jan 31 04:15:13 crc kubenswrapper[4827]: I0131 04:15:13.074627 4827 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 31 04:15:13 crc kubenswrapper[4827]: I0131 04:15:13.074988 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 31 04:15:13 crc kubenswrapper[4827]: I0131 04:15:13.075227 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 31 04:15:13 crc kubenswrapper[4827]: I0131 04:15:13.080207 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-92hbv"] Jan 31 04:15:13 crc kubenswrapper[4827]: I0131 04:15:13.109943 4827 scope.go:117] "RemoveContainer" containerID="90034e5c5b64aa9dbc546a2986ac24abb14947ccc2f7663f502fb74fbe06de7f" Jan 31 04:15:13 crc kubenswrapper[4827]: E0131 04:15:13.110247 4827 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jxh94_openshift-machine-config-operator(e63dbb73-e1a2-4796-83c5-2a88e55566b5)\"" pod="openshift-machine-config-operator/machine-config-daemon-jxh94" podUID="e63dbb73-e1a2-4796-83c5-2a88e55566b5" Jan 31 04:15:13 crc kubenswrapper[4827]: I0131 04:15:13.196970 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/c286cc5f-448d-49d6-8b78-9d84ba0e11d9-ssh-key-openstack-edpm-ipam\") pod 
\"configure-os-edpm-deployment-openstack-edpm-ipam-92hbv\" (UID: \"c286cc5f-448d-49d6-8b78-9d84ba0e11d9\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-92hbv" Jan 31 04:15:13 crc kubenswrapper[4827]: I0131 04:15:13.197295 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c286cc5f-448d-49d6-8b78-9d84ba0e11d9-inventory\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-92hbv\" (UID: \"c286cc5f-448d-49d6-8b78-9d84ba0e11d9\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-92hbv" Jan 31 04:15:13 crc kubenswrapper[4827]: I0131 04:15:13.197668 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-84c24\" (UniqueName: \"kubernetes.io/projected/c286cc5f-448d-49d6-8b78-9d84ba0e11d9-kube-api-access-84c24\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-92hbv\" (UID: \"c286cc5f-448d-49d6-8b78-9d84ba0e11d9\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-92hbv" Jan 31 04:15:13 crc kubenswrapper[4827]: I0131 04:15:13.300406 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/c286cc5f-448d-49d6-8b78-9d84ba0e11d9-ssh-key-openstack-edpm-ipam\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-92hbv\" (UID: \"c286cc5f-448d-49d6-8b78-9d84ba0e11d9\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-92hbv" Jan 31 04:15:13 crc kubenswrapper[4827]: I0131 04:15:13.300862 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c286cc5f-448d-49d6-8b78-9d84ba0e11d9-inventory\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-92hbv\" (UID: \"c286cc5f-448d-49d6-8b78-9d84ba0e11d9\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-92hbv" Jan 31 
04:15:13 crc kubenswrapper[4827]: I0131 04:15:13.301075 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-84c24\" (UniqueName: \"kubernetes.io/projected/c286cc5f-448d-49d6-8b78-9d84ba0e11d9-kube-api-access-84c24\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-92hbv\" (UID: \"c286cc5f-448d-49d6-8b78-9d84ba0e11d9\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-92hbv" Jan 31 04:15:13 crc kubenswrapper[4827]: I0131 04:15:13.306528 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/c286cc5f-448d-49d6-8b78-9d84ba0e11d9-ssh-key-openstack-edpm-ipam\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-92hbv\" (UID: \"c286cc5f-448d-49d6-8b78-9d84ba0e11d9\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-92hbv" Jan 31 04:15:13 crc kubenswrapper[4827]: I0131 04:15:13.318709 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c286cc5f-448d-49d6-8b78-9d84ba0e11d9-inventory\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-92hbv\" (UID: \"c286cc5f-448d-49d6-8b78-9d84ba0e11d9\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-92hbv" Jan 31 04:15:13 crc kubenswrapper[4827]: I0131 04:15:13.320304 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-84c24\" (UniqueName: \"kubernetes.io/projected/c286cc5f-448d-49d6-8b78-9d84ba0e11d9-kube-api-access-84c24\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-92hbv\" (UID: \"c286cc5f-448d-49d6-8b78-9d84ba0e11d9\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-92hbv" Jan 31 04:15:13 crc kubenswrapper[4827]: I0131 04:15:13.393483 4827 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-92hbv" Jan 31 04:15:13 crc kubenswrapper[4827]: I0131 04:15:13.979032 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-92hbv"] Jan 31 04:15:14 crc kubenswrapper[4827]: I0131 04:15:14.013179 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-92hbv" event={"ID":"c286cc5f-448d-49d6-8b78-9d84ba0e11d9","Type":"ContainerStarted","Data":"22f4a6aa5311b93086f237dfcafb958f33775c37aa0e7d19e163466b52e6ca15"} Jan 31 04:15:16 crc kubenswrapper[4827]: I0131 04:15:16.028793 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-92hbv" event={"ID":"c286cc5f-448d-49d6-8b78-9d84ba0e11d9","Type":"ContainerStarted","Data":"b8b250c766ccc074ba3af0ed0411667b954aebeb6ec44d60b01987f4100e065d"} Jan 31 04:15:16 crc kubenswrapper[4827]: I0131 04:15:16.049678 4827 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-92hbv" podStartSLOduration=1.564851494 podStartE2EDuration="3.04964697s" podCreationTimestamp="2026-01-31 04:15:13 +0000 UTC" firstStartedPulling="2026-01-31 04:15:13.971211253 +0000 UTC m=+1706.658291702" lastFinishedPulling="2026-01-31 04:15:15.456006729 +0000 UTC m=+1708.143087178" observedRunningTime="2026-01-31 04:15:16.04112413 +0000 UTC m=+1708.728204579" watchObservedRunningTime="2026-01-31 04:15:16.04964697 +0000 UTC m=+1708.736727459" Jan 31 04:15:26 crc kubenswrapper[4827]: I0131 04:15:26.110449 4827 scope.go:117] "RemoveContainer" containerID="90034e5c5b64aa9dbc546a2986ac24abb14947ccc2f7663f502fb74fbe06de7f" Jan 31 04:15:26 crc kubenswrapper[4827]: E0131 04:15:26.111300 4827 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with 
CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jxh94_openshift-machine-config-operator(e63dbb73-e1a2-4796-83c5-2a88e55566b5)\"" pod="openshift-machine-config-operator/machine-config-daemon-jxh94" podUID="e63dbb73-e1a2-4796-83c5-2a88e55566b5" Jan 31 04:15:31 crc kubenswrapper[4827]: I0131 04:15:31.044073 4827 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-db-sync-kk8q2"] Jan 31 04:15:31 crc kubenswrapper[4827]: I0131 04:15:31.062209 4827 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-db-sync-kk8q2"] Jan 31 04:15:32 crc kubenswrapper[4827]: I0131 04:15:32.131784 4827 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3034593d-68df-4223-a3d5-f1cd46f49398" path="/var/lib/kubelet/pods/3034593d-68df-4223-a3d5-f1cd46f49398/volumes" Jan 31 04:15:34 crc kubenswrapper[4827]: I0131 04:15:34.040868 4827 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-db-sync-jhfnf"] Jan 31 04:15:34 crc kubenswrapper[4827]: I0131 04:15:34.052468 4827 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-db-sync-5xrmx"] Jan 31 04:15:34 crc kubenswrapper[4827]: I0131 04:15:34.066904 4827 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-db-sync-jhfnf"] Jan 31 04:15:34 crc kubenswrapper[4827]: I0131 04:15:34.077406 4827 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-db-sync-5xrmx"] Jan 31 04:15:34 crc kubenswrapper[4827]: I0131 04:15:34.122662 4827 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="921fb66d-0be5-4614-9974-86da117973d1" path="/var/lib/kubelet/pods/921fb66d-0be5-4614-9974-86da117973d1/volumes" Jan 31 04:15:34 crc kubenswrapper[4827]: I0131 04:15:34.123953 4827 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="db80e5df-1238-46c1-b573-55fb8797e379" 
path="/var/lib/kubelet/pods/db80e5df-1238-46c1-b573-55fb8797e379/volumes" Jan 31 04:15:38 crc kubenswrapper[4827]: I0131 04:15:38.052262 4827 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-bootstrap-d86jl"] Jan 31 04:15:38 crc kubenswrapper[4827]: I0131 04:15:38.065853 4827 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-bootstrap-d86jl"] Jan 31 04:15:38 crc kubenswrapper[4827]: I0131 04:15:38.119182 4827 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2653aa61-3396-42b4-8cfe-ae977242f427" path="/var/lib/kubelet/pods/2653aa61-3396-42b4-8cfe-ae977242f427/volumes" Jan 31 04:15:40 crc kubenswrapper[4827]: I0131 04:15:40.111704 4827 scope.go:117] "RemoveContainer" containerID="90034e5c5b64aa9dbc546a2986ac24abb14947ccc2f7663f502fb74fbe06de7f" Jan 31 04:15:40 crc kubenswrapper[4827]: E0131 04:15:40.112683 4827 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jxh94_openshift-machine-config-operator(e63dbb73-e1a2-4796-83c5-2a88e55566b5)\"" pod="openshift-machine-config-operator/machine-config-daemon-jxh94" podUID="e63dbb73-e1a2-4796-83c5-2a88e55566b5" Jan 31 04:15:52 crc kubenswrapper[4827]: I0131 04:15:52.039154 4827 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-db-sync-48bfh"] Jan 31 04:15:52 crc kubenswrapper[4827]: I0131 04:15:52.058867 4827 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-db-sync-48bfh"] Jan 31 04:15:52 crc kubenswrapper[4827]: I0131 04:15:52.121560 4827 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3da5eeb9-641c-4b43-a3c9-eb4860e9995b" path="/var/lib/kubelet/pods/3da5eeb9-641c-4b43-a3c9-eb4860e9995b/volumes" Jan 31 04:15:55 crc kubenswrapper[4827]: I0131 04:15:55.111175 4827 scope.go:117] "RemoveContainer" 
containerID="90034e5c5b64aa9dbc546a2986ac24abb14947ccc2f7663f502fb74fbe06de7f" Jan 31 04:15:55 crc kubenswrapper[4827]: E0131 04:15:55.111975 4827 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jxh94_openshift-machine-config-operator(e63dbb73-e1a2-4796-83c5-2a88e55566b5)\"" pod="openshift-machine-config-operator/machine-config-daemon-jxh94" podUID="e63dbb73-e1a2-4796-83c5-2a88e55566b5" Jan 31 04:15:59 crc kubenswrapper[4827]: I0131 04:15:59.733220 4827 scope.go:117] "RemoveContainer" containerID="5ecdd62caa924e4695a271469e1bfa0aa564f12c5446805dea04bf66dc454e54" Jan 31 04:15:59 crc kubenswrapper[4827]: I0131 04:15:59.764611 4827 scope.go:117] "RemoveContainer" containerID="2aabf052a649cc78a316179aff19fc9534e6b75802a1af93ea9177f7be484997" Jan 31 04:15:59 crc kubenswrapper[4827]: I0131 04:15:59.816893 4827 scope.go:117] "RemoveContainer" containerID="88885a6baf2794c005ccf9af116f6577e1c6d37522aef85b4eca6b3d3f10179d" Jan 31 04:15:59 crc kubenswrapper[4827]: I0131 04:15:59.837373 4827 scope.go:117] "RemoveContainer" containerID="d390ad5a10b372f9cc082556b2f8259ab04be35df00eb5e029867f257a58ded2" Jan 31 04:15:59 crc kubenswrapper[4827]: I0131 04:15:59.893541 4827 scope.go:117] "RemoveContainer" containerID="f9a3171acb33e15afe734c39f19356645ccfae017fabd868e7280dd5a9ee2e18" Jan 31 04:15:59 crc kubenswrapper[4827]: I0131 04:15:59.927442 4827 scope.go:117] "RemoveContainer" containerID="06beff70b7f8432840557efe615f1d2992319d6afd7b0185b8a5b8fefc27ba3e" Jan 31 04:15:59 crc kubenswrapper[4827]: I0131 04:15:59.969665 4827 scope.go:117] "RemoveContainer" containerID="53c3ff3ff458c19a130e4530e618a2d88b266a4427db74ff67bff15d8710de91" Jan 31 04:15:59 crc kubenswrapper[4827]: I0131 04:15:59.996116 4827 scope.go:117] "RemoveContainer" 
containerID="5d9b8f59594735a3a990baf70267109a72f74d08587b0f19647c41e1f6f64489" Jan 31 04:16:02 crc kubenswrapper[4827]: I0131 04:16:02.492591 4827 generic.go:334] "Generic (PLEG): container finished" podID="c286cc5f-448d-49d6-8b78-9d84ba0e11d9" containerID="b8b250c766ccc074ba3af0ed0411667b954aebeb6ec44d60b01987f4100e065d" exitCode=0 Jan 31 04:16:02 crc kubenswrapper[4827]: I0131 04:16:02.492722 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-92hbv" event={"ID":"c286cc5f-448d-49d6-8b78-9d84ba0e11d9","Type":"ContainerDied","Data":"b8b250c766ccc074ba3af0ed0411667b954aebeb6ec44d60b01987f4100e065d"} Jan 31 04:16:04 crc kubenswrapper[4827]: I0131 04:16:04.014279 4827 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-92hbv" Jan 31 04:16:04 crc kubenswrapper[4827]: I0131 04:16:04.206706 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/c286cc5f-448d-49d6-8b78-9d84ba0e11d9-ssh-key-openstack-edpm-ipam\") pod \"c286cc5f-448d-49d6-8b78-9d84ba0e11d9\" (UID: \"c286cc5f-448d-49d6-8b78-9d84ba0e11d9\") " Jan 31 04:16:04 crc kubenswrapper[4827]: I0131 04:16:04.206926 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c286cc5f-448d-49d6-8b78-9d84ba0e11d9-inventory\") pod \"c286cc5f-448d-49d6-8b78-9d84ba0e11d9\" (UID: \"c286cc5f-448d-49d6-8b78-9d84ba0e11d9\") " Jan 31 04:16:04 crc kubenswrapper[4827]: I0131 04:16:04.207008 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-84c24\" (UniqueName: \"kubernetes.io/projected/c286cc5f-448d-49d6-8b78-9d84ba0e11d9-kube-api-access-84c24\") pod \"c286cc5f-448d-49d6-8b78-9d84ba0e11d9\" (UID: \"c286cc5f-448d-49d6-8b78-9d84ba0e11d9\") " Jan 31 04:16:04 crc 
kubenswrapper[4827]: I0131 04:16:04.227091 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c286cc5f-448d-49d6-8b78-9d84ba0e11d9-kube-api-access-84c24" (OuterVolumeSpecName: "kube-api-access-84c24") pod "c286cc5f-448d-49d6-8b78-9d84ba0e11d9" (UID: "c286cc5f-448d-49d6-8b78-9d84ba0e11d9"). InnerVolumeSpecName "kube-api-access-84c24". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 04:16:04 crc kubenswrapper[4827]: I0131 04:16:04.241190 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c286cc5f-448d-49d6-8b78-9d84ba0e11d9-inventory" (OuterVolumeSpecName: "inventory") pod "c286cc5f-448d-49d6-8b78-9d84ba0e11d9" (UID: "c286cc5f-448d-49d6-8b78-9d84ba0e11d9"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 04:16:04 crc kubenswrapper[4827]: I0131 04:16:04.249286 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c286cc5f-448d-49d6-8b78-9d84ba0e11d9-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "c286cc5f-448d-49d6-8b78-9d84ba0e11d9" (UID: "c286cc5f-448d-49d6-8b78-9d84ba0e11d9"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 04:16:04 crc kubenswrapper[4827]: I0131 04:16:04.314217 4827 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-84c24\" (UniqueName: \"kubernetes.io/projected/c286cc5f-448d-49d6-8b78-9d84ba0e11d9-kube-api-access-84c24\") on node \"crc\" DevicePath \"\"" Jan 31 04:16:04 crc kubenswrapper[4827]: I0131 04:16:04.314358 4827 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/c286cc5f-448d-49d6-8b78-9d84ba0e11d9-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 31 04:16:04 crc kubenswrapper[4827]: I0131 04:16:04.314385 4827 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c286cc5f-448d-49d6-8b78-9d84ba0e11d9-inventory\") on node \"crc\" DevicePath \"\"" Jan 31 04:16:04 crc kubenswrapper[4827]: I0131 04:16:04.511620 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-92hbv" event={"ID":"c286cc5f-448d-49d6-8b78-9d84ba0e11d9","Type":"ContainerDied","Data":"22f4a6aa5311b93086f237dfcafb958f33775c37aa0e7d19e163466b52e6ca15"} Jan 31 04:16:04 crc kubenswrapper[4827]: I0131 04:16:04.511930 4827 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="22f4a6aa5311b93086f237dfcafb958f33775c37aa0e7d19e163466b52e6ca15" Jan 31 04:16:04 crc kubenswrapper[4827]: I0131 04:16:04.511695 4827 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-92hbv" Jan 31 04:16:04 crc kubenswrapper[4827]: I0131 04:16:04.597430 4827 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ssh-known-hosts-edpm-deployment-qs99f"] Jan 31 04:16:04 crc kubenswrapper[4827]: E0131 04:16:04.597764 4827 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c286cc5f-448d-49d6-8b78-9d84ba0e11d9" containerName="configure-os-edpm-deployment-openstack-edpm-ipam" Jan 31 04:16:04 crc kubenswrapper[4827]: I0131 04:16:04.597783 4827 state_mem.go:107] "Deleted CPUSet assignment" podUID="c286cc5f-448d-49d6-8b78-9d84ba0e11d9" containerName="configure-os-edpm-deployment-openstack-edpm-ipam" Jan 31 04:16:04 crc kubenswrapper[4827]: I0131 04:16:04.597997 4827 memory_manager.go:354] "RemoveStaleState removing state" podUID="c286cc5f-448d-49d6-8b78-9d84ba0e11d9" containerName="configure-os-edpm-deployment-openstack-edpm-ipam" Jan 31 04:16:04 crc kubenswrapper[4827]: I0131 04:16:04.598542 4827 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-qs99f" Jan 31 04:16:04 crc kubenswrapper[4827]: I0131 04:16:04.600511 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 31 04:16:04 crc kubenswrapper[4827]: I0131 04:16:04.600579 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-dklwg" Jan 31 04:16:04 crc kubenswrapper[4827]: I0131 04:16:04.600711 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 31 04:16:04 crc kubenswrapper[4827]: I0131 04:16:04.601163 4827 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 31 04:16:04 crc kubenswrapper[4827]: I0131 04:16:04.615293 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ssh-known-hosts-edpm-deployment-qs99f"] Jan 31 04:16:04 crc kubenswrapper[4827]: I0131 04:16:04.719491 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h8jvf\" (UniqueName: \"kubernetes.io/projected/b03a5324-2c0f-4d1e-8fda-c00f8bbd04ba-kube-api-access-h8jvf\") pod \"ssh-known-hosts-edpm-deployment-qs99f\" (UID: \"b03a5324-2c0f-4d1e-8fda-c00f8bbd04ba\") " pod="openstack/ssh-known-hosts-edpm-deployment-qs99f" Jan 31 04:16:04 crc kubenswrapper[4827]: I0131 04:16:04.719538 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/b03a5324-2c0f-4d1e-8fda-c00f8bbd04ba-inventory-0\") pod \"ssh-known-hosts-edpm-deployment-qs99f\" (UID: \"b03a5324-2c0f-4d1e-8fda-c00f8bbd04ba\") " pod="openstack/ssh-known-hosts-edpm-deployment-qs99f" Jan 31 04:16:04 crc kubenswrapper[4827]: I0131 04:16:04.719622 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/b03a5324-2c0f-4d1e-8fda-c00f8bbd04ba-ssh-key-openstack-edpm-ipam\") pod \"ssh-known-hosts-edpm-deployment-qs99f\" (UID: \"b03a5324-2c0f-4d1e-8fda-c00f8bbd04ba\") " pod="openstack/ssh-known-hosts-edpm-deployment-qs99f" Jan 31 04:16:04 crc kubenswrapper[4827]: I0131 04:16:04.820963 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/b03a5324-2c0f-4d1e-8fda-c00f8bbd04ba-ssh-key-openstack-edpm-ipam\") pod \"ssh-known-hosts-edpm-deployment-qs99f\" (UID: \"b03a5324-2c0f-4d1e-8fda-c00f8bbd04ba\") " pod="openstack/ssh-known-hosts-edpm-deployment-qs99f" Jan 31 04:16:04 crc kubenswrapper[4827]: I0131 04:16:04.821192 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h8jvf\" (UniqueName: \"kubernetes.io/projected/b03a5324-2c0f-4d1e-8fda-c00f8bbd04ba-kube-api-access-h8jvf\") pod \"ssh-known-hosts-edpm-deployment-qs99f\" (UID: \"b03a5324-2c0f-4d1e-8fda-c00f8bbd04ba\") " pod="openstack/ssh-known-hosts-edpm-deployment-qs99f" Jan 31 04:16:04 crc kubenswrapper[4827]: I0131 04:16:04.821220 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/b03a5324-2c0f-4d1e-8fda-c00f8bbd04ba-inventory-0\") pod \"ssh-known-hosts-edpm-deployment-qs99f\" (UID: \"b03a5324-2c0f-4d1e-8fda-c00f8bbd04ba\") " pod="openstack/ssh-known-hosts-edpm-deployment-qs99f" Jan 31 04:16:04 crc kubenswrapper[4827]: I0131 04:16:04.826657 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/b03a5324-2c0f-4d1e-8fda-c00f8bbd04ba-ssh-key-openstack-edpm-ipam\") pod \"ssh-known-hosts-edpm-deployment-qs99f\" (UID: \"b03a5324-2c0f-4d1e-8fda-c00f8bbd04ba\") " pod="openstack/ssh-known-hosts-edpm-deployment-qs99f" Jan 31 04:16:04 crc kubenswrapper[4827]: 
I0131 04:16:04.830650 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/b03a5324-2c0f-4d1e-8fda-c00f8bbd04ba-inventory-0\") pod \"ssh-known-hosts-edpm-deployment-qs99f\" (UID: \"b03a5324-2c0f-4d1e-8fda-c00f8bbd04ba\") " pod="openstack/ssh-known-hosts-edpm-deployment-qs99f" Jan 31 04:16:04 crc kubenswrapper[4827]: I0131 04:16:04.852090 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h8jvf\" (UniqueName: \"kubernetes.io/projected/b03a5324-2c0f-4d1e-8fda-c00f8bbd04ba-kube-api-access-h8jvf\") pod \"ssh-known-hosts-edpm-deployment-qs99f\" (UID: \"b03a5324-2c0f-4d1e-8fda-c00f8bbd04ba\") " pod="openstack/ssh-known-hosts-edpm-deployment-qs99f" Jan 31 04:16:04 crc kubenswrapper[4827]: I0131 04:16:04.916763 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-qs99f" Jan 31 04:16:05 crc kubenswrapper[4827]: I0131 04:16:05.495555 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ssh-known-hosts-edpm-deployment-qs99f"] Jan 31 04:16:05 crc kubenswrapper[4827]: W0131 04:16:05.501400 4827 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb03a5324_2c0f_4d1e_8fda_c00f8bbd04ba.slice/crio-81fea4104f0d3bf4a41a607b636f8544dbfb9a6601558cad838cb47d55f2df86 WatchSource:0}: Error finding container 81fea4104f0d3bf4a41a607b636f8544dbfb9a6601558cad838cb47d55f2df86: Status 404 returned error can't find the container with id 81fea4104f0d3bf4a41a607b636f8544dbfb9a6601558cad838cb47d55f2df86 Jan 31 04:16:05 crc kubenswrapper[4827]: I0131 04:16:05.527864 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-qs99f" event={"ID":"b03a5324-2c0f-4d1e-8fda-c00f8bbd04ba","Type":"ContainerStarted","Data":"81fea4104f0d3bf4a41a607b636f8544dbfb9a6601558cad838cb47d55f2df86"} Jan 31 
04:16:06 crc kubenswrapper[4827]: I0131 04:16:06.538495 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-qs99f" event={"ID":"b03a5324-2c0f-4d1e-8fda-c00f8bbd04ba","Type":"ContainerStarted","Data":"290695e42eb2fb04f86965c41ab2aaa4d98269f44227d33ab1c9a55a56708000"} Jan 31 04:16:06 crc kubenswrapper[4827]: I0131 04:16:06.566170 4827 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ssh-known-hosts-edpm-deployment-qs99f" podStartSLOduration=1.802462234 podStartE2EDuration="2.566152687s" podCreationTimestamp="2026-01-31 04:16:04 +0000 UTC" firstStartedPulling="2026-01-31 04:16:05.503739976 +0000 UTC m=+1758.190820425" lastFinishedPulling="2026-01-31 04:16:06.267430389 +0000 UTC m=+1758.954510878" observedRunningTime="2026-01-31 04:16:06.562852206 +0000 UTC m=+1759.249932675" watchObservedRunningTime="2026-01-31 04:16:06.566152687 +0000 UTC m=+1759.253233136" Jan 31 04:16:09 crc kubenswrapper[4827]: I0131 04:16:09.109726 4827 scope.go:117] "RemoveContainer" containerID="90034e5c5b64aa9dbc546a2986ac24abb14947ccc2f7663f502fb74fbe06de7f" Jan 31 04:16:09 crc kubenswrapper[4827]: E0131 04:16:09.110300 4827 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jxh94_openshift-machine-config-operator(e63dbb73-e1a2-4796-83c5-2a88e55566b5)\"" pod="openshift-machine-config-operator/machine-config-daemon-jxh94" podUID="e63dbb73-e1a2-4796-83c5-2a88e55566b5" Jan 31 04:16:13 crc kubenswrapper[4827]: I0131 04:16:13.620964 4827 generic.go:334] "Generic (PLEG): container finished" podID="b03a5324-2c0f-4d1e-8fda-c00f8bbd04ba" containerID="290695e42eb2fb04f86965c41ab2aaa4d98269f44227d33ab1c9a55a56708000" exitCode=0 Jan 31 04:16:13 crc kubenswrapper[4827]: I0131 04:16:13.620981 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/ssh-known-hosts-edpm-deployment-qs99f" event={"ID":"b03a5324-2c0f-4d1e-8fda-c00f8bbd04ba","Type":"ContainerDied","Data":"290695e42eb2fb04f86965c41ab2aaa4d98269f44227d33ab1c9a55a56708000"} Jan 31 04:16:15 crc kubenswrapper[4827]: I0131 04:16:15.018793 4827 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-qs99f" Jan 31 04:16:15 crc kubenswrapper[4827]: I0131 04:16:15.141911 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/b03a5324-2c0f-4d1e-8fda-c00f8bbd04ba-inventory-0\") pod \"b03a5324-2c0f-4d1e-8fda-c00f8bbd04ba\" (UID: \"b03a5324-2c0f-4d1e-8fda-c00f8bbd04ba\") " Jan 31 04:16:15 crc kubenswrapper[4827]: I0131 04:16:15.142219 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-h8jvf\" (UniqueName: \"kubernetes.io/projected/b03a5324-2c0f-4d1e-8fda-c00f8bbd04ba-kube-api-access-h8jvf\") pod \"b03a5324-2c0f-4d1e-8fda-c00f8bbd04ba\" (UID: \"b03a5324-2c0f-4d1e-8fda-c00f8bbd04ba\") " Jan 31 04:16:15 crc kubenswrapper[4827]: I0131 04:16:15.142394 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/b03a5324-2c0f-4d1e-8fda-c00f8bbd04ba-ssh-key-openstack-edpm-ipam\") pod \"b03a5324-2c0f-4d1e-8fda-c00f8bbd04ba\" (UID: \"b03a5324-2c0f-4d1e-8fda-c00f8bbd04ba\") " Jan 31 04:16:15 crc kubenswrapper[4827]: I0131 04:16:15.147659 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b03a5324-2c0f-4d1e-8fda-c00f8bbd04ba-kube-api-access-h8jvf" (OuterVolumeSpecName: "kube-api-access-h8jvf") pod "b03a5324-2c0f-4d1e-8fda-c00f8bbd04ba" (UID: "b03a5324-2c0f-4d1e-8fda-c00f8bbd04ba"). InnerVolumeSpecName "kube-api-access-h8jvf". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 04:16:15 crc kubenswrapper[4827]: I0131 04:16:15.190681 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b03a5324-2c0f-4d1e-8fda-c00f8bbd04ba-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "b03a5324-2c0f-4d1e-8fda-c00f8bbd04ba" (UID: "b03a5324-2c0f-4d1e-8fda-c00f8bbd04ba"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 04:16:15 crc kubenswrapper[4827]: I0131 04:16:15.196107 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b03a5324-2c0f-4d1e-8fda-c00f8bbd04ba-inventory-0" (OuterVolumeSpecName: "inventory-0") pod "b03a5324-2c0f-4d1e-8fda-c00f8bbd04ba" (UID: "b03a5324-2c0f-4d1e-8fda-c00f8bbd04ba"). InnerVolumeSpecName "inventory-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 04:16:15 crc kubenswrapper[4827]: I0131 04:16:15.246040 4827 reconciler_common.go:293] "Volume detached for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/b03a5324-2c0f-4d1e-8fda-c00f8bbd04ba-inventory-0\") on node \"crc\" DevicePath \"\"" Jan 31 04:16:15 crc kubenswrapper[4827]: I0131 04:16:15.246075 4827 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-h8jvf\" (UniqueName: \"kubernetes.io/projected/b03a5324-2c0f-4d1e-8fda-c00f8bbd04ba-kube-api-access-h8jvf\") on node \"crc\" DevicePath \"\"" Jan 31 04:16:15 crc kubenswrapper[4827]: I0131 04:16:15.246119 4827 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/b03a5324-2c0f-4d1e-8fda-c00f8bbd04ba-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 31 04:16:15 crc kubenswrapper[4827]: I0131 04:16:15.642630 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-qs99f" 
event={"ID":"b03a5324-2c0f-4d1e-8fda-c00f8bbd04ba","Type":"ContainerDied","Data":"81fea4104f0d3bf4a41a607b636f8544dbfb9a6601558cad838cb47d55f2df86"} Jan 31 04:16:15 crc kubenswrapper[4827]: I0131 04:16:15.643002 4827 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="81fea4104f0d3bf4a41a607b636f8544dbfb9a6601558cad838cb47d55f2df86" Jan 31 04:16:15 crc kubenswrapper[4827]: I0131 04:16:15.642703 4827 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-qs99f" Jan 31 04:16:15 crc kubenswrapper[4827]: I0131 04:16:15.737140 4827 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/run-os-edpm-deployment-openstack-edpm-ipam-m2dkf"] Jan 31 04:16:15 crc kubenswrapper[4827]: E0131 04:16:15.737536 4827 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b03a5324-2c0f-4d1e-8fda-c00f8bbd04ba" containerName="ssh-known-hosts-edpm-deployment" Jan 31 04:16:15 crc kubenswrapper[4827]: I0131 04:16:15.737556 4827 state_mem.go:107] "Deleted CPUSet assignment" podUID="b03a5324-2c0f-4d1e-8fda-c00f8bbd04ba" containerName="ssh-known-hosts-edpm-deployment" Jan 31 04:16:15 crc kubenswrapper[4827]: I0131 04:16:15.737773 4827 memory_manager.go:354] "RemoveStaleState removing state" podUID="b03a5324-2c0f-4d1e-8fda-c00f8bbd04ba" containerName="ssh-known-hosts-edpm-deployment" Jan 31 04:16:15 crc kubenswrapper[4827]: I0131 04:16:15.738455 4827 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-m2dkf" Jan 31 04:16:15 crc kubenswrapper[4827]: I0131 04:16:15.741062 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-dklwg" Jan 31 04:16:15 crc kubenswrapper[4827]: I0131 04:16:15.741347 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 31 04:16:15 crc kubenswrapper[4827]: I0131 04:16:15.741448 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 31 04:16:15 crc kubenswrapper[4827]: I0131 04:16:15.741815 4827 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 31 04:16:15 crc kubenswrapper[4827]: I0131 04:16:15.766331 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/run-os-edpm-deployment-openstack-edpm-ipam-m2dkf"] Jan 31 04:16:15 crc kubenswrapper[4827]: I0131 04:16:15.861835 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/e4aa6c3f-da34-4b8a-93f2-c25c5be32ccd-ssh-key-openstack-edpm-ipam\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-m2dkf\" (UID: \"e4aa6c3f-da34-4b8a-93f2-c25c5be32ccd\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-m2dkf" Jan 31 04:16:15 crc kubenswrapper[4827]: I0131 04:16:15.861996 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-76k55\" (UniqueName: \"kubernetes.io/projected/e4aa6c3f-da34-4b8a-93f2-c25c5be32ccd-kube-api-access-76k55\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-m2dkf\" (UID: \"e4aa6c3f-da34-4b8a-93f2-c25c5be32ccd\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-m2dkf" Jan 31 04:16:15 crc kubenswrapper[4827]: I0131 04:16:15.862121 4827 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e4aa6c3f-da34-4b8a-93f2-c25c5be32ccd-inventory\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-m2dkf\" (UID: \"e4aa6c3f-da34-4b8a-93f2-c25c5be32ccd\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-m2dkf" Jan 31 04:16:15 crc kubenswrapper[4827]: I0131 04:16:15.963981 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-76k55\" (UniqueName: \"kubernetes.io/projected/e4aa6c3f-da34-4b8a-93f2-c25c5be32ccd-kube-api-access-76k55\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-m2dkf\" (UID: \"e4aa6c3f-da34-4b8a-93f2-c25c5be32ccd\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-m2dkf" Jan 31 04:16:15 crc kubenswrapper[4827]: I0131 04:16:15.964289 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e4aa6c3f-da34-4b8a-93f2-c25c5be32ccd-inventory\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-m2dkf\" (UID: \"e4aa6c3f-da34-4b8a-93f2-c25c5be32ccd\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-m2dkf" Jan 31 04:16:15 crc kubenswrapper[4827]: I0131 04:16:15.964435 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/e4aa6c3f-da34-4b8a-93f2-c25c5be32ccd-ssh-key-openstack-edpm-ipam\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-m2dkf\" (UID: \"e4aa6c3f-da34-4b8a-93f2-c25c5be32ccd\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-m2dkf" Jan 31 04:16:15 crc kubenswrapper[4827]: I0131 04:16:15.968571 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e4aa6c3f-da34-4b8a-93f2-c25c5be32ccd-inventory\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-m2dkf\" (UID: 
\"e4aa6c3f-da34-4b8a-93f2-c25c5be32ccd\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-m2dkf" Jan 31 04:16:15 crc kubenswrapper[4827]: I0131 04:16:15.968755 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/e4aa6c3f-da34-4b8a-93f2-c25c5be32ccd-ssh-key-openstack-edpm-ipam\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-m2dkf\" (UID: \"e4aa6c3f-da34-4b8a-93f2-c25c5be32ccd\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-m2dkf" Jan 31 04:16:15 crc kubenswrapper[4827]: I0131 04:16:15.986615 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-76k55\" (UniqueName: \"kubernetes.io/projected/e4aa6c3f-da34-4b8a-93f2-c25c5be32ccd-kube-api-access-76k55\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-m2dkf\" (UID: \"e4aa6c3f-da34-4b8a-93f2-c25c5be32ccd\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-m2dkf" Jan 31 04:16:16 crc kubenswrapper[4827]: I0131 04:16:16.066410 4827 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-m2dkf" Jan 31 04:16:16 crc kubenswrapper[4827]: I0131 04:16:16.560294 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/run-os-edpm-deployment-openstack-edpm-ipam-m2dkf"] Jan 31 04:16:16 crc kubenswrapper[4827]: W0131 04:16:16.561574 4827 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode4aa6c3f_da34_4b8a_93f2_c25c5be32ccd.slice/crio-2a2a746945e4b60b0e1c1ff802d27ce0006ff74284d6334968111200e9ece7c0 WatchSource:0}: Error finding container 2a2a746945e4b60b0e1c1ff802d27ce0006ff74284d6334968111200e9ece7c0: Status 404 returned error can't find the container with id 2a2a746945e4b60b0e1c1ff802d27ce0006ff74284d6334968111200e9ece7c0 Jan 31 04:16:16 crc kubenswrapper[4827]: I0131 04:16:16.652119 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-m2dkf" event={"ID":"e4aa6c3f-da34-4b8a-93f2-c25c5be32ccd","Type":"ContainerStarted","Data":"2a2a746945e4b60b0e1c1ff802d27ce0006ff74284d6334968111200e9ece7c0"} Jan 31 04:16:17 crc kubenswrapper[4827]: I0131 04:16:17.664690 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-m2dkf" event={"ID":"e4aa6c3f-da34-4b8a-93f2-c25c5be32ccd","Type":"ContainerStarted","Data":"3607439f34fbcc92f97ca481975b36fee1c15acb07429819608d73a40860225a"} Jan 31 04:16:24 crc kubenswrapper[4827]: I0131 04:16:24.110465 4827 scope.go:117] "RemoveContainer" containerID="90034e5c5b64aa9dbc546a2986ac24abb14947ccc2f7663f502fb74fbe06de7f" Jan 31 04:16:24 crc kubenswrapper[4827]: E0131 04:16:24.111325 4827 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-jxh94_openshift-machine-config-operator(e63dbb73-e1a2-4796-83c5-2a88e55566b5)\"" pod="openshift-machine-config-operator/machine-config-daemon-jxh94" podUID="e63dbb73-e1a2-4796-83c5-2a88e55566b5" Jan 31 04:16:25 crc kubenswrapper[4827]: I0131 04:16:25.748176 4827 generic.go:334] "Generic (PLEG): container finished" podID="e4aa6c3f-da34-4b8a-93f2-c25c5be32ccd" containerID="3607439f34fbcc92f97ca481975b36fee1c15acb07429819608d73a40860225a" exitCode=0 Jan 31 04:16:25 crc kubenswrapper[4827]: I0131 04:16:25.748373 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-m2dkf" event={"ID":"e4aa6c3f-da34-4b8a-93f2-c25c5be32ccd","Type":"ContainerDied","Data":"3607439f34fbcc92f97ca481975b36fee1c15acb07429819608d73a40860225a"} Jan 31 04:16:27 crc kubenswrapper[4827]: I0131 04:16:27.164744 4827 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-m2dkf" Jan 31 04:16:27 crc kubenswrapper[4827]: I0131 04:16:27.206160 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/e4aa6c3f-da34-4b8a-93f2-c25c5be32ccd-ssh-key-openstack-edpm-ipam\") pod \"e4aa6c3f-da34-4b8a-93f2-c25c5be32ccd\" (UID: \"e4aa6c3f-da34-4b8a-93f2-c25c5be32ccd\") " Jan 31 04:16:27 crc kubenswrapper[4827]: I0131 04:16:27.206305 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e4aa6c3f-da34-4b8a-93f2-c25c5be32ccd-inventory\") pod \"e4aa6c3f-da34-4b8a-93f2-c25c5be32ccd\" (UID: \"e4aa6c3f-da34-4b8a-93f2-c25c5be32ccd\") " Jan 31 04:16:27 crc kubenswrapper[4827]: I0131 04:16:27.206379 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-76k55\" (UniqueName: 
\"kubernetes.io/projected/e4aa6c3f-da34-4b8a-93f2-c25c5be32ccd-kube-api-access-76k55\") pod \"e4aa6c3f-da34-4b8a-93f2-c25c5be32ccd\" (UID: \"e4aa6c3f-da34-4b8a-93f2-c25c5be32ccd\") " Jan 31 04:16:27 crc kubenswrapper[4827]: I0131 04:16:27.214955 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e4aa6c3f-da34-4b8a-93f2-c25c5be32ccd-kube-api-access-76k55" (OuterVolumeSpecName: "kube-api-access-76k55") pod "e4aa6c3f-da34-4b8a-93f2-c25c5be32ccd" (UID: "e4aa6c3f-da34-4b8a-93f2-c25c5be32ccd"). InnerVolumeSpecName "kube-api-access-76k55". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 04:16:27 crc kubenswrapper[4827]: I0131 04:16:27.236834 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e4aa6c3f-da34-4b8a-93f2-c25c5be32ccd-inventory" (OuterVolumeSpecName: "inventory") pod "e4aa6c3f-da34-4b8a-93f2-c25c5be32ccd" (UID: "e4aa6c3f-da34-4b8a-93f2-c25c5be32ccd"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 04:16:27 crc kubenswrapper[4827]: I0131 04:16:27.244396 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e4aa6c3f-da34-4b8a-93f2-c25c5be32ccd-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "e4aa6c3f-da34-4b8a-93f2-c25c5be32ccd" (UID: "e4aa6c3f-da34-4b8a-93f2-c25c5be32ccd"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 04:16:27 crc kubenswrapper[4827]: I0131 04:16:27.307550 4827 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/e4aa6c3f-da34-4b8a-93f2-c25c5be32ccd-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 31 04:16:27 crc kubenswrapper[4827]: I0131 04:16:27.307584 4827 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e4aa6c3f-da34-4b8a-93f2-c25c5be32ccd-inventory\") on node \"crc\" DevicePath \"\"" Jan 31 04:16:27 crc kubenswrapper[4827]: I0131 04:16:27.307594 4827 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-76k55\" (UniqueName: \"kubernetes.io/projected/e4aa6c3f-da34-4b8a-93f2-c25c5be32ccd-kube-api-access-76k55\") on node \"crc\" DevicePath \"\"" Jan 31 04:16:27 crc kubenswrapper[4827]: I0131 04:16:27.769081 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-m2dkf" event={"ID":"e4aa6c3f-da34-4b8a-93f2-c25c5be32ccd","Type":"ContainerDied","Data":"2a2a746945e4b60b0e1c1ff802d27ce0006ff74284d6334968111200e9ece7c0"} Jan 31 04:16:27 crc kubenswrapper[4827]: I0131 04:16:27.769119 4827 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-m2dkf" Jan 31 04:16:27 crc kubenswrapper[4827]: I0131 04:16:27.769127 4827 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2a2a746945e4b60b0e1c1ff802d27ce0006ff74284d6334968111200e9ece7c0" Jan 31 04:16:27 crc kubenswrapper[4827]: I0131 04:16:27.835057 4827 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-zmk54"] Jan 31 04:16:27 crc kubenswrapper[4827]: E0131 04:16:27.835565 4827 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e4aa6c3f-da34-4b8a-93f2-c25c5be32ccd" containerName="run-os-edpm-deployment-openstack-edpm-ipam" Jan 31 04:16:27 crc kubenswrapper[4827]: I0131 04:16:27.835597 4827 state_mem.go:107] "Deleted CPUSet assignment" podUID="e4aa6c3f-da34-4b8a-93f2-c25c5be32ccd" containerName="run-os-edpm-deployment-openstack-edpm-ipam" Jan 31 04:16:27 crc kubenswrapper[4827]: I0131 04:16:27.835896 4827 memory_manager.go:354] "RemoveStaleState removing state" podUID="e4aa6c3f-da34-4b8a-93f2-c25c5be32ccd" containerName="run-os-edpm-deployment-openstack-edpm-ipam" Jan 31 04:16:27 crc kubenswrapper[4827]: I0131 04:16:27.836631 4827 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-zmk54" Jan 31 04:16:27 crc kubenswrapper[4827]: I0131 04:16:27.839194 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-dklwg" Jan 31 04:16:27 crc kubenswrapper[4827]: I0131 04:16:27.839536 4827 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 31 04:16:27 crc kubenswrapper[4827]: I0131 04:16:27.839647 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 31 04:16:27 crc kubenswrapper[4827]: I0131 04:16:27.840059 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 31 04:16:27 crc kubenswrapper[4827]: I0131 04:16:27.848968 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-zmk54"] Jan 31 04:16:27 crc kubenswrapper[4827]: I0131 04:16:27.918349 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/4e0924fd-1646-476c-8afa-92e346e5b69c-inventory\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-zmk54\" (UID: \"4e0924fd-1646-476c-8afa-92e346e5b69c\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-zmk54" Jan 31 04:16:27 crc kubenswrapper[4827]: I0131 04:16:27.918409 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/4e0924fd-1646-476c-8afa-92e346e5b69c-ssh-key-openstack-edpm-ipam\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-zmk54\" (UID: \"4e0924fd-1646-476c-8afa-92e346e5b69c\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-zmk54" Jan 31 04:16:27 crc kubenswrapper[4827]: I0131 04:16:27.918490 4827 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bzx8t\" (UniqueName: \"kubernetes.io/projected/4e0924fd-1646-476c-8afa-92e346e5b69c-kube-api-access-bzx8t\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-zmk54\" (UID: \"4e0924fd-1646-476c-8afa-92e346e5b69c\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-zmk54" Jan 31 04:16:28 crc kubenswrapper[4827]: I0131 04:16:28.020333 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/4e0924fd-1646-476c-8afa-92e346e5b69c-inventory\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-zmk54\" (UID: \"4e0924fd-1646-476c-8afa-92e346e5b69c\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-zmk54" Jan 31 04:16:28 crc kubenswrapper[4827]: I0131 04:16:28.020621 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/4e0924fd-1646-476c-8afa-92e346e5b69c-ssh-key-openstack-edpm-ipam\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-zmk54\" (UID: \"4e0924fd-1646-476c-8afa-92e346e5b69c\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-zmk54" Jan 31 04:16:28 crc kubenswrapper[4827]: I0131 04:16:28.020699 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bzx8t\" (UniqueName: \"kubernetes.io/projected/4e0924fd-1646-476c-8afa-92e346e5b69c-kube-api-access-bzx8t\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-zmk54\" (UID: \"4e0924fd-1646-476c-8afa-92e346e5b69c\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-zmk54" Jan 31 04:16:28 crc kubenswrapper[4827]: I0131 04:16:28.025804 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/4e0924fd-1646-476c-8afa-92e346e5b69c-inventory\") pod 
\"reboot-os-edpm-deployment-openstack-edpm-ipam-zmk54\" (UID: \"4e0924fd-1646-476c-8afa-92e346e5b69c\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-zmk54" Jan 31 04:16:28 crc kubenswrapper[4827]: I0131 04:16:28.033316 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/4e0924fd-1646-476c-8afa-92e346e5b69c-ssh-key-openstack-edpm-ipam\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-zmk54\" (UID: \"4e0924fd-1646-476c-8afa-92e346e5b69c\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-zmk54" Jan 31 04:16:28 crc kubenswrapper[4827]: I0131 04:16:28.043571 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bzx8t\" (UniqueName: \"kubernetes.io/projected/4e0924fd-1646-476c-8afa-92e346e5b69c-kube-api-access-bzx8t\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-zmk54\" (UID: \"4e0924fd-1646-476c-8afa-92e346e5b69c\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-zmk54" Jan 31 04:16:28 crc kubenswrapper[4827]: I0131 04:16:28.219796 4827 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-zmk54" Jan 31 04:16:28 crc kubenswrapper[4827]: I0131 04:16:28.755236 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-zmk54"] Jan 31 04:16:28 crc kubenswrapper[4827]: I0131 04:16:28.778389 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-zmk54" event={"ID":"4e0924fd-1646-476c-8afa-92e346e5b69c","Type":"ContainerStarted","Data":"f88aff6438387430af8d3ab448557aa3c71fbb889ba1e31d25e14efd69854b91"} Jan 31 04:16:29 crc kubenswrapper[4827]: I0131 04:16:29.788033 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-zmk54" event={"ID":"4e0924fd-1646-476c-8afa-92e346e5b69c","Type":"ContainerStarted","Data":"04bd6fd9dfda445ea86a47a6f0671230853c0cfabdc00cbcaf995834b0eea223"} Jan 31 04:16:29 crc kubenswrapper[4827]: I0131 04:16:29.809858 4827 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-zmk54" podStartSLOduration=2.380480291 podStartE2EDuration="2.809839216s" podCreationTimestamp="2026-01-31 04:16:27 +0000 UTC" firstStartedPulling="2026-01-31 04:16:28.755249464 +0000 UTC m=+1781.442329913" lastFinishedPulling="2026-01-31 04:16:29.184608389 +0000 UTC m=+1781.871688838" observedRunningTime="2026-01-31 04:16:29.808689051 +0000 UTC m=+1782.495769540" watchObservedRunningTime="2026-01-31 04:16:29.809839216 +0000 UTC m=+1782.496919665" Jan 31 04:16:37 crc kubenswrapper[4827]: I0131 04:16:37.110189 4827 scope.go:117] "RemoveContainer" containerID="90034e5c5b64aa9dbc546a2986ac24abb14947ccc2f7663f502fb74fbe06de7f" Jan 31 04:16:37 crc kubenswrapper[4827]: E0131 04:16:37.111103 4827 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: 
\"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jxh94_openshift-machine-config-operator(e63dbb73-e1a2-4796-83c5-2a88e55566b5)\"" pod="openshift-machine-config-operator/machine-config-daemon-jxh94" podUID="e63dbb73-e1a2-4796-83c5-2a88e55566b5" Jan 31 04:16:39 crc kubenswrapper[4827]: I0131 04:16:39.881339 4827 generic.go:334] "Generic (PLEG): container finished" podID="4e0924fd-1646-476c-8afa-92e346e5b69c" containerID="04bd6fd9dfda445ea86a47a6f0671230853c0cfabdc00cbcaf995834b0eea223" exitCode=0 Jan 31 04:16:39 crc kubenswrapper[4827]: I0131 04:16:39.881421 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-zmk54" event={"ID":"4e0924fd-1646-476c-8afa-92e346e5b69c","Type":"ContainerDied","Data":"04bd6fd9dfda445ea86a47a6f0671230853c0cfabdc00cbcaf995834b0eea223"} Jan 31 04:16:41 crc kubenswrapper[4827]: I0131 04:16:41.058611 4827 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-db-create-ckptz"] Jan 31 04:16:41 crc kubenswrapper[4827]: I0131 04:16:41.068561 4827 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-de72-account-create-update-h2jtp"] Jan 31 04:16:41 crc kubenswrapper[4827]: I0131 04:16:41.076130 4827 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-db-create-rsvl4"] Jan 31 04:16:41 crc kubenswrapper[4827]: I0131 04:16:41.084472 4827 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-051e-account-create-update-bdjbn"] Jan 31 04:16:41 crc kubenswrapper[4827]: I0131 04:16:41.098920 4827 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-db-create-jcb2z"] Jan 31 04:16:41 crc kubenswrapper[4827]: I0131 04:16:41.104166 4827 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-05ff-account-create-update-sgdb6"] Jan 31 04:16:41 crc kubenswrapper[4827]: I0131 04:16:41.111241 4827 kubelet.go:2431] "SyncLoop REMOVE" 
source="api" pods=["openstack/nova-api-051e-account-create-update-bdjbn"] Jan 31 04:16:41 crc kubenswrapper[4827]: I0131 04:16:41.117287 4827 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-db-create-jcb2z"] Jan 31 04:16:41 crc kubenswrapper[4827]: I0131 04:16:41.124182 4827 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-de72-account-create-update-h2jtp"] Jan 31 04:16:41 crc kubenswrapper[4827]: I0131 04:16:41.130956 4827 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-db-create-ckptz"] Jan 31 04:16:41 crc kubenswrapper[4827]: I0131 04:16:41.139268 4827 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-db-create-rsvl4"] Jan 31 04:16:41 crc kubenswrapper[4827]: I0131 04:16:41.147913 4827 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-05ff-account-create-update-sgdb6"] Jan 31 04:16:41 crc kubenswrapper[4827]: I0131 04:16:41.309630 4827 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-zmk54" Jan 31 04:16:41 crc kubenswrapper[4827]: I0131 04:16:41.392183 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/4e0924fd-1646-476c-8afa-92e346e5b69c-ssh-key-openstack-edpm-ipam\") pod \"4e0924fd-1646-476c-8afa-92e346e5b69c\" (UID: \"4e0924fd-1646-476c-8afa-92e346e5b69c\") " Jan 31 04:16:41 crc kubenswrapper[4827]: I0131 04:16:41.392352 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bzx8t\" (UniqueName: \"kubernetes.io/projected/4e0924fd-1646-476c-8afa-92e346e5b69c-kube-api-access-bzx8t\") pod \"4e0924fd-1646-476c-8afa-92e346e5b69c\" (UID: \"4e0924fd-1646-476c-8afa-92e346e5b69c\") " Jan 31 04:16:41 crc kubenswrapper[4827]: I0131 04:16:41.392506 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/4e0924fd-1646-476c-8afa-92e346e5b69c-inventory\") pod \"4e0924fd-1646-476c-8afa-92e346e5b69c\" (UID: \"4e0924fd-1646-476c-8afa-92e346e5b69c\") " Jan 31 04:16:41 crc kubenswrapper[4827]: I0131 04:16:41.397943 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4e0924fd-1646-476c-8afa-92e346e5b69c-kube-api-access-bzx8t" (OuterVolumeSpecName: "kube-api-access-bzx8t") pod "4e0924fd-1646-476c-8afa-92e346e5b69c" (UID: "4e0924fd-1646-476c-8afa-92e346e5b69c"). InnerVolumeSpecName "kube-api-access-bzx8t". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 04:16:41 crc kubenswrapper[4827]: I0131 04:16:41.416401 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4e0924fd-1646-476c-8afa-92e346e5b69c-inventory" (OuterVolumeSpecName: "inventory") pod "4e0924fd-1646-476c-8afa-92e346e5b69c" (UID: "4e0924fd-1646-476c-8afa-92e346e5b69c"). 
InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 04:16:41 crc kubenswrapper[4827]: I0131 04:16:41.416825 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4e0924fd-1646-476c-8afa-92e346e5b69c-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "4e0924fd-1646-476c-8afa-92e346e5b69c" (UID: "4e0924fd-1646-476c-8afa-92e346e5b69c"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 04:16:41 crc kubenswrapper[4827]: I0131 04:16:41.494393 4827 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/4e0924fd-1646-476c-8afa-92e346e5b69c-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 31 04:16:41 crc kubenswrapper[4827]: I0131 04:16:41.494429 4827 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bzx8t\" (UniqueName: \"kubernetes.io/projected/4e0924fd-1646-476c-8afa-92e346e5b69c-kube-api-access-bzx8t\") on node \"crc\" DevicePath \"\"" Jan 31 04:16:41 crc kubenswrapper[4827]: I0131 04:16:41.494439 4827 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/4e0924fd-1646-476c-8afa-92e346e5b69c-inventory\") on node \"crc\" DevicePath \"\"" Jan 31 04:16:41 crc kubenswrapper[4827]: I0131 04:16:41.898633 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-zmk54" event={"ID":"4e0924fd-1646-476c-8afa-92e346e5b69c","Type":"ContainerDied","Data":"f88aff6438387430af8d3ab448557aa3c71fbb889ba1e31d25e14efd69854b91"} Jan 31 04:16:41 crc kubenswrapper[4827]: I0131 04:16:41.898671 4827 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f88aff6438387430af8d3ab448557aa3c71fbb889ba1e31d25e14efd69854b91" Jan 31 04:16:41 crc kubenswrapper[4827]: I0131 
04:16:41.898694 4827 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-zmk54" Jan 31 04:16:42 crc kubenswrapper[4827]: I0131 04:16:42.121425 4827 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="313bafd6-600c-448b-aa68-7cf6a5742a68" path="/var/lib/kubelet/pods/313bafd6-600c-448b-aa68-7cf6a5742a68/volumes" Jan 31 04:16:42 crc kubenswrapper[4827]: I0131 04:16:42.122196 4827 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3d339ae6-8776-4cfc-8ed1-baea0d422d07" path="/var/lib/kubelet/pods/3d339ae6-8776-4cfc-8ed1-baea0d422d07/volumes" Jan 31 04:16:42 crc kubenswrapper[4827]: I0131 04:16:42.122962 4827 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4c685922-00e2-4bd1-91d2-8810a5d45da3" path="/var/lib/kubelet/pods/4c685922-00e2-4bd1-91d2-8810a5d45da3/volumes" Jan 31 04:16:42 crc kubenswrapper[4827]: I0131 04:16:42.123663 4827 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="781f3a29-4f57-4592-b5c7-c492daf58f6c" path="/var/lib/kubelet/pods/781f3a29-4f57-4592-b5c7-c492daf58f6c/volumes" Jan 31 04:16:42 crc kubenswrapper[4827]: I0131 04:16:42.125053 4827 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="81308c8d-3b7b-43d2-9364-b8e732430dfd" path="/var/lib/kubelet/pods/81308c8d-3b7b-43d2-9364-b8e732430dfd/volumes" Jan 31 04:16:42 crc kubenswrapper[4827]: I0131 04:16:42.125754 4827 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="965e3f3a-3029-44dc-bdb2-9889b8f90fbe" path="/var/lib/kubelet/pods/965e3f3a-3029-44dc-bdb2-9889b8f90fbe/volumes" Jan 31 04:16:50 crc kubenswrapper[4827]: I0131 04:16:50.110551 4827 scope.go:117] "RemoveContainer" containerID="90034e5c5b64aa9dbc546a2986ac24abb14947ccc2f7663f502fb74fbe06de7f" Jan 31 04:16:50 crc kubenswrapper[4827]: E0131 04:16:50.111330 4827 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" 
for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jxh94_openshift-machine-config-operator(e63dbb73-e1a2-4796-83c5-2a88e55566b5)\"" pod="openshift-machine-config-operator/machine-config-daemon-jxh94" podUID="e63dbb73-e1a2-4796-83c5-2a88e55566b5" Jan 31 04:17:00 crc kubenswrapper[4827]: I0131 04:17:00.160019 4827 scope.go:117] "RemoveContainer" containerID="923d1d9733a967a69a6ab240a89912116fc736c7ddbd55c624ccc26449046231" Jan 31 04:17:00 crc kubenswrapper[4827]: I0131 04:17:00.201956 4827 scope.go:117] "RemoveContainer" containerID="255c20d1b4302c4a2a4b1f51128524c5fc0025055214b94554a7fd5fb0148e46" Jan 31 04:17:00 crc kubenswrapper[4827]: I0131 04:17:00.242615 4827 scope.go:117] "RemoveContainer" containerID="663eb4744b59d4bce6c3dd1d4f8b6c5a46f37a515e993bd92241f912a81aea97" Jan 31 04:17:00 crc kubenswrapper[4827]: I0131 04:17:00.280576 4827 scope.go:117] "RemoveContainer" containerID="4320f6f041a9d9e1712bd7029b4e366165bb1d147029047465b735cd12d5620d" Jan 31 04:17:00 crc kubenswrapper[4827]: I0131 04:17:00.326015 4827 scope.go:117] "RemoveContainer" containerID="a2ee41b03be62056bbfb829764ff685e5d407e417826e887a7f88f044e47856f" Jan 31 04:17:00 crc kubenswrapper[4827]: I0131 04:17:00.378797 4827 scope.go:117] "RemoveContainer" containerID="207a0998d4ef930dda1db4d2f34029681d29c6de17e1e5691dd889131e4ce178" Jan 31 04:17:03 crc kubenswrapper[4827]: I0131 04:17:03.111459 4827 scope.go:117] "RemoveContainer" containerID="90034e5c5b64aa9dbc546a2986ac24abb14947ccc2f7663f502fb74fbe06de7f" Jan 31 04:17:03 crc kubenswrapper[4827]: E0131 04:17:03.112866 4827 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jxh94_openshift-machine-config-operator(e63dbb73-e1a2-4796-83c5-2a88e55566b5)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-jxh94" podUID="e63dbb73-e1a2-4796-83c5-2a88e55566b5" Jan 31 04:17:08 crc kubenswrapper[4827]: I0131 04:17:08.067466 4827 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-n7pzh"] Jan 31 04:17:08 crc kubenswrapper[4827]: I0131 04:17:08.077938 4827 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-n7pzh"] Jan 31 04:17:08 crc kubenswrapper[4827]: I0131 04:17:08.126674 4827 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="570fdf01-16f4-4a1b-91a8-88b5e9447309" path="/var/lib/kubelet/pods/570fdf01-16f4-4a1b-91a8-88b5e9447309/volumes" Jan 31 04:17:17 crc kubenswrapper[4827]: I0131 04:17:17.110216 4827 scope.go:117] "RemoveContainer" containerID="90034e5c5b64aa9dbc546a2986ac24abb14947ccc2f7663f502fb74fbe06de7f" Jan 31 04:17:17 crc kubenswrapper[4827]: E0131 04:17:17.111061 4827 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jxh94_openshift-machine-config-operator(e63dbb73-e1a2-4796-83c5-2a88e55566b5)\"" pod="openshift-machine-config-operator/machine-config-daemon-jxh94" podUID="e63dbb73-e1a2-4796-83c5-2a88e55566b5" Jan 31 04:17:29 crc kubenswrapper[4827]: I0131 04:17:29.109844 4827 scope.go:117] "RemoveContainer" containerID="90034e5c5b64aa9dbc546a2986ac24abb14947ccc2f7663f502fb74fbe06de7f" Jan 31 04:17:29 crc kubenswrapper[4827]: I0131 04:17:29.335952 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-jxh94" event={"ID":"e63dbb73-e1a2-4796-83c5-2a88e55566b5","Type":"ContainerStarted","Data":"e0cc8c33582f61217ed17d682a2c475099f3ddae73c64c19286eba3c2b49542b"} Jan 31 04:17:30 crc kubenswrapper[4827]: I0131 04:17:30.041039 4827 kubelet.go:2437] "SyncLoop DELETE" 
source="api" pods=["openstack/nova-cell0-cell-mapping-4ggj8"] Jan 31 04:17:30 crc kubenswrapper[4827]: I0131 04:17:30.053294 4827 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-cell-mapping-4ggj8"] Jan 31 04:17:30 crc kubenswrapper[4827]: I0131 04:17:30.131166 4827 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d4cc6b8f-4ec0-408e-a494-3e3eb5ca0ecb" path="/var/lib/kubelet/pods/d4cc6b8f-4ec0-408e-a494-3e3eb5ca0ecb/volumes" Jan 31 04:17:32 crc kubenswrapper[4827]: I0131 04:17:32.043765 4827 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-dhwln"] Jan 31 04:17:32 crc kubenswrapper[4827]: I0131 04:17:32.053466 4827 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-dhwln"] Jan 31 04:17:32 crc kubenswrapper[4827]: I0131 04:17:32.120559 4827 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="455ea1a0-7a10-4f2c-ae49-9a52d3e72771" path="/var/lib/kubelet/pods/455ea1a0-7a10-4f2c-ae49-9a52d3e72771/volumes" Jan 31 04:18:00 crc kubenswrapper[4827]: I0131 04:18:00.563811 4827 scope.go:117] "RemoveContainer" containerID="019c6ed39d7653912087b16a1b28c8c5ff694ea01e2675fba3ee41184a2d78bc" Jan 31 04:18:00 crc kubenswrapper[4827]: I0131 04:18:00.606407 4827 scope.go:117] "RemoveContainer" containerID="4b0cd2aec0cce926d5bb6a28f74d048fa11b20b8a36f620f951358ef2a3a7a1f" Jan 31 04:18:00 crc kubenswrapper[4827]: I0131 04:18:00.649420 4827 scope.go:117] "RemoveContainer" containerID="de3cf9d90c91f1d425385ec340296bbab81a49f9c3c6b9735c0e676fe068ac3f" Jan 31 04:18:15 crc kubenswrapper[4827]: I0131 04:18:15.053543 4827 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-cell-mapping-8kkgm"] Jan 31 04:18:15 crc kubenswrapper[4827]: I0131 04:18:15.064122 4827 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-cell-mapping-8kkgm"] Jan 31 04:18:16 crc kubenswrapper[4827]: I0131 04:18:16.120662 
4827 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="29a3dce1-3b8e-4c59-b718-5a8e43971938" path="/var/lib/kubelet/pods/29a3dce1-3b8e-4c59-b718-5a8e43971938/volumes" Jan 31 04:19:00 crc kubenswrapper[4827]: I0131 04:19:00.781287 4827 scope.go:117] "RemoveContainer" containerID="d5805197006be6d2bafe7e04d72ed4bbfe45f5595fbd63017fb55eda1f0d817f" Jan 31 04:19:47 crc kubenswrapper[4827]: I0131 04:19:47.371833 4827 patch_prober.go:28] interesting pod/machine-config-daemon-jxh94 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 31 04:19:47 crc kubenswrapper[4827]: I0131 04:19:47.372431 4827 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-jxh94" podUID="e63dbb73-e1a2-4796-83c5-2a88e55566b5" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 31 04:20:17 crc kubenswrapper[4827]: I0131 04:20:17.371034 4827 patch_prober.go:28] interesting pod/machine-config-daemon-jxh94 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 31 04:20:17 crc kubenswrapper[4827]: I0131 04:20:17.371783 4827 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-jxh94" podUID="e63dbb73-e1a2-4796-83c5-2a88e55566b5" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 31 04:20:18 crc kubenswrapper[4827]: E0131 04:20:18.355936 4827 upgradeaware.go:427] Error proxying data from client to backend: 
readfrom tcp 38.102.83.80:54166->38.102.83.80:42075: write tcp 38.102.83.80:54166->38.102.83.80:42075: write: connection reset by peer Jan 31 04:20:42 crc kubenswrapper[4827]: I0131 04:20:42.394978 4827 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-zxmt5"] Jan 31 04:20:42 crc kubenswrapper[4827]: E0131 04:20:42.396044 4827 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4e0924fd-1646-476c-8afa-92e346e5b69c" containerName="reboot-os-edpm-deployment-openstack-edpm-ipam" Jan 31 04:20:42 crc kubenswrapper[4827]: I0131 04:20:42.396061 4827 state_mem.go:107] "Deleted CPUSet assignment" podUID="4e0924fd-1646-476c-8afa-92e346e5b69c" containerName="reboot-os-edpm-deployment-openstack-edpm-ipam" Jan 31 04:20:42 crc kubenswrapper[4827]: I0131 04:20:42.396294 4827 memory_manager.go:354] "RemoveStaleState removing state" podUID="4e0924fd-1646-476c-8afa-92e346e5b69c" containerName="reboot-os-edpm-deployment-openstack-edpm-ipam" Jan 31 04:20:42 crc kubenswrapper[4827]: I0131 04:20:42.397799 4827 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-zxmt5" Jan 31 04:20:42 crc kubenswrapper[4827]: I0131 04:20:42.402366 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-zxmt5"] Jan 31 04:20:42 crc kubenswrapper[4827]: I0131 04:20:42.532686 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5c674b79-df74-457b-b678-4eccd7851962-utilities\") pod \"redhat-operators-zxmt5\" (UID: \"5c674b79-df74-457b-b678-4eccd7851962\") " pod="openshift-marketplace/redhat-operators-zxmt5" Jan 31 04:20:42 crc kubenswrapper[4827]: I0131 04:20:42.532767 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c2mtk\" (UniqueName: \"kubernetes.io/projected/5c674b79-df74-457b-b678-4eccd7851962-kube-api-access-c2mtk\") pod \"redhat-operators-zxmt5\" (UID: \"5c674b79-df74-457b-b678-4eccd7851962\") " pod="openshift-marketplace/redhat-operators-zxmt5" Jan 31 04:20:42 crc kubenswrapper[4827]: I0131 04:20:42.533386 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5c674b79-df74-457b-b678-4eccd7851962-catalog-content\") pod \"redhat-operators-zxmt5\" (UID: \"5c674b79-df74-457b-b678-4eccd7851962\") " pod="openshift-marketplace/redhat-operators-zxmt5" Jan 31 04:20:42 crc kubenswrapper[4827]: I0131 04:20:42.635595 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5c674b79-df74-457b-b678-4eccd7851962-utilities\") pod \"redhat-operators-zxmt5\" (UID: \"5c674b79-df74-457b-b678-4eccd7851962\") " pod="openshift-marketplace/redhat-operators-zxmt5" Jan 31 04:20:42 crc kubenswrapper[4827]: I0131 04:20:42.635675 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-c2mtk\" (UniqueName: \"kubernetes.io/projected/5c674b79-df74-457b-b678-4eccd7851962-kube-api-access-c2mtk\") pod \"redhat-operators-zxmt5\" (UID: \"5c674b79-df74-457b-b678-4eccd7851962\") " pod="openshift-marketplace/redhat-operators-zxmt5" Jan 31 04:20:42 crc kubenswrapper[4827]: I0131 04:20:42.635792 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5c674b79-df74-457b-b678-4eccd7851962-catalog-content\") pod \"redhat-operators-zxmt5\" (UID: \"5c674b79-df74-457b-b678-4eccd7851962\") " pod="openshift-marketplace/redhat-operators-zxmt5" Jan 31 04:20:42 crc kubenswrapper[4827]: I0131 04:20:42.636114 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5c674b79-df74-457b-b678-4eccd7851962-utilities\") pod \"redhat-operators-zxmt5\" (UID: \"5c674b79-df74-457b-b678-4eccd7851962\") " pod="openshift-marketplace/redhat-operators-zxmt5" Jan 31 04:20:42 crc kubenswrapper[4827]: I0131 04:20:42.636410 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5c674b79-df74-457b-b678-4eccd7851962-catalog-content\") pod \"redhat-operators-zxmt5\" (UID: \"5c674b79-df74-457b-b678-4eccd7851962\") " pod="openshift-marketplace/redhat-operators-zxmt5" Jan 31 04:20:42 crc kubenswrapper[4827]: I0131 04:20:42.662254 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c2mtk\" (UniqueName: \"kubernetes.io/projected/5c674b79-df74-457b-b678-4eccd7851962-kube-api-access-c2mtk\") pod \"redhat-operators-zxmt5\" (UID: \"5c674b79-df74-457b-b678-4eccd7851962\") " pod="openshift-marketplace/redhat-operators-zxmt5" Jan 31 04:20:42 crc kubenswrapper[4827]: I0131 04:20:42.725728 4827 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-zxmt5" Jan 31 04:20:43 crc kubenswrapper[4827]: I0131 04:20:43.220053 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-zxmt5"] Jan 31 04:20:43 crc kubenswrapper[4827]: I0131 04:20:43.302380 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-zxmt5" event={"ID":"5c674b79-df74-457b-b678-4eccd7851962","Type":"ContainerStarted","Data":"ea15d59b4f0421a11494ca37bbc7db873855f780bfd798bedd9ed55369b2979c"} Jan 31 04:20:44 crc kubenswrapper[4827]: I0131 04:20:44.310583 4827 generic.go:334] "Generic (PLEG): container finished" podID="5c674b79-df74-457b-b678-4eccd7851962" containerID="02cd3732a97cc4457682a168d252e623285e9a9acc5bc14774a769338c18219d" exitCode=0 Jan 31 04:20:44 crc kubenswrapper[4827]: I0131 04:20:44.310639 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-zxmt5" event={"ID":"5c674b79-df74-457b-b678-4eccd7851962","Type":"ContainerDied","Data":"02cd3732a97cc4457682a168d252e623285e9a9acc5bc14774a769338c18219d"} Jan 31 04:20:44 crc kubenswrapper[4827]: I0131 04:20:44.312824 4827 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 31 04:20:46 crc kubenswrapper[4827]: I0131 04:20:46.333236 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-zxmt5" event={"ID":"5c674b79-df74-457b-b678-4eccd7851962","Type":"ContainerStarted","Data":"f4e96f36968c614fba0b678e7bfdd591b29cc761ece8827b7a560211fab78054"} Jan 31 04:20:47 crc kubenswrapper[4827]: I0131 04:20:47.371499 4827 patch_prober.go:28] interesting pod/machine-config-daemon-jxh94 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 31 04:20:47 crc 
kubenswrapper[4827]: I0131 04:20:47.371591 4827 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-jxh94" podUID="e63dbb73-e1a2-4796-83c5-2a88e55566b5" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 31 04:20:47 crc kubenswrapper[4827]: I0131 04:20:47.371650 4827 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-jxh94" Jan 31 04:20:47 crc kubenswrapper[4827]: I0131 04:20:47.372328 4827 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"e0cc8c33582f61217ed17d682a2c475099f3ddae73c64c19286eba3c2b49542b"} pod="openshift-machine-config-operator/machine-config-daemon-jxh94" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 31 04:20:47 crc kubenswrapper[4827]: I0131 04:20:47.372407 4827 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-jxh94" podUID="e63dbb73-e1a2-4796-83c5-2a88e55566b5" containerName="machine-config-daemon" containerID="cri-o://e0cc8c33582f61217ed17d682a2c475099f3ddae73c64c19286eba3c2b49542b" gracePeriod=600 Jan 31 04:20:48 crc kubenswrapper[4827]: I0131 04:20:48.357403 4827 generic.go:334] "Generic (PLEG): container finished" podID="5c674b79-df74-457b-b678-4eccd7851962" containerID="f4e96f36968c614fba0b678e7bfdd591b29cc761ece8827b7a560211fab78054" exitCode=0 Jan 31 04:20:48 crc kubenswrapper[4827]: I0131 04:20:48.357507 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-zxmt5" event={"ID":"5c674b79-df74-457b-b678-4eccd7851962","Type":"ContainerDied","Data":"f4e96f36968c614fba0b678e7bfdd591b29cc761ece8827b7a560211fab78054"} Jan 31 04:20:48 
crc kubenswrapper[4827]: I0131 04:20:48.369230 4827 generic.go:334] "Generic (PLEG): container finished" podID="e63dbb73-e1a2-4796-83c5-2a88e55566b5" containerID="e0cc8c33582f61217ed17d682a2c475099f3ddae73c64c19286eba3c2b49542b" exitCode=0 Jan 31 04:20:48 crc kubenswrapper[4827]: I0131 04:20:48.369359 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-jxh94" event={"ID":"e63dbb73-e1a2-4796-83c5-2a88e55566b5","Type":"ContainerDied","Data":"e0cc8c33582f61217ed17d682a2c475099f3ddae73c64c19286eba3c2b49542b"} Jan 31 04:20:48 crc kubenswrapper[4827]: I0131 04:20:48.369441 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-jxh94" event={"ID":"e63dbb73-e1a2-4796-83c5-2a88e55566b5","Type":"ContainerStarted","Data":"86202cb505a10ccb4abc6da10b9dbc02e9e23b1f36d68806a25ab7c0fcf67d39"} Jan 31 04:20:48 crc kubenswrapper[4827]: I0131 04:20:48.369526 4827 scope.go:117] "RemoveContainer" containerID="90034e5c5b64aa9dbc546a2986ac24abb14947ccc2f7663f502fb74fbe06de7f" Jan 31 04:20:49 crc kubenswrapper[4827]: I0131 04:20:49.389175 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-zxmt5" event={"ID":"5c674b79-df74-457b-b678-4eccd7851962","Type":"ContainerStarted","Data":"109034e3de76e8ded59a06605b6c2029b19a63761ee18bba353cbaa0d5e028f8"} Jan 31 04:20:49 crc kubenswrapper[4827]: I0131 04:20:49.412683 4827 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-zxmt5" podStartSLOduration=2.926471754 podStartE2EDuration="7.412664848s" podCreationTimestamp="2026-01-31 04:20:42 +0000 UTC" firstStartedPulling="2026-01-31 04:20:44.312540352 +0000 UTC m=+2036.999620811" lastFinishedPulling="2026-01-31 04:20:48.798733456 +0000 UTC m=+2041.485813905" observedRunningTime="2026-01-31 04:20:49.409795035 +0000 UTC m=+2042.096875494" watchObservedRunningTime="2026-01-31 
04:20:49.412664848 +0000 UTC m=+2042.099745307" Jan 31 04:20:52 crc kubenswrapper[4827]: I0131 04:20:52.726483 4827 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-zxmt5" Jan 31 04:20:52 crc kubenswrapper[4827]: I0131 04:20:52.727096 4827 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-zxmt5" Jan 31 04:20:53 crc kubenswrapper[4827]: I0131 04:20:53.770182 4827 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-zxmt5" podUID="5c674b79-df74-457b-b678-4eccd7851962" containerName="registry-server" probeResult="failure" output=< Jan 31 04:20:53 crc kubenswrapper[4827]: timeout: failed to connect service ":50051" within 1s Jan 31 04:20:53 crc kubenswrapper[4827]: > Jan 31 04:20:59 crc kubenswrapper[4827]: I0131 04:20:59.779100 4827 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-f4vtw"] Jan 31 04:20:59 crc kubenswrapper[4827]: I0131 04:20:59.789165 4827 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-92hbv"] Jan 31 04:20:59 crc kubenswrapper[4827]: I0131 04:20:59.798114 4827 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/validate-network-edpm-deployment-openstack-edpm-ipam-rvfg8"] Jan 31 04:20:59 crc kubenswrapper[4827]: I0131 04:20:59.807332 4827 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-zmk54"] Jan 31 04:20:59 crc kubenswrapper[4827]: I0131 04:20:59.814093 4827 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/install-os-edpm-deployment-openstack-edpm-ipam-pht2j"] Jan 31 04:20:59 crc kubenswrapper[4827]: I0131 04:20:59.821697 4827 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/run-os-edpm-deployment-openstack-edpm-ipam-m2dkf"] Jan 31 04:20:59 crc 
kubenswrapper[4827]: I0131 04:20:59.828202 4827 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ssh-known-hosts-edpm-deployment-qs99f"] Jan 31 04:20:59 crc kubenswrapper[4827]: I0131 04:20:59.834248 4827 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-65xhl"] Jan 31 04:20:59 crc kubenswrapper[4827]: I0131 04:20:59.845984 4827 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/validate-network-edpm-deployment-openstack-edpm-ipam-rvfg8"] Jan 31 04:20:59 crc kubenswrapper[4827]: I0131 04:20:59.857680 4827 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-2899m"] Jan 31 04:20:59 crc kubenswrapper[4827]: I0131 04:20:59.865442 4827 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ssh-known-hosts-edpm-deployment-qs99f"] Jan 31 04:20:59 crc kubenswrapper[4827]: I0131 04:20:59.872478 4827 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/run-os-edpm-deployment-openstack-edpm-ipam-m2dkf"] Jan 31 04:20:59 crc kubenswrapper[4827]: I0131 04:20:59.879017 4827 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/configure-network-edpm-deployment-openstack-edpm-ipam-fs8lt"] Jan 31 04:20:59 crc kubenswrapper[4827]: I0131 04:20:59.884775 4827 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-zmk54"] Jan 31 04:20:59 crc kubenswrapper[4827]: I0131 04:20:59.890467 4827 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-f4vtw"] Jan 31 04:20:59 crc kubenswrapper[4827]: I0131 04:20:59.896943 4827 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/install-os-edpm-deployment-openstack-edpm-ipam-pht2j"] Jan 31 04:20:59 crc kubenswrapper[4827]: I0131 04:20:59.904370 4827 kubelet.go:2431] "SyncLoop REMOVE" source="api" 
pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-2899m"] Jan 31 04:20:59 crc kubenswrapper[4827]: I0131 04:20:59.911302 4827 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-65xhl"] Jan 31 04:20:59 crc kubenswrapper[4827]: I0131 04:20:59.918048 4827 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-92hbv"] Jan 31 04:20:59 crc kubenswrapper[4827]: I0131 04:20:59.925866 4827 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/configure-network-edpm-deployment-openstack-edpm-ipam-fs8lt"] Jan 31 04:21:00 crc kubenswrapper[4827]: I0131 04:21:00.118599 4827 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="13ca4fc0-b1d0-439d-917c-ceac9179e614" path="/var/lib/kubelet/pods/13ca4fc0-b1d0-439d-917c-ceac9179e614/volumes" Jan 31 04:21:00 crc kubenswrapper[4827]: I0131 04:21:00.119155 4827 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4e0924fd-1646-476c-8afa-92e346e5b69c" path="/var/lib/kubelet/pods/4e0924fd-1646-476c-8afa-92e346e5b69c/volumes" Jan 31 04:21:00 crc kubenswrapper[4827]: I0131 04:21:00.119747 4827 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9a5bd016-448a-48cc-bf05-11fe0b0040bc" path="/var/lib/kubelet/pods/9a5bd016-448a-48cc-bf05-11fe0b0040bc/volumes" Jan 31 04:21:00 crc kubenswrapper[4827]: I0131 04:21:00.120366 4827 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b03a5324-2c0f-4d1e-8fda-c00f8bbd04ba" path="/var/lib/kubelet/pods/b03a5324-2c0f-4d1e-8fda-c00f8bbd04ba/volumes" Jan 31 04:21:00 crc kubenswrapper[4827]: I0131 04:21:00.121376 4827 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b51cb513-8fdc-411c-a9c0-f6065b19ef8d" path="/var/lib/kubelet/pods/b51cb513-8fdc-411c-a9c0-f6065b19ef8d/volumes" Jan 31 04:21:00 crc kubenswrapper[4827]: I0131 04:21:00.121978 4827 kubelet_volumes.go:163] "Cleaned up 
orphaned pod volumes dir" podUID="c286cc5f-448d-49d6-8b78-9d84ba0e11d9" path="/var/lib/kubelet/pods/c286cc5f-448d-49d6-8b78-9d84ba0e11d9/volumes" Jan 31 04:21:00 crc kubenswrapper[4827]: I0131 04:21:00.122525 4827 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d0b3fbf5-33c0-49e9-9464-d21a13727047" path="/var/lib/kubelet/pods/d0b3fbf5-33c0-49e9-9464-d21a13727047/volumes" Jan 31 04:21:00 crc kubenswrapper[4827]: I0131 04:21:00.123614 4827 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e4aa6c3f-da34-4b8a-93f2-c25c5be32ccd" path="/var/lib/kubelet/pods/e4aa6c3f-da34-4b8a-93f2-c25c5be32ccd/volumes" Jan 31 04:21:00 crc kubenswrapper[4827]: I0131 04:21:00.124234 4827 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e7919b7b-6239-444c-9da1-8dedcee8ca3e" path="/var/lib/kubelet/pods/e7919b7b-6239-444c-9da1-8dedcee8ca3e/volumes" Jan 31 04:21:00 crc kubenswrapper[4827]: I0131 04:21:00.124727 4827 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f21965f4-36e7-4c6b-9377-1da6c40e9b02" path="/var/lib/kubelet/pods/f21965f4-36e7-4c6b-9377-1da6c40e9b02/volumes" Jan 31 04:21:00 crc kubenswrapper[4827]: I0131 04:21:00.897389 4827 scope.go:117] "RemoveContainer" containerID="b00ccd405aad1863b5ca1fd98b7053522b0abaa6e00a4c59460fff53423f1bd1" Jan 31 04:21:00 crc kubenswrapper[4827]: I0131 04:21:00.933680 4827 scope.go:117] "RemoveContainer" containerID="07b106240658fe167ec56c5ef19617568b2c60f29b56621ad79efb986d92b7de" Jan 31 04:21:01 crc kubenswrapper[4827]: I0131 04:21:01.019230 4827 scope.go:117] "RemoveContainer" containerID="942d5457a8b3e345b742d4e0acf06045c9338756c7cf5d7f38960fd1ee91adfa" Jan 31 04:21:01 crc kubenswrapper[4827]: I0131 04:21:01.063353 4827 scope.go:117] "RemoveContainer" containerID="7e00cc5d9e0bdb9ec6999521d593cf739eff05f3086722432c0dd62a12b8210a" Jan 31 04:21:01 crc kubenswrapper[4827]: I0131 04:21:01.104568 4827 scope.go:117] "RemoveContainer" 
containerID="08e0116fbf3a38e5da7bc6a5299c294add6dec9fa7bc6b56cfad318252f0235a" Jan 31 04:21:02 crc kubenswrapper[4827]: I0131 04:21:02.774748 4827 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-zxmt5" Jan 31 04:21:02 crc kubenswrapper[4827]: I0131 04:21:02.835239 4827 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-zxmt5" Jan 31 04:21:03 crc kubenswrapper[4827]: I0131 04:21:03.007200 4827 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-zxmt5"] Jan 31 04:21:04 crc kubenswrapper[4827]: I0131 04:21:04.534514 4827 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-zxmt5" podUID="5c674b79-df74-457b-b678-4eccd7851962" containerName="registry-server" containerID="cri-o://109034e3de76e8ded59a06605b6c2029b19a63761ee18bba353cbaa0d5e028f8" gracePeriod=2 Jan 31 04:21:05 crc kubenswrapper[4827]: I0131 04:21:05.018606 4827 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-zxmt5" Jan 31 04:21:05 crc kubenswrapper[4827]: I0131 04:21:05.084432 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5c674b79-df74-457b-b678-4eccd7851962-utilities\") pod \"5c674b79-df74-457b-b678-4eccd7851962\" (UID: \"5c674b79-df74-457b-b678-4eccd7851962\") " Jan 31 04:21:05 crc kubenswrapper[4827]: I0131 04:21:05.084568 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-c2mtk\" (UniqueName: \"kubernetes.io/projected/5c674b79-df74-457b-b678-4eccd7851962-kube-api-access-c2mtk\") pod \"5c674b79-df74-457b-b678-4eccd7851962\" (UID: \"5c674b79-df74-457b-b678-4eccd7851962\") " Jan 31 04:21:05 crc kubenswrapper[4827]: I0131 04:21:05.085655 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5c674b79-df74-457b-b678-4eccd7851962-utilities" (OuterVolumeSpecName: "utilities") pod "5c674b79-df74-457b-b678-4eccd7851962" (UID: "5c674b79-df74-457b-b678-4eccd7851962"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 04:21:05 crc kubenswrapper[4827]: I0131 04:21:05.085789 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5c674b79-df74-457b-b678-4eccd7851962-catalog-content\") pod \"5c674b79-df74-457b-b678-4eccd7851962\" (UID: \"5c674b79-df74-457b-b678-4eccd7851962\") " Jan 31 04:21:05 crc kubenswrapper[4827]: I0131 04:21:05.086358 4827 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5c674b79-df74-457b-b678-4eccd7851962-utilities\") on node \"crc\" DevicePath \"\"" Jan 31 04:21:05 crc kubenswrapper[4827]: I0131 04:21:05.094372 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5c674b79-df74-457b-b678-4eccd7851962-kube-api-access-c2mtk" (OuterVolumeSpecName: "kube-api-access-c2mtk") pod "5c674b79-df74-457b-b678-4eccd7851962" (UID: "5c674b79-df74-457b-b678-4eccd7851962"). InnerVolumeSpecName "kube-api-access-c2mtk". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 04:21:05 crc kubenswrapper[4827]: I0131 04:21:05.189642 4827 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-c2mtk\" (UniqueName: \"kubernetes.io/projected/5c674b79-df74-457b-b678-4eccd7851962-kube-api-access-c2mtk\") on node \"crc\" DevicePath \"\"" Jan 31 04:21:05 crc kubenswrapper[4827]: I0131 04:21:05.213019 4827 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-scb89"] Jan 31 04:21:05 crc kubenswrapper[4827]: E0131 04:21:05.213345 4827 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5c674b79-df74-457b-b678-4eccd7851962" containerName="extract-content" Jan 31 04:21:05 crc kubenswrapper[4827]: I0131 04:21:05.213356 4827 state_mem.go:107] "Deleted CPUSet assignment" podUID="5c674b79-df74-457b-b678-4eccd7851962" containerName="extract-content" Jan 31 04:21:05 crc kubenswrapper[4827]: E0131 04:21:05.213368 4827 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5c674b79-df74-457b-b678-4eccd7851962" containerName="registry-server" Jan 31 04:21:05 crc kubenswrapper[4827]: I0131 04:21:05.213375 4827 state_mem.go:107] "Deleted CPUSet assignment" podUID="5c674b79-df74-457b-b678-4eccd7851962" containerName="registry-server" Jan 31 04:21:05 crc kubenswrapper[4827]: E0131 04:21:05.213397 4827 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5c674b79-df74-457b-b678-4eccd7851962" containerName="extract-utilities" Jan 31 04:21:05 crc kubenswrapper[4827]: I0131 04:21:05.213403 4827 state_mem.go:107] "Deleted CPUSet assignment" podUID="5c674b79-df74-457b-b678-4eccd7851962" containerName="extract-utilities" Jan 31 04:21:05 crc kubenswrapper[4827]: I0131 04:21:05.213570 4827 memory_manager.go:354] "RemoveStaleState removing state" podUID="5c674b79-df74-457b-b678-4eccd7851962" containerName="registry-server" Jan 31 04:21:05 crc kubenswrapper[4827]: I0131 04:21:05.216224 4827 
util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-scb89" Jan 31 04:21:05 crc kubenswrapper[4827]: I0131 04:21:05.221055 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 31 04:21:05 crc kubenswrapper[4827]: I0131 04:21:05.221262 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-dklwg" Jan 31 04:21:05 crc kubenswrapper[4827]: I0131 04:21:05.221409 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 31 04:21:05 crc kubenswrapper[4827]: I0131 04:21:05.221547 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceph-conf-files" Jan 31 04:21:05 crc kubenswrapper[4827]: I0131 04:21:05.221683 4827 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 31 04:21:05 crc kubenswrapper[4827]: I0131 04:21:05.231236 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5c674b79-df74-457b-b678-4eccd7851962-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5c674b79-df74-457b-b678-4eccd7851962" (UID: "5c674b79-df74-457b-b678-4eccd7851962"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 04:21:05 crc kubenswrapper[4827]: I0131 04:21:05.234636 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-scb89"] Jan 31 04:21:05 crc kubenswrapper[4827]: I0131 04:21:05.291526 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/4dec9a4b-08f9-45be-85aa-10bb2a48cdaf-inventory\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-scb89\" (UID: \"4dec9a4b-08f9-45be-85aa-10bb2a48cdaf\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-scb89" Jan 31 04:21:05 crc kubenswrapper[4827]: I0131 04:21:05.291583 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/4dec9a4b-08f9-45be-85aa-10bb2a48cdaf-ssh-key-openstack-edpm-ipam\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-scb89\" (UID: \"4dec9a4b-08f9-45be-85aa-10bb2a48cdaf\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-scb89" Jan 31 04:21:05 crc kubenswrapper[4827]: I0131 04:21:05.291650 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gvhj9\" (UniqueName: \"kubernetes.io/projected/4dec9a4b-08f9-45be-85aa-10bb2a48cdaf-kube-api-access-gvhj9\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-scb89\" (UID: \"4dec9a4b-08f9-45be-85aa-10bb2a48cdaf\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-scb89" Jan 31 04:21:05 crc kubenswrapper[4827]: I0131 04:21:05.291772 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4dec9a4b-08f9-45be-85aa-10bb2a48cdaf-repo-setup-combined-ca-bundle\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-scb89\" 
(UID: \"4dec9a4b-08f9-45be-85aa-10bb2a48cdaf\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-scb89" Jan 31 04:21:05 crc kubenswrapper[4827]: I0131 04:21:05.291827 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/4dec9a4b-08f9-45be-85aa-10bb2a48cdaf-ceph\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-scb89\" (UID: \"4dec9a4b-08f9-45be-85aa-10bb2a48cdaf\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-scb89" Jan 31 04:21:05 crc kubenswrapper[4827]: I0131 04:21:05.291933 4827 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5c674b79-df74-457b-b678-4eccd7851962-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 31 04:21:05 crc kubenswrapper[4827]: I0131 04:21:05.393713 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4dec9a4b-08f9-45be-85aa-10bb2a48cdaf-repo-setup-combined-ca-bundle\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-scb89\" (UID: \"4dec9a4b-08f9-45be-85aa-10bb2a48cdaf\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-scb89" Jan 31 04:21:05 crc kubenswrapper[4827]: I0131 04:21:05.393799 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/4dec9a4b-08f9-45be-85aa-10bb2a48cdaf-ceph\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-scb89\" (UID: \"4dec9a4b-08f9-45be-85aa-10bb2a48cdaf\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-scb89" Jan 31 04:21:05 crc kubenswrapper[4827]: I0131 04:21:05.394500 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/4dec9a4b-08f9-45be-85aa-10bb2a48cdaf-inventory\") pod 
\"repo-setup-edpm-deployment-openstack-edpm-ipam-scb89\" (UID: \"4dec9a4b-08f9-45be-85aa-10bb2a48cdaf\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-scb89" Jan 31 04:21:05 crc kubenswrapper[4827]: I0131 04:21:05.394535 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/4dec9a4b-08f9-45be-85aa-10bb2a48cdaf-ssh-key-openstack-edpm-ipam\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-scb89\" (UID: \"4dec9a4b-08f9-45be-85aa-10bb2a48cdaf\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-scb89" Jan 31 04:21:05 crc kubenswrapper[4827]: I0131 04:21:05.394563 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gvhj9\" (UniqueName: \"kubernetes.io/projected/4dec9a4b-08f9-45be-85aa-10bb2a48cdaf-kube-api-access-gvhj9\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-scb89\" (UID: \"4dec9a4b-08f9-45be-85aa-10bb2a48cdaf\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-scb89" Jan 31 04:21:05 crc kubenswrapper[4827]: I0131 04:21:05.398027 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/4dec9a4b-08f9-45be-85aa-10bb2a48cdaf-ssh-key-openstack-edpm-ipam\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-scb89\" (UID: \"4dec9a4b-08f9-45be-85aa-10bb2a48cdaf\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-scb89" Jan 31 04:21:05 crc kubenswrapper[4827]: I0131 04:21:05.400576 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/4dec9a4b-08f9-45be-85aa-10bb2a48cdaf-inventory\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-scb89\" (UID: \"4dec9a4b-08f9-45be-85aa-10bb2a48cdaf\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-scb89" Jan 31 04:21:05 crc kubenswrapper[4827]: 
I0131 04:21:05.401354 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4dec9a4b-08f9-45be-85aa-10bb2a48cdaf-repo-setup-combined-ca-bundle\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-scb89\" (UID: \"4dec9a4b-08f9-45be-85aa-10bb2a48cdaf\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-scb89" Jan 31 04:21:05 crc kubenswrapper[4827]: I0131 04:21:05.407045 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/4dec9a4b-08f9-45be-85aa-10bb2a48cdaf-ceph\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-scb89\" (UID: \"4dec9a4b-08f9-45be-85aa-10bb2a48cdaf\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-scb89" Jan 31 04:21:05 crc kubenswrapper[4827]: I0131 04:21:05.410338 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gvhj9\" (UniqueName: \"kubernetes.io/projected/4dec9a4b-08f9-45be-85aa-10bb2a48cdaf-kube-api-access-gvhj9\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-scb89\" (UID: \"4dec9a4b-08f9-45be-85aa-10bb2a48cdaf\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-scb89" Jan 31 04:21:05 crc kubenswrapper[4827]: I0131 04:21:05.540211 4827 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-scb89" Jan 31 04:21:05 crc kubenswrapper[4827]: I0131 04:21:05.545941 4827 generic.go:334] "Generic (PLEG): container finished" podID="5c674b79-df74-457b-b678-4eccd7851962" containerID="109034e3de76e8ded59a06605b6c2029b19a63761ee18bba353cbaa0d5e028f8" exitCode=0 Jan 31 04:21:05 crc kubenswrapper[4827]: I0131 04:21:05.546050 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-zxmt5" event={"ID":"5c674b79-df74-457b-b678-4eccd7851962","Type":"ContainerDied","Data":"109034e3de76e8ded59a06605b6c2029b19a63761ee18bba353cbaa0d5e028f8"} Jan 31 04:21:05 crc kubenswrapper[4827]: I0131 04:21:05.546140 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-zxmt5" event={"ID":"5c674b79-df74-457b-b678-4eccd7851962","Type":"ContainerDied","Data":"ea15d59b4f0421a11494ca37bbc7db873855f780bfd798bedd9ed55369b2979c"} Jan 31 04:21:05 crc kubenswrapper[4827]: I0131 04:21:05.546209 4827 scope.go:117] "RemoveContainer" containerID="109034e3de76e8ded59a06605b6c2029b19a63761ee18bba353cbaa0d5e028f8" Jan 31 04:21:05 crc kubenswrapper[4827]: I0131 04:21:05.546264 4827 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-zxmt5" Jan 31 04:21:05 crc kubenswrapper[4827]: I0131 04:21:05.575036 4827 scope.go:117] "RemoveContainer" containerID="f4e96f36968c614fba0b678e7bfdd591b29cc761ece8827b7a560211fab78054" Jan 31 04:21:05 crc kubenswrapper[4827]: I0131 04:21:05.596839 4827 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-zxmt5"] Jan 31 04:21:05 crc kubenswrapper[4827]: I0131 04:21:05.604607 4827 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-zxmt5"] Jan 31 04:21:05 crc kubenswrapper[4827]: I0131 04:21:05.612530 4827 scope.go:117] "RemoveContainer" containerID="02cd3732a97cc4457682a168d252e623285e9a9acc5bc14774a769338c18219d" Jan 31 04:21:05 crc kubenswrapper[4827]: I0131 04:21:05.630975 4827 scope.go:117] "RemoveContainer" containerID="109034e3de76e8ded59a06605b6c2029b19a63761ee18bba353cbaa0d5e028f8" Jan 31 04:21:05 crc kubenswrapper[4827]: E0131 04:21:05.631386 4827 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"109034e3de76e8ded59a06605b6c2029b19a63761ee18bba353cbaa0d5e028f8\": container with ID starting with 109034e3de76e8ded59a06605b6c2029b19a63761ee18bba353cbaa0d5e028f8 not found: ID does not exist" containerID="109034e3de76e8ded59a06605b6c2029b19a63761ee18bba353cbaa0d5e028f8" Jan 31 04:21:05 crc kubenswrapper[4827]: I0131 04:21:05.631417 4827 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"109034e3de76e8ded59a06605b6c2029b19a63761ee18bba353cbaa0d5e028f8"} err="failed to get container status \"109034e3de76e8ded59a06605b6c2029b19a63761ee18bba353cbaa0d5e028f8\": rpc error: code = NotFound desc = could not find container \"109034e3de76e8ded59a06605b6c2029b19a63761ee18bba353cbaa0d5e028f8\": container with ID starting with 109034e3de76e8ded59a06605b6c2029b19a63761ee18bba353cbaa0d5e028f8 not found: ID does 
not exist" Jan 31 04:21:05 crc kubenswrapper[4827]: I0131 04:21:05.631435 4827 scope.go:117] "RemoveContainer" containerID="f4e96f36968c614fba0b678e7bfdd591b29cc761ece8827b7a560211fab78054" Jan 31 04:21:05 crc kubenswrapper[4827]: E0131 04:21:05.631772 4827 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f4e96f36968c614fba0b678e7bfdd591b29cc761ece8827b7a560211fab78054\": container with ID starting with f4e96f36968c614fba0b678e7bfdd591b29cc761ece8827b7a560211fab78054 not found: ID does not exist" containerID="f4e96f36968c614fba0b678e7bfdd591b29cc761ece8827b7a560211fab78054" Jan 31 04:21:05 crc kubenswrapper[4827]: I0131 04:21:05.631791 4827 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f4e96f36968c614fba0b678e7bfdd591b29cc761ece8827b7a560211fab78054"} err="failed to get container status \"f4e96f36968c614fba0b678e7bfdd591b29cc761ece8827b7a560211fab78054\": rpc error: code = NotFound desc = could not find container \"f4e96f36968c614fba0b678e7bfdd591b29cc761ece8827b7a560211fab78054\": container with ID starting with f4e96f36968c614fba0b678e7bfdd591b29cc761ece8827b7a560211fab78054 not found: ID does not exist" Jan 31 04:21:05 crc kubenswrapper[4827]: I0131 04:21:05.631803 4827 scope.go:117] "RemoveContainer" containerID="02cd3732a97cc4457682a168d252e623285e9a9acc5bc14774a769338c18219d" Jan 31 04:21:05 crc kubenswrapper[4827]: E0131 04:21:05.632067 4827 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"02cd3732a97cc4457682a168d252e623285e9a9acc5bc14774a769338c18219d\": container with ID starting with 02cd3732a97cc4457682a168d252e623285e9a9acc5bc14774a769338c18219d not found: ID does not exist" containerID="02cd3732a97cc4457682a168d252e623285e9a9acc5bc14774a769338c18219d" Jan 31 04:21:05 crc kubenswrapper[4827]: I0131 04:21:05.632087 4827 pod_container_deletor.go:53] 
"DeleteContainer returned error" containerID={"Type":"cri-o","ID":"02cd3732a97cc4457682a168d252e623285e9a9acc5bc14774a769338c18219d"} err="failed to get container status \"02cd3732a97cc4457682a168d252e623285e9a9acc5bc14774a769338c18219d\": rpc error: code = NotFound desc = could not find container \"02cd3732a97cc4457682a168d252e623285e9a9acc5bc14774a769338c18219d\": container with ID starting with 02cd3732a97cc4457682a168d252e623285e9a9acc5bc14774a769338c18219d not found: ID does not exist" Jan 31 04:21:06 crc kubenswrapper[4827]: I0131 04:21:06.072696 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-scb89"] Jan 31 04:21:06 crc kubenswrapper[4827]: I0131 04:21:06.124354 4827 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5c674b79-df74-457b-b678-4eccd7851962" path="/var/lib/kubelet/pods/5c674b79-df74-457b-b678-4eccd7851962/volumes" Jan 31 04:21:06 crc kubenswrapper[4827]: I0131 04:21:06.555445 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-scb89" event={"ID":"4dec9a4b-08f9-45be-85aa-10bb2a48cdaf","Type":"ContainerStarted","Data":"19e70380e0b77286d3ae2ab5c135e4f9ea93b71080f92c63015146659120f890"} Jan 31 04:21:07 crc kubenswrapper[4827]: I0131 04:21:07.568764 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-scb89" event={"ID":"4dec9a4b-08f9-45be-85aa-10bb2a48cdaf","Type":"ContainerStarted","Data":"397f19ac240e026aa26f47a75b28b6404c9dfc8857aada726e1306f41e19c66e"} Jan 31 04:21:07 crc kubenswrapper[4827]: I0131 04:21:07.599412 4827 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-scb89" podStartSLOduration=2.118483979 podStartE2EDuration="2.599385851s" podCreationTimestamp="2026-01-31 04:21:05 +0000 UTC" firstStartedPulling="2026-01-31 04:21:06.085143831 +0000 UTC 
m=+2058.772224280" lastFinishedPulling="2026-01-31 04:21:06.566045703 +0000 UTC m=+2059.253126152" observedRunningTime="2026-01-31 04:21:07.592954173 +0000 UTC m=+2060.280034632" watchObservedRunningTime="2026-01-31 04:21:07.599385851 +0000 UTC m=+2060.286466320" Jan 31 04:21:17 crc kubenswrapper[4827]: I0131 04:21:17.666075 4827 generic.go:334] "Generic (PLEG): container finished" podID="4dec9a4b-08f9-45be-85aa-10bb2a48cdaf" containerID="397f19ac240e026aa26f47a75b28b6404c9dfc8857aada726e1306f41e19c66e" exitCode=0 Jan 31 04:21:17 crc kubenswrapper[4827]: I0131 04:21:17.666169 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-scb89" event={"ID":"4dec9a4b-08f9-45be-85aa-10bb2a48cdaf","Type":"ContainerDied","Data":"397f19ac240e026aa26f47a75b28b6404c9dfc8857aada726e1306f41e19c66e"} Jan 31 04:21:19 crc kubenswrapper[4827]: I0131 04:21:19.048248 4827 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-scb89" Jan 31 04:21:19 crc kubenswrapper[4827]: I0131 04:21:19.155536 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/4dec9a4b-08f9-45be-85aa-10bb2a48cdaf-ceph\") pod \"4dec9a4b-08f9-45be-85aa-10bb2a48cdaf\" (UID: \"4dec9a4b-08f9-45be-85aa-10bb2a48cdaf\") " Jan 31 04:21:19 crc kubenswrapper[4827]: I0131 04:21:19.155603 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4dec9a4b-08f9-45be-85aa-10bb2a48cdaf-repo-setup-combined-ca-bundle\") pod \"4dec9a4b-08f9-45be-85aa-10bb2a48cdaf\" (UID: \"4dec9a4b-08f9-45be-85aa-10bb2a48cdaf\") " Jan 31 04:21:19 crc kubenswrapper[4827]: I0131 04:21:19.155682 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: 
\"kubernetes.io/secret/4dec9a4b-08f9-45be-85aa-10bb2a48cdaf-ssh-key-openstack-edpm-ipam\") pod \"4dec9a4b-08f9-45be-85aa-10bb2a48cdaf\" (UID: \"4dec9a4b-08f9-45be-85aa-10bb2a48cdaf\") " Jan 31 04:21:19 crc kubenswrapper[4827]: I0131 04:21:19.155729 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/4dec9a4b-08f9-45be-85aa-10bb2a48cdaf-inventory\") pod \"4dec9a4b-08f9-45be-85aa-10bb2a48cdaf\" (UID: \"4dec9a4b-08f9-45be-85aa-10bb2a48cdaf\") " Jan 31 04:21:19 crc kubenswrapper[4827]: I0131 04:21:19.155801 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gvhj9\" (UniqueName: \"kubernetes.io/projected/4dec9a4b-08f9-45be-85aa-10bb2a48cdaf-kube-api-access-gvhj9\") pod \"4dec9a4b-08f9-45be-85aa-10bb2a48cdaf\" (UID: \"4dec9a4b-08f9-45be-85aa-10bb2a48cdaf\") " Jan 31 04:21:19 crc kubenswrapper[4827]: I0131 04:21:19.162297 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4dec9a4b-08f9-45be-85aa-10bb2a48cdaf-ceph" (OuterVolumeSpecName: "ceph") pod "4dec9a4b-08f9-45be-85aa-10bb2a48cdaf" (UID: "4dec9a4b-08f9-45be-85aa-10bb2a48cdaf"). InnerVolumeSpecName "ceph". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 04:21:19 crc kubenswrapper[4827]: I0131 04:21:19.162364 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4dec9a4b-08f9-45be-85aa-10bb2a48cdaf-repo-setup-combined-ca-bundle" (OuterVolumeSpecName: "repo-setup-combined-ca-bundle") pod "4dec9a4b-08f9-45be-85aa-10bb2a48cdaf" (UID: "4dec9a4b-08f9-45be-85aa-10bb2a48cdaf"). InnerVolumeSpecName "repo-setup-combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 04:21:19 crc kubenswrapper[4827]: I0131 04:21:19.162923 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4dec9a4b-08f9-45be-85aa-10bb2a48cdaf-kube-api-access-gvhj9" (OuterVolumeSpecName: "kube-api-access-gvhj9") pod "4dec9a4b-08f9-45be-85aa-10bb2a48cdaf" (UID: "4dec9a4b-08f9-45be-85aa-10bb2a48cdaf"). InnerVolumeSpecName "kube-api-access-gvhj9". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 04:21:19 crc kubenswrapper[4827]: I0131 04:21:19.190514 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4dec9a4b-08f9-45be-85aa-10bb2a48cdaf-inventory" (OuterVolumeSpecName: "inventory") pod "4dec9a4b-08f9-45be-85aa-10bb2a48cdaf" (UID: "4dec9a4b-08f9-45be-85aa-10bb2a48cdaf"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 04:21:19 crc kubenswrapper[4827]: I0131 04:21:19.197639 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4dec9a4b-08f9-45be-85aa-10bb2a48cdaf-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "4dec9a4b-08f9-45be-85aa-10bb2a48cdaf" (UID: "4dec9a4b-08f9-45be-85aa-10bb2a48cdaf"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 04:21:19 crc kubenswrapper[4827]: I0131 04:21:19.257686 4827 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/4dec9a4b-08f9-45be-85aa-10bb2a48cdaf-inventory\") on node \"crc\" DevicePath \"\"" Jan 31 04:21:19 crc kubenswrapper[4827]: I0131 04:21:19.257714 4827 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gvhj9\" (UniqueName: \"kubernetes.io/projected/4dec9a4b-08f9-45be-85aa-10bb2a48cdaf-kube-api-access-gvhj9\") on node \"crc\" DevicePath \"\"" Jan 31 04:21:19 crc kubenswrapper[4827]: I0131 04:21:19.257724 4827 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/4dec9a4b-08f9-45be-85aa-10bb2a48cdaf-ceph\") on node \"crc\" DevicePath \"\"" Jan 31 04:21:19 crc kubenswrapper[4827]: I0131 04:21:19.257734 4827 reconciler_common.go:293] "Volume detached for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4dec9a4b-08f9-45be-85aa-10bb2a48cdaf-repo-setup-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 31 04:21:19 crc kubenswrapper[4827]: I0131 04:21:19.257743 4827 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/4dec9a4b-08f9-45be-85aa-10bb2a48cdaf-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 31 04:21:19 crc kubenswrapper[4827]: I0131 04:21:19.683455 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-scb89" event={"ID":"4dec9a4b-08f9-45be-85aa-10bb2a48cdaf","Type":"ContainerDied","Data":"19e70380e0b77286d3ae2ab5c135e4f9ea93b71080f92c63015146659120f890"} Jan 31 04:21:19 crc kubenswrapper[4827]: I0131 04:21:19.683534 4827 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="19e70380e0b77286d3ae2ab5c135e4f9ea93b71080f92c63015146659120f890" Jan 31 04:21:19 crc 
kubenswrapper[4827]: I0131 04:21:19.683562 4827 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-scb89" Jan 31 04:21:19 crc kubenswrapper[4827]: I0131 04:21:19.760411 4827 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-gr5pm"] Jan 31 04:21:19 crc kubenswrapper[4827]: E0131 04:21:19.761048 4827 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4dec9a4b-08f9-45be-85aa-10bb2a48cdaf" containerName="repo-setup-edpm-deployment-openstack-edpm-ipam" Jan 31 04:21:19 crc kubenswrapper[4827]: I0131 04:21:19.761174 4827 state_mem.go:107] "Deleted CPUSet assignment" podUID="4dec9a4b-08f9-45be-85aa-10bb2a48cdaf" containerName="repo-setup-edpm-deployment-openstack-edpm-ipam" Jan 31 04:21:19 crc kubenswrapper[4827]: I0131 04:21:19.761437 4827 memory_manager.go:354] "RemoveStaleState removing state" podUID="4dec9a4b-08f9-45be-85aa-10bb2a48cdaf" containerName="repo-setup-edpm-deployment-openstack-edpm-ipam" Jan 31 04:21:19 crc kubenswrapper[4827]: I0131 04:21:19.762459 4827 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-gr5pm" Jan 31 04:21:19 crc kubenswrapper[4827]: I0131 04:21:19.772117 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceph-conf-files" Jan 31 04:21:19 crc kubenswrapper[4827]: I0131 04:21:19.772120 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 31 04:21:19 crc kubenswrapper[4827]: I0131 04:21:19.772664 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-dklwg" Jan 31 04:21:19 crc kubenswrapper[4827]: I0131 04:21:19.772772 4827 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 31 04:21:19 crc kubenswrapper[4827]: I0131 04:21:19.773074 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 31 04:21:19 crc kubenswrapper[4827]: I0131 04:21:19.780090 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-gr5pm"] Jan 31 04:21:19 crc kubenswrapper[4827]: I0131 04:21:19.867140 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rmwzz\" (UniqueName: \"kubernetes.io/projected/fde30814-9dd3-4c47-b7b2-cda3221d27e6-kube-api-access-rmwzz\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-gr5pm\" (UID: \"fde30814-9dd3-4c47-b7b2-cda3221d27e6\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-gr5pm" Jan 31 04:21:19 crc kubenswrapper[4827]: I0131 04:21:19.867227 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fde30814-9dd3-4c47-b7b2-cda3221d27e6-bootstrap-combined-ca-bundle\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-gr5pm\" (UID: 
\"fde30814-9dd3-4c47-b7b2-cda3221d27e6\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-gr5pm" Jan 31 04:21:19 crc kubenswrapper[4827]: I0131 04:21:19.867276 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/fde30814-9dd3-4c47-b7b2-cda3221d27e6-ceph\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-gr5pm\" (UID: \"fde30814-9dd3-4c47-b7b2-cda3221d27e6\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-gr5pm" Jan 31 04:21:19 crc kubenswrapper[4827]: I0131 04:21:19.867335 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/fde30814-9dd3-4c47-b7b2-cda3221d27e6-inventory\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-gr5pm\" (UID: \"fde30814-9dd3-4c47-b7b2-cda3221d27e6\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-gr5pm" Jan 31 04:21:19 crc kubenswrapper[4827]: I0131 04:21:19.867374 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/fde30814-9dd3-4c47-b7b2-cda3221d27e6-ssh-key-openstack-edpm-ipam\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-gr5pm\" (UID: \"fde30814-9dd3-4c47-b7b2-cda3221d27e6\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-gr5pm" Jan 31 04:21:19 crc kubenswrapper[4827]: I0131 04:21:19.969078 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rmwzz\" (UniqueName: \"kubernetes.io/projected/fde30814-9dd3-4c47-b7b2-cda3221d27e6-kube-api-access-rmwzz\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-gr5pm\" (UID: \"fde30814-9dd3-4c47-b7b2-cda3221d27e6\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-gr5pm" Jan 31 04:21:19 crc kubenswrapper[4827]: I0131 04:21:19.969150 4827 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fde30814-9dd3-4c47-b7b2-cda3221d27e6-bootstrap-combined-ca-bundle\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-gr5pm\" (UID: \"fde30814-9dd3-4c47-b7b2-cda3221d27e6\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-gr5pm" Jan 31 04:21:19 crc kubenswrapper[4827]: I0131 04:21:19.969202 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/fde30814-9dd3-4c47-b7b2-cda3221d27e6-ceph\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-gr5pm\" (UID: \"fde30814-9dd3-4c47-b7b2-cda3221d27e6\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-gr5pm" Jan 31 04:21:19 crc kubenswrapper[4827]: I0131 04:21:19.969243 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/fde30814-9dd3-4c47-b7b2-cda3221d27e6-inventory\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-gr5pm\" (UID: \"fde30814-9dd3-4c47-b7b2-cda3221d27e6\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-gr5pm" Jan 31 04:21:19 crc kubenswrapper[4827]: I0131 04:21:19.969285 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/fde30814-9dd3-4c47-b7b2-cda3221d27e6-ssh-key-openstack-edpm-ipam\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-gr5pm\" (UID: \"fde30814-9dd3-4c47-b7b2-cda3221d27e6\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-gr5pm" Jan 31 04:21:19 crc kubenswrapper[4827]: I0131 04:21:19.977153 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/fde30814-9dd3-4c47-b7b2-cda3221d27e6-ceph\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-gr5pm\" (UID: 
\"fde30814-9dd3-4c47-b7b2-cda3221d27e6\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-gr5pm" Jan 31 04:21:19 crc kubenswrapper[4827]: I0131 04:21:19.982645 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/fde30814-9dd3-4c47-b7b2-cda3221d27e6-ssh-key-openstack-edpm-ipam\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-gr5pm\" (UID: \"fde30814-9dd3-4c47-b7b2-cda3221d27e6\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-gr5pm" Jan 31 04:21:19 crc kubenswrapper[4827]: I0131 04:21:19.983408 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fde30814-9dd3-4c47-b7b2-cda3221d27e6-bootstrap-combined-ca-bundle\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-gr5pm\" (UID: \"fde30814-9dd3-4c47-b7b2-cda3221d27e6\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-gr5pm" Jan 31 04:21:19 crc kubenswrapper[4827]: I0131 04:21:19.983745 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/fde30814-9dd3-4c47-b7b2-cda3221d27e6-inventory\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-gr5pm\" (UID: \"fde30814-9dd3-4c47-b7b2-cda3221d27e6\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-gr5pm" Jan 31 04:21:19 crc kubenswrapper[4827]: I0131 04:21:19.993846 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rmwzz\" (UniqueName: \"kubernetes.io/projected/fde30814-9dd3-4c47-b7b2-cda3221d27e6-kube-api-access-rmwzz\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-gr5pm\" (UID: \"fde30814-9dd3-4c47-b7b2-cda3221d27e6\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-gr5pm" Jan 31 04:21:20 crc kubenswrapper[4827]: I0131 04:21:20.087139 4827 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-gr5pm" Jan 31 04:21:20 crc kubenswrapper[4827]: I0131 04:21:20.617305 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-gr5pm"] Jan 31 04:21:20 crc kubenswrapper[4827]: W0131 04:21:20.627550 4827 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podfde30814_9dd3_4c47_b7b2_cda3221d27e6.slice/crio-9a9df3c063de7781b9165404ede6d2258e6f7266b6db00cfc33393c6f036ab9e WatchSource:0}: Error finding container 9a9df3c063de7781b9165404ede6d2258e6f7266b6db00cfc33393c6f036ab9e: Status 404 returned error can't find the container with id 9a9df3c063de7781b9165404ede6d2258e6f7266b6db00cfc33393c6f036ab9e Jan 31 04:21:20 crc kubenswrapper[4827]: I0131 04:21:20.692169 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-gr5pm" event={"ID":"fde30814-9dd3-4c47-b7b2-cda3221d27e6","Type":"ContainerStarted","Data":"9a9df3c063de7781b9165404ede6d2258e6f7266b6db00cfc33393c6f036ab9e"} Jan 31 04:21:21 crc kubenswrapper[4827]: I0131 04:21:21.702676 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-gr5pm" event={"ID":"fde30814-9dd3-4c47-b7b2-cda3221d27e6","Type":"ContainerStarted","Data":"4c9f06792c70f2b8bb601ee855e554967e6206a4cd1f708d78630378f24e3a7b"} Jan 31 04:21:21 crc kubenswrapper[4827]: I0131 04:21:21.735542 4827 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-gr5pm" podStartSLOduration=2.147919208 podStartE2EDuration="2.73551417s" podCreationTimestamp="2026-01-31 04:21:19 +0000 UTC" firstStartedPulling="2026-01-31 04:21:20.6343686 +0000 UTC m=+2073.321449049" lastFinishedPulling="2026-01-31 04:21:21.221963512 +0000 UTC m=+2073.909044011" 
observedRunningTime="2026-01-31 04:21:21.729273958 +0000 UTC m=+2074.416354467" watchObservedRunningTime="2026-01-31 04:21:21.73551417 +0000 UTC m=+2074.422594659" Jan 31 04:22:01 crc kubenswrapper[4827]: I0131 04:22:01.242353 4827 scope.go:117] "RemoveContainer" containerID="36a8b2041fafff34f60182cd2e917f2ad57711fc0ca2c3993b673c22ae502b01" Jan 31 04:22:01 crc kubenswrapper[4827]: I0131 04:22:01.298539 4827 scope.go:117] "RemoveContainer" containerID="b8b250c766ccc074ba3af0ed0411667b954aebeb6ec44d60b01987f4100e065d" Jan 31 04:22:55 crc kubenswrapper[4827]: I0131 04:22:55.609195 4827 generic.go:334] "Generic (PLEG): container finished" podID="fde30814-9dd3-4c47-b7b2-cda3221d27e6" containerID="4c9f06792c70f2b8bb601ee855e554967e6206a4cd1f708d78630378f24e3a7b" exitCode=0 Jan 31 04:22:55 crc kubenswrapper[4827]: I0131 04:22:55.609317 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-gr5pm" event={"ID":"fde30814-9dd3-4c47-b7b2-cda3221d27e6","Type":"ContainerDied","Data":"4c9f06792c70f2b8bb601ee855e554967e6206a4cd1f708d78630378f24e3a7b"} Jan 31 04:22:57 crc kubenswrapper[4827]: I0131 04:22:57.157032 4827 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-gr5pm" Jan 31 04:22:57 crc kubenswrapper[4827]: I0131 04:22:57.181359 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/fde30814-9dd3-4c47-b7b2-cda3221d27e6-ssh-key-openstack-edpm-ipam\") pod \"fde30814-9dd3-4c47-b7b2-cda3221d27e6\" (UID: \"fde30814-9dd3-4c47-b7b2-cda3221d27e6\") " Jan 31 04:22:57 crc kubenswrapper[4827]: I0131 04:22:57.181429 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fde30814-9dd3-4c47-b7b2-cda3221d27e6-bootstrap-combined-ca-bundle\") pod \"fde30814-9dd3-4c47-b7b2-cda3221d27e6\" (UID: \"fde30814-9dd3-4c47-b7b2-cda3221d27e6\") " Jan 31 04:22:57 crc kubenswrapper[4827]: I0131 04:22:57.181565 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/fde30814-9dd3-4c47-b7b2-cda3221d27e6-ceph\") pod \"fde30814-9dd3-4c47-b7b2-cda3221d27e6\" (UID: \"fde30814-9dd3-4c47-b7b2-cda3221d27e6\") " Jan 31 04:22:57 crc kubenswrapper[4827]: I0131 04:22:57.181653 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/fde30814-9dd3-4c47-b7b2-cda3221d27e6-inventory\") pod \"fde30814-9dd3-4c47-b7b2-cda3221d27e6\" (UID: \"fde30814-9dd3-4c47-b7b2-cda3221d27e6\") " Jan 31 04:22:57 crc kubenswrapper[4827]: I0131 04:22:57.181699 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rmwzz\" (UniqueName: \"kubernetes.io/projected/fde30814-9dd3-4c47-b7b2-cda3221d27e6-kube-api-access-rmwzz\") pod \"fde30814-9dd3-4c47-b7b2-cda3221d27e6\" (UID: \"fde30814-9dd3-4c47-b7b2-cda3221d27e6\") " Jan 31 04:22:57 crc kubenswrapper[4827]: I0131 04:22:57.195223 4827 operation_generator.go:803] 
UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fde30814-9dd3-4c47-b7b2-cda3221d27e6-ceph" (OuterVolumeSpecName: "ceph") pod "fde30814-9dd3-4c47-b7b2-cda3221d27e6" (UID: "fde30814-9dd3-4c47-b7b2-cda3221d27e6"). InnerVolumeSpecName "ceph". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 04:22:57 crc kubenswrapper[4827]: I0131 04:22:57.195275 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fde30814-9dd3-4c47-b7b2-cda3221d27e6-bootstrap-combined-ca-bundle" (OuterVolumeSpecName: "bootstrap-combined-ca-bundle") pod "fde30814-9dd3-4c47-b7b2-cda3221d27e6" (UID: "fde30814-9dd3-4c47-b7b2-cda3221d27e6"). InnerVolumeSpecName "bootstrap-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 04:22:57 crc kubenswrapper[4827]: I0131 04:22:57.203189 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fde30814-9dd3-4c47-b7b2-cda3221d27e6-kube-api-access-rmwzz" (OuterVolumeSpecName: "kube-api-access-rmwzz") pod "fde30814-9dd3-4c47-b7b2-cda3221d27e6" (UID: "fde30814-9dd3-4c47-b7b2-cda3221d27e6"). InnerVolumeSpecName "kube-api-access-rmwzz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 04:22:57 crc kubenswrapper[4827]: I0131 04:22:57.223939 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fde30814-9dd3-4c47-b7b2-cda3221d27e6-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "fde30814-9dd3-4c47-b7b2-cda3221d27e6" (UID: "fde30814-9dd3-4c47-b7b2-cda3221d27e6"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 04:22:57 crc kubenswrapper[4827]: I0131 04:22:57.233473 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fde30814-9dd3-4c47-b7b2-cda3221d27e6-inventory" (OuterVolumeSpecName: "inventory") pod "fde30814-9dd3-4c47-b7b2-cda3221d27e6" (UID: "fde30814-9dd3-4c47-b7b2-cda3221d27e6"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 04:22:57 crc kubenswrapper[4827]: I0131 04:22:57.283924 4827 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/fde30814-9dd3-4c47-b7b2-cda3221d27e6-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 31 04:22:57 crc kubenswrapper[4827]: I0131 04:22:57.283960 4827 reconciler_common.go:293] "Volume detached for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fde30814-9dd3-4c47-b7b2-cda3221d27e6-bootstrap-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 31 04:22:57 crc kubenswrapper[4827]: I0131 04:22:57.283970 4827 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/fde30814-9dd3-4c47-b7b2-cda3221d27e6-ceph\") on node \"crc\" DevicePath \"\"" Jan 31 04:22:57 crc kubenswrapper[4827]: I0131 04:22:57.283980 4827 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/fde30814-9dd3-4c47-b7b2-cda3221d27e6-inventory\") on node \"crc\" DevicePath \"\"" Jan 31 04:22:57 crc kubenswrapper[4827]: I0131 04:22:57.283990 4827 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rmwzz\" (UniqueName: \"kubernetes.io/projected/fde30814-9dd3-4c47-b7b2-cda3221d27e6-kube-api-access-rmwzz\") on node \"crc\" DevicePath \"\"" Jan 31 04:22:57 crc kubenswrapper[4827]: I0131 04:22:57.629757 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-gr5pm" event={"ID":"fde30814-9dd3-4c47-b7b2-cda3221d27e6","Type":"ContainerDied","Data":"9a9df3c063de7781b9165404ede6d2258e6f7266b6db00cfc33393c6f036ab9e"} Jan 31 04:22:57 crc kubenswrapper[4827]: I0131 04:22:57.629797 4827 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9a9df3c063de7781b9165404ede6d2258e6f7266b6db00cfc33393c6f036ab9e" Jan 31 04:22:57 crc kubenswrapper[4827]: I0131 04:22:57.629862 4827 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-gr5pm" Jan 31 04:22:57 crc kubenswrapper[4827]: I0131 04:22:57.723089 4827 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/configure-network-edpm-deployment-openstack-edpm-ipam-dn499"] Jan 31 04:22:57 crc kubenswrapper[4827]: E0131 04:22:57.723861 4827 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fde30814-9dd3-4c47-b7b2-cda3221d27e6" containerName="bootstrap-edpm-deployment-openstack-edpm-ipam" Jan 31 04:22:57 crc kubenswrapper[4827]: I0131 04:22:57.724021 4827 state_mem.go:107] "Deleted CPUSet assignment" podUID="fde30814-9dd3-4c47-b7b2-cda3221d27e6" containerName="bootstrap-edpm-deployment-openstack-edpm-ipam" Jan 31 04:22:57 crc kubenswrapper[4827]: I0131 04:22:57.724354 4827 memory_manager.go:354] "RemoveStaleState removing state" podUID="fde30814-9dd3-4c47-b7b2-cda3221d27e6" containerName="bootstrap-edpm-deployment-openstack-edpm-ipam" Jan 31 04:22:57 crc kubenswrapper[4827]: I0131 04:22:57.725130 4827 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-dn499" Jan 31 04:22:57 crc kubenswrapper[4827]: I0131 04:22:57.729602 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceph-conf-files" Jan 31 04:22:57 crc kubenswrapper[4827]: I0131 04:22:57.729764 4827 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 31 04:22:57 crc kubenswrapper[4827]: I0131 04:22:57.729793 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 31 04:22:57 crc kubenswrapper[4827]: I0131 04:22:57.729957 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-dklwg" Jan 31 04:22:57 crc kubenswrapper[4827]: I0131 04:22:57.729950 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 31 04:22:57 crc kubenswrapper[4827]: I0131 04:22:57.740302 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-network-edpm-deployment-openstack-edpm-ipam-dn499"] Jan 31 04:22:57 crc kubenswrapper[4827]: I0131 04:22:57.793639 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/746484a7-e256-43ec-8a25-6d4ef96aa9e0-inventory\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-dn499\" (UID: \"746484a7-e256-43ec-8a25-6d4ef96aa9e0\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-dn499" Jan 31 04:22:57 crc kubenswrapper[4827]: I0131 04:22:57.793726 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/746484a7-e256-43ec-8a25-6d4ef96aa9e0-ssh-key-openstack-edpm-ipam\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-dn499\" 
(UID: \"746484a7-e256-43ec-8a25-6d4ef96aa9e0\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-dn499" Jan 31 04:22:57 crc kubenswrapper[4827]: I0131 04:22:57.793772 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/746484a7-e256-43ec-8a25-6d4ef96aa9e0-ceph\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-dn499\" (UID: \"746484a7-e256-43ec-8a25-6d4ef96aa9e0\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-dn499" Jan 31 04:22:57 crc kubenswrapper[4827]: I0131 04:22:57.794153 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6lr6l\" (UniqueName: \"kubernetes.io/projected/746484a7-e256-43ec-8a25-6d4ef96aa9e0-kube-api-access-6lr6l\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-dn499\" (UID: \"746484a7-e256-43ec-8a25-6d4ef96aa9e0\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-dn499" Jan 31 04:22:57 crc kubenswrapper[4827]: I0131 04:22:57.896548 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/746484a7-e256-43ec-8a25-6d4ef96aa9e0-inventory\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-dn499\" (UID: \"746484a7-e256-43ec-8a25-6d4ef96aa9e0\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-dn499" Jan 31 04:22:57 crc kubenswrapper[4827]: I0131 04:22:57.896619 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/746484a7-e256-43ec-8a25-6d4ef96aa9e0-ssh-key-openstack-edpm-ipam\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-dn499\" (UID: \"746484a7-e256-43ec-8a25-6d4ef96aa9e0\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-dn499" Jan 31 04:22:57 crc 
kubenswrapper[4827]: I0131 04:22:57.896666 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/746484a7-e256-43ec-8a25-6d4ef96aa9e0-ceph\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-dn499\" (UID: \"746484a7-e256-43ec-8a25-6d4ef96aa9e0\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-dn499" Jan 31 04:22:57 crc kubenswrapper[4827]: I0131 04:22:57.896743 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6lr6l\" (UniqueName: \"kubernetes.io/projected/746484a7-e256-43ec-8a25-6d4ef96aa9e0-kube-api-access-6lr6l\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-dn499\" (UID: \"746484a7-e256-43ec-8a25-6d4ef96aa9e0\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-dn499" Jan 31 04:22:57 crc kubenswrapper[4827]: I0131 04:22:57.900291 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/746484a7-e256-43ec-8a25-6d4ef96aa9e0-inventory\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-dn499\" (UID: \"746484a7-e256-43ec-8a25-6d4ef96aa9e0\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-dn499" Jan 31 04:22:57 crc kubenswrapper[4827]: I0131 04:22:57.900905 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/746484a7-e256-43ec-8a25-6d4ef96aa9e0-ceph\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-dn499\" (UID: \"746484a7-e256-43ec-8a25-6d4ef96aa9e0\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-dn499" Jan 31 04:22:57 crc kubenswrapper[4827]: I0131 04:22:57.913374 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/746484a7-e256-43ec-8a25-6d4ef96aa9e0-ssh-key-openstack-edpm-ipam\") pod 
\"configure-network-edpm-deployment-openstack-edpm-ipam-dn499\" (UID: \"746484a7-e256-43ec-8a25-6d4ef96aa9e0\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-dn499" Jan 31 04:22:57 crc kubenswrapper[4827]: I0131 04:22:57.914745 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6lr6l\" (UniqueName: \"kubernetes.io/projected/746484a7-e256-43ec-8a25-6d4ef96aa9e0-kube-api-access-6lr6l\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-dn499\" (UID: \"746484a7-e256-43ec-8a25-6d4ef96aa9e0\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-dn499" Jan 31 04:22:58 crc kubenswrapper[4827]: I0131 04:22:58.044050 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-dn499" Jan 31 04:22:58 crc kubenswrapper[4827]: I0131 04:22:58.561676 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-network-edpm-deployment-openstack-edpm-ipam-dn499"] Jan 31 04:22:58 crc kubenswrapper[4827]: I0131 04:22:58.641159 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-dn499" event={"ID":"746484a7-e256-43ec-8a25-6d4ef96aa9e0","Type":"ContainerStarted","Data":"90cbd5d8efa2e044c4bdc5175930a8d42a709d168e896794de96bbb05eeecbc7"} Jan 31 04:23:00 crc kubenswrapper[4827]: I0131 04:23:00.660320 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-dn499" event={"ID":"746484a7-e256-43ec-8a25-6d4ef96aa9e0","Type":"ContainerStarted","Data":"0bbb0a3cc579620f99b271fc7eb5495fde65cb23d2143db65a47b4d67fc73593"} Jan 31 04:23:00 crc kubenswrapper[4827]: I0131 04:23:00.682375 4827 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-dn499" podStartSLOduration=2.646904387 
podStartE2EDuration="3.682358868s" podCreationTimestamp="2026-01-31 04:22:57 +0000 UTC" firstStartedPulling="2026-01-31 04:22:58.572178833 +0000 UTC m=+2171.259259282" lastFinishedPulling="2026-01-31 04:22:59.607633304 +0000 UTC m=+2172.294713763" observedRunningTime="2026-01-31 04:23:00.678259209 +0000 UTC m=+2173.365339658" watchObservedRunningTime="2026-01-31 04:23:00.682358868 +0000 UTC m=+2173.369439317" Jan 31 04:23:01 crc kubenswrapper[4827]: I0131 04:23:01.418350 4827 scope.go:117] "RemoveContainer" containerID="04bd6fd9dfda445ea86a47a6f0671230853c0cfabdc00cbcaf995834b0eea223" Jan 31 04:23:01 crc kubenswrapper[4827]: I0131 04:23:01.455742 4827 scope.go:117] "RemoveContainer" containerID="3607439f34fbcc92f97ca481975b36fee1c15acb07429819608d73a40860225a" Jan 31 04:23:01 crc kubenswrapper[4827]: I0131 04:23:01.511153 4827 scope.go:117] "RemoveContainer" containerID="290695e42eb2fb04f86965c41ab2aaa4d98269f44227d33ab1c9a55a56708000" Jan 31 04:23:17 crc kubenswrapper[4827]: I0131 04:23:17.371905 4827 patch_prober.go:28] interesting pod/machine-config-daemon-jxh94 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 31 04:23:17 crc kubenswrapper[4827]: I0131 04:23:17.372373 4827 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-jxh94" podUID="e63dbb73-e1a2-4796-83c5-2a88e55566b5" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 31 04:23:26 crc kubenswrapper[4827]: I0131 04:23:26.912661 4827 generic.go:334] "Generic (PLEG): container finished" podID="746484a7-e256-43ec-8a25-6d4ef96aa9e0" containerID="0bbb0a3cc579620f99b271fc7eb5495fde65cb23d2143db65a47b4d67fc73593" exitCode=0 Jan 31 04:23:26 crc kubenswrapper[4827]: I0131 
04:23:26.912710 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-dn499" event={"ID":"746484a7-e256-43ec-8a25-6d4ef96aa9e0","Type":"ContainerDied","Data":"0bbb0a3cc579620f99b271fc7eb5495fde65cb23d2143db65a47b4d67fc73593"} Jan 31 04:23:28 crc kubenswrapper[4827]: I0131 04:23:28.415822 4827 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-dn499" Jan 31 04:23:28 crc kubenswrapper[4827]: I0131 04:23:28.497130 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6lr6l\" (UniqueName: \"kubernetes.io/projected/746484a7-e256-43ec-8a25-6d4ef96aa9e0-kube-api-access-6lr6l\") pod \"746484a7-e256-43ec-8a25-6d4ef96aa9e0\" (UID: \"746484a7-e256-43ec-8a25-6d4ef96aa9e0\") " Jan 31 04:23:28 crc kubenswrapper[4827]: I0131 04:23:28.497448 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/746484a7-e256-43ec-8a25-6d4ef96aa9e0-ssh-key-openstack-edpm-ipam\") pod \"746484a7-e256-43ec-8a25-6d4ef96aa9e0\" (UID: \"746484a7-e256-43ec-8a25-6d4ef96aa9e0\") " Jan 31 04:23:28 crc kubenswrapper[4827]: I0131 04:23:28.497559 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/746484a7-e256-43ec-8a25-6d4ef96aa9e0-inventory\") pod \"746484a7-e256-43ec-8a25-6d4ef96aa9e0\" (UID: \"746484a7-e256-43ec-8a25-6d4ef96aa9e0\") " Jan 31 04:23:28 crc kubenswrapper[4827]: I0131 04:23:28.497642 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/746484a7-e256-43ec-8a25-6d4ef96aa9e0-ceph\") pod \"746484a7-e256-43ec-8a25-6d4ef96aa9e0\" (UID: \"746484a7-e256-43ec-8a25-6d4ef96aa9e0\") " Jan 31 04:23:28 crc kubenswrapper[4827]: I0131 04:23:28.503453 
4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/746484a7-e256-43ec-8a25-6d4ef96aa9e0-kube-api-access-6lr6l" (OuterVolumeSpecName: "kube-api-access-6lr6l") pod "746484a7-e256-43ec-8a25-6d4ef96aa9e0" (UID: "746484a7-e256-43ec-8a25-6d4ef96aa9e0"). InnerVolumeSpecName "kube-api-access-6lr6l". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 04:23:28 crc kubenswrapper[4827]: I0131 04:23:28.504325 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/746484a7-e256-43ec-8a25-6d4ef96aa9e0-ceph" (OuterVolumeSpecName: "ceph") pod "746484a7-e256-43ec-8a25-6d4ef96aa9e0" (UID: "746484a7-e256-43ec-8a25-6d4ef96aa9e0"). InnerVolumeSpecName "ceph". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 04:23:28 crc kubenswrapper[4827]: I0131 04:23:28.525056 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/746484a7-e256-43ec-8a25-6d4ef96aa9e0-inventory" (OuterVolumeSpecName: "inventory") pod "746484a7-e256-43ec-8a25-6d4ef96aa9e0" (UID: "746484a7-e256-43ec-8a25-6d4ef96aa9e0"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 04:23:28 crc kubenswrapper[4827]: I0131 04:23:28.523965 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/746484a7-e256-43ec-8a25-6d4ef96aa9e0-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "746484a7-e256-43ec-8a25-6d4ef96aa9e0" (UID: "746484a7-e256-43ec-8a25-6d4ef96aa9e0"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 04:23:28 crc kubenswrapper[4827]: I0131 04:23:28.600767 4827 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/746484a7-e256-43ec-8a25-6d4ef96aa9e0-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 31 04:23:28 crc kubenswrapper[4827]: I0131 04:23:28.600807 4827 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/746484a7-e256-43ec-8a25-6d4ef96aa9e0-inventory\") on node \"crc\" DevicePath \"\"" Jan 31 04:23:28 crc kubenswrapper[4827]: I0131 04:23:28.600822 4827 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/746484a7-e256-43ec-8a25-6d4ef96aa9e0-ceph\") on node \"crc\" DevicePath \"\"" Jan 31 04:23:28 crc kubenswrapper[4827]: I0131 04:23:28.600835 4827 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6lr6l\" (UniqueName: \"kubernetes.io/projected/746484a7-e256-43ec-8a25-6d4ef96aa9e0-kube-api-access-6lr6l\") on node \"crc\" DevicePath \"\"" Jan 31 04:23:28 crc kubenswrapper[4827]: I0131 04:23:28.939149 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-dn499" event={"ID":"746484a7-e256-43ec-8a25-6d4ef96aa9e0","Type":"ContainerDied","Data":"90cbd5d8efa2e044c4bdc5175930a8d42a709d168e896794de96bbb05eeecbc7"} Jan 31 04:23:28 crc kubenswrapper[4827]: I0131 04:23:28.939220 4827 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="90cbd5d8efa2e044c4bdc5175930a8d42a709d168e896794de96bbb05eeecbc7" Jan 31 04:23:28 crc kubenswrapper[4827]: I0131 04:23:28.939245 4827 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-dn499" Jan 31 04:23:29 crc kubenswrapper[4827]: I0131 04:23:29.057528 4827 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/validate-network-edpm-deployment-openstack-edpm-ipam-gzznq"] Jan 31 04:23:29 crc kubenswrapper[4827]: E0131 04:23:29.058062 4827 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="746484a7-e256-43ec-8a25-6d4ef96aa9e0" containerName="configure-network-edpm-deployment-openstack-edpm-ipam" Jan 31 04:23:29 crc kubenswrapper[4827]: I0131 04:23:29.058090 4827 state_mem.go:107] "Deleted CPUSet assignment" podUID="746484a7-e256-43ec-8a25-6d4ef96aa9e0" containerName="configure-network-edpm-deployment-openstack-edpm-ipam" Jan 31 04:23:29 crc kubenswrapper[4827]: I0131 04:23:29.058320 4827 memory_manager.go:354] "RemoveStaleState removing state" podUID="746484a7-e256-43ec-8a25-6d4ef96aa9e0" containerName="configure-network-edpm-deployment-openstack-edpm-ipam" Jan 31 04:23:29 crc kubenswrapper[4827]: I0131 04:23:29.059175 4827 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-gzznq" Jan 31 04:23:29 crc kubenswrapper[4827]: I0131 04:23:29.061804 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 31 04:23:29 crc kubenswrapper[4827]: I0131 04:23:29.062310 4827 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 31 04:23:29 crc kubenswrapper[4827]: I0131 04:23:29.062376 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-dklwg" Jan 31 04:23:29 crc kubenswrapper[4827]: I0131 04:23:29.063331 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 31 04:23:29 crc kubenswrapper[4827]: I0131 04:23:29.064595 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceph-conf-files" Jan 31 04:23:29 crc kubenswrapper[4827]: I0131 04:23:29.081170 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/validate-network-edpm-deployment-openstack-edpm-ipam-gzznq"] Jan 31 04:23:29 crc kubenswrapper[4827]: I0131 04:23:29.111137 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/60326eb4-1b0c-420c-a0f1-e41d58f386a7-inventory\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-gzznq\" (UID: \"60326eb4-1b0c-420c-a0f1-e41d58f386a7\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-gzznq" Jan 31 04:23:29 crc kubenswrapper[4827]: I0131 04:23:29.111186 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ttjmk\" (UniqueName: \"kubernetes.io/projected/60326eb4-1b0c-420c-a0f1-e41d58f386a7-kube-api-access-ttjmk\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-gzznq\" (UID: 
\"60326eb4-1b0c-420c-a0f1-e41d58f386a7\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-gzznq" Jan 31 04:23:29 crc kubenswrapper[4827]: I0131 04:23:29.111294 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/60326eb4-1b0c-420c-a0f1-e41d58f386a7-ssh-key-openstack-edpm-ipam\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-gzznq\" (UID: \"60326eb4-1b0c-420c-a0f1-e41d58f386a7\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-gzznq" Jan 31 04:23:29 crc kubenswrapper[4827]: I0131 04:23:29.111431 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/60326eb4-1b0c-420c-a0f1-e41d58f386a7-ceph\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-gzznq\" (UID: \"60326eb4-1b0c-420c-a0f1-e41d58f386a7\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-gzznq" Jan 31 04:23:29 crc kubenswrapper[4827]: I0131 04:23:29.213549 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/60326eb4-1b0c-420c-a0f1-e41d58f386a7-ceph\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-gzznq\" (UID: \"60326eb4-1b0c-420c-a0f1-e41d58f386a7\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-gzznq" Jan 31 04:23:29 crc kubenswrapper[4827]: I0131 04:23:29.213683 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/60326eb4-1b0c-420c-a0f1-e41d58f386a7-inventory\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-gzznq\" (UID: \"60326eb4-1b0c-420c-a0f1-e41d58f386a7\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-gzznq" Jan 31 04:23:29 crc kubenswrapper[4827]: I0131 04:23:29.213746 4827 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ttjmk\" (UniqueName: \"kubernetes.io/projected/60326eb4-1b0c-420c-a0f1-e41d58f386a7-kube-api-access-ttjmk\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-gzznq\" (UID: \"60326eb4-1b0c-420c-a0f1-e41d58f386a7\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-gzznq" Jan 31 04:23:29 crc kubenswrapper[4827]: I0131 04:23:29.213990 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/60326eb4-1b0c-420c-a0f1-e41d58f386a7-ssh-key-openstack-edpm-ipam\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-gzznq\" (UID: \"60326eb4-1b0c-420c-a0f1-e41d58f386a7\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-gzznq" Jan 31 04:23:29 crc kubenswrapper[4827]: I0131 04:23:29.219317 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/60326eb4-1b0c-420c-a0f1-e41d58f386a7-ceph\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-gzznq\" (UID: \"60326eb4-1b0c-420c-a0f1-e41d58f386a7\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-gzznq" Jan 31 04:23:29 crc kubenswrapper[4827]: I0131 04:23:29.221105 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/60326eb4-1b0c-420c-a0f1-e41d58f386a7-inventory\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-gzznq\" (UID: \"60326eb4-1b0c-420c-a0f1-e41d58f386a7\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-gzznq" Jan 31 04:23:29 crc kubenswrapper[4827]: I0131 04:23:29.225538 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/60326eb4-1b0c-420c-a0f1-e41d58f386a7-ssh-key-openstack-edpm-ipam\") pod 
\"validate-network-edpm-deployment-openstack-edpm-ipam-gzznq\" (UID: \"60326eb4-1b0c-420c-a0f1-e41d58f386a7\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-gzznq" Jan 31 04:23:29 crc kubenswrapper[4827]: I0131 04:23:29.234340 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ttjmk\" (UniqueName: \"kubernetes.io/projected/60326eb4-1b0c-420c-a0f1-e41d58f386a7-kube-api-access-ttjmk\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-gzznq\" (UID: \"60326eb4-1b0c-420c-a0f1-e41d58f386a7\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-gzznq" Jan 31 04:23:29 crc kubenswrapper[4827]: I0131 04:23:29.379033 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-gzznq" Jan 31 04:23:29 crc kubenswrapper[4827]: I0131 04:23:29.955617 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/validate-network-edpm-deployment-openstack-edpm-ipam-gzznq"] Jan 31 04:23:29 crc kubenswrapper[4827]: W0131 04:23:29.971693 4827 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod60326eb4_1b0c_420c_a0f1_e41d58f386a7.slice/crio-7abf155daf80b1580e3334e158a2a0761150951bfacd151316b823087d41456d WatchSource:0}: Error finding container 7abf155daf80b1580e3334e158a2a0761150951bfacd151316b823087d41456d: Status 404 returned error can't find the container with id 7abf155daf80b1580e3334e158a2a0761150951bfacd151316b823087d41456d Jan 31 04:23:30 crc kubenswrapper[4827]: I0131 04:23:30.957345 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-gzznq" event={"ID":"60326eb4-1b0c-420c-a0f1-e41d58f386a7","Type":"ContainerStarted","Data":"c131367f63b8d6845ee358abfb6cb2189e0f7478ea08dcbc7745106e794534f4"} Jan 31 04:23:30 crc kubenswrapper[4827]: I0131 04:23:30.957920 
4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-gzznq" event={"ID":"60326eb4-1b0c-420c-a0f1-e41d58f386a7","Type":"ContainerStarted","Data":"7abf155daf80b1580e3334e158a2a0761150951bfacd151316b823087d41456d"} Jan 31 04:23:30 crc kubenswrapper[4827]: I0131 04:23:30.985735 4827 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-gzznq" podStartSLOduration=1.407525771 podStartE2EDuration="1.985714583s" podCreationTimestamp="2026-01-31 04:23:29 +0000 UTC" firstStartedPulling="2026-01-31 04:23:29.974538067 +0000 UTC m=+2202.661618526" lastFinishedPulling="2026-01-31 04:23:30.552726889 +0000 UTC m=+2203.239807338" observedRunningTime="2026-01-31 04:23:30.974555666 +0000 UTC m=+2203.661636165" watchObservedRunningTime="2026-01-31 04:23:30.985714583 +0000 UTC m=+2203.672795042" Jan 31 04:23:35 crc kubenswrapper[4827]: I0131 04:23:35.999044 4827 generic.go:334] "Generic (PLEG): container finished" podID="60326eb4-1b0c-420c-a0f1-e41d58f386a7" containerID="c131367f63b8d6845ee358abfb6cb2189e0f7478ea08dcbc7745106e794534f4" exitCode=0 Jan 31 04:23:35 crc kubenswrapper[4827]: I0131 04:23:35.999102 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-gzznq" event={"ID":"60326eb4-1b0c-420c-a0f1-e41d58f386a7","Type":"ContainerDied","Data":"c131367f63b8d6845ee358abfb6cb2189e0f7478ea08dcbc7745106e794534f4"} Jan 31 04:23:37 crc kubenswrapper[4827]: I0131 04:23:37.416827 4827 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-gzznq" Jan 31 04:23:37 crc kubenswrapper[4827]: I0131 04:23:37.569903 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/60326eb4-1b0c-420c-a0f1-e41d58f386a7-ceph\") pod \"60326eb4-1b0c-420c-a0f1-e41d58f386a7\" (UID: \"60326eb4-1b0c-420c-a0f1-e41d58f386a7\") " Jan 31 04:23:37 crc kubenswrapper[4827]: I0131 04:23:37.570322 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ttjmk\" (UniqueName: \"kubernetes.io/projected/60326eb4-1b0c-420c-a0f1-e41d58f386a7-kube-api-access-ttjmk\") pod \"60326eb4-1b0c-420c-a0f1-e41d58f386a7\" (UID: \"60326eb4-1b0c-420c-a0f1-e41d58f386a7\") " Jan 31 04:23:37 crc kubenswrapper[4827]: I0131 04:23:37.570445 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/60326eb4-1b0c-420c-a0f1-e41d58f386a7-inventory\") pod \"60326eb4-1b0c-420c-a0f1-e41d58f386a7\" (UID: \"60326eb4-1b0c-420c-a0f1-e41d58f386a7\") " Jan 31 04:23:37 crc kubenswrapper[4827]: I0131 04:23:37.570468 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/60326eb4-1b0c-420c-a0f1-e41d58f386a7-ssh-key-openstack-edpm-ipam\") pod \"60326eb4-1b0c-420c-a0f1-e41d58f386a7\" (UID: \"60326eb4-1b0c-420c-a0f1-e41d58f386a7\") " Jan 31 04:23:37 crc kubenswrapper[4827]: I0131 04:23:37.576453 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/60326eb4-1b0c-420c-a0f1-e41d58f386a7-ceph" (OuterVolumeSpecName: "ceph") pod "60326eb4-1b0c-420c-a0f1-e41d58f386a7" (UID: "60326eb4-1b0c-420c-a0f1-e41d58f386a7"). InnerVolumeSpecName "ceph". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 04:23:37 crc kubenswrapper[4827]: I0131 04:23:37.577081 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/60326eb4-1b0c-420c-a0f1-e41d58f386a7-kube-api-access-ttjmk" (OuterVolumeSpecName: "kube-api-access-ttjmk") pod "60326eb4-1b0c-420c-a0f1-e41d58f386a7" (UID: "60326eb4-1b0c-420c-a0f1-e41d58f386a7"). InnerVolumeSpecName "kube-api-access-ttjmk". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 04:23:37 crc kubenswrapper[4827]: I0131 04:23:37.597611 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/60326eb4-1b0c-420c-a0f1-e41d58f386a7-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "60326eb4-1b0c-420c-a0f1-e41d58f386a7" (UID: "60326eb4-1b0c-420c-a0f1-e41d58f386a7"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 04:23:37 crc kubenswrapper[4827]: I0131 04:23:37.600631 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/60326eb4-1b0c-420c-a0f1-e41d58f386a7-inventory" (OuterVolumeSpecName: "inventory") pod "60326eb4-1b0c-420c-a0f1-e41d58f386a7" (UID: "60326eb4-1b0c-420c-a0f1-e41d58f386a7"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 04:23:37 crc kubenswrapper[4827]: I0131 04:23:37.672600 4827 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/60326eb4-1b0c-420c-a0f1-e41d58f386a7-inventory\") on node \"crc\" DevicePath \"\"" Jan 31 04:23:37 crc kubenswrapper[4827]: I0131 04:23:37.672645 4827 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/60326eb4-1b0c-420c-a0f1-e41d58f386a7-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 31 04:23:37 crc kubenswrapper[4827]: I0131 04:23:37.672664 4827 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/60326eb4-1b0c-420c-a0f1-e41d58f386a7-ceph\") on node \"crc\" DevicePath \"\"" Jan 31 04:23:37 crc kubenswrapper[4827]: I0131 04:23:37.672682 4827 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ttjmk\" (UniqueName: \"kubernetes.io/projected/60326eb4-1b0c-420c-a0f1-e41d58f386a7-kube-api-access-ttjmk\") on node \"crc\" DevicePath \"\"" Jan 31 04:23:38 crc kubenswrapper[4827]: I0131 04:23:38.017236 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-gzznq" event={"ID":"60326eb4-1b0c-420c-a0f1-e41d58f386a7","Type":"ContainerDied","Data":"7abf155daf80b1580e3334e158a2a0761150951bfacd151316b823087d41456d"} Jan 31 04:23:38 crc kubenswrapper[4827]: I0131 04:23:38.017275 4827 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7abf155daf80b1580e3334e158a2a0761150951bfacd151316b823087d41456d" Jan 31 04:23:38 crc kubenswrapper[4827]: I0131 04:23:38.017286 4827 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-gzznq" Jan 31 04:23:38 crc kubenswrapper[4827]: I0131 04:23:38.101158 4827 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/install-os-edpm-deployment-openstack-edpm-ipam-x9cnd"] Jan 31 04:23:38 crc kubenswrapper[4827]: E0131 04:23:38.102304 4827 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="60326eb4-1b0c-420c-a0f1-e41d58f386a7" containerName="validate-network-edpm-deployment-openstack-edpm-ipam" Jan 31 04:23:38 crc kubenswrapper[4827]: I0131 04:23:38.102339 4827 state_mem.go:107] "Deleted CPUSet assignment" podUID="60326eb4-1b0c-420c-a0f1-e41d58f386a7" containerName="validate-network-edpm-deployment-openstack-edpm-ipam" Jan 31 04:23:38 crc kubenswrapper[4827]: I0131 04:23:38.102831 4827 memory_manager.go:354] "RemoveStaleState removing state" podUID="60326eb4-1b0c-420c-a0f1-e41d58f386a7" containerName="validate-network-edpm-deployment-openstack-edpm-ipam" Jan 31 04:23:38 crc kubenswrapper[4827]: I0131 04:23:38.104275 4827 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-x9cnd" Jan 31 04:23:38 crc kubenswrapper[4827]: I0131 04:23:38.111684 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 31 04:23:38 crc kubenswrapper[4827]: I0131 04:23:38.111732 4827 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 31 04:23:38 crc kubenswrapper[4827]: I0131 04:23:38.111939 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceph-conf-files" Jan 31 04:23:38 crc kubenswrapper[4827]: I0131 04:23:38.113192 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-dklwg" Jan 31 04:23:38 crc kubenswrapper[4827]: I0131 04:23:38.120271 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 31 04:23:38 crc kubenswrapper[4827]: I0131 04:23:38.150316 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-os-edpm-deployment-openstack-edpm-ipam-x9cnd"] Jan 31 04:23:38 crc kubenswrapper[4827]: I0131 04:23:38.282026 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lllwq\" (UniqueName: \"kubernetes.io/projected/83978973-9bf3-4c9a-9689-d47fd0a7aac4-kube-api-access-lllwq\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-x9cnd\" (UID: \"83978973-9bf3-4c9a-9689-d47fd0a7aac4\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-x9cnd" Jan 31 04:23:38 crc kubenswrapper[4827]: I0131 04:23:38.282472 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/83978973-9bf3-4c9a-9689-d47fd0a7aac4-ceph\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-x9cnd\" (UID: \"83978973-9bf3-4c9a-9689-d47fd0a7aac4\") " 
pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-x9cnd" Jan 31 04:23:38 crc kubenswrapper[4827]: I0131 04:23:38.282800 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/83978973-9bf3-4c9a-9689-d47fd0a7aac4-inventory\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-x9cnd\" (UID: \"83978973-9bf3-4c9a-9689-d47fd0a7aac4\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-x9cnd" Jan 31 04:23:38 crc kubenswrapper[4827]: I0131 04:23:38.283004 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/83978973-9bf3-4c9a-9689-d47fd0a7aac4-ssh-key-openstack-edpm-ipam\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-x9cnd\" (UID: \"83978973-9bf3-4c9a-9689-d47fd0a7aac4\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-x9cnd" Jan 31 04:23:38 crc kubenswrapper[4827]: I0131 04:23:38.384953 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lllwq\" (UniqueName: \"kubernetes.io/projected/83978973-9bf3-4c9a-9689-d47fd0a7aac4-kube-api-access-lllwq\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-x9cnd\" (UID: \"83978973-9bf3-4c9a-9689-d47fd0a7aac4\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-x9cnd" Jan 31 04:23:38 crc kubenswrapper[4827]: I0131 04:23:38.385625 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/83978973-9bf3-4c9a-9689-d47fd0a7aac4-ceph\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-x9cnd\" (UID: \"83978973-9bf3-4c9a-9689-d47fd0a7aac4\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-x9cnd" Jan 31 04:23:38 crc kubenswrapper[4827]: I0131 04:23:38.385956 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"inventory\" (UniqueName: \"kubernetes.io/secret/83978973-9bf3-4c9a-9689-d47fd0a7aac4-inventory\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-x9cnd\" (UID: \"83978973-9bf3-4c9a-9689-d47fd0a7aac4\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-x9cnd" Jan 31 04:23:38 crc kubenswrapper[4827]: I0131 04:23:38.386160 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/83978973-9bf3-4c9a-9689-d47fd0a7aac4-ssh-key-openstack-edpm-ipam\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-x9cnd\" (UID: \"83978973-9bf3-4c9a-9689-d47fd0a7aac4\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-x9cnd" Jan 31 04:23:38 crc kubenswrapper[4827]: I0131 04:23:38.391325 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/83978973-9bf3-4c9a-9689-d47fd0a7aac4-ceph\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-x9cnd\" (UID: \"83978973-9bf3-4c9a-9689-d47fd0a7aac4\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-x9cnd" Jan 31 04:23:38 crc kubenswrapper[4827]: I0131 04:23:38.391729 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/83978973-9bf3-4c9a-9689-d47fd0a7aac4-inventory\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-x9cnd\" (UID: \"83978973-9bf3-4c9a-9689-d47fd0a7aac4\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-x9cnd" Jan 31 04:23:38 crc kubenswrapper[4827]: I0131 04:23:38.392489 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/83978973-9bf3-4c9a-9689-d47fd0a7aac4-ssh-key-openstack-edpm-ipam\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-x9cnd\" (UID: \"83978973-9bf3-4c9a-9689-d47fd0a7aac4\") " 
pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-x9cnd" Jan 31 04:23:38 crc kubenswrapper[4827]: I0131 04:23:38.404710 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lllwq\" (UniqueName: \"kubernetes.io/projected/83978973-9bf3-4c9a-9689-d47fd0a7aac4-kube-api-access-lllwq\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-x9cnd\" (UID: \"83978973-9bf3-4c9a-9689-d47fd0a7aac4\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-x9cnd" Jan 31 04:23:38 crc kubenswrapper[4827]: I0131 04:23:38.426838 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-x9cnd" Jan 31 04:23:38 crc kubenswrapper[4827]: I0131 04:23:38.966641 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-os-edpm-deployment-openstack-edpm-ipam-x9cnd"] Jan 31 04:23:39 crc kubenswrapper[4827]: I0131 04:23:39.026086 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-x9cnd" event={"ID":"83978973-9bf3-4c9a-9689-d47fd0a7aac4","Type":"ContainerStarted","Data":"f849dbc538ac5be053bb40a4ceb3e4dd38dbe74ad9be941b1914c0b7c52b71d6"} Jan 31 04:23:40 crc kubenswrapper[4827]: I0131 04:23:40.064481 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-x9cnd" event={"ID":"83978973-9bf3-4c9a-9689-d47fd0a7aac4","Type":"ContainerStarted","Data":"fc96e4fdb9bace4891e29ce6472c50bf23a0bbb292612ca76a43fd34a7ea7b09"} Jan 31 04:23:40 crc kubenswrapper[4827]: I0131 04:23:40.090213 4827 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-x9cnd" podStartSLOduration=1.703458989 podStartE2EDuration="2.090191746s" podCreationTimestamp="2026-01-31 04:23:38 +0000 UTC" firstStartedPulling="2026-01-31 04:23:38.978580537 +0000 UTC m=+2211.665660976" 
lastFinishedPulling="2026-01-31 04:23:39.365313284 +0000 UTC m=+2212.052393733" observedRunningTime="2026-01-31 04:23:40.082430683 +0000 UTC m=+2212.769511132" watchObservedRunningTime="2026-01-31 04:23:40.090191746 +0000 UTC m=+2212.777272205" Jan 31 04:23:47 crc kubenswrapper[4827]: I0131 04:23:47.371171 4827 patch_prober.go:28] interesting pod/machine-config-daemon-jxh94 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 31 04:23:47 crc kubenswrapper[4827]: I0131 04:23:47.371852 4827 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-jxh94" podUID="e63dbb73-e1a2-4796-83c5-2a88e55566b5" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 31 04:24:07 crc kubenswrapper[4827]: I0131 04:24:07.490161 4827 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-c5qpc"] Jan 31 04:24:07 crc kubenswrapper[4827]: I0131 04:24:07.494964 4827 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-c5qpc" Jan 31 04:24:07 crc kubenswrapper[4827]: I0131 04:24:07.553378 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-c5qpc"] Jan 31 04:24:07 crc kubenswrapper[4827]: I0131 04:24:07.566822 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w2whj\" (UniqueName: \"kubernetes.io/projected/c4d4928c-5570-43f5-9ba3-6a993570c318-kube-api-access-w2whj\") pod \"certified-operators-c5qpc\" (UID: \"c4d4928c-5570-43f5-9ba3-6a993570c318\") " pod="openshift-marketplace/certified-operators-c5qpc" Jan 31 04:24:07 crc kubenswrapper[4827]: I0131 04:24:07.567005 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c4d4928c-5570-43f5-9ba3-6a993570c318-utilities\") pod \"certified-operators-c5qpc\" (UID: \"c4d4928c-5570-43f5-9ba3-6a993570c318\") " pod="openshift-marketplace/certified-operators-c5qpc" Jan 31 04:24:07 crc kubenswrapper[4827]: I0131 04:24:07.567039 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c4d4928c-5570-43f5-9ba3-6a993570c318-catalog-content\") pod \"certified-operators-c5qpc\" (UID: \"c4d4928c-5570-43f5-9ba3-6a993570c318\") " pod="openshift-marketplace/certified-operators-c5qpc" Jan 31 04:24:07 crc kubenswrapper[4827]: I0131 04:24:07.668504 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c4d4928c-5570-43f5-9ba3-6a993570c318-utilities\") pod \"certified-operators-c5qpc\" (UID: \"c4d4928c-5570-43f5-9ba3-6a993570c318\") " pod="openshift-marketplace/certified-operators-c5qpc" Jan 31 04:24:07 crc kubenswrapper[4827]: I0131 04:24:07.668558 4827 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c4d4928c-5570-43f5-9ba3-6a993570c318-catalog-content\") pod \"certified-operators-c5qpc\" (UID: \"c4d4928c-5570-43f5-9ba3-6a993570c318\") " pod="openshift-marketplace/certified-operators-c5qpc" Jan 31 04:24:07 crc kubenswrapper[4827]: I0131 04:24:07.668662 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w2whj\" (UniqueName: \"kubernetes.io/projected/c4d4928c-5570-43f5-9ba3-6a993570c318-kube-api-access-w2whj\") pod \"certified-operators-c5qpc\" (UID: \"c4d4928c-5570-43f5-9ba3-6a993570c318\") " pod="openshift-marketplace/certified-operators-c5qpc" Jan 31 04:24:07 crc kubenswrapper[4827]: I0131 04:24:07.669482 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c4d4928c-5570-43f5-9ba3-6a993570c318-utilities\") pod \"certified-operators-c5qpc\" (UID: \"c4d4928c-5570-43f5-9ba3-6a993570c318\") " pod="openshift-marketplace/certified-operators-c5qpc" Jan 31 04:24:07 crc kubenswrapper[4827]: I0131 04:24:07.669554 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c4d4928c-5570-43f5-9ba3-6a993570c318-catalog-content\") pod \"certified-operators-c5qpc\" (UID: \"c4d4928c-5570-43f5-9ba3-6a993570c318\") " pod="openshift-marketplace/certified-operators-c5qpc" Jan 31 04:24:07 crc kubenswrapper[4827]: I0131 04:24:07.687368 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w2whj\" (UniqueName: \"kubernetes.io/projected/c4d4928c-5570-43f5-9ba3-6a993570c318-kube-api-access-w2whj\") pod \"certified-operators-c5qpc\" (UID: \"c4d4928c-5570-43f5-9ba3-6a993570c318\") " pod="openshift-marketplace/certified-operators-c5qpc" Jan 31 04:24:07 crc kubenswrapper[4827]: I0131 04:24:07.869383 4827 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-c5qpc" Jan 31 04:24:08 crc kubenswrapper[4827]: I0131 04:24:08.402442 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-c5qpc"] Jan 31 04:24:09 crc kubenswrapper[4827]: I0131 04:24:09.350657 4827 generic.go:334] "Generic (PLEG): container finished" podID="c4d4928c-5570-43f5-9ba3-6a993570c318" containerID="f1aa7566f847353d46f50bfc9733dbcf2d530b17764611ec4c19c925fc1e9fe8" exitCode=0 Jan 31 04:24:09 crc kubenswrapper[4827]: I0131 04:24:09.351076 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-c5qpc" event={"ID":"c4d4928c-5570-43f5-9ba3-6a993570c318","Type":"ContainerDied","Data":"f1aa7566f847353d46f50bfc9733dbcf2d530b17764611ec4c19c925fc1e9fe8"} Jan 31 04:24:09 crc kubenswrapper[4827]: I0131 04:24:09.351118 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-c5qpc" event={"ID":"c4d4928c-5570-43f5-9ba3-6a993570c318","Type":"ContainerStarted","Data":"3343d869fec6694cf278ead0d396952f319b49c6f9127cd5129ad02796be7f9e"} Jan 31 04:24:10 crc kubenswrapper[4827]: I0131 04:24:10.360015 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-c5qpc" event={"ID":"c4d4928c-5570-43f5-9ba3-6a993570c318","Type":"ContainerStarted","Data":"7b81b2ce04245745fbbfebdbf91be054cf12d3b3b7bfcc03d779b55c3c2c32b6"} Jan 31 04:24:11 crc kubenswrapper[4827]: I0131 04:24:11.268576 4827 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-t9hqh"] Jan 31 04:24:11 crc kubenswrapper[4827]: I0131 04:24:11.275555 4827 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-t9hqh" Jan 31 04:24:11 crc kubenswrapper[4827]: I0131 04:24:11.287146 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-t9hqh"] Jan 31 04:24:11 crc kubenswrapper[4827]: I0131 04:24:11.350250 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e0085657-a80e-4434-a05f-635b8c5c6317-catalog-content\") pod \"redhat-marketplace-t9hqh\" (UID: \"e0085657-a80e-4434-a05f-635b8c5c6317\") " pod="openshift-marketplace/redhat-marketplace-t9hqh" Jan 31 04:24:11 crc kubenswrapper[4827]: I0131 04:24:11.350319 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lhj68\" (UniqueName: \"kubernetes.io/projected/e0085657-a80e-4434-a05f-635b8c5c6317-kube-api-access-lhj68\") pod \"redhat-marketplace-t9hqh\" (UID: \"e0085657-a80e-4434-a05f-635b8c5c6317\") " pod="openshift-marketplace/redhat-marketplace-t9hqh" Jan 31 04:24:11 crc kubenswrapper[4827]: I0131 04:24:11.350387 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e0085657-a80e-4434-a05f-635b8c5c6317-utilities\") pod \"redhat-marketplace-t9hqh\" (UID: \"e0085657-a80e-4434-a05f-635b8c5c6317\") " pod="openshift-marketplace/redhat-marketplace-t9hqh" Jan 31 04:24:11 crc kubenswrapper[4827]: I0131 04:24:11.374837 4827 generic.go:334] "Generic (PLEG): container finished" podID="c4d4928c-5570-43f5-9ba3-6a993570c318" containerID="7b81b2ce04245745fbbfebdbf91be054cf12d3b3b7bfcc03d779b55c3c2c32b6" exitCode=0 Jan 31 04:24:11 crc kubenswrapper[4827]: I0131 04:24:11.374929 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-c5qpc" 
event={"ID":"c4d4928c-5570-43f5-9ba3-6a993570c318","Type":"ContainerDied","Data":"7b81b2ce04245745fbbfebdbf91be054cf12d3b3b7bfcc03d779b55c3c2c32b6"} Jan 31 04:24:11 crc kubenswrapper[4827]: I0131 04:24:11.451929 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e0085657-a80e-4434-a05f-635b8c5c6317-utilities\") pod \"redhat-marketplace-t9hqh\" (UID: \"e0085657-a80e-4434-a05f-635b8c5c6317\") " pod="openshift-marketplace/redhat-marketplace-t9hqh" Jan 31 04:24:11 crc kubenswrapper[4827]: I0131 04:24:11.452108 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e0085657-a80e-4434-a05f-635b8c5c6317-catalog-content\") pod \"redhat-marketplace-t9hqh\" (UID: \"e0085657-a80e-4434-a05f-635b8c5c6317\") " pod="openshift-marketplace/redhat-marketplace-t9hqh" Jan 31 04:24:11 crc kubenswrapper[4827]: I0131 04:24:11.452181 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lhj68\" (UniqueName: \"kubernetes.io/projected/e0085657-a80e-4434-a05f-635b8c5c6317-kube-api-access-lhj68\") pod \"redhat-marketplace-t9hqh\" (UID: \"e0085657-a80e-4434-a05f-635b8c5c6317\") " pod="openshift-marketplace/redhat-marketplace-t9hqh" Jan 31 04:24:11 crc kubenswrapper[4827]: I0131 04:24:11.452392 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e0085657-a80e-4434-a05f-635b8c5c6317-utilities\") pod \"redhat-marketplace-t9hqh\" (UID: \"e0085657-a80e-4434-a05f-635b8c5c6317\") " pod="openshift-marketplace/redhat-marketplace-t9hqh" Jan 31 04:24:11 crc kubenswrapper[4827]: I0131 04:24:11.452988 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e0085657-a80e-4434-a05f-635b8c5c6317-catalog-content\") pod \"redhat-marketplace-t9hqh\" (UID: 
\"e0085657-a80e-4434-a05f-635b8c5c6317\") " pod="openshift-marketplace/redhat-marketplace-t9hqh" Jan 31 04:24:11 crc kubenswrapper[4827]: I0131 04:24:11.474230 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lhj68\" (UniqueName: \"kubernetes.io/projected/e0085657-a80e-4434-a05f-635b8c5c6317-kube-api-access-lhj68\") pod \"redhat-marketplace-t9hqh\" (UID: \"e0085657-a80e-4434-a05f-635b8c5c6317\") " pod="openshift-marketplace/redhat-marketplace-t9hqh" Jan 31 04:24:11 crc kubenswrapper[4827]: I0131 04:24:11.607324 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-t9hqh" Jan 31 04:24:12 crc kubenswrapper[4827]: I0131 04:24:12.104293 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-t9hqh"] Jan 31 04:24:12 crc kubenswrapper[4827]: W0131 04:24:12.124021 4827 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode0085657_a80e_4434_a05f_635b8c5c6317.slice/crio-e0f43c783f838efc038f130ec028a95fb45783427e051a17d668f4148dff8dbe WatchSource:0}: Error finding container e0f43c783f838efc038f130ec028a95fb45783427e051a17d668f4148dff8dbe: Status 404 returned error can't find the container with id e0f43c783f838efc038f130ec028a95fb45783427e051a17d668f4148dff8dbe Jan 31 04:24:12 crc kubenswrapper[4827]: I0131 04:24:12.386519 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-c5qpc" event={"ID":"c4d4928c-5570-43f5-9ba3-6a993570c318","Type":"ContainerStarted","Data":"e4cc522244225f6624c4c52e6b2ce52fda15695081ecb623e5a2a1e2df3e5086"} Jan 31 04:24:12 crc kubenswrapper[4827]: I0131 04:24:12.389839 4827 generic.go:334] "Generic (PLEG): container finished" podID="e0085657-a80e-4434-a05f-635b8c5c6317" containerID="7395e5c61cd92ad1ba1acca8c3dd8e508a7e3408b09d4217b6713cd1dab4a8bd" exitCode=0 Jan 31 04:24:12 crc 
kubenswrapper[4827]: I0131 04:24:12.389912 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-t9hqh" event={"ID":"e0085657-a80e-4434-a05f-635b8c5c6317","Type":"ContainerDied","Data":"7395e5c61cd92ad1ba1acca8c3dd8e508a7e3408b09d4217b6713cd1dab4a8bd"} Jan 31 04:24:12 crc kubenswrapper[4827]: I0131 04:24:12.389946 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-t9hqh" event={"ID":"e0085657-a80e-4434-a05f-635b8c5c6317","Type":"ContainerStarted","Data":"e0f43c783f838efc038f130ec028a95fb45783427e051a17d668f4148dff8dbe"} Jan 31 04:24:12 crc kubenswrapper[4827]: I0131 04:24:12.410075 4827 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-c5qpc" podStartSLOduration=2.976691007 podStartE2EDuration="5.410056266s" podCreationTimestamp="2026-01-31 04:24:07 +0000 UTC" firstStartedPulling="2026-01-31 04:24:09.353696625 +0000 UTC m=+2242.040777114" lastFinishedPulling="2026-01-31 04:24:11.787061924 +0000 UTC m=+2244.474142373" observedRunningTime="2026-01-31 04:24:12.405370531 +0000 UTC m=+2245.092450980" watchObservedRunningTime="2026-01-31 04:24:12.410056266 +0000 UTC m=+2245.097136725" Jan 31 04:24:13 crc kubenswrapper[4827]: I0131 04:24:13.403678 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-t9hqh" event={"ID":"e0085657-a80e-4434-a05f-635b8c5c6317","Type":"ContainerStarted","Data":"a4d32238172f56f5a319463c24156f8e4967d2d8cc0dff6e5122b1cc0d87a32d"} Jan 31 04:24:14 crc kubenswrapper[4827]: I0131 04:24:14.415815 4827 generic.go:334] "Generic (PLEG): container finished" podID="e0085657-a80e-4434-a05f-635b8c5c6317" containerID="a4d32238172f56f5a319463c24156f8e4967d2d8cc0dff6e5122b1cc0d87a32d" exitCode=0 Jan 31 04:24:14 crc kubenswrapper[4827]: I0131 04:24:14.415873 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/redhat-marketplace-t9hqh" event={"ID":"e0085657-a80e-4434-a05f-635b8c5c6317","Type":"ContainerDied","Data":"a4d32238172f56f5a319463c24156f8e4967d2d8cc0dff6e5122b1cc0d87a32d"} Jan 31 04:24:15 crc kubenswrapper[4827]: I0131 04:24:15.432544 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-t9hqh" event={"ID":"e0085657-a80e-4434-a05f-635b8c5c6317","Type":"ContainerStarted","Data":"49b3fc034fb35ed7db889fdd5bdedc87d21a62e0310e820406a9497c0b0b2545"} Jan 31 04:24:15 crc kubenswrapper[4827]: I0131 04:24:15.475303 4827 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-t9hqh" podStartSLOduration=2.04335275 podStartE2EDuration="4.475274484s" podCreationTimestamp="2026-01-31 04:24:11 +0000 UTC" firstStartedPulling="2026-01-31 04:24:12.392994335 +0000 UTC m=+2245.080074794" lastFinishedPulling="2026-01-31 04:24:14.824916089 +0000 UTC m=+2247.511996528" observedRunningTime="2026-01-31 04:24:15.460761693 +0000 UTC m=+2248.147842182" watchObservedRunningTime="2026-01-31 04:24:15.475274484 +0000 UTC m=+2248.162354963" Jan 31 04:24:17 crc kubenswrapper[4827]: I0131 04:24:17.371023 4827 patch_prober.go:28] interesting pod/machine-config-daemon-jxh94 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 31 04:24:17 crc kubenswrapper[4827]: I0131 04:24:17.371768 4827 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-jxh94" podUID="e63dbb73-e1a2-4796-83c5-2a88e55566b5" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 31 04:24:17 crc kubenswrapper[4827]: I0131 04:24:17.371836 4827 kubelet.go:2542] "SyncLoop 
(probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-jxh94" Jan 31 04:24:17 crc kubenswrapper[4827]: I0131 04:24:17.372677 4827 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"86202cb505a10ccb4abc6da10b9dbc02e9e23b1f36d68806a25ab7c0fcf67d39"} pod="openshift-machine-config-operator/machine-config-daemon-jxh94" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 31 04:24:17 crc kubenswrapper[4827]: I0131 04:24:17.372777 4827 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-jxh94" podUID="e63dbb73-e1a2-4796-83c5-2a88e55566b5" containerName="machine-config-daemon" containerID="cri-o://86202cb505a10ccb4abc6da10b9dbc02e9e23b1f36d68806a25ab7c0fcf67d39" gracePeriod=600 Jan 31 04:24:17 crc kubenswrapper[4827]: I0131 04:24:17.453322 4827 generic.go:334] "Generic (PLEG): container finished" podID="83978973-9bf3-4c9a-9689-d47fd0a7aac4" containerID="fc96e4fdb9bace4891e29ce6472c50bf23a0bbb292612ca76a43fd34a7ea7b09" exitCode=0 Jan 31 04:24:17 crc kubenswrapper[4827]: I0131 04:24:17.453363 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-x9cnd" event={"ID":"83978973-9bf3-4c9a-9689-d47fd0a7aac4","Type":"ContainerDied","Data":"fc96e4fdb9bace4891e29ce6472c50bf23a0bbb292612ca76a43fd34a7ea7b09"} Jan 31 04:24:17 crc kubenswrapper[4827]: E0131 04:24:17.506810 4827 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jxh94_openshift-machine-config-operator(e63dbb73-e1a2-4796-83c5-2a88e55566b5)\"" pod="openshift-machine-config-operator/machine-config-daemon-jxh94" 
podUID="e63dbb73-e1a2-4796-83c5-2a88e55566b5" Jan 31 04:24:17 crc kubenswrapper[4827]: I0131 04:24:17.870361 4827 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-c5qpc" Jan 31 04:24:17 crc kubenswrapper[4827]: I0131 04:24:17.870459 4827 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-c5qpc" Jan 31 04:24:17 crc kubenswrapper[4827]: I0131 04:24:17.955917 4827 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-c5qpc" Jan 31 04:24:18 crc kubenswrapper[4827]: I0131 04:24:18.461509 4827 generic.go:334] "Generic (PLEG): container finished" podID="e63dbb73-e1a2-4796-83c5-2a88e55566b5" containerID="86202cb505a10ccb4abc6da10b9dbc02e9e23b1f36d68806a25ab7c0fcf67d39" exitCode=0 Jan 31 04:24:18 crc kubenswrapper[4827]: I0131 04:24:18.461609 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-jxh94" event={"ID":"e63dbb73-e1a2-4796-83c5-2a88e55566b5","Type":"ContainerDied","Data":"86202cb505a10ccb4abc6da10b9dbc02e9e23b1f36d68806a25ab7c0fcf67d39"} Jan 31 04:24:18 crc kubenswrapper[4827]: I0131 04:24:18.462946 4827 scope.go:117] "RemoveContainer" containerID="e0cc8c33582f61217ed17d682a2c475099f3ddae73c64c19286eba3c2b49542b" Jan 31 04:24:18 crc kubenswrapper[4827]: I0131 04:24:18.463398 4827 scope.go:117] "RemoveContainer" containerID="86202cb505a10ccb4abc6da10b9dbc02e9e23b1f36d68806a25ab7c0fcf67d39" Jan 31 04:24:18 crc kubenswrapper[4827]: E0131 04:24:18.463633 4827 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jxh94_openshift-machine-config-operator(e63dbb73-e1a2-4796-83c5-2a88e55566b5)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-jxh94" podUID="e63dbb73-e1a2-4796-83c5-2a88e55566b5" Jan 31 04:24:18 crc kubenswrapper[4827]: I0131 04:24:18.594082 4827 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-c5qpc" Jan 31 04:24:18 crc kubenswrapper[4827]: I0131 04:24:18.919119 4827 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-x9cnd" Jan 31 04:24:19 crc kubenswrapper[4827]: I0131 04:24:19.006868 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/83978973-9bf3-4c9a-9689-d47fd0a7aac4-ceph\") pod \"83978973-9bf3-4c9a-9689-d47fd0a7aac4\" (UID: \"83978973-9bf3-4c9a-9689-d47fd0a7aac4\") " Jan 31 04:24:19 crc kubenswrapper[4827]: I0131 04:24:19.007065 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/83978973-9bf3-4c9a-9689-d47fd0a7aac4-ssh-key-openstack-edpm-ipam\") pod \"83978973-9bf3-4c9a-9689-d47fd0a7aac4\" (UID: \"83978973-9bf3-4c9a-9689-d47fd0a7aac4\") " Jan 31 04:24:19 crc kubenswrapper[4827]: I0131 04:24:19.007143 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/83978973-9bf3-4c9a-9689-d47fd0a7aac4-inventory\") pod \"83978973-9bf3-4c9a-9689-d47fd0a7aac4\" (UID: \"83978973-9bf3-4c9a-9689-d47fd0a7aac4\") " Jan 31 04:24:19 crc kubenswrapper[4827]: I0131 04:24:19.007185 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lllwq\" (UniqueName: \"kubernetes.io/projected/83978973-9bf3-4c9a-9689-d47fd0a7aac4-kube-api-access-lllwq\") pod \"83978973-9bf3-4c9a-9689-d47fd0a7aac4\" (UID: \"83978973-9bf3-4c9a-9689-d47fd0a7aac4\") " Jan 31 04:24:19 crc kubenswrapper[4827]: I0131 04:24:19.013387 
4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/83978973-9bf3-4c9a-9689-d47fd0a7aac4-ceph" (OuterVolumeSpecName: "ceph") pod "83978973-9bf3-4c9a-9689-d47fd0a7aac4" (UID: "83978973-9bf3-4c9a-9689-d47fd0a7aac4"). InnerVolumeSpecName "ceph". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 04:24:19 crc kubenswrapper[4827]: I0131 04:24:19.013570 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/83978973-9bf3-4c9a-9689-d47fd0a7aac4-kube-api-access-lllwq" (OuterVolumeSpecName: "kube-api-access-lllwq") pod "83978973-9bf3-4c9a-9689-d47fd0a7aac4" (UID: "83978973-9bf3-4c9a-9689-d47fd0a7aac4"). InnerVolumeSpecName "kube-api-access-lllwq". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 04:24:19 crc kubenswrapper[4827]: I0131 04:24:19.052322 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/83978973-9bf3-4c9a-9689-d47fd0a7aac4-inventory" (OuterVolumeSpecName: "inventory") pod "83978973-9bf3-4c9a-9689-d47fd0a7aac4" (UID: "83978973-9bf3-4c9a-9689-d47fd0a7aac4"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 04:24:19 crc kubenswrapper[4827]: I0131 04:24:19.056460 4827 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-c5qpc"] Jan 31 04:24:19 crc kubenswrapper[4827]: I0131 04:24:19.057873 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/83978973-9bf3-4c9a-9689-d47fd0a7aac4-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "83978973-9bf3-4c9a-9689-d47fd0a7aac4" (UID: "83978973-9bf3-4c9a-9689-d47fd0a7aac4"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 04:24:19 crc kubenswrapper[4827]: I0131 04:24:19.108857 4827 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/83978973-9bf3-4c9a-9689-d47fd0a7aac4-inventory\") on node \"crc\" DevicePath \"\"" Jan 31 04:24:19 crc kubenswrapper[4827]: I0131 04:24:19.109017 4827 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lllwq\" (UniqueName: \"kubernetes.io/projected/83978973-9bf3-4c9a-9689-d47fd0a7aac4-kube-api-access-lllwq\") on node \"crc\" DevicePath \"\"" Jan 31 04:24:19 crc kubenswrapper[4827]: I0131 04:24:19.109085 4827 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/83978973-9bf3-4c9a-9689-d47fd0a7aac4-ceph\") on node \"crc\" DevicePath \"\"" Jan 31 04:24:19 crc kubenswrapper[4827]: I0131 04:24:19.109136 4827 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/83978973-9bf3-4c9a-9689-d47fd0a7aac4-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 31 04:24:19 crc kubenswrapper[4827]: I0131 04:24:19.472997 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-x9cnd" event={"ID":"83978973-9bf3-4c9a-9689-d47fd0a7aac4","Type":"ContainerDied","Data":"f849dbc538ac5be053bb40a4ceb3e4dd38dbe74ad9be941b1914c0b7c52b71d6"} Jan 31 04:24:19 crc kubenswrapper[4827]: I0131 04:24:19.473354 4827 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f849dbc538ac5be053bb40a4ceb3e4dd38dbe74ad9be941b1914c0b7c52b71d6" Jan 31 04:24:19 crc kubenswrapper[4827]: I0131 04:24:19.473021 4827 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-x9cnd" Jan 31 04:24:19 crc kubenswrapper[4827]: I0131 04:24:19.646384 4827 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-vnjj9"] Jan 31 04:24:19 crc kubenswrapper[4827]: E0131 04:24:19.646771 4827 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="83978973-9bf3-4c9a-9689-d47fd0a7aac4" containerName="install-os-edpm-deployment-openstack-edpm-ipam" Jan 31 04:24:19 crc kubenswrapper[4827]: I0131 04:24:19.646789 4827 state_mem.go:107] "Deleted CPUSet assignment" podUID="83978973-9bf3-4c9a-9689-d47fd0a7aac4" containerName="install-os-edpm-deployment-openstack-edpm-ipam" Jan 31 04:24:19 crc kubenswrapper[4827]: I0131 04:24:19.646975 4827 memory_manager.go:354] "RemoveStaleState removing state" podUID="83978973-9bf3-4c9a-9689-d47fd0a7aac4" containerName="install-os-edpm-deployment-openstack-edpm-ipam" Jan 31 04:24:19 crc kubenswrapper[4827]: I0131 04:24:19.647591 4827 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-vnjj9" Jan 31 04:24:19 crc kubenswrapper[4827]: I0131 04:24:19.651712 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 31 04:24:19 crc kubenswrapper[4827]: I0131 04:24:19.651782 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 31 04:24:19 crc kubenswrapper[4827]: I0131 04:24:19.651843 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-dklwg" Jan 31 04:24:19 crc kubenswrapper[4827]: I0131 04:24:19.652079 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceph-conf-files" Jan 31 04:24:19 crc kubenswrapper[4827]: I0131 04:24:19.652442 4827 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 31 04:24:19 crc kubenswrapper[4827]: I0131 04:24:19.659280 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-vnjj9"] Jan 31 04:24:19 crc kubenswrapper[4827]: I0131 04:24:19.721043 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rkr58\" (UniqueName: \"kubernetes.io/projected/d1104797-b1ab-4987-9d02-b19197f94eb5-kube-api-access-rkr58\") pod \"ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-vnjj9\" (UID: \"d1104797-b1ab-4987-9d02-b19197f94eb5\") " pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-vnjj9" Jan 31 04:24:19 crc kubenswrapper[4827]: I0131 04:24:19.721536 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/d1104797-b1ab-4987-9d02-b19197f94eb5-ceph\") pod \"ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-vnjj9\" (UID: 
\"d1104797-b1ab-4987-9d02-b19197f94eb5\") " pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-vnjj9" Jan 31 04:24:19 crc kubenswrapper[4827]: I0131 04:24:19.721648 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/d1104797-b1ab-4987-9d02-b19197f94eb5-inventory\") pod \"ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-vnjj9\" (UID: \"d1104797-b1ab-4987-9d02-b19197f94eb5\") " pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-vnjj9" Jan 31 04:24:19 crc kubenswrapper[4827]: I0131 04:24:19.721749 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/d1104797-b1ab-4987-9d02-b19197f94eb5-ssh-key-openstack-edpm-ipam\") pod \"ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-vnjj9\" (UID: \"d1104797-b1ab-4987-9d02-b19197f94eb5\") " pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-vnjj9" Jan 31 04:24:19 crc kubenswrapper[4827]: I0131 04:24:19.823843 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rkr58\" (UniqueName: \"kubernetes.io/projected/d1104797-b1ab-4987-9d02-b19197f94eb5-kube-api-access-rkr58\") pod \"ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-vnjj9\" (UID: \"d1104797-b1ab-4987-9d02-b19197f94eb5\") " pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-vnjj9" Jan 31 04:24:19 crc kubenswrapper[4827]: I0131 04:24:19.824012 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/d1104797-b1ab-4987-9d02-b19197f94eb5-ceph\") pod \"ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-vnjj9\" (UID: \"d1104797-b1ab-4987-9d02-b19197f94eb5\") " pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-vnjj9" Jan 31 04:24:19 crc kubenswrapper[4827]: I0131 04:24:19.824063 4827 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/d1104797-b1ab-4987-9d02-b19197f94eb5-inventory\") pod \"ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-vnjj9\" (UID: \"d1104797-b1ab-4987-9d02-b19197f94eb5\") " pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-vnjj9" Jan 31 04:24:19 crc kubenswrapper[4827]: I0131 04:24:19.824106 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/d1104797-b1ab-4987-9d02-b19197f94eb5-ssh-key-openstack-edpm-ipam\") pod \"ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-vnjj9\" (UID: \"d1104797-b1ab-4987-9d02-b19197f94eb5\") " pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-vnjj9" Jan 31 04:24:19 crc kubenswrapper[4827]: I0131 04:24:19.829468 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/d1104797-b1ab-4987-9d02-b19197f94eb5-ceph\") pod \"ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-vnjj9\" (UID: \"d1104797-b1ab-4987-9d02-b19197f94eb5\") " pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-vnjj9" Jan 31 04:24:19 crc kubenswrapper[4827]: I0131 04:24:19.829494 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/d1104797-b1ab-4987-9d02-b19197f94eb5-ssh-key-openstack-edpm-ipam\") pod \"ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-vnjj9\" (UID: \"d1104797-b1ab-4987-9d02-b19197f94eb5\") " pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-vnjj9" Jan 31 04:24:19 crc kubenswrapper[4827]: I0131 04:24:19.830425 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/d1104797-b1ab-4987-9d02-b19197f94eb5-inventory\") pod \"ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-vnjj9\" (UID: 
\"d1104797-b1ab-4987-9d02-b19197f94eb5\") " pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-vnjj9" Jan 31 04:24:19 crc kubenswrapper[4827]: I0131 04:24:19.847098 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rkr58\" (UniqueName: \"kubernetes.io/projected/d1104797-b1ab-4987-9d02-b19197f94eb5-kube-api-access-rkr58\") pod \"ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-vnjj9\" (UID: \"d1104797-b1ab-4987-9d02-b19197f94eb5\") " pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-vnjj9" Jan 31 04:24:20 crc kubenswrapper[4827]: I0131 04:24:20.000331 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-vnjj9" Jan 31 04:24:20 crc kubenswrapper[4827]: I0131 04:24:20.478577 4827 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-c5qpc" podUID="c4d4928c-5570-43f5-9ba3-6a993570c318" containerName="registry-server" containerID="cri-o://e4cc522244225f6624c4c52e6b2ce52fda15695081ecb623e5a2a1e2df3e5086" gracePeriod=2 Jan 31 04:24:20 crc kubenswrapper[4827]: I0131 04:24:20.546117 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-vnjj9"] Jan 31 04:24:20 crc kubenswrapper[4827]: I0131 04:24:20.911567 4827 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-c5qpc" Jan 31 04:24:20 crc kubenswrapper[4827]: I0131 04:24:20.946765 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c4d4928c-5570-43f5-9ba3-6a993570c318-utilities\") pod \"c4d4928c-5570-43f5-9ba3-6a993570c318\" (UID: \"c4d4928c-5570-43f5-9ba3-6a993570c318\") " Jan 31 04:24:20 crc kubenswrapper[4827]: I0131 04:24:20.947011 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w2whj\" (UniqueName: \"kubernetes.io/projected/c4d4928c-5570-43f5-9ba3-6a993570c318-kube-api-access-w2whj\") pod \"c4d4928c-5570-43f5-9ba3-6a993570c318\" (UID: \"c4d4928c-5570-43f5-9ba3-6a993570c318\") " Jan 31 04:24:20 crc kubenswrapper[4827]: I0131 04:24:20.947067 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c4d4928c-5570-43f5-9ba3-6a993570c318-catalog-content\") pod \"c4d4928c-5570-43f5-9ba3-6a993570c318\" (UID: \"c4d4928c-5570-43f5-9ba3-6a993570c318\") " Jan 31 04:24:20 crc kubenswrapper[4827]: I0131 04:24:20.947406 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c4d4928c-5570-43f5-9ba3-6a993570c318-utilities" (OuterVolumeSpecName: "utilities") pod "c4d4928c-5570-43f5-9ba3-6a993570c318" (UID: "c4d4928c-5570-43f5-9ba3-6a993570c318"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 04:24:20 crc kubenswrapper[4827]: I0131 04:24:20.947673 4827 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c4d4928c-5570-43f5-9ba3-6a993570c318-utilities\") on node \"crc\" DevicePath \"\"" Jan 31 04:24:20 crc kubenswrapper[4827]: I0131 04:24:20.958104 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c4d4928c-5570-43f5-9ba3-6a993570c318-kube-api-access-w2whj" (OuterVolumeSpecName: "kube-api-access-w2whj") pod "c4d4928c-5570-43f5-9ba3-6a993570c318" (UID: "c4d4928c-5570-43f5-9ba3-6a993570c318"). InnerVolumeSpecName "kube-api-access-w2whj". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 04:24:21 crc kubenswrapper[4827]: I0131 04:24:21.006230 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c4d4928c-5570-43f5-9ba3-6a993570c318-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "c4d4928c-5570-43f5-9ba3-6a993570c318" (UID: "c4d4928c-5570-43f5-9ba3-6a993570c318"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 04:24:21 crc kubenswrapper[4827]: I0131 04:24:21.049818 4827 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w2whj\" (UniqueName: \"kubernetes.io/projected/c4d4928c-5570-43f5-9ba3-6a993570c318-kube-api-access-w2whj\") on node \"crc\" DevicePath \"\"" Jan 31 04:24:21 crc kubenswrapper[4827]: I0131 04:24:21.049852 4827 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c4d4928c-5570-43f5-9ba3-6a993570c318-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 31 04:24:21 crc kubenswrapper[4827]: I0131 04:24:21.486809 4827 generic.go:334] "Generic (PLEG): container finished" podID="c4d4928c-5570-43f5-9ba3-6a993570c318" containerID="e4cc522244225f6624c4c52e6b2ce52fda15695081ecb623e5a2a1e2df3e5086" exitCode=0 Jan 31 04:24:21 crc kubenswrapper[4827]: I0131 04:24:21.486848 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-c5qpc" event={"ID":"c4d4928c-5570-43f5-9ba3-6a993570c318","Type":"ContainerDied","Data":"e4cc522244225f6624c4c52e6b2ce52fda15695081ecb623e5a2a1e2df3e5086"} Jan 31 04:24:21 crc kubenswrapper[4827]: I0131 04:24:21.487286 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-c5qpc" event={"ID":"c4d4928c-5570-43f5-9ba3-6a993570c318","Type":"ContainerDied","Data":"3343d869fec6694cf278ead0d396952f319b49c6f9127cd5129ad02796be7f9e"} Jan 31 04:24:21 crc kubenswrapper[4827]: I0131 04:24:21.487305 4827 scope.go:117] "RemoveContainer" containerID="e4cc522244225f6624c4c52e6b2ce52fda15695081ecb623e5a2a1e2df3e5086" Jan 31 04:24:21 crc kubenswrapper[4827]: I0131 04:24:21.486891 4827 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-c5qpc" Jan 31 04:24:21 crc kubenswrapper[4827]: I0131 04:24:21.488755 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-vnjj9" event={"ID":"d1104797-b1ab-4987-9d02-b19197f94eb5","Type":"ContainerStarted","Data":"c2048cb86a3d938b3f60be4b8b3f76d606d15c9fb4810a259bbbeee3ace9c803"} Jan 31 04:24:21 crc kubenswrapper[4827]: I0131 04:24:21.488809 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-vnjj9" event={"ID":"d1104797-b1ab-4987-9d02-b19197f94eb5","Type":"ContainerStarted","Data":"6846a9686d804fec4d775cd80f17f528d949c50caa64404e4703a36ebc9e116e"} Jan 31 04:24:21 crc kubenswrapper[4827]: I0131 04:24:21.507895 4827 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-vnjj9" podStartSLOduration=2.071942852 podStartE2EDuration="2.507852602s" podCreationTimestamp="2026-01-31 04:24:19 +0000 UTC" firstStartedPulling="2026-01-31 04:24:20.600984111 +0000 UTC m=+2253.288064560" lastFinishedPulling="2026-01-31 04:24:21.036893861 +0000 UTC m=+2253.723974310" observedRunningTime="2026-01-31 04:24:21.50526016 +0000 UTC m=+2254.192340629" watchObservedRunningTime="2026-01-31 04:24:21.507852602 +0000 UTC m=+2254.194933051" Jan 31 04:24:21 crc kubenswrapper[4827]: I0131 04:24:21.511533 4827 scope.go:117] "RemoveContainer" containerID="7b81b2ce04245745fbbfebdbf91be054cf12d3b3b7bfcc03d779b55c3c2c32b6" Jan 31 04:24:21 crc kubenswrapper[4827]: I0131 04:24:21.532245 4827 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-c5qpc"] Jan 31 04:24:21 crc kubenswrapper[4827]: I0131 04:24:21.548660 4827 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-c5qpc"] Jan 31 04:24:21 crc kubenswrapper[4827]: I0131 04:24:21.553482 
4827 scope.go:117] "RemoveContainer" containerID="f1aa7566f847353d46f50bfc9733dbcf2d530b17764611ec4c19c925fc1e9fe8" Jan 31 04:24:21 crc kubenswrapper[4827]: I0131 04:24:21.574382 4827 scope.go:117] "RemoveContainer" containerID="e4cc522244225f6624c4c52e6b2ce52fda15695081ecb623e5a2a1e2df3e5086" Jan 31 04:24:21 crc kubenswrapper[4827]: E0131 04:24:21.574834 4827 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e4cc522244225f6624c4c52e6b2ce52fda15695081ecb623e5a2a1e2df3e5086\": container with ID starting with e4cc522244225f6624c4c52e6b2ce52fda15695081ecb623e5a2a1e2df3e5086 not found: ID does not exist" containerID="e4cc522244225f6624c4c52e6b2ce52fda15695081ecb623e5a2a1e2df3e5086" Jan 31 04:24:21 crc kubenswrapper[4827]: I0131 04:24:21.574896 4827 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e4cc522244225f6624c4c52e6b2ce52fda15695081ecb623e5a2a1e2df3e5086"} err="failed to get container status \"e4cc522244225f6624c4c52e6b2ce52fda15695081ecb623e5a2a1e2df3e5086\": rpc error: code = NotFound desc = could not find container \"e4cc522244225f6624c4c52e6b2ce52fda15695081ecb623e5a2a1e2df3e5086\": container with ID starting with e4cc522244225f6624c4c52e6b2ce52fda15695081ecb623e5a2a1e2df3e5086 not found: ID does not exist" Jan 31 04:24:21 crc kubenswrapper[4827]: I0131 04:24:21.574925 4827 scope.go:117] "RemoveContainer" containerID="7b81b2ce04245745fbbfebdbf91be054cf12d3b3b7bfcc03d779b55c3c2c32b6" Jan 31 04:24:21 crc kubenswrapper[4827]: E0131 04:24:21.575204 4827 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7b81b2ce04245745fbbfebdbf91be054cf12d3b3b7bfcc03d779b55c3c2c32b6\": container with ID starting with 7b81b2ce04245745fbbfebdbf91be054cf12d3b3b7bfcc03d779b55c3c2c32b6 not found: ID does not exist" containerID="7b81b2ce04245745fbbfebdbf91be054cf12d3b3b7bfcc03d779b55c3c2c32b6" 
Jan 31 04:24:21 crc kubenswrapper[4827]: I0131 04:24:21.575236 4827 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7b81b2ce04245745fbbfebdbf91be054cf12d3b3b7bfcc03d779b55c3c2c32b6"} err="failed to get container status \"7b81b2ce04245745fbbfebdbf91be054cf12d3b3b7bfcc03d779b55c3c2c32b6\": rpc error: code = NotFound desc = could not find container \"7b81b2ce04245745fbbfebdbf91be054cf12d3b3b7bfcc03d779b55c3c2c32b6\": container with ID starting with 7b81b2ce04245745fbbfebdbf91be054cf12d3b3b7bfcc03d779b55c3c2c32b6 not found: ID does not exist" Jan 31 04:24:21 crc kubenswrapper[4827]: I0131 04:24:21.575255 4827 scope.go:117] "RemoveContainer" containerID="f1aa7566f847353d46f50bfc9733dbcf2d530b17764611ec4c19c925fc1e9fe8" Jan 31 04:24:21 crc kubenswrapper[4827]: E0131 04:24:21.575501 4827 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f1aa7566f847353d46f50bfc9733dbcf2d530b17764611ec4c19c925fc1e9fe8\": container with ID starting with f1aa7566f847353d46f50bfc9733dbcf2d530b17764611ec4c19c925fc1e9fe8 not found: ID does not exist" containerID="f1aa7566f847353d46f50bfc9733dbcf2d530b17764611ec4c19c925fc1e9fe8" Jan 31 04:24:21 crc kubenswrapper[4827]: I0131 04:24:21.575527 4827 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f1aa7566f847353d46f50bfc9733dbcf2d530b17764611ec4c19c925fc1e9fe8"} err="failed to get container status \"f1aa7566f847353d46f50bfc9733dbcf2d530b17764611ec4c19c925fc1e9fe8\": rpc error: code = NotFound desc = could not find container \"f1aa7566f847353d46f50bfc9733dbcf2d530b17764611ec4c19c925fc1e9fe8\": container with ID starting with f1aa7566f847353d46f50bfc9733dbcf2d530b17764611ec4c19c925fc1e9fe8 not found: ID does not exist" Jan 31 04:24:21 crc kubenswrapper[4827]: I0131 04:24:21.608030 4827 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openshift-marketplace/redhat-marketplace-t9hqh" Jan 31 04:24:21 crc kubenswrapper[4827]: I0131 04:24:21.608097 4827 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-t9hqh" Jan 31 04:24:21 crc kubenswrapper[4827]: I0131 04:24:21.673411 4827 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-t9hqh" Jan 31 04:24:22 crc kubenswrapper[4827]: I0131 04:24:22.118426 4827 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c4d4928c-5570-43f5-9ba3-6a993570c318" path="/var/lib/kubelet/pods/c4d4928c-5570-43f5-9ba3-6a993570c318/volumes" Jan 31 04:24:22 crc kubenswrapper[4827]: I0131 04:24:22.566954 4827 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-t9hqh" Jan 31 04:24:24 crc kubenswrapper[4827]: I0131 04:24:24.056680 4827 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-t9hqh"] Jan 31 04:24:24 crc kubenswrapper[4827]: I0131 04:24:24.529745 4827 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-t9hqh" podUID="e0085657-a80e-4434-a05f-635b8c5c6317" containerName="registry-server" containerID="cri-o://49b3fc034fb35ed7db889fdd5bdedc87d21a62e0310e820406a9497c0b0b2545" gracePeriod=2 Jan 31 04:24:24 crc kubenswrapper[4827]: I0131 04:24:24.990676 4827 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-t9hqh" Jan 31 04:24:25 crc kubenswrapper[4827]: I0131 04:24:25.019204 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lhj68\" (UniqueName: \"kubernetes.io/projected/e0085657-a80e-4434-a05f-635b8c5c6317-kube-api-access-lhj68\") pod \"e0085657-a80e-4434-a05f-635b8c5c6317\" (UID: \"e0085657-a80e-4434-a05f-635b8c5c6317\") " Jan 31 04:24:25 crc kubenswrapper[4827]: I0131 04:24:25.019378 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e0085657-a80e-4434-a05f-635b8c5c6317-catalog-content\") pod \"e0085657-a80e-4434-a05f-635b8c5c6317\" (UID: \"e0085657-a80e-4434-a05f-635b8c5c6317\") " Jan 31 04:24:25 crc kubenswrapper[4827]: I0131 04:24:25.019452 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e0085657-a80e-4434-a05f-635b8c5c6317-utilities\") pod \"e0085657-a80e-4434-a05f-635b8c5c6317\" (UID: \"e0085657-a80e-4434-a05f-635b8c5c6317\") " Jan 31 04:24:25 crc kubenswrapper[4827]: I0131 04:24:25.020317 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e0085657-a80e-4434-a05f-635b8c5c6317-utilities" (OuterVolumeSpecName: "utilities") pod "e0085657-a80e-4434-a05f-635b8c5c6317" (UID: "e0085657-a80e-4434-a05f-635b8c5c6317"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 04:24:25 crc kubenswrapper[4827]: I0131 04:24:25.026089 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e0085657-a80e-4434-a05f-635b8c5c6317-kube-api-access-lhj68" (OuterVolumeSpecName: "kube-api-access-lhj68") pod "e0085657-a80e-4434-a05f-635b8c5c6317" (UID: "e0085657-a80e-4434-a05f-635b8c5c6317"). InnerVolumeSpecName "kube-api-access-lhj68". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 04:24:25 crc kubenswrapper[4827]: I0131 04:24:25.066369 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e0085657-a80e-4434-a05f-635b8c5c6317-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "e0085657-a80e-4434-a05f-635b8c5c6317" (UID: "e0085657-a80e-4434-a05f-635b8c5c6317"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 04:24:25 crc kubenswrapper[4827]: I0131 04:24:25.122301 4827 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e0085657-a80e-4434-a05f-635b8c5c6317-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 31 04:24:25 crc kubenswrapper[4827]: I0131 04:24:25.122624 4827 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e0085657-a80e-4434-a05f-635b8c5c6317-utilities\") on node \"crc\" DevicePath \"\"" Jan 31 04:24:25 crc kubenswrapper[4827]: I0131 04:24:25.122645 4827 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lhj68\" (UniqueName: \"kubernetes.io/projected/e0085657-a80e-4434-a05f-635b8c5c6317-kube-api-access-lhj68\") on node \"crc\" DevicePath \"\"" Jan 31 04:24:25 crc kubenswrapper[4827]: I0131 04:24:25.549451 4827 generic.go:334] "Generic (PLEG): container finished" podID="e0085657-a80e-4434-a05f-635b8c5c6317" containerID="49b3fc034fb35ed7db889fdd5bdedc87d21a62e0310e820406a9497c0b0b2545" exitCode=0 Jan 31 04:24:25 crc kubenswrapper[4827]: I0131 04:24:25.549609 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-t9hqh" event={"ID":"e0085657-a80e-4434-a05f-635b8c5c6317","Type":"ContainerDied","Data":"49b3fc034fb35ed7db889fdd5bdedc87d21a62e0310e820406a9497c0b0b2545"} Jan 31 04:24:25 crc kubenswrapper[4827]: I0131 04:24:25.549657 4827 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openshift-marketplace/redhat-marketplace-t9hqh" event={"ID":"e0085657-a80e-4434-a05f-635b8c5c6317","Type":"ContainerDied","Data":"e0f43c783f838efc038f130ec028a95fb45783427e051a17d668f4148dff8dbe"} Jan 31 04:24:25 crc kubenswrapper[4827]: I0131 04:24:25.549663 4827 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-t9hqh" Jan 31 04:24:25 crc kubenswrapper[4827]: I0131 04:24:25.549705 4827 scope.go:117] "RemoveContainer" containerID="49b3fc034fb35ed7db889fdd5bdedc87d21a62e0310e820406a9497c0b0b2545" Jan 31 04:24:25 crc kubenswrapper[4827]: I0131 04:24:25.554911 4827 generic.go:334] "Generic (PLEG): container finished" podID="d1104797-b1ab-4987-9d02-b19197f94eb5" containerID="c2048cb86a3d938b3f60be4b8b3f76d606d15c9fb4810a259bbbeee3ace9c803" exitCode=0 Jan 31 04:24:25 crc kubenswrapper[4827]: I0131 04:24:25.554965 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-vnjj9" event={"ID":"d1104797-b1ab-4987-9d02-b19197f94eb5","Type":"ContainerDied","Data":"c2048cb86a3d938b3f60be4b8b3f76d606d15c9fb4810a259bbbeee3ace9c803"} Jan 31 04:24:25 crc kubenswrapper[4827]: I0131 04:24:25.584733 4827 scope.go:117] "RemoveContainer" containerID="a4d32238172f56f5a319463c24156f8e4967d2d8cc0dff6e5122b1cc0d87a32d" Jan 31 04:24:25 crc kubenswrapper[4827]: I0131 04:24:25.618202 4827 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-t9hqh"] Jan 31 04:24:25 crc kubenswrapper[4827]: I0131 04:24:25.631247 4827 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-t9hqh"] Jan 31 04:24:25 crc kubenswrapper[4827]: I0131 04:24:25.636793 4827 scope.go:117] "RemoveContainer" containerID="7395e5c61cd92ad1ba1acca8c3dd8e508a7e3408b09d4217b6713cd1dab4a8bd" Jan 31 04:24:25 crc kubenswrapper[4827]: I0131 04:24:25.678793 4827 scope.go:117] "RemoveContainer" 
containerID="49b3fc034fb35ed7db889fdd5bdedc87d21a62e0310e820406a9497c0b0b2545" Jan 31 04:24:25 crc kubenswrapper[4827]: E0131 04:24:25.679527 4827 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"49b3fc034fb35ed7db889fdd5bdedc87d21a62e0310e820406a9497c0b0b2545\": container with ID starting with 49b3fc034fb35ed7db889fdd5bdedc87d21a62e0310e820406a9497c0b0b2545 not found: ID does not exist" containerID="49b3fc034fb35ed7db889fdd5bdedc87d21a62e0310e820406a9497c0b0b2545" Jan 31 04:24:25 crc kubenswrapper[4827]: I0131 04:24:25.679598 4827 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"49b3fc034fb35ed7db889fdd5bdedc87d21a62e0310e820406a9497c0b0b2545"} err="failed to get container status \"49b3fc034fb35ed7db889fdd5bdedc87d21a62e0310e820406a9497c0b0b2545\": rpc error: code = NotFound desc = could not find container \"49b3fc034fb35ed7db889fdd5bdedc87d21a62e0310e820406a9497c0b0b2545\": container with ID starting with 49b3fc034fb35ed7db889fdd5bdedc87d21a62e0310e820406a9497c0b0b2545 not found: ID does not exist" Jan 31 04:24:25 crc kubenswrapper[4827]: I0131 04:24:25.679641 4827 scope.go:117] "RemoveContainer" containerID="a4d32238172f56f5a319463c24156f8e4967d2d8cc0dff6e5122b1cc0d87a32d" Jan 31 04:24:25 crc kubenswrapper[4827]: E0131 04:24:25.680307 4827 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a4d32238172f56f5a319463c24156f8e4967d2d8cc0dff6e5122b1cc0d87a32d\": container with ID starting with a4d32238172f56f5a319463c24156f8e4967d2d8cc0dff6e5122b1cc0d87a32d not found: ID does not exist" containerID="a4d32238172f56f5a319463c24156f8e4967d2d8cc0dff6e5122b1cc0d87a32d" Jan 31 04:24:25 crc kubenswrapper[4827]: I0131 04:24:25.680350 4827 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"a4d32238172f56f5a319463c24156f8e4967d2d8cc0dff6e5122b1cc0d87a32d"} err="failed to get container status \"a4d32238172f56f5a319463c24156f8e4967d2d8cc0dff6e5122b1cc0d87a32d\": rpc error: code = NotFound desc = could not find container \"a4d32238172f56f5a319463c24156f8e4967d2d8cc0dff6e5122b1cc0d87a32d\": container with ID starting with a4d32238172f56f5a319463c24156f8e4967d2d8cc0dff6e5122b1cc0d87a32d not found: ID does not exist" Jan 31 04:24:25 crc kubenswrapper[4827]: I0131 04:24:25.680378 4827 scope.go:117] "RemoveContainer" containerID="7395e5c61cd92ad1ba1acca8c3dd8e508a7e3408b09d4217b6713cd1dab4a8bd" Jan 31 04:24:25 crc kubenswrapper[4827]: E0131 04:24:25.680802 4827 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7395e5c61cd92ad1ba1acca8c3dd8e508a7e3408b09d4217b6713cd1dab4a8bd\": container with ID starting with 7395e5c61cd92ad1ba1acca8c3dd8e508a7e3408b09d4217b6713cd1dab4a8bd not found: ID does not exist" containerID="7395e5c61cd92ad1ba1acca8c3dd8e508a7e3408b09d4217b6713cd1dab4a8bd" Jan 31 04:24:25 crc kubenswrapper[4827]: I0131 04:24:25.680826 4827 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7395e5c61cd92ad1ba1acca8c3dd8e508a7e3408b09d4217b6713cd1dab4a8bd"} err="failed to get container status \"7395e5c61cd92ad1ba1acca8c3dd8e508a7e3408b09d4217b6713cd1dab4a8bd\": rpc error: code = NotFound desc = could not find container \"7395e5c61cd92ad1ba1acca8c3dd8e508a7e3408b09d4217b6713cd1dab4a8bd\": container with ID starting with 7395e5c61cd92ad1ba1acca8c3dd8e508a7e3408b09d4217b6713cd1dab4a8bd not found: ID does not exist" Jan 31 04:24:26 crc kubenswrapper[4827]: I0131 04:24:26.123437 4827 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e0085657-a80e-4434-a05f-635b8c5c6317" path="/var/lib/kubelet/pods/e0085657-a80e-4434-a05f-635b8c5c6317/volumes" Jan 31 04:24:27 crc kubenswrapper[4827]: I0131 
04:24:27.043683 4827 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-vnjj9" Jan 31 04:24:27 crc kubenswrapper[4827]: I0131 04:24:27.061964 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/d1104797-b1ab-4987-9d02-b19197f94eb5-ssh-key-openstack-edpm-ipam\") pod \"d1104797-b1ab-4987-9d02-b19197f94eb5\" (UID: \"d1104797-b1ab-4987-9d02-b19197f94eb5\") " Jan 31 04:24:27 crc kubenswrapper[4827]: I0131 04:24:27.062022 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rkr58\" (UniqueName: \"kubernetes.io/projected/d1104797-b1ab-4987-9d02-b19197f94eb5-kube-api-access-rkr58\") pod \"d1104797-b1ab-4987-9d02-b19197f94eb5\" (UID: \"d1104797-b1ab-4987-9d02-b19197f94eb5\") " Jan 31 04:24:27 crc kubenswrapper[4827]: I0131 04:24:27.062207 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/d1104797-b1ab-4987-9d02-b19197f94eb5-ceph\") pod \"d1104797-b1ab-4987-9d02-b19197f94eb5\" (UID: \"d1104797-b1ab-4987-9d02-b19197f94eb5\") " Jan 31 04:24:27 crc kubenswrapper[4827]: I0131 04:24:27.062292 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/d1104797-b1ab-4987-9d02-b19197f94eb5-inventory\") pod \"d1104797-b1ab-4987-9d02-b19197f94eb5\" (UID: \"d1104797-b1ab-4987-9d02-b19197f94eb5\") " Jan 31 04:24:27 crc kubenswrapper[4827]: I0131 04:24:27.126985 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d1104797-b1ab-4987-9d02-b19197f94eb5-kube-api-access-rkr58" (OuterVolumeSpecName: "kube-api-access-rkr58") pod "d1104797-b1ab-4987-9d02-b19197f94eb5" (UID: "d1104797-b1ab-4987-9d02-b19197f94eb5"). InnerVolumeSpecName "kube-api-access-rkr58". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 04:24:27 crc kubenswrapper[4827]: I0131 04:24:27.128477 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d1104797-b1ab-4987-9d02-b19197f94eb5-ceph" (OuterVolumeSpecName: "ceph") pod "d1104797-b1ab-4987-9d02-b19197f94eb5" (UID: "d1104797-b1ab-4987-9d02-b19197f94eb5"). InnerVolumeSpecName "ceph". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 04:24:27 crc kubenswrapper[4827]: I0131 04:24:27.131959 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d1104797-b1ab-4987-9d02-b19197f94eb5-inventory" (OuterVolumeSpecName: "inventory") pod "d1104797-b1ab-4987-9d02-b19197f94eb5" (UID: "d1104797-b1ab-4987-9d02-b19197f94eb5"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 04:24:27 crc kubenswrapper[4827]: I0131 04:24:27.134278 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d1104797-b1ab-4987-9d02-b19197f94eb5-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "d1104797-b1ab-4987-9d02-b19197f94eb5" (UID: "d1104797-b1ab-4987-9d02-b19197f94eb5"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 04:24:27 crc kubenswrapper[4827]: I0131 04:24:27.164843 4827 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/d1104797-b1ab-4987-9d02-b19197f94eb5-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 31 04:24:27 crc kubenswrapper[4827]: I0131 04:24:27.164873 4827 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rkr58\" (UniqueName: \"kubernetes.io/projected/d1104797-b1ab-4987-9d02-b19197f94eb5-kube-api-access-rkr58\") on node \"crc\" DevicePath \"\"" Jan 31 04:24:27 crc kubenswrapper[4827]: I0131 04:24:27.164900 4827 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/d1104797-b1ab-4987-9d02-b19197f94eb5-ceph\") on node \"crc\" DevicePath \"\"" Jan 31 04:24:27 crc kubenswrapper[4827]: I0131 04:24:27.164911 4827 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/d1104797-b1ab-4987-9d02-b19197f94eb5-inventory\") on node \"crc\" DevicePath \"\"" Jan 31 04:24:27 crc kubenswrapper[4827]: I0131 04:24:27.580850 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-vnjj9" event={"ID":"d1104797-b1ab-4987-9d02-b19197f94eb5","Type":"ContainerDied","Data":"6846a9686d804fec4d775cd80f17f528d949c50caa64404e4703a36ebc9e116e"} Jan 31 04:24:27 crc kubenswrapper[4827]: I0131 04:24:27.581248 4827 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6846a9686d804fec4d775cd80f17f528d949c50caa64404e4703a36ebc9e116e" Jan 31 04:24:27 crc kubenswrapper[4827]: I0131 04:24:27.580905 4827 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-vnjj9" Jan 31 04:24:27 crc kubenswrapper[4827]: I0131 04:24:27.678814 4827 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-lnmq8"] Jan 31 04:24:27 crc kubenswrapper[4827]: E0131 04:24:27.679214 4827 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c4d4928c-5570-43f5-9ba3-6a993570c318" containerName="extract-utilities" Jan 31 04:24:27 crc kubenswrapper[4827]: I0131 04:24:27.679233 4827 state_mem.go:107] "Deleted CPUSet assignment" podUID="c4d4928c-5570-43f5-9ba3-6a993570c318" containerName="extract-utilities" Jan 31 04:24:27 crc kubenswrapper[4827]: E0131 04:24:27.679252 4827 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d1104797-b1ab-4987-9d02-b19197f94eb5" containerName="ceph-hci-pre-edpm-deployment-openstack-edpm-ipam" Jan 31 04:24:27 crc kubenswrapper[4827]: I0131 04:24:27.679259 4827 state_mem.go:107] "Deleted CPUSet assignment" podUID="d1104797-b1ab-4987-9d02-b19197f94eb5" containerName="ceph-hci-pre-edpm-deployment-openstack-edpm-ipam" Jan 31 04:24:27 crc kubenswrapper[4827]: E0131 04:24:27.679271 4827 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c4d4928c-5570-43f5-9ba3-6a993570c318" containerName="registry-server" Jan 31 04:24:27 crc kubenswrapper[4827]: I0131 04:24:27.679278 4827 state_mem.go:107] "Deleted CPUSet assignment" podUID="c4d4928c-5570-43f5-9ba3-6a993570c318" containerName="registry-server" Jan 31 04:24:27 crc kubenswrapper[4827]: E0131 04:24:27.679292 4827 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e0085657-a80e-4434-a05f-635b8c5c6317" containerName="extract-content" Jan 31 04:24:27 crc kubenswrapper[4827]: I0131 04:24:27.679301 4827 state_mem.go:107] "Deleted CPUSet assignment" podUID="e0085657-a80e-4434-a05f-635b8c5c6317" containerName="extract-content" Jan 31 04:24:27 crc kubenswrapper[4827]: E0131 
04:24:27.679312 4827 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c4d4928c-5570-43f5-9ba3-6a993570c318" containerName="extract-content" Jan 31 04:24:27 crc kubenswrapper[4827]: I0131 04:24:27.679318 4827 state_mem.go:107] "Deleted CPUSet assignment" podUID="c4d4928c-5570-43f5-9ba3-6a993570c318" containerName="extract-content" Jan 31 04:24:27 crc kubenswrapper[4827]: E0131 04:24:27.679329 4827 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e0085657-a80e-4434-a05f-635b8c5c6317" containerName="extract-utilities" Jan 31 04:24:27 crc kubenswrapper[4827]: I0131 04:24:27.679334 4827 state_mem.go:107] "Deleted CPUSet assignment" podUID="e0085657-a80e-4434-a05f-635b8c5c6317" containerName="extract-utilities" Jan 31 04:24:27 crc kubenswrapper[4827]: E0131 04:24:27.679353 4827 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e0085657-a80e-4434-a05f-635b8c5c6317" containerName="registry-server" Jan 31 04:24:27 crc kubenswrapper[4827]: I0131 04:24:27.679360 4827 state_mem.go:107] "Deleted CPUSet assignment" podUID="e0085657-a80e-4434-a05f-635b8c5c6317" containerName="registry-server" Jan 31 04:24:27 crc kubenswrapper[4827]: I0131 04:24:27.679519 4827 memory_manager.go:354] "RemoveStaleState removing state" podUID="d1104797-b1ab-4987-9d02-b19197f94eb5" containerName="ceph-hci-pre-edpm-deployment-openstack-edpm-ipam" Jan 31 04:24:27 crc kubenswrapper[4827]: I0131 04:24:27.679557 4827 memory_manager.go:354] "RemoveStaleState removing state" podUID="c4d4928c-5570-43f5-9ba3-6a993570c318" containerName="registry-server" Jan 31 04:24:27 crc kubenswrapper[4827]: I0131 04:24:27.679569 4827 memory_manager.go:354] "RemoveStaleState removing state" podUID="e0085657-a80e-4434-a05f-635b8c5c6317" containerName="registry-server" Jan 31 04:24:27 crc kubenswrapper[4827]: I0131 04:24:27.680135 4827 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-lnmq8" Jan 31 04:24:27 crc kubenswrapper[4827]: I0131 04:24:27.683459 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceph-conf-files" Jan 31 04:24:27 crc kubenswrapper[4827]: I0131 04:24:27.683558 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-dklwg" Jan 31 04:24:27 crc kubenswrapper[4827]: I0131 04:24:27.683711 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 31 04:24:27 crc kubenswrapper[4827]: I0131 04:24:27.684764 4827 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 31 04:24:27 crc kubenswrapper[4827]: I0131 04:24:27.687980 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 31 04:24:27 crc kubenswrapper[4827]: I0131 04:24:27.700581 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-lnmq8"] Jan 31 04:24:27 crc kubenswrapper[4827]: I0131 04:24:27.775227 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/5acc6b04-9b48-4dfa-9fbb-9f5520fe7cea-ssh-key-openstack-edpm-ipam\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-lnmq8\" (UID: \"5acc6b04-9b48-4dfa-9fbb-9f5520fe7cea\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-lnmq8" Jan 31 04:24:27 crc kubenswrapper[4827]: I0131 04:24:27.775644 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wt689\" (UniqueName: \"kubernetes.io/projected/5acc6b04-9b48-4dfa-9fbb-9f5520fe7cea-kube-api-access-wt689\") pod 
\"configure-os-edpm-deployment-openstack-edpm-ipam-lnmq8\" (UID: \"5acc6b04-9b48-4dfa-9fbb-9f5520fe7cea\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-lnmq8" Jan 31 04:24:27 crc kubenswrapper[4827]: I0131 04:24:27.775810 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/5acc6b04-9b48-4dfa-9fbb-9f5520fe7cea-ceph\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-lnmq8\" (UID: \"5acc6b04-9b48-4dfa-9fbb-9f5520fe7cea\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-lnmq8" Jan 31 04:24:27 crc kubenswrapper[4827]: I0131 04:24:27.775847 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/5acc6b04-9b48-4dfa-9fbb-9f5520fe7cea-inventory\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-lnmq8\" (UID: \"5acc6b04-9b48-4dfa-9fbb-9f5520fe7cea\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-lnmq8" Jan 31 04:24:27 crc kubenswrapper[4827]: I0131 04:24:27.877380 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wt689\" (UniqueName: \"kubernetes.io/projected/5acc6b04-9b48-4dfa-9fbb-9f5520fe7cea-kube-api-access-wt689\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-lnmq8\" (UID: \"5acc6b04-9b48-4dfa-9fbb-9f5520fe7cea\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-lnmq8" Jan 31 04:24:27 crc kubenswrapper[4827]: I0131 04:24:27.877605 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/5acc6b04-9b48-4dfa-9fbb-9f5520fe7cea-ceph\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-lnmq8\" (UID: \"5acc6b04-9b48-4dfa-9fbb-9f5520fe7cea\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-lnmq8" Jan 31 04:24:27 crc kubenswrapper[4827]: I0131 04:24:27.877663 
4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/5acc6b04-9b48-4dfa-9fbb-9f5520fe7cea-inventory\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-lnmq8\" (UID: \"5acc6b04-9b48-4dfa-9fbb-9f5520fe7cea\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-lnmq8" Jan 31 04:24:27 crc kubenswrapper[4827]: I0131 04:24:27.877716 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/5acc6b04-9b48-4dfa-9fbb-9f5520fe7cea-ssh-key-openstack-edpm-ipam\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-lnmq8\" (UID: \"5acc6b04-9b48-4dfa-9fbb-9f5520fe7cea\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-lnmq8" Jan 31 04:24:27 crc kubenswrapper[4827]: I0131 04:24:27.886654 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/5acc6b04-9b48-4dfa-9fbb-9f5520fe7cea-inventory\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-lnmq8\" (UID: \"5acc6b04-9b48-4dfa-9fbb-9f5520fe7cea\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-lnmq8" Jan 31 04:24:27 crc kubenswrapper[4827]: I0131 04:24:27.887563 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/5acc6b04-9b48-4dfa-9fbb-9f5520fe7cea-ceph\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-lnmq8\" (UID: \"5acc6b04-9b48-4dfa-9fbb-9f5520fe7cea\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-lnmq8" Jan 31 04:24:27 crc kubenswrapper[4827]: I0131 04:24:27.891537 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/5acc6b04-9b48-4dfa-9fbb-9f5520fe7cea-ssh-key-openstack-edpm-ipam\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-lnmq8\" (UID: 
\"5acc6b04-9b48-4dfa-9fbb-9f5520fe7cea\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-lnmq8" Jan 31 04:24:27 crc kubenswrapper[4827]: I0131 04:24:27.896387 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wt689\" (UniqueName: \"kubernetes.io/projected/5acc6b04-9b48-4dfa-9fbb-9f5520fe7cea-kube-api-access-wt689\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-lnmq8\" (UID: \"5acc6b04-9b48-4dfa-9fbb-9f5520fe7cea\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-lnmq8" Jan 31 04:24:28 crc kubenswrapper[4827]: I0131 04:24:28.004454 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-lnmq8" Jan 31 04:24:28 crc kubenswrapper[4827]: I0131 04:24:28.533803 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-lnmq8"] Jan 31 04:24:28 crc kubenswrapper[4827]: I0131 04:24:28.606194 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-lnmq8" event={"ID":"5acc6b04-9b48-4dfa-9fbb-9f5520fe7cea","Type":"ContainerStarted","Data":"092b69f3231286459e9e49779ab393573f5b63083c01c71ff51a96ff67a2245b"} Jan 31 04:24:29 crc kubenswrapper[4827]: I0131 04:24:29.620127 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-lnmq8" event={"ID":"5acc6b04-9b48-4dfa-9fbb-9f5520fe7cea","Type":"ContainerStarted","Data":"7f3bebcef9ac85ab81fbd41f07bb1fc44ab9b93147f7d90b024c92e56ca2f708"} Jan 31 04:24:29 crc kubenswrapper[4827]: I0131 04:24:29.662505 4827 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-lnmq8" podStartSLOduration=2.288193626 podStartE2EDuration="2.662477817s" podCreationTimestamp="2026-01-31 04:24:27 +0000 UTC" 
firstStartedPulling="2026-01-31 04:24:28.544577238 +0000 UTC m=+2261.231657717" lastFinishedPulling="2026-01-31 04:24:28.918861429 +0000 UTC m=+2261.605941908" observedRunningTime="2026-01-31 04:24:29.644935511 +0000 UTC m=+2262.332016040" watchObservedRunningTime="2026-01-31 04:24:29.662477817 +0000 UTC m=+2262.349558306" Jan 31 04:24:33 crc kubenswrapper[4827]: I0131 04:24:33.110371 4827 scope.go:117] "RemoveContainer" containerID="86202cb505a10ccb4abc6da10b9dbc02e9e23b1f36d68806a25ab7c0fcf67d39" Jan 31 04:24:33 crc kubenswrapper[4827]: E0131 04:24:33.111222 4827 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jxh94_openshift-machine-config-operator(e63dbb73-e1a2-4796-83c5-2a88e55566b5)\"" pod="openshift-machine-config-operator/machine-config-daemon-jxh94" podUID="e63dbb73-e1a2-4796-83c5-2a88e55566b5" Jan 31 04:24:47 crc kubenswrapper[4827]: I0131 04:24:47.109903 4827 scope.go:117] "RemoveContainer" containerID="86202cb505a10ccb4abc6da10b9dbc02e9e23b1f36d68806a25ab7c0fcf67d39" Jan 31 04:24:47 crc kubenswrapper[4827]: E0131 04:24:47.110714 4827 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jxh94_openshift-machine-config-operator(e63dbb73-e1a2-4796-83c5-2a88e55566b5)\"" pod="openshift-machine-config-operator/machine-config-daemon-jxh94" podUID="e63dbb73-e1a2-4796-83c5-2a88e55566b5" Jan 31 04:25:02 crc kubenswrapper[4827]: I0131 04:25:02.110553 4827 scope.go:117] "RemoveContainer" containerID="86202cb505a10ccb4abc6da10b9dbc02e9e23b1f36d68806a25ab7c0fcf67d39" Jan 31 04:25:02 crc kubenswrapper[4827]: E0131 04:25:02.111351 4827 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jxh94_openshift-machine-config-operator(e63dbb73-e1a2-4796-83c5-2a88e55566b5)\"" pod="openshift-machine-config-operator/machine-config-daemon-jxh94" podUID="e63dbb73-e1a2-4796-83c5-2a88e55566b5" Jan 31 04:25:13 crc kubenswrapper[4827]: I0131 04:25:13.033316 4827 generic.go:334] "Generic (PLEG): container finished" podID="5acc6b04-9b48-4dfa-9fbb-9f5520fe7cea" containerID="7f3bebcef9ac85ab81fbd41f07bb1fc44ab9b93147f7d90b024c92e56ca2f708" exitCode=0 Jan 31 04:25:13 crc kubenswrapper[4827]: I0131 04:25:13.033417 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-lnmq8" event={"ID":"5acc6b04-9b48-4dfa-9fbb-9f5520fe7cea","Type":"ContainerDied","Data":"7f3bebcef9ac85ab81fbd41f07bb1fc44ab9b93147f7d90b024c92e56ca2f708"} Jan 31 04:25:14 crc kubenswrapper[4827]: I0131 04:25:14.110265 4827 scope.go:117] "RemoveContainer" containerID="86202cb505a10ccb4abc6da10b9dbc02e9e23b1f36d68806a25ab7c0fcf67d39" Jan 31 04:25:14 crc kubenswrapper[4827]: E0131 04:25:14.111484 4827 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jxh94_openshift-machine-config-operator(e63dbb73-e1a2-4796-83c5-2a88e55566b5)\"" pod="openshift-machine-config-operator/machine-config-daemon-jxh94" podUID="e63dbb73-e1a2-4796-83c5-2a88e55566b5" Jan 31 04:25:14 crc kubenswrapper[4827]: I0131 04:25:14.486770 4827 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-lnmq8" Jan 31 04:25:14 crc kubenswrapper[4827]: I0131 04:25:14.665369 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/5acc6b04-9b48-4dfa-9fbb-9f5520fe7cea-ceph\") pod \"5acc6b04-9b48-4dfa-9fbb-9f5520fe7cea\" (UID: \"5acc6b04-9b48-4dfa-9fbb-9f5520fe7cea\") " Jan 31 04:25:14 crc kubenswrapper[4827]: I0131 04:25:14.665503 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/5acc6b04-9b48-4dfa-9fbb-9f5520fe7cea-ssh-key-openstack-edpm-ipam\") pod \"5acc6b04-9b48-4dfa-9fbb-9f5520fe7cea\" (UID: \"5acc6b04-9b48-4dfa-9fbb-9f5520fe7cea\") " Jan 31 04:25:14 crc kubenswrapper[4827]: I0131 04:25:14.665641 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wt689\" (UniqueName: \"kubernetes.io/projected/5acc6b04-9b48-4dfa-9fbb-9f5520fe7cea-kube-api-access-wt689\") pod \"5acc6b04-9b48-4dfa-9fbb-9f5520fe7cea\" (UID: \"5acc6b04-9b48-4dfa-9fbb-9f5520fe7cea\") " Jan 31 04:25:14 crc kubenswrapper[4827]: I0131 04:25:14.665805 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/5acc6b04-9b48-4dfa-9fbb-9f5520fe7cea-inventory\") pod \"5acc6b04-9b48-4dfa-9fbb-9f5520fe7cea\" (UID: \"5acc6b04-9b48-4dfa-9fbb-9f5520fe7cea\") " Jan 31 04:25:14 crc kubenswrapper[4827]: I0131 04:25:14.671715 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5acc6b04-9b48-4dfa-9fbb-9f5520fe7cea-kube-api-access-wt689" (OuterVolumeSpecName: "kube-api-access-wt689") pod "5acc6b04-9b48-4dfa-9fbb-9f5520fe7cea" (UID: "5acc6b04-9b48-4dfa-9fbb-9f5520fe7cea"). InnerVolumeSpecName "kube-api-access-wt689". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 04:25:14 crc kubenswrapper[4827]: I0131 04:25:14.677509 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5acc6b04-9b48-4dfa-9fbb-9f5520fe7cea-ceph" (OuterVolumeSpecName: "ceph") pod "5acc6b04-9b48-4dfa-9fbb-9f5520fe7cea" (UID: "5acc6b04-9b48-4dfa-9fbb-9f5520fe7cea"). InnerVolumeSpecName "ceph". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 04:25:14 crc kubenswrapper[4827]: I0131 04:25:14.693301 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5acc6b04-9b48-4dfa-9fbb-9f5520fe7cea-inventory" (OuterVolumeSpecName: "inventory") pod "5acc6b04-9b48-4dfa-9fbb-9f5520fe7cea" (UID: "5acc6b04-9b48-4dfa-9fbb-9f5520fe7cea"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 04:25:14 crc kubenswrapper[4827]: I0131 04:25:14.697033 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5acc6b04-9b48-4dfa-9fbb-9f5520fe7cea-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "5acc6b04-9b48-4dfa-9fbb-9f5520fe7cea" (UID: "5acc6b04-9b48-4dfa-9fbb-9f5520fe7cea"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 04:25:14 crc kubenswrapper[4827]: I0131 04:25:14.768041 4827 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/5acc6b04-9b48-4dfa-9fbb-9f5520fe7cea-inventory\") on node \"crc\" DevicePath \"\"" Jan 31 04:25:14 crc kubenswrapper[4827]: I0131 04:25:14.768076 4827 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/5acc6b04-9b48-4dfa-9fbb-9f5520fe7cea-ceph\") on node \"crc\" DevicePath \"\"" Jan 31 04:25:14 crc kubenswrapper[4827]: I0131 04:25:14.768086 4827 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/5acc6b04-9b48-4dfa-9fbb-9f5520fe7cea-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 31 04:25:14 crc kubenswrapper[4827]: I0131 04:25:14.768095 4827 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wt689\" (UniqueName: \"kubernetes.io/projected/5acc6b04-9b48-4dfa-9fbb-9f5520fe7cea-kube-api-access-wt689\") on node \"crc\" DevicePath \"\"" Jan 31 04:25:15 crc kubenswrapper[4827]: I0131 04:25:15.055600 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-lnmq8" event={"ID":"5acc6b04-9b48-4dfa-9fbb-9f5520fe7cea","Type":"ContainerDied","Data":"092b69f3231286459e9e49779ab393573f5b63083c01c71ff51a96ff67a2245b"} Jan 31 04:25:15 crc kubenswrapper[4827]: I0131 04:25:15.055679 4827 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="092b69f3231286459e9e49779ab393573f5b63083c01c71ff51a96ff67a2245b" Jan 31 04:25:15 crc kubenswrapper[4827]: I0131 04:25:15.055921 4827 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-lnmq8" Jan 31 04:25:15 crc kubenswrapper[4827]: I0131 04:25:15.177783 4827 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ssh-known-hosts-edpm-deployment-42k96"] Jan 31 04:25:15 crc kubenswrapper[4827]: E0131 04:25:15.178588 4827 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5acc6b04-9b48-4dfa-9fbb-9f5520fe7cea" containerName="configure-os-edpm-deployment-openstack-edpm-ipam" Jan 31 04:25:15 crc kubenswrapper[4827]: I0131 04:25:15.178610 4827 state_mem.go:107] "Deleted CPUSet assignment" podUID="5acc6b04-9b48-4dfa-9fbb-9f5520fe7cea" containerName="configure-os-edpm-deployment-openstack-edpm-ipam" Jan 31 04:25:15 crc kubenswrapper[4827]: I0131 04:25:15.178814 4827 memory_manager.go:354] "RemoveStaleState removing state" podUID="5acc6b04-9b48-4dfa-9fbb-9f5520fe7cea" containerName="configure-os-edpm-deployment-openstack-edpm-ipam" Jan 31 04:25:15 crc kubenswrapper[4827]: I0131 04:25:15.179534 4827 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-42k96" Jan 31 04:25:15 crc kubenswrapper[4827]: I0131 04:25:15.183459 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 31 04:25:15 crc kubenswrapper[4827]: I0131 04:25:15.183592 4827 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 31 04:25:15 crc kubenswrapper[4827]: I0131 04:25:15.183682 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-dklwg" Jan 31 04:25:15 crc kubenswrapper[4827]: I0131 04:25:15.183802 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceph-conf-files" Jan 31 04:25:15 crc kubenswrapper[4827]: I0131 04:25:15.193473 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 31 04:25:15 crc kubenswrapper[4827]: I0131 04:25:15.194910 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ssh-known-hosts-edpm-deployment-42k96"] Jan 31 04:25:15 crc kubenswrapper[4827]: I0131 04:25:15.199433 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mtgpg\" (UniqueName: \"kubernetes.io/projected/0d5f4456-d112-4cf0-ac82-fc6f693b42ae-kube-api-access-mtgpg\") pod \"ssh-known-hosts-edpm-deployment-42k96\" (UID: \"0d5f4456-d112-4cf0-ac82-fc6f693b42ae\") " pod="openstack/ssh-known-hosts-edpm-deployment-42k96" Jan 31 04:25:15 crc kubenswrapper[4827]: I0131 04:25:15.199514 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/0d5f4456-d112-4cf0-ac82-fc6f693b42ae-ssh-key-openstack-edpm-ipam\") pod \"ssh-known-hosts-edpm-deployment-42k96\" (UID: \"0d5f4456-d112-4cf0-ac82-fc6f693b42ae\") " 
pod="openstack/ssh-known-hosts-edpm-deployment-42k96" Jan 31 04:25:15 crc kubenswrapper[4827]: I0131 04:25:15.199755 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/0d5f4456-d112-4cf0-ac82-fc6f693b42ae-ceph\") pod \"ssh-known-hosts-edpm-deployment-42k96\" (UID: \"0d5f4456-d112-4cf0-ac82-fc6f693b42ae\") " pod="openstack/ssh-known-hosts-edpm-deployment-42k96" Jan 31 04:25:15 crc kubenswrapper[4827]: I0131 04:25:15.199834 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/0d5f4456-d112-4cf0-ac82-fc6f693b42ae-inventory-0\") pod \"ssh-known-hosts-edpm-deployment-42k96\" (UID: \"0d5f4456-d112-4cf0-ac82-fc6f693b42ae\") " pod="openstack/ssh-known-hosts-edpm-deployment-42k96" Jan 31 04:25:15 crc kubenswrapper[4827]: I0131 04:25:15.301517 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/0d5f4456-d112-4cf0-ac82-fc6f693b42ae-ceph\") pod \"ssh-known-hosts-edpm-deployment-42k96\" (UID: \"0d5f4456-d112-4cf0-ac82-fc6f693b42ae\") " pod="openstack/ssh-known-hosts-edpm-deployment-42k96" Jan 31 04:25:15 crc kubenswrapper[4827]: I0131 04:25:15.301613 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/0d5f4456-d112-4cf0-ac82-fc6f693b42ae-inventory-0\") pod \"ssh-known-hosts-edpm-deployment-42k96\" (UID: \"0d5f4456-d112-4cf0-ac82-fc6f693b42ae\") " pod="openstack/ssh-known-hosts-edpm-deployment-42k96" Jan 31 04:25:15 crc kubenswrapper[4827]: I0131 04:25:15.301649 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mtgpg\" (UniqueName: \"kubernetes.io/projected/0d5f4456-d112-4cf0-ac82-fc6f693b42ae-kube-api-access-mtgpg\") pod \"ssh-known-hosts-edpm-deployment-42k96\" (UID: 
\"0d5f4456-d112-4cf0-ac82-fc6f693b42ae\") " pod="openstack/ssh-known-hosts-edpm-deployment-42k96" Jan 31 04:25:15 crc kubenswrapper[4827]: I0131 04:25:15.301683 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/0d5f4456-d112-4cf0-ac82-fc6f693b42ae-ssh-key-openstack-edpm-ipam\") pod \"ssh-known-hosts-edpm-deployment-42k96\" (UID: \"0d5f4456-d112-4cf0-ac82-fc6f693b42ae\") " pod="openstack/ssh-known-hosts-edpm-deployment-42k96" Jan 31 04:25:15 crc kubenswrapper[4827]: I0131 04:25:15.306418 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/0d5f4456-d112-4cf0-ac82-fc6f693b42ae-ceph\") pod \"ssh-known-hosts-edpm-deployment-42k96\" (UID: \"0d5f4456-d112-4cf0-ac82-fc6f693b42ae\") " pod="openstack/ssh-known-hosts-edpm-deployment-42k96" Jan 31 04:25:15 crc kubenswrapper[4827]: I0131 04:25:15.306563 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/0d5f4456-d112-4cf0-ac82-fc6f693b42ae-inventory-0\") pod \"ssh-known-hosts-edpm-deployment-42k96\" (UID: \"0d5f4456-d112-4cf0-ac82-fc6f693b42ae\") " pod="openstack/ssh-known-hosts-edpm-deployment-42k96" Jan 31 04:25:15 crc kubenswrapper[4827]: I0131 04:25:15.306946 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/0d5f4456-d112-4cf0-ac82-fc6f693b42ae-ssh-key-openstack-edpm-ipam\") pod \"ssh-known-hosts-edpm-deployment-42k96\" (UID: \"0d5f4456-d112-4cf0-ac82-fc6f693b42ae\") " pod="openstack/ssh-known-hosts-edpm-deployment-42k96" Jan 31 04:25:15 crc kubenswrapper[4827]: I0131 04:25:15.331570 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mtgpg\" (UniqueName: \"kubernetes.io/projected/0d5f4456-d112-4cf0-ac82-fc6f693b42ae-kube-api-access-mtgpg\") pod 
\"ssh-known-hosts-edpm-deployment-42k96\" (UID: \"0d5f4456-d112-4cf0-ac82-fc6f693b42ae\") " pod="openstack/ssh-known-hosts-edpm-deployment-42k96" Jan 31 04:25:15 crc kubenswrapper[4827]: I0131 04:25:15.510606 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-42k96" Jan 31 04:25:16 crc kubenswrapper[4827]: I0131 04:25:16.020564 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ssh-known-hosts-edpm-deployment-42k96"] Jan 31 04:25:16 crc kubenswrapper[4827]: I0131 04:25:16.071082 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-42k96" event={"ID":"0d5f4456-d112-4cf0-ac82-fc6f693b42ae","Type":"ContainerStarted","Data":"55ff5dac91105c88a14f300f4a082655ed712567e0447f6fef1fd94fc7df14dc"} Jan 31 04:25:17 crc kubenswrapper[4827]: I0131 04:25:17.079780 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-42k96" event={"ID":"0d5f4456-d112-4cf0-ac82-fc6f693b42ae","Type":"ContainerStarted","Data":"8dc1c868afbc2f4bae26b4ed5e095f67bfe600dac1f21c6b6fe82eb3e2448215"} Jan 31 04:25:17 crc kubenswrapper[4827]: I0131 04:25:17.095193 4827 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ssh-known-hosts-edpm-deployment-42k96" podStartSLOduration=1.651910543 podStartE2EDuration="2.095176792s" podCreationTimestamp="2026-01-31 04:25:15 +0000 UTC" firstStartedPulling="2026-01-31 04:25:16.027769228 +0000 UTC m=+2308.714849677" lastFinishedPulling="2026-01-31 04:25:16.471035437 +0000 UTC m=+2309.158115926" observedRunningTime="2026-01-31 04:25:17.094751568 +0000 UTC m=+2309.781832037" watchObservedRunningTime="2026-01-31 04:25:17.095176792 +0000 UTC m=+2309.782257241" Jan 31 04:25:26 crc kubenswrapper[4827]: I0131 04:25:26.161384 4827 generic.go:334] "Generic (PLEG): container finished" podID="0d5f4456-d112-4cf0-ac82-fc6f693b42ae" 
containerID="8dc1c868afbc2f4bae26b4ed5e095f67bfe600dac1f21c6b6fe82eb3e2448215" exitCode=0 Jan 31 04:25:26 crc kubenswrapper[4827]: I0131 04:25:26.161459 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-42k96" event={"ID":"0d5f4456-d112-4cf0-ac82-fc6f693b42ae","Type":"ContainerDied","Data":"8dc1c868afbc2f4bae26b4ed5e095f67bfe600dac1f21c6b6fe82eb3e2448215"} Jan 31 04:25:27 crc kubenswrapper[4827]: I0131 04:25:27.111117 4827 scope.go:117] "RemoveContainer" containerID="86202cb505a10ccb4abc6da10b9dbc02e9e23b1f36d68806a25ab7c0fcf67d39" Jan 31 04:25:27 crc kubenswrapper[4827]: E0131 04:25:27.111571 4827 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jxh94_openshift-machine-config-operator(e63dbb73-e1a2-4796-83c5-2a88e55566b5)\"" pod="openshift-machine-config-operator/machine-config-daemon-jxh94" podUID="e63dbb73-e1a2-4796-83c5-2a88e55566b5" Jan 31 04:25:27 crc kubenswrapper[4827]: I0131 04:25:27.710502 4827 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-42k96" Jan 31 04:25:27 crc kubenswrapper[4827]: I0131 04:25:27.740114 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mtgpg\" (UniqueName: \"kubernetes.io/projected/0d5f4456-d112-4cf0-ac82-fc6f693b42ae-kube-api-access-mtgpg\") pod \"0d5f4456-d112-4cf0-ac82-fc6f693b42ae\" (UID: \"0d5f4456-d112-4cf0-ac82-fc6f693b42ae\") " Jan 31 04:25:27 crc kubenswrapper[4827]: I0131 04:25:27.740230 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/0d5f4456-d112-4cf0-ac82-fc6f693b42ae-ssh-key-openstack-edpm-ipam\") pod \"0d5f4456-d112-4cf0-ac82-fc6f693b42ae\" (UID: \"0d5f4456-d112-4cf0-ac82-fc6f693b42ae\") " Jan 31 04:25:27 crc kubenswrapper[4827]: I0131 04:25:27.740305 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/0d5f4456-d112-4cf0-ac82-fc6f693b42ae-ceph\") pod \"0d5f4456-d112-4cf0-ac82-fc6f693b42ae\" (UID: \"0d5f4456-d112-4cf0-ac82-fc6f693b42ae\") " Jan 31 04:25:27 crc kubenswrapper[4827]: I0131 04:25:27.740426 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/0d5f4456-d112-4cf0-ac82-fc6f693b42ae-inventory-0\") pod \"0d5f4456-d112-4cf0-ac82-fc6f693b42ae\" (UID: \"0d5f4456-d112-4cf0-ac82-fc6f693b42ae\") " Jan 31 04:25:27 crc kubenswrapper[4827]: I0131 04:25:27.745941 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0d5f4456-d112-4cf0-ac82-fc6f693b42ae-kube-api-access-mtgpg" (OuterVolumeSpecName: "kube-api-access-mtgpg") pod "0d5f4456-d112-4cf0-ac82-fc6f693b42ae" (UID: "0d5f4456-d112-4cf0-ac82-fc6f693b42ae"). InnerVolumeSpecName "kube-api-access-mtgpg". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 04:25:27 crc kubenswrapper[4827]: I0131 04:25:27.747018 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0d5f4456-d112-4cf0-ac82-fc6f693b42ae-ceph" (OuterVolumeSpecName: "ceph") pod "0d5f4456-d112-4cf0-ac82-fc6f693b42ae" (UID: "0d5f4456-d112-4cf0-ac82-fc6f693b42ae"). InnerVolumeSpecName "ceph". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 04:25:27 crc kubenswrapper[4827]: I0131 04:25:27.766148 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0d5f4456-d112-4cf0-ac82-fc6f693b42ae-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "0d5f4456-d112-4cf0-ac82-fc6f693b42ae" (UID: "0d5f4456-d112-4cf0-ac82-fc6f693b42ae"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 04:25:27 crc kubenswrapper[4827]: I0131 04:25:27.772126 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0d5f4456-d112-4cf0-ac82-fc6f693b42ae-inventory-0" (OuterVolumeSpecName: "inventory-0") pod "0d5f4456-d112-4cf0-ac82-fc6f693b42ae" (UID: "0d5f4456-d112-4cf0-ac82-fc6f693b42ae"). InnerVolumeSpecName "inventory-0". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 04:25:27 crc kubenswrapper[4827]: I0131 04:25:27.843629 4827 reconciler_common.go:293] "Volume detached for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/0d5f4456-d112-4cf0-ac82-fc6f693b42ae-inventory-0\") on node \"crc\" DevicePath \"\"" Jan 31 04:25:27 crc kubenswrapper[4827]: I0131 04:25:27.843675 4827 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mtgpg\" (UniqueName: \"kubernetes.io/projected/0d5f4456-d112-4cf0-ac82-fc6f693b42ae-kube-api-access-mtgpg\") on node \"crc\" DevicePath \"\"" Jan 31 04:25:27 crc kubenswrapper[4827]: I0131 04:25:27.843689 4827 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/0d5f4456-d112-4cf0-ac82-fc6f693b42ae-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 31 04:25:27 crc kubenswrapper[4827]: I0131 04:25:27.843702 4827 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/0d5f4456-d112-4cf0-ac82-fc6f693b42ae-ceph\") on node \"crc\" DevicePath \"\"" Jan 31 04:25:28 crc kubenswrapper[4827]: I0131 04:25:28.180695 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-42k96" event={"ID":"0d5f4456-d112-4cf0-ac82-fc6f693b42ae","Type":"ContainerDied","Data":"55ff5dac91105c88a14f300f4a082655ed712567e0447f6fef1fd94fc7df14dc"} Jan 31 04:25:28 crc kubenswrapper[4827]: I0131 04:25:28.180978 4827 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="55ff5dac91105c88a14f300f4a082655ed712567e0447f6fef1fd94fc7df14dc" Jan 31 04:25:28 crc kubenswrapper[4827]: I0131 04:25:28.181034 4827 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-42k96" Jan 31 04:25:28 crc kubenswrapper[4827]: I0131 04:25:28.255684 4827 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/run-os-edpm-deployment-openstack-edpm-ipam-9tz78"] Jan 31 04:25:28 crc kubenswrapper[4827]: E0131 04:25:28.256272 4827 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0d5f4456-d112-4cf0-ac82-fc6f693b42ae" containerName="ssh-known-hosts-edpm-deployment" Jan 31 04:25:28 crc kubenswrapper[4827]: I0131 04:25:28.256395 4827 state_mem.go:107] "Deleted CPUSet assignment" podUID="0d5f4456-d112-4cf0-ac82-fc6f693b42ae" containerName="ssh-known-hosts-edpm-deployment" Jan 31 04:25:28 crc kubenswrapper[4827]: I0131 04:25:28.256693 4827 memory_manager.go:354] "RemoveStaleState removing state" podUID="0d5f4456-d112-4cf0-ac82-fc6f693b42ae" containerName="ssh-known-hosts-edpm-deployment" Jan 31 04:25:28 crc kubenswrapper[4827]: I0131 04:25:28.257522 4827 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-9tz78" Jan 31 04:25:28 crc kubenswrapper[4827]: I0131 04:25:28.263457 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 31 04:25:28 crc kubenswrapper[4827]: I0131 04:25:28.264016 4827 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 31 04:25:28 crc kubenswrapper[4827]: I0131 04:25:28.264293 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceph-conf-files" Jan 31 04:25:28 crc kubenswrapper[4827]: I0131 04:25:28.264513 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 31 04:25:28 crc kubenswrapper[4827]: I0131 04:25:28.268069 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-dklwg" Jan 31 04:25:28 crc kubenswrapper[4827]: I0131 04:25:28.275834 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/run-os-edpm-deployment-openstack-edpm-ipam-9tz78"] Jan 31 04:25:28 crc kubenswrapper[4827]: I0131 04:25:28.351987 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/9c9c5b12-150d-4448-9381-55de889ae8c4-ssh-key-openstack-edpm-ipam\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-9tz78\" (UID: \"9c9c5b12-150d-4448-9381-55de889ae8c4\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-9tz78" Jan 31 04:25:28 crc kubenswrapper[4827]: I0131 04:25:28.352045 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/9c9c5b12-150d-4448-9381-55de889ae8c4-ceph\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-9tz78\" (UID: \"9c9c5b12-150d-4448-9381-55de889ae8c4\") " 
pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-9tz78" Jan 31 04:25:28 crc kubenswrapper[4827]: I0131 04:25:28.352111 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f75l4\" (UniqueName: \"kubernetes.io/projected/9c9c5b12-150d-4448-9381-55de889ae8c4-kube-api-access-f75l4\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-9tz78\" (UID: \"9c9c5b12-150d-4448-9381-55de889ae8c4\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-9tz78" Jan 31 04:25:28 crc kubenswrapper[4827]: I0131 04:25:28.352415 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/9c9c5b12-150d-4448-9381-55de889ae8c4-inventory\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-9tz78\" (UID: \"9c9c5b12-150d-4448-9381-55de889ae8c4\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-9tz78" Jan 31 04:25:28 crc kubenswrapper[4827]: I0131 04:25:28.453896 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/9c9c5b12-150d-4448-9381-55de889ae8c4-inventory\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-9tz78\" (UID: \"9c9c5b12-150d-4448-9381-55de889ae8c4\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-9tz78" Jan 31 04:25:28 crc kubenswrapper[4827]: I0131 04:25:28.453945 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/9c9c5b12-150d-4448-9381-55de889ae8c4-ssh-key-openstack-edpm-ipam\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-9tz78\" (UID: \"9c9c5b12-150d-4448-9381-55de889ae8c4\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-9tz78" Jan 31 04:25:28 crc kubenswrapper[4827]: I0131 04:25:28.453984 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: 
\"kubernetes.io/secret/9c9c5b12-150d-4448-9381-55de889ae8c4-ceph\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-9tz78\" (UID: \"9c9c5b12-150d-4448-9381-55de889ae8c4\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-9tz78" Jan 31 04:25:28 crc kubenswrapper[4827]: I0131 04:25:28.454041 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f75l4\" (UniqueName: \"kubernetes.io/projected/9c9c5b12-150d-4448-9381-55de889ae8c4-kube-api-access-f75l4\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-9tz78\" (UID: \"9c9c5b12-150d-4448-9381-55de889ae8c4\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-9tz78" Jan 31 04:25:28 crc kubenswrapper[4827]: I0131 04:25:28.458402 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/9c9c5b12-150d-4448-9381-55de889ae8c4-ceph\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-9tz78\" (UID: \"9c9c5b12-150d-4448-9381-55de889ae8c4\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-9tz78" Jan 31 04:25:28 crc kubenswrapper[4827]: I0131 04:25:28.459303 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/9c9c5b12-150d-4448-9381-55de889ae8c4-inventory\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-9tz78\" (UID: \"9c9c5b12-150d-4448-9381-55de889ae8c4\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-9tz78" Jan 31 04:25:28 crc kubenswrapper[4827]: I0131 04:25:28.460319 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/9c9c5b12-150d-4448-9381-55de889ae8c4-ssh-key-openstack-edpm-ipam\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-9tz78\" (UID: \"9c9c5b12-150d-4448-9381-55de889ae8c4\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-9tz78" Jan 31 04:25:28 crc kubenswrapper[4827]: I0131 04:25:28.481252 
4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f75l4\" (UniqueName: \"kubernetes.io/projected/9c9c5b12-150d-4448-9381-55de889ae8c4-kube-api-access-f75l4\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-9tz78\" (UID: \"9c9c5b12-150d-4448-9381-55de889ae8c4\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-9tz78" Jan 31 04:25:28 crc kubenswrapper[4827]: I0131 04:25:28.583064 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-9tz78" Jan 31 04:25:29 crc kubenswrapper[4827]: I0131 04:25:29.123080 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/run-os-edpm-deployment-openstack-edpm-ipam-9tz78"] Jan 31 04:25:29 crc kubenswrapper[4827]: I0131 04:25:29.188789 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-9tz78" event={"ID":"9c9c5b12-150d-4448-9381-55de889ae8c4","Type":"ContainerStarted","Data":"0a60e544cdba2b4bcf7c41c1017de8af1e5bf883418492e61bdfa0a9a16046d6"} Jan 31 04:25:30 crc kubenswrapper[4827]: I0131 04:25:30.197994 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-9tz78" event={"ID":"9c9c5b12-150d-4448-9381-55de889ae8c4","Type":"ContainerStarted","Data":"14f0082d7362229e6e3bf8054c9a8ac96d172a565b7a4bc3ef8ec5050d8c9b95"} Jan 31 04:25:30 crc kubenswrapper[4827]: I0131 04:25:30.228866 4827 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-9tz78" podStartSLOduration=1.781655301 podStartE2EDuration="2.228848963s" podCreationTimestamp="2026-01-31 04:25:28 +0000 UTC" firstStartedPulling="2026-01-31 04:25:29.140712868 +0000 UTC m=+2321.827793317" lastFinishedPulling="2026-01-31 04:25:29.58790653 +0000 UTC m=+2322.274986979" observedRunningTime="2026-01-31 04:25:30.220539692 +0000 UTC m=+2322.907620161" 
watchObservedRunningTime="2026-01-31 04:25:30.228848963 +0000 UTC m=+2322.915929412"
Jan 31 04:25:37 crc kubenswrapper[4827]: I0131 04:25:37.261099 4827 generic.go:334] "Generic (PLEG): container finished" podID="9c9c5b12-150d-4448-9381-55de889ae8c4" containerID="14f0082d7362229e6e3bf8054c9a8ac96d172a565b7a4bc3ef8ec5050d8c9b95" exitCode=0
Jan 31 04:25:37 crc kubenswrapper[4827]: I0131 04:25:37.261167 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-9tz78" event={"ID":"9c9c5b12-150d-4448-9381-55de889ae8c4","Type":"ContainerDied","Data":"14f0082d7362229e6e3bf8054c9a8ac96d172a565b7a4bc3ef8ec5050d8c9b95"}
Jan 31 04:25:38 crc kubenswrapper[4827]: I0131 04:25:38.727137 4827 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-9tz78"
Jan 31 04:25:38 crc kubenswrapper[4827]: I0131 04:25:38.869135 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-f75l4\" (UniqueName: \"kubernetes.io/projected/9c9c5b12-150d-4448-9381-55de889ae8c4-kube-api-access-f75l4\") pod \"9c9c5b12-150d-4448-9381-55de889ae8c4\" (UID: \"9c9c5b12-150d-4448-9381-55de889ae8c4\") "
Jan 31 04:25:38 crc kubenswrapper[4827]: I0131 04:25:38.869257 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/9c9c5b12-150d-4448-9381-55de889ae8c4-ceph\") pod \"9c9c5b12-150d-4448-9381-55de889ae8c4\" (UID: \"9c9c5b12-150d-4448-9381-55de889ae8c4\") "
Jan 31 04:25:38 crc kubenswrapper[4827]: I0131 04:25:38.869504 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/9c9c5b12-150d-4448-9381-55de889ae8c4-inventory\") pod \"9c9c5b12-150d-4448-9381-55de889ae8c4\" (UID: \"9c9c5b12-150d-4448-9381-55de889ae8c4\") "
Jan 31 04:25:38 crc kubenswrapper[4827]: I0131 04:25:38.869593 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/9c9c5b12-150d-4448-9381-55de889ae8c4-ssh-key-openstack-edpm-ipam\") pod \"9c9c5b12-150d-4448-9381-55de889ae8c4\" (UID: \"9c9c5b12-150d-4448-9381-55de889ae8c4\") "
Jan 31 04:25:38 crc kubenswrapper[4827]: I0131 04:25:38.876950 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9c9c5b12-150d-4448-9381-55de889ae8c4-kube-api-access-f75l4" (OuterVolumeSpecName: "kube-api-access-f75l4") pod "9c9c5b12-150d-4448-9381-55de889ae8c4" (UID: "9c9c5b12-150d-4448-9381-55de889ae8c4"). InnerVolumeSpecName "kube-api-access-f75l4". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 31 04:25:38 crc kubenswrapper[4827]: I0131 04:25:38.879828 4827 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-f75l4\" (UniqueName: \"kubernetes.io/projected/9c9c5b12-150d-4448-9381-55de889ae8c4-kube-api-access-f75l4\") on node \"crc\" DevicePath \"\""
Jan 31 04:25:38 crc kubenswrapper[4827]: I0131 04:25:38.879987 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9c9c5b12-150d-4448-9381-55de889ae8c4-ceph" (OuterVolumeSpecName: "ceph") pod "9c9c5b12-150d-4448-9381-55de889ae8c4" (UID: "9c9c5b12-150d-4448-9381-55de889ae8c4"). InnerVolumeSpecName "ceph". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 31 04:25:38 crc kubenswrapper[4827]: I0131 04:25:38.912416 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9c9c5b12-150d-4448-9381-55de889ae8c4-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "9c9c5b12-150d-4448-9381-55de889ae8c4" (UID: "9c9c5b12-150d-4448-9381-55de889ae8c4"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 31 04:25:38 crc kubenswrapper[4827]: I0131 04:25:38.917782 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9c9c5b12-150d-4448-9381-55de889ae8c4-inventory" (OuterVolumeSpecName: "inventory") pod "9c9c5b12-150d-4448-9381-55de889ae8c4" (UID: "9c9c5b12-150d-4448-9381-55de889ae8c4"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 31 04:25:38 crc kubenswrapper[4827]: I0131 04:25:38.981522 4827 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/9c9c5b12-150d-4448-9381-55de889ae8c4-ceph\") on node \"crc\" DevicePath \"\""
Jan 31 04:25:38 crc kubenswrapper[4827]: I0131 04:25:38.981571 4827 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/9c9c5b12-150d-4448-9381-55de889ae8c4-inventory\") on node \"crc\" DevicePath \"\""
Jan 31 04:25:38 crc kubenswrapper[4827]: I0131 04:25:38.981596 4827 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/9c9c5b12-150d-4448-9381-55de889ae8c4-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\""
Jan 31 04:25:39 crc kubenswrapper[4827]: I0131 04:25:39.290160 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-9tz78" event={"ID":"9c9c5b12-150d-4448-9381-55de889ae8c4","Type":"ContainerDied","Data":"0a60e544cdba2b4bcf7c41c1017de8af1e5bf883418492e61bdfa0a9a16046d6"}
Jan 31 04:25:39 crc kubenswrapper[4827]: I0131 04:25:39.290251 4827 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0a60e544cdba2b4bcf7c41c1017de8af1e5bf883418492e61bdfa0a9a16046d6"
Jan 31 04:25:39 crc kubenswrapper[4827]: I0131 04:25:39.290267 4827 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-9tz78"
Jan 31 04:25:39 crc kubenswrapper[4827]: I0131 04:25:39.436423 4827 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-pn2hv"]
Jan 31 04:25:39 crc kubenswrapper[4827]: E0131 04:25:39.437435 4827 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9c9c5b12-150d-4448-9381-55de889ae8c4" containerName="run-os-edpm-deployment-openstack-edpm-ipam"
Jan 31 04:25:39 crc kubenswrapper[4827]: I0131 04:25:39.437457 4827 state_mem.go:107] "Deleted CPUSet assignment" podUID="9c9c5b12-150d-4448-9381-55de889ae8c4" containerName="run-os-edpm-deployment-openstack-edpm-ipam"
Jan 31 04:25:39 crc kubenswrapper[4827]: I0131 04:25:39.437652 4827 memory_manager.go:354] "RemoveStaleState removing state" podUID="9c9c5b12-150d-4448-9381-55de889ae8c4" containerName="run-os-edpm-deployment-openstack-edpm-ipam"
Jan 31 04:25:39 crc kubenswrapper[4827]: I0131 04:25:39.438257 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-pn2hv"
Jan 31 04:25:39 crc kubenswrapper[4827]: I0131 04:25:39.442234 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceph-conf-files"
Jan 31 04:25:39 crc kubenswrapper[4827]: I0131 04:25:39.442462 4827 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env"
Jan 31 04:25:39 crc kubenswrapper[4827]: I0131 04:25:39.443582 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-dklwg"
Jan 31 04:25:39 crc kubenswrapper[4827]: I0131 04:25:39.443598 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret"
Jan 31 04:25:39 crc kubenswrapper[4827]: I0131 04:25:39.443818 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam"
Jan 31 04:25:39 crc kubenswrapper[4827]: I0131 04:25:39.457126 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-pn2hv"]
Jan 31 04:25:39 crc kubenswrapper[4827]: I0131 04:25:39.494855 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/65c68493-a927-4bb7-b013-664e9ae73443-ssh-key-openstack-edpm-ipam\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-pn2hv\" (UID: \"65c68493-a927-4bb7-b013-664e9ae73443\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-pn2hv"
Jan 31 04:25:39 crc kubenswrapper[4827]: I0131 04:25:39.495079 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/65c68493-a927-4bb7-b013-664e9ae73443-ceph\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-pn2hv\" (UID: \"65c68493-a927-4bb7-b013-664e9ae73443\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-pn2hv"
Jan 31 04:25:39 crc kubenswrapper[4827]: I0131 04:25:39.495184 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/65c68493-a927-4bb7-b013-664e9ae73443-inventory\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-pn2hv\" (UID: \"65c68493-a927-4bb7-b013-664e9ae73443\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-pn2hv"
Jan 31 04:25:39 crc kubenswrapper[4827]: I0131 04:25:39.495212 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dllrf\" (UniqueName: \"kubernetes.io/projected/65c68493-a927-4bb7-b013-664e9ae73443-kube-api-access-dllrf\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-pn2hv\" (UID: \"65c68493-a927-4bb7-b013-664e9ae73443\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-pn2hv"
Jan 31 04:25:39 crc kubenswrapper[4827]: I0131 04:25:39.597546 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/65c68493-a927-4bb7-b013-664e9ae73443-inventory\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-pn2hv\" (UID: \"65c68493-a927-4bb7-b013-664e9ae73443\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-pn2hv"
Jan 31 04:25:39 crc kubenswrapper[4827]: I0131 04:25:39.597607 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dllrf\" (UniqueName: \"kubernetes.io/projected/65c68493-a927-4bb7-b013-664e9ae73443-kube-api-access-dllrf\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-pn2hv\" (UID: \"65c68493-a927-4bb7-b013-664e9ae73443\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-pn2hv"
Jan 31 04:25:39 crc kubenswrapper[4827]: I0131 04:25:39.597753 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/65c68493-a927-4bb7-b013-664e9ae73443-ssh-key-openstack-edpm-ipam\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-pn2hv\" (UID: \"65c68493-a927-4bb7-b013-664e9ae73443\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-pn2hv"
Jan 31 04:25:39 crc kubenswrapper[4827]: I0131 04:25:39.597844 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/65c68493-a927-4bb7-b013-664e9ae73443-ceph\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-pn2hv\" (UID: \"65c68493-a927-4bb7-b013-664e9ae73443\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-pn2hv"
Jan 31 04:25:39 crc kubenswrapper[4827]: I0131 04:25:39.603452 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/65c68493-a927-4bb7-b013-664e9ae73443-ssh-key-openstack-edpm-ipam\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-pn2hv\" (UID: \"65c68493-a927-4bb7-b013-664e9ae73443\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-pn2hv"
Jan 31 04:25:39 crc kubenswrapper[4827]: I0131 04:25:39.603875 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/65c68493-a927-4bb7-b013-664e9ae73443-ceph\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-pn2hv\" (UID: \"65c68493-a927-4bb7-b013-664e9ae73443\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-pn2hv"
Jan 31 04:25:39 crc kubenswrapper[4827]: I0131 04:25:39.606200 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/65c68493-a927-4bb7-b013-664e9ae73443-inventory\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-pn2hv\" (UID: \"65c68493-a927-4bb7-b013-664e9ae73443\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-pn2hv"
Jan 31 04:25:39 crc kubenswrapper[4827]: I0131 04:25:39.618592 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dllrf\" (UniqueName: \"kubernetes.io/projected/65c68493-a927-4bb7-b013-664e9ae73443-kube-api-access-dllrf\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-pn2hv\" (UID: \"65c68493-a927-4bb7-b013-664e9ae73443\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-pn2hv"
Jan 31 04:25:39 crc kubenswrapper[4827]: I0131 04:25:39.779953 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-pn2hv"
Jan 31 04:25:40 crc kubenswrapper[4827]: I0131 04:25:40.111673 4827 scope.go:117] "RemoveContainer" containerID="86202cb505a10ccb4abc6da10b9dbc02e9e23b1f36d68806a25ab7c0fcf67d39"
Jan 31 04:25:40 crc kubenswrapper[4827]: E0131 04:25:40.113035 4827 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jxh94_openshift-machine-config-operator(e63dbb73-e1a2-4796-83c5-2a88e55566b5)\"" pod="openshift-machine-config-operator/machine-config-daemon-jxh94" podUID="e63dbb73-e1a2-4796-83c5-2a88e55566b5"
Jan 31 04:25:40 crc kubenswrapper[4827]: I0131 04:25:40.198167 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-pn2hv"]
Jan 31 04:25:40 crc kubenswrapper[4827]: I0131 04:25:40.302387 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-pn2hv" event={"ID":"65c68493-a927-4bb7-b013-664e9ae73443","Type":"ContainerStarted","Data":"7c7dc105d2b6fef6bedfda6e4763416d89a698ea47fff51e925d4ba38fb17961"}
Jan 31 04:25:41 crc kubenswrapper[4827]: I0131 04:25:41.315778 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-pn2hv" event={"ID":"65c68493-a927-4bb7-b013-664e9ae73443","Type":"ContainerStarted","Data":"3963c13aaee4f773f0e5137224a9e147ac232b20163de3f308ad91175ff071ec"}
Jan 31 04:25:41 crc kubenswrapper[4827]: I0131 04:25:41.345070 4827 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-pn2hv" podStartSLOduration=1.906100542 podStartE2EDuration="2.345051796s" podCreationTimestamp="2026-01-31 04:25:39 +0000 UTC" firstStartedPulling="2026-01-31 04:25:40.209043479 +0000 UTC m=+2332.896123938" lastFinishedPulling="2026-01-31 04:25:40.647994703 +0000 UTC m=+2333.335075192" observedRunningTime="2026-01-31 04:25:41.341965919 +0000 UTC m=+2334.029046448" watchObservedRunningTime="2026-01-31 04:25:41.345051796 +0000 UTC m=+2334.032132245"
Jan 31 04:25:51 crc kubenswrapper[4827]: I0131 04:25:51.405321 4827 generic.go:334] "Generic (PLEG): container finished" podID="65c68493-a927-4bb7-b013-664e9ae73443" containerID="3963c13aaee4f773f0e5137224a9e147ac232b20163de3f308ad91175ff071ec" exitCode=0
Jan 31 04:25:51 crc kubenswrapper[4827]: I0131 04:25:51.405587 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-pn2hv" event={"ID":"65c68493-a927-4bb7-b013-664e9ae73443","Type":"ContainerDied","Data":"3963c13aaee4f773f0e5137224a9e147ac232b20163de3f308ad91175ff071ec"}
Jan 31 04:25:52 crc kubenswrapper[4827]: I0131 04:25:52.110413 4827 scope.go:117] "RemoveContainer" containerID="86202cb505a10ccb4abc6da10b9dbc02e9e23b1f36d68806a25ab7c0fcf67d39"
Jan 31 04:25:52 crc kubenswrapper[4827]: E0131 04:25:52.110834 4827 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jxh94_openshift-machine-config-operator(e63dbb73-e1a2-4796-83c5-2a88e55566b5)\"" pod="openshift-machine-config-operator/machine-config-daemon-jxh94" podUID="e63dbb73-e1a2-4796-83c5-2a88e55566b5"
Jan 31 04:25:52 crc kubenswrapper[4827]: I0131 04:25:52.858852 4827 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-pn2hv"
Jan 31 04:25:53 crc kubenswrapper[4827]: I0131 04:25:53.537687 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-pn2hv" event={"ID":"65c68493-a927-4bb7-b013-664e9ae73443","Type":"ContainerDied","Data":"7c7dc105d2b6fef6bedfda6e4763416d89a698ea47fff51e925d4ba38fb17961"}
Jan 31 04:25:53 crc kubenswrapper[4827]: I0131 04:25:53.538132 4827 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7c7dc105d2b6fef6bedfda6e4763416d89a698ea47fff51e925d4ba38fb17961"
Jan 31 04:25:53 crc kubenswrapper[4827]: I0131 04:25:53.538199 4827 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-pn2hv"
Jan 31 04:25:53 crc kubenswrapper[4827]: I0131 04:25:53.558474 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/65c68493-a927-4bb7-b013-664e9ae73443-ssh-key-openstack-edpm-ipam\") pod \"65c68493-a927-4bb7-b013-664e9ae73443\" (UID: \"65c68493-a927-4bb7-b013-664e9ae73443\") "
Jan 31 04:25:53 crc kubenswrapper[4827]: I0131 04:25:53.558528 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/65c68493-a927-4bb7-b013-664e9ae73443-ceph\") pod \"65c68493-a927-4bb7-b013-664e9ae73443\" (UID: \"65c68493-a927-4bb7-b013-664e9ae73443\") "
Jan 31 04:25:53 crc kubenswrapper[4827]: I0131 04:25:53.558568 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dllrf\" (UniqueName: \"kubernetes.io/projected/65c68493-a927-4bb7-b013-664e9ae73443-kube-api-access-dllrf\") pod \"65c68493-a927-4bb7-b013-664e9ae73443\" (UID: \"65c68493-a927-4bb7-b013-664e9ae73443\") "
Jan 31 04:25:53 crc kubenswrapper[4827]: I0131 04:25:53.558594 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/65c68493-a927-4bb7-b013-664e9ae73443-inventory\") pod \"65c68493-a927-4bb7-b013-664e9ae73443\" (UID: \"65c68493-a927-4bb7-b013-664e9ae73443\") "
Jan 31 04:25:53 crc kubenswrapper[4827]: I0131 04:25:53.576321 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/65c68493-a927-4bb7-b013-664e9ae73443-kube-api-access-dllrf" (OuterVolumeSpecName: "kube-api-access-dllrf") pod "65c68493-a927-4bb7-b013-664e9ae73443" (UID: "65c68493-a927-4bb7-b013-664e9ae73443"). InnerVolumeSpecName "kube-api-access-dllrf". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 31 04:25:53 crc kubenswrapper[4827]: I0131 04:25:53.576437 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/65c68493-a927-4bb7-b013-664e9ae73443-ceph" (OuterVolumeSpecName: "ceph") pod "65c68493-a927-4bb7-b013-664e9ae73443" (UID: "65c68493-a927-4bb7-b013-664e9ae73443"). InnerVolumeSpecName "ceph". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 31 04:25:53 crc kubenswrapper[4827]: I0131 04:25:53.601472 4827 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/install-certs-edpm-deployment-openstack-edpm-ipam-967vq"]
Jan 31 04:25:53 crc kubenswrapper[4827]: E0131 04:25:53.601967 4827 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="65c68493-a927-4bb7-b013-664e9ae73443" containerName="reboot-os-edpm-deployment-openstack-edpm-ipam"
Jan 31 04:25:53 crc kubenswrapper[4827]: I0131 04:25:53.601998 4827 state_mem.go:107] "Deleted CPUSet assignment" podUID="65c68493-a927-4bb7-b013-664e9ae73443" containerName="reboot-os-edpm-deployment-openstack-edpm-ipam"
Jan 31 04:25:53 crc kubenswrapper[4827]: I0131 04:25:53.602295 4827 memory_manager.go:354] "RemoveStaleState removing state" podUID="65c68493-a927-4bb7-b013-664e9ae73443" containerName="reboot-os-edpm-deployment-openstack-edpm-ipam"
Jan 31 04:25:53 crc kubenswrapper[4827]: I0131 04:25:53.603231 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-967vq"
Jan 31 04:25:53 crc kubenswrapper[4827]: I0131 04:25:53.610181 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-certs-edpm-deployment-openstack-edpm-ipam-967vq"]
Jan 31 04:25:53 crc kubenswrapper[4827]: I0131 04:25:53.651220 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/65c68493-a927-4bb7-b013-664e9ae73443-inventory" (OuterVolumeSpecName: "inventory") pod "65c68493-a927-4bb7-b013-664e9ae73443" (UID: "65c68493-a927-4bb7-b013-664e9ae73443"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 31 04:25:53 crc kubenswrapper[4827]: I0131 04:25:53.655044 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-libvirt-default-certs-0"
Jan 31 04:25:53 crc kubenswrapper[4827]: I0131 04:25:53.655240 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-ovn-default-certs-0"
Jan 31 04:25:53 crc kubenswrapper[4827]: I0131 04:25:53.655375 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-neutron-metadata-default-certs-0"
Jan 31 04:25:53 crc kubenswrapper[4827]: I0131 04:25:53.655594 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/65c68493-a927-4bb7-b013-664e9ae73443-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "65c68493-a927-4bb7-b013-664e9ae73443" (UID: "65c68493-a927-4bb7-b013-664e9ae73443"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 31 04:25:53 crc kubenswrapper[4827]: I0131 04:25:53.662338 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d8ff53f0-3c1e-4bdf-8c0b-e3aca738f080-ovn-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-967vq\" (UID: \"d8ff53f0-3c1e-4bdf-8c0b-e3aca738f080\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-967vq"
Jan 31 04:25:53 crc kubenswrapper[4827]: I0131 04:25:53.662385 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d8ff53f0-3c1e-4bdf-8c0b-e3aca738f080-nova-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-967vq\" (UID: \"d8ff53f0-3c1e-4bdf-8c0b-e3aca738f080\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-967vq"
Jan 31 04:25:53 crc kubenswrapper[4827]: I0131 04:25:53.662431 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d8ff53f0-3c1e-4bdf-8c0b-e3aca738f080-libvirt-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-967vq\" (UID: \"d8ff53f0-3c1e-4bdf-8c0b-e3aca738f080\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-967vq"
Jan 31 04:25:53 crc kubenswrapper[4827]: I0131 04:25:53.662462 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d8ff53f0-3c1e-4bdf-8c0b-e3aca738f080-bootstrap-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-967vq\" (UID: \"d8ff53f0-3c1e-4bdf-8c0b-e3aca738f080\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-967vq"
Jan 31 04:25:53 crc kubenswrapper[4827]: I0131 04:25:53.662502 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d8ff53f0-3c1e-4bdf-8c0b-e3aca738f080-repo-setup-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-967vq\" (UID: \"d8ff53f0-3c1e-4bdf-8c0b-e3aca738f080\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-967vq"
Jan 31 04:25:53 crc kubenswrapper[4827]: I0131 04:25:53.662546 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/d8ff53f0-3c1e-4bdf-8c0b-e3aca738f080-ceph\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-967vq\" (UID: \"d8ff53f0-3c1e-4bdf-8c0b-e3aca738f080\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-967vq"
Jan 31 04:25:53 crc kubenswrapper[4827]: I0131 04:25:53.662576 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/d8ff53f0-3c1e-4bdf-8c0b-e3aca738f080-ssh-key-openstack-edpm-ipam\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-967vq\" (UID: \"d8ff53f0-3c1e-4bdf-8c0b-e3aca738f080\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-967vq"
Jan 31 04:25:53 crc kubenswrapper[4827]: I0131 04:25:53.662633 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/d8ff53f0-3c1e-4bdf-8c0b-e3aca738f080-inventory\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-967vq\" (UID: \"d8ff53f0-3c1e-4bdf-8c0b-e3aca738f080\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-967vq"
Jan 31 04:25:53 crc kubenswrapper[4827]: I0131 04:25:53.662671 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/d8ff53f0-3c1e-4bdf-8c0b-e3aca738f080-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-967vq\" (UID: \"d8ff53f0-3c1e-4bdf-8c0b-e3aca738f080\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-967vq"
Jan 31 04:25:53 crc kubenswrapper[4827]: I0131 04:25:53.662709 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-78mlc\" (UniqueName: \"kubernetes.io/projected/d8ff53f0-3c1e-4bdf-8c0b-e3aca738f080-kube-api-access-78mlc\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-967vq\" (UID: \"d8ff53f0-3c1e-4bdf-8c0b-e3aca738f080\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-967vq"
Jan 31 04:25:53 crc kubenswrapper[4827]: I0131 04:25:53.662786 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/d8ff53f0-3c1e-4bdf-8c0b-e3aca738f080-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-967vq\" (UID: \"d8ff53f0-3c1e-4bdf-8c0b-e3aca738f080\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-967vq"
Jan 31 04:25:53 crc kubenswrapper[4827]: I0131 04:25:53.662843 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d8ff53f0-3c1e-4bdf-8c0b-e3aca738f080-neutron-metadata-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-967vq\" (UID: \"d8ff53f0-3c1e-4bdf-8c0b-e3aca738f080\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-967vq"
Jan 31 04:25:53 crc kubenswrapper[4827]: I0131 04:25:53.662945 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/d8ff53f0-3c1e-4bdf-8c0b-e3aca738f080-openstack-edpm-ipam-ovn-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-967vq\" (UID: \"d8ff53f0-3c1e-4bdf-8c0b-e3aca738f080\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-967vq"
Jan 31 04:25:53 crc kubenswrapper[4827]: I0131 04:25:53.663011 4827 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/65c68493-a927-4bb7-b013-664e9ae73443-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\""
Jan 31 04:25:53 crc kubenswrapper[4827]: I0131 04:25:53.663031 4827 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/65c68493-a927-4bb7-b013-664e9ae73443-ceph\") on node \"crc\" DevicePath \"\""
Jan 31 04:25:53 crc kubenswrapper[4827]: I0131 04:25:53.663045 4827 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dllrf\" (UniqueName: \"kubernetes.io/projected/65c68493-a927-4bb7-b013-664e9ae73443-kube-api-access-dllrf\") on node \"crc\" DevicePath \"\""
Jan 31 04:25:53 crc kubenswrapper[4827]: I0131 04:25:53.663056 4827 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/65c68493-a927-4bb7-b013-664e9ae73443-inventory\") on node \"crc\" DevicePath \"\""
Jan 31 04:25:53 crc kubenswrapper[4827]: I0131 04:25:53.764143 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d8ff53f0-3c1e-4bdf-8c0b-e3aca738f080-neutron-metadata-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-967vq\" (UID: \"d8ff53f0-3c1e-4bdf-8c0b-e3aca738f080\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-967vq"
Jan 31 04:25:53 crc kubenswrapper[4827]: I0131 04:25:53.764214 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/d8ff53f0-3c1e-4bdf-8c0b-e3aca738f080-openstack-edpm-ipam-ovn-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-967vq\" (UID: \"d8ff53f0-3c1e-4bdf-8c0b-e3aca738f080\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-967vq"
Jan 31 04:25:53 crc kubenswrapper[4827]: I0131 04:25:53.764252 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d8ff53f0-3c1e-4bdf-8c0b-e3aca738f080-ovn-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-967vq\" (UID: \"d8ff53f0-3c1e-4bdf-8c0b-e3aca738f080\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-967vq"
Jan 31 04:25:53 crc kubenswrapper[4827]: I0131 04:25:53.764271 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d8ff53f0-3c1e-4bdf-8c0b-e3aca738f080-nova-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-967vq\" (UID: \"d8ff53f0-3c1e-4bdf-8c0b-e3aca738f080\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-967vq"
Jan 31 04:25:53 crc kubenswrapper[4827]: I0131 04:25:53.764299 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d8ff53f0-3c1e-4bdf-8c0b-e3aca738f080-libvirt-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-967vq\" (UID: \"d8ff53f0-3c1e-4bdf-8c0b-e3aca738f080\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-967vq"
Jan 31 04:25:53 crc kubenswrapper[4827]: I0131 04:25:53.764318 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d8ff53f0-3c1e-4bdf-8c0b-e3aca738f080-bootstrap-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-967vq\" (UID: \"d8ff53f0-3c1e-4bdf-8c0b-e3aca738f080\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-967vq"
Jan 31 04:25:53 crc kubenswrapper[4827]: I0131 04:25:53.764347 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d8ff53f0-3c1e-4bdf-8c0b-e3aca738f080-repo-setup-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-967vq\" (UID: \"d8ff53f0-3c1e-4bdf-8c0b-e3aca738f080\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-967vq"
Jan 31 04:25:53 crc kubenswrapper[4827]: I0131 04:25:53.764375 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/d8ff53f0-3c1e-4bdf-8c0b-e3aca738f080-ceph\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-967vq\" (UID: \"d8ff53f0-3c1e-4bdf-8c0b-e3aca738f080\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-967vq"
Jan 31 04:25:53 crc kubenswrapper[4827]: I0131 04:25:53.764397 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/d8ff53f0-3c1e-4bdf-8c0b-e3aca738f080-ssh-key-openstack-edpm-ipam\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-967vq\" (UID: \"d8ff53f0-3c1e-4bdf-8c0b-e3aca738f080\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-967vq"
Jan 31 04:25:53 crc kubenswrapper[4827]: I0131 04:25:53.764426 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/d8ff53f0-3c1e-4bdf-8c0b-e3aca738f080-inventory\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-967vq\" (UID: \"d8ff53f0-3c1e-4bdf-8c0b-e3aca738f080\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-967vq"
Jan 31 04:25:53 crc kubenswrapper[4827]: I0131 04:25:53.764453 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/d8ff53f0-3c1e-4bdf-8c0b-e3aca738f080-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-967vq\" (UID: \"d8ff53f0-3c1e-4bdf-8c0b-e3aca738f080\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-967vq"
Jan 31 04:25:53 crc kubenswrapper[4827]: I0131 04:25:53.764481 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-78mlc\" (UniqueName: \"kubernetes.io/projected/d8ff53f0-3c1e-4bdf-8c0b-e3aca738f080-kube-api-access-78mlc\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-967vq\" (UID: \"d8ff53f0-3c1e-4bdf-8c0b-e3aca738f080\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-967vq"
Jan 31 04:25:53 crc kubenswrapper[4827]: I0131 04:25:53.764527 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/d8ff53f0-3c1e-4bdf-8c0b-e3aca738f080-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-967vq\" (UID: \"d8ff53f0-3c1e-4bdf-8c0b-e3aca738f080\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-967vq"
Jan 31 04:25:53 crc kubenswrapper[4827]: I0131 04:25:53.768268 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/d8ff53f0-3c1e-4bdf-8c0b-e3aca738f080-openstack-edpm-ipam-ovn-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-967vq\" (UID: \"d8ff53f0-3c1e-4bdf-8c0b-e3aca738f080\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-967vq"
Jan 31 04:25:53 crc kubenswrapper[4827]: I0131 04:25:53.768413 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/d8ff53f0-3c1e-4bdf-8c0b-e3aca738f080-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-967vq\" (UID: \"d8ff53f0-3c1e-4bdf-8c0b-e3aca738f080\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-967vq"
Jan 31 04:25:53 crc kubenswrapper[4827]: I0131 04:25:53.768488 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d8ff53f0-3c1e-4bdf-8c0b-e3aca738f080-neutron-metadata-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-967vq\" (UID: \"d8ff53f0-3c1e-4bdf-8c0b-e3aca738f080\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-967vq"
Jan 31 04:25:53 crc kubenswrapper[4827]: I0131 04:25:53.769103 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/d8ff53f0-3c1e-4bdf-8c0b-e3aca738f080-inventory\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-967vq\" (UID: \"d8ff53f0-3c1e-4bdf-8c0b-e3aca738f080\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-967vq"
Jan 31 04:25:53 crc kubenswrapper[4827]: I0131 04:25:53.769399 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d8ff53f0-3c1e-4bdf-8c0b-e3aca738f080-bootstrap-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-967vq\" (UID: \"d8ff53f0-3c1e-4bdf-8c0b-e3aca738f080\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-967vq"
Jan 31 04:25:53
crc kubenswrapper[4827]: I0131 04:25:53.770236 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d8ff53f0-3c1e-4bdf-8c0b-e3aca738f080-nova-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-967vq\" (UID: \"d8ff53f0-3c1e-4bdf-8c0b-e3aca738f080\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-967vq" Jan 31 04:25:53 crc kubenswrapper[4827]: I0131 04:25:53.770377 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d8ff53f0-3c1e-4bdf-8c0b-e3aca738f080-ovn-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-967vq\" (UID: \"d8ff53f0-3c1e-4bdf-8c0b-e3aca738f080\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-967vq" Jan 31 04:25:53 crc kubenswrapper[4827]: I0131 04:25:53.770498 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d8ff53f0-3c1e-4bdf-8c0b-e3aca738f080-repo-setup-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-967vq\" (UID: \"d8ff53f0-3c1e-4bdf-8c0b-e3aca738f080\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-967vq" Jan 31 04:25:53 crc kubenswrapper[4827]: I0131 04:25:53.770919 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/d8ff53f0-3c1e-4bdf-8c0b-e3aca738f080-ssh-key-openstack-edpm-ipam\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-967vq\" (UID: \"d8ff53f0-3c1e-4bdf-8c0b-e3aca738f080\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-967vq" Jan 31 04:25:53 crc kubenswrapper[4827]: I0131 04:25:53.772521 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: 
\"kubernetes.io/projected/d8ff53f0-3c1e-4bdf-8c0b-e3aca738f080-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-967vq\" (UID: \"d8ff53f0-3c1e-4bdf-8c0b-e3aca738f080\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-967vq" Jan 31 04:25:53 crc kubenswrapper[4827]: I0131 04:25:53.772587 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/d8ff53f0-3c1e-4bdf-8c0b-e3aca738f080-ceph\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-967vq\" (UID: \"d8ff53f0-3c1e-4bdf-8c0b-e3aca738f080\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-967vq" Jan 31 04:25:53 crc kubenswrapper[4827]: I0131 04:25:53.773295 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d8ff53f0-3c1e-4bdf-8c0b-e3aca738f080-libvirt-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-967vq\" (UID: \"d8ff53f0-3c1e-4bdf-8c0b-e3aca738f080\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-967vq" Jan 31 04:25:53 crc kubenswrapper[4827]: I0131 04:25:53.783848 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-78mlc\" (UniqueName: \"kubernetes.io/projected/d8ff53f0-3c1e-4bdf-8c0b-e3aca738f080-kube-api-access-78mlc\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-967vq\" (UID: \"d8ff53f0-3c1e-4bdf-8c0b-e3aca738f080\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-967vq" Jan 31 04:25:54 crc kubenswrapper[4827]: I0131 04:25:54.004470 4827 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-967vq" Jan 31 04:25:54 crc kubenswrapper[4827]: I0131 04:25:54.523804 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-certs-edpm-deployment-openstack-edpm-ipam-967vq"] Jan 31 04:25:54 crc kubenswrapper[4827]: I0131 04:25:54.533288 4827 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 31 04:25:54 crc kubenswrapper[4827]: I0131 04:25:54.545150 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-967vq" event={"ID":"d8ff53f0-3c1e-4bdf-8c0b-e3aca738f080","Type":"ContainerStarted","Data":"c3eff1400e1dede9eeabb5482080c591721b057a2577019e883810cb62444925"} Jan 31 04:25:55 crc kubenswrapper[4827]: I0131 04:25:55.556212 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-967vq" event={"ID":"d8ff53f0-3c1e-4bdf-8c0b-e3aca738f080","Type":"ContainerStarted","Data":"99a7efd3251af9ab739a25061a3d39148dd26d6626734c640a8c5e6020685361"} Jan 31 04:25:55 crc kubenswrapper[4827]: I0131 04:25:55.578925 4827 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-967vq" podStartSLOduration=2.089086009 podStartE2EDuration="2.578906739s" podCreationTimestamp="2026-01-31 04:25:53 +0000 UTC" firstStartedPulling="2026-01-31 04:25:54.533111603 +0000 UTC m=+2347.220192052" lastFinishedPulling="2026-01-31 04:25:55.022932323 +0000 UTC m=+2347.710012782" observedRunningTime="2026-01-31 04:25:55.576377639 +0000 UTC m=+2348.263458108" watchObservedRunningTime="2026-01-31 04:25:55.578906739 +0000 UTC m=+2348.265987178" Jan 31 04:26:07 crc kubenswrapper[4827]: I0131 04:26:07.111014 4827 scope.go:117] "RemoveContainer" containerID="86202cb505a10ccb4abc6da10b9dbc02e9e23b1f36d68806a25ab7c0fcf67d39" Jan 31 04:26:07 crc 
kubenswrapper[4827]: E0131 04:26:07.114076 4827 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jxh94_openshift-machine-config-operator(e63dbb73-e1a2-4796-83c5-2a88e55566b5)\"" pod="openshift-machine-config-operator/machine-config-daemon-jxh94" podUID="e63dbb73-e1a2-4796-83c5-2a88e55566b5" Jan 31 04:26:19 crc kubenswrapper[4827]: I0131 04:26:19.110985 4827 scope.go:117] "RemoveContainer" containerID="86202cb505a10ccb4abc6da10b9dbc02e9e23b1f36d68806a25ab7c0fcf67d39" Jan 31 04:26:19 crc kubenswrapper[4827]: E0131 04:26:19.112082 4827 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jxh94_openshift-machine-config-operator(e63dbb73-e1a2-4796-83c5-2a88e55566b5)\"" pod="openshift-machine-config-operator/machine-config-daemon-jxh94" podUID="e63dbb73-e1a2-4796-83c5-2a88e55566b5" Jan 31 04:26:26 crc kubenswrapper[4827]: I0131 04:26:26.817217 4827 generic.go:334] "Generic (PLEG): container finished" podID="d8ff53f0-3c1e-4bdf-8c0b-e3aca738f080" containerID="99a7efd3251af9ab739a25061a3d39148dd26d6626734c640a8c5e6020685361" exitCode=0 Jan 31 04:26:26 crc kubenswrapper[4827]: I0131 04:26:26.817318 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-967vq" event={"ID":"d8ff53f0-3c1e-4bdf-8c0b-e3aca738f080","Type":"ContainerDied","Data":"99a7efd3251af9ab739a25061a3d39148dd26d6626734c640a8c5e6020685361"} Jan 31 04:26:28 crc kubenswrapper[4827]: I0131 04:26:28.302133 4827 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-967vq" Jan 31 04:26:28 crc kubenswrapper[4827]: I0131 04:26:28.338866 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-78mlc\" (UniqueName: \"kubernetes.io/projected/d8ff53f0-3c1e-4bdf-8c0b-e3aca738f080-kube-api-access-78mlc\") pod \"d8ff53f0-3c1e-4bdf-8c0b-e3aca738f080\" (UID: \"d8ff53f0-3c1e-4bdf-8c0b-e3aca738f080\") " Jan 31 04:26:28 crc kubenswrapper[4827]: I0131 04:26:28.338961 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/d8ff53f0-3c1e-4bdf-8c0b-e3aca738f080-ssh-key-openstack-edpm-ipam\") pod \"d8ff53f0-3c1e-4bdf-8c0b-e3aca738f080\" (UID: \"d8ff53f0-3c1e-4bdf-8c0b-e3aca738f080\") " Jan 31 04:26:28 crc kubenswrapper[4827]: I0131 04:26:28.338997 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d8ff53f0-3c1e-4bdf-8c0b-e3aca738f080-bootstrap-combined-ca-bundle\") pod \"d8ff53f0-3c1e-4bdf-8c0b-e3aca738f080\" (UID: \"d8ff53f0-3c1e-4bdf-8c0b-e3aca738f080\") " Jan 31 04:26:28 crc kubenswrapper[4827]: I0131 04:26:28.339054 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/d8ff53f0-3c1e-4bdf-8c0b-e3aca738f080-ceph\") pod \"d8ff53f0-3c1e-4bdf-8c0b-e3aca738f080\" (UID: \"d8ff53f0-3c1e-4bdf-8c0b-e3aca738f080\") " Jan 31 04:26:28 crc kubenswrapper[4827]: I0131 04:26:28.339102 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d8ff53f0-3c1e-4bdf-8c0b-e3aca738f080-neutron-metadata-combined-ca-bundle\") pod \"d8ff53f0-3c1e-4bdf-8c0b-e3aca738f080\" (UID: \"d8ff53f0-3c1e-4bdf-8c0b-e3aca738f080\") " Jan 31 04:26:28 crc kubenswrapper[4827]: I0131 
04:26:28.339144 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/d8ff53f0-3c1e-4bdf-8c0b-e3aca738f080-openstack-edpm-ipam-ovn-default-certs-0\") pod \"d8ff53f0-3c1e-4bdf-8c0b-e3aca738f080\" (UID: \"d8ff53f0-3c1e-4bdf-8c0b-e3aca738f080\") " Jan 31 04:26:28 crc kubenswrapper[4827]: I0131 04:26:28.339174 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d8ff53f0-3c1e-4bdf-8c0b-e3aca738f080-repo-setup-combined-ca-bundle\") pod \"d8ff53f0-3c1e-4bdf-8c0b-e3aca738f080\" (UID: \"d8ff53f0-3c1e-4bdf-8c0b-e3aca738f080\") " Jan 31 04:26:28 crc kubenswrapper[4827]: I0131 04:26:28.339206 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/d8ff53f0-3c1e-4bdf-8c0b-e3aca738f080-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"d8ff53f0-3c1e-4bdf-8c0b-e3aca738f080\" (UID: \"d8ff53f0-3c1e-4bdf-8c0b-e3aca738f080\") " Jan 31 04:26:28 crc kubenswrapper[4827]: I0131 04:26:28.339240 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/d8ff53f0-3c1e-4bdf-8c0b-e3aca738f080-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod \"d8ff53f0-3c1e-4bdf-8c0b-e3aca738f080\" (UID: \"d8ff53f0-3c1e-4bdf-8c0b-e3aca738f080\") " Jan 31 04:26:28 crc kubenswrapper[4827]: I0131 04:26:28.339335 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d8ff53f0-3c1e-4bdf-8c0b-e3aca738f080-libvirt-combined-ca-bundle\") pod \"d8ff53f0-3c1e-4bdf-8c0b-e3aca738f080\" (UID: \"d8ff53f0-3c1e-4bdf-8c0b-e3aca738f080\") " Jan 31 04:26:28 crc kubenswrapper[4827]: 
I0131 04:26:28.339397 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d8ff53f0-3c1e-4bdf-8c0b-e3aca738f080-ovn-combined-ca-bundle\") pod \"d8ff53f0-3c1e-4bdf-8c0b-e3aca738f080\" (UID: \"d8ff53f0-3c1e-4bdf-8c0b-e3aca738f080\") " Jan 31 04:26:28 crc kubenswrapper[4827]: I0131 04:26:28.339426 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/d8ff53f0-3c1e-4bdf-8c0b-e3aca738f080-inventory\") pod \"d8ff53f0-3c1e-4bdf-8c0b-e3aca738f080\" (UID: \"d8ff53f0-3c1e-4bdf-8c0b-e3aca738f080\") " Jan 31 04:26:28 crc kubenswrapper[4827]: I0131 04:26:28.339538 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d8ff53f0-3c1e-4bdf-8c0b-e3aca738f080-nova-combined-ca-bundle\") pod \"d8ff53f0-3c1e-4bdf-8c0b-e3aca738f080\" (UID: \"d8ff53f0-3c1e-4bdf-8c0b-e3aca738f080\") " Jan 31 04:26:28 crc kubenswrapper[4827]: I0131 04:26:28.394072 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d8ff53f0-3c1e-4bdf-8c0b-e3aca738f080-openstack-edpm-ipam-neutron-metadata-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-neutron-metadata-default-certs-0") pod "d8ff53f0-3c1e-4bdf-8c0b-e3aca738f080" (UID: "d8ff53f0-3c1e-4bdf-8c0b-e3aca738f080"). InnerVolumeSpecName "openstack-edpm-ipam-neutron-metadata-default-certs-0". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 04:26:28 crc kubenswrapper[4827]: I0131 04:26:28.394466 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d8ff53f0-3c1e-4bdf-8c0b-e3aca738f080-nova-combined-ca-bundle" (OuterVolumeSpecName: "nova-combined-ca-bundle") pod "d8ff53f0-3c1e-4bdf-8c0b-e3aca738f080" (UID: "d8ff53f0-3c1e-4bdf-8c0b-e3aca738f080"). 
InnerVolumeSpecName "nova-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 04:26:28 crc kubenswrapper[4827]: I0131 04:26:28.394618 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d8ff53f0-3c1e-4bdf-8c0b-e3aca738f080-bootstrap-combined-ca-bundle" (OuterVolumeSpecName: "bootstrap-combined-ca-bundle") pod "d8ff53f0-3c1e-4bdf-8c0b-e3aca738f080" (UID: "d8ff53f0-3c1e-4bdf-8c0b-e3aca738f080"). InnerVolumeSpecName "bootstrap-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 04:26:28 crc kubenswrapper[4827]: I0131 04:26:28.395154 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d8ff53f0-3c1e-4bdf-8c0b-e3aca738f080-openstack-edpm-ipam-ovn-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-ovn-default-certs-0") pod "d8ff53f0-3c1e-4bdf-8c0b-e3aca738f080" (UID: "d8ff53f0-3c1e-4bdf-8c0b-e3aca738f080"). InnerVolumeSpecName "openstack-edpm-ipam-ovn-default-certs-0". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 04:26:28 crc kubenswrapper[4827]: I0131 04:26:28.397197 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d8ff53f0-3c1e-4bdf-8c0b-e3aca738f080-repo-setup-combined-ca-bundle" (OuterVolumeSpecName: "repo-setup-combined-ca-bundle") pod "d8ff53f0-3c1e-4bdf-8c0b-e3aca738f080" (UID: "d8ff53f0-3c1e-4bdf-8c0b-e3aca738f080"). InnerVolumeSpecName "repo-setup-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 04:26:28 crc kubenswrapper[4827]: I0131 04:26:28.397300 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d8ff53f0-3c1e-4bdf-8c0b-e3aca738f080-ceph" (OuterVolumeSpecName: "ceph") pod "d8ff53f0-3c1e-4bdf-8c0b-e3aca738f080" (UID: "d8ff53f0-3c1e-4bdf-8c0b-e3aca738f080"). InnerVolumeSpecName "ceph". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 04:26:28 crc kubenswrapper[4827]: I0131 04:26:28.400553 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d8ff53f0-3c1e-4bdf-8c0b-e3aca738f080-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "d8ff53f0-3c1e-4bdf-8c0b-e3aca738f080" (UID: "d8ff53f0-3c1e-4bdf-8c0b-e3aca738f080"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 04:26:28 crc kubenswrapper[4827]: I0131 04:26:28.402721 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d8ff53f0-3c1e-4bdf-8c0b-e3aca738f080-libvirt-combined-ca-bundle" (OuterVolumeSpecName: "libvirt-combined-ca-bundle") pod "d8ff53f0-3c1e-4bdf-8c0b-e3aca738f080" (UID: "d8ff53f0-3c1e-4bdf-8c0b-e3aca738f080"). InnerVolumeSpecName "libvirt-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 04:26:28 crc kubenswrapper[4827]: I0131 04:26:28.403193 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d8ff53f0-3c1e-4bdf-8c0b-e3aca738f080-ovn-combined-ca-bundle" (OuterVolumeSpecName: "ovn-combined-ca-bundle") pod "d8ff53f0-3c1e-4bdf-8c0b-e3aca738f080" (UID: "d8ff53f0-3c1e-4bdf-8c0b-e3aca738f080"). InnerVolumeSpecName "ovn-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 04:26:28 crc kubenswrapper[4827]: I0131 04:26:28.406286 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d8ff53f0-3c1e-4bdf-8c0b-e3aca738f080-neutron-metadata-combined-ca-bundle" (OuterVolumeSpecName: "neutron-metadata-combined-ca-bundle") pod "d8ff53f0-3c1e-4bdf-8c0b-e3aca738f080" (UID: "d8ff53f0-3c1e-4bdf-8c0b-e3aca738f080"). InnerVolumeSpecName "neutron-metadata-combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 04:26:28 crc kubenswrapper[4827]: I0131 04:26:28.407222 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d8ff53f0-3c1e-4bdf-8c0b-e3aca738f080-kube-api-access-78mlc" (OuterVolumeSpecName: "kube-api-access-78mlc") pod "d8ff53f0-3c1e-4bdf-8c0b-e3aca738f080" (UID: "d8ff53f0-3c1e-4bdf-8c0b-e3aca738f080"). InnerVolumeSpecName "kube-api-access-78mlc". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 04:26:28 crc kubenswrapper[4827]: I0131 04:26:28.413171 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d8ff53f0-3c1e-4bdf-8c0b-e3aca738f080-openstack-edpm-ipam-libvirt-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-libvirt-default-certs-0") pod "d8ff53f0-3c1e-4bdf-8c0b-e3aca738f080" (UID: "d8ff53f0-3c1e-4bdf-8c0b-e3aca738f080"). InnerVolumeSpecName "openstack-edpm-ipam-libvirt-default-certs-0". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 04:26:28 crc kubenswrapper[4827]: I0131 04:26:28.444053 4827 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-78mlc\" (UniqueName: \"kubernetes.io/projected/d8ff53f0-3c1e-4bdf-8c0b-e3aca738f080-kube-api-access-78mlc\") on node \"crc\" DevicePath \"\"" Jan 31 04:26:28 crc kubenswrapper[4827]: I0131 04:26:28.444095 4827 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/d8ff53f0-3c1e-4bdf-8c0b-e3aca738f080-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 31 04:26:28 crc kubenswrapper[4827]: I0131 04:26:28.444107 4827 reconciler_common.go:293] "Volume detached for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d8ff53f0-3c1e-4bdf-8c0b-e3aca738f080-bootstrap-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 31 04:26:28 crc kubenswrapper[4827]: I0131 04:26:28.444115 4827 
reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/d8ff53f0-3c1e-4bdf-8c0b-e3aca738f080-ceph\") on node \"crc\" DevicePath \"\"" Jan 31 04:26:28 crc kubenswrapper[4827]: I0131 04:26:28.444125 4827 reconciler_common.go:293] "Volume detached for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d8ff53f0-3c1e-4bdf-8c0b-e3aca738f080-neutron-metadata-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 31 04:26:28 crc kubenswrapper[4827]: I0131 04:26:28.444136 4827 reconciler_common.go:293] "Volume detached for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d8ff53f0-3c1e-4bdf-8c0b-e3aca738f080-repo-setup-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 31 04:26:28 crc kubenswrapper[4827]: I0131 04:26:28.444145 4827 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/d8ff53f0-3c1e-4bdf-8c0b-e3aca738f080-openstack-edpm-ipam-ovn-default-certs-0\") on node \"crc\" DevicePath \"\"" Jan 31 04:26:28 crc kubenswrapper[4827]: I0131 04:26:28.444155 4827 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/d8ff53f0-3c1e-4bdf-8c0b-e3aca738f080-openstack-edpm-ipam-libvirt-default-certs-0\") on node \"crc\" DevicePath \"\"" Jan 31 04:26:28 crc kubenswrapper[4827]: I0131 04:26:28.444166 4827 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/d8ff53f0-3c1e-4bdf-8c0b-e3aca738f080-openstack-edpm-ipam-neutron-metadata-default-certs-0\") on node \"crc\" DevicePath \"\"" Jan 31 04:26:28 crc kubenswrapper[4827]: I0131 04:26:28.444176 4827 reconciler_common.go:293] "Volume detached for volume \"libvirt-combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/d8ff53f0-3c1e-4bdf-8c0b-e3aca738f080-libvirt-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 31 04:26:28 crc kubenswrapper[4827]: I0131 04:26:28.444185 4827 reconciler_common.go:293] "Volume detached for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d8ff53f0-3c1e-4bdf-8c0b-e3aca738f080-ovn-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 31 04:26:28 crc kubenswrapper[4827]: I0131 04:26:28.444194 4827 reconciler_common.go:293] "Volume detached for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d8ff53f0-3c1e-4bdf-8c0b-e3aca738f080-nova-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 31 04:26:28 crc kubenswrapper[4827]: I0131 04:26:28.509041 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d8ff53f0-3c1e-4bdf-8c0b-e3aca738f080-inventory" (OuterVolumeSpecName: "inventory") pod "d8ff53f0-3c1e-4bdf-8c0b-e3aca738f080" (UID: "d8ff53f0-3c1e-4bdf-8c0b-e3aca738f080"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 04:26:28 crc kubenswrapper[4827]: I0131 04:26:28.545578 4827 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/d8ff53f0-3c1e-4bdf-8c0b-e3aca738f080-inventory\") on node \"crc\" DevicePath \"\"" Jan 31 04:26:28 crc kubenswrapper[4827]: I0131 04:26:28.838905 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-967vq" event={"ID":"d8ff53f0-3c1e-4bdf-8c0b-e3aca738f080","Type":"ContainerDied","Data":"c3eff1400e1dede9eeabb5482080c591721b057a2577019e883810cb62444925"} Jan 31 04:26:28 crc kubenswrapper[4827]: I0131 04:26:28.838954 4827 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c3eff1400e1dede9eeabb5482080c591721b057a2577019e883810cb62444925" Jan 31 04:26:28 crc kubenswrapper[4827]: I0131 04:26:28.839033 4827 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-967vq" Jan 31 04:26:28 crc kubenswrapper[4827]: I0131 04:26:28.954583 4827 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceph-client-edpm-deployment-openstack-edpm-ipam-dr2nb"] Jan 31 04:26:28 crc kubenswrapper[4827]: E0131 04:26:28.955200 4827 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d8ff53f0-3c1e-4bdf-8c0b-e3aca738f080" containerName="install-certs-edpm-deployment-openstack-edpm-ipam" Jan 31 04:26:28 crc kubenswrapper[4827]: I0131 04:26:28.955223 4827 state_mem.go:107] "Deleted CPUSet assignment" podUID="d8ff53f0-3c1e-4bdf-8c0b-e3aca738f080" containerName="install-certs-edpm-deployment-openstack-edpm-ipam" Jan 31 04:26:28 crc kubenswrapper[4827]: I0131 04:26:28.955409 4827 memory_manager.go:354] "RemoveStaleState removing state" podUID="d8ff53f0-3c1e-4bdf-8c0b-e3aca738f080" containerName="install-certs-edpm-deployment-openstack-edpm-ipam" Jan 31 04:26:28 crc 
kubenswrapper[4827]: I0131 04:26:28.956071 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceph-client-edpm-deployment-openstack-edpm-ipam-dr2nb" Jan 31 04:26:28 crc kubenswrapper[4827]: I0131 04:26:28.966373 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 31 04:26:28 crc kubenswrapper[4827]: I0131 04:26:28.966699 4827 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 31 04:26:28 crc kubenswrapper[4827]: I0131 04:26:28.967052 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceph-conf-files" Jan 31 04:26:28 crc kubenswrapper[4827]: I0131 04:26:28.967237 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-dklwg" Jan 31 04:26:28 crc kubenswrapper[4827]: I0131 04:26:28.967290 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 31 04:26:28 crc kubenswrapper[4827]: I0131 04:26:28.979165 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceph-client-edpm-deployment-openstack-edpm-ipam-dr2nb"] Jan 31 04:26:29 crc kubenswrapper[4827]: I0131 04:26:29.057145 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/e8b7f56f-cdfd-483b-8759-e869bedfd461-ssh-key-openstack-edpm-ipam\") pod \"ceph-client-edpm-deployment-openstack-edpm-ipam-dr2nb\" (UID: \"e8b7f56f-cdfd-483b-8759-e869bedfd461\") " pod="openstack/ceph-client-edpm-deployment-openstack-edpm-ipam-dr2nb" Jan 31 04:26:29 crc kubenswrapper[4827]: I0131 04:26:29.057391 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ptvxv\" (UniqueName: 
\"kubernetes.io/projected/e8b7f56f-cdfd-483b-8759-e869bedfd461-kube-api-access-ptvxv\") pod \"ceph-client-edpm-deployment-openstack-edpm-ipam-dr2nb\" (UID: \"e8b7f56f-cdfd-483b-8759-e869bedfd461\") " pod="openstack/ceph-client-edpm-deployment-openstack-edpm-ipam-dr2nb" Jan 31 04:26:29 crc kubenswrapper[4827]: I0131 04:26:29.057432 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e8b7f56f-cdfd-483b-8759-e869bedfd461-inventory\") pod \"ceph-client-edpm-deployment-openstack-edpm-ipam-dr2nb\" (UID: \"e8b7f56f-cdfd-483b-8759-e869bedfd461\") " pod="openstack/ceph-client-edpm-deployment-openstack-edpm-ipam-dr2nb" Jan 31 04:26:29 crc kubenswrapper[4827]: I0131 04:26:29.057543 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/e8b7f56f-cdfd-483b-8759-e869bedfd461-ceph\") pod \"ceph-client-edpm-deployment-openstack-edpm-ipam-dr2nb\" (UID: \"e8b7f56f-cdfd-483b-8759-e869bedfd461\") " pod="openstack/ceph-client-edpm-deployment-openstack-edpm-ipam-dr2nb" Jan 31 04:26:29 crc kubenswrapper[4827]: I0131 04:26:29.159626 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/e8b7f56f-cdfd-483b-8759-e869bedfd461-ceph\") pod \"ceph-client-edpm-deployment-openstack-edpm-ipam-dr2nb\" (UID: \"e8b7f56f-cdfd-483b-8759-e869bedfd461\") " pod="openstack/ceph-client-edpm-deployment-openstack-edpm-ipam-dr2nb" Jan 31 04:26:29 crc kubenswrapper[4827]: I0131 04:26:29.159706 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/e8b7f56f-cdfd-483b-8759-e869bedfd461-ssh-key-openstack-edpm-ipam\") pod \"ceph-client-edpm-deployment-openstack-edpm-ipam-dr2nb\" (UID: \"e8b7f56f-cdfd-483b-8759-e869bedfd461\") " 
pod="openstack/ceph-client-edpm-deployment-openstack-edpm-ipam-dr2nb" Jan 31 04:26:29 crc kubenswrapper[4827]: I0131 04:26:29.159800 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ptvxv\" (UniqueName: \"kubernetes.io/projected/e8b7f56f-cdfd-483b-8759-e869bedfd461-kube-api-access-ptvxv\") pod \"ceph-client-edpm-deployment-openstack-edpm-ipam-dr2nb\" (UID: \"e8b7f56f-cdfd-483b-8759-e869bedfd461\") " pod="openstack/ceph-client-edpm-deployment-openstack-edpm-ipam-dr2nb" Jan 31 04:26:29 crc kubenswrapper[4827]: I0131 04:26:29.159824 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e8b7f56f-cdfd-483b-8759-e869bedfd461-inventory\") pod \"ceph-client-edpm-deployment-openstack-edpm-ipam-dr2nb\" (UID: \"e8b7f56f-cdfd-483b-8759-e869bedfd461\") " pod="openstack/ceph-client-edpm-deployment-openstack-edpm-ipam-dr2nb" Jan 31 04:26:29 crc kubenswrapper[4827]: I0131 04:26:29.163808 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/e8b7f56f-cdfd-483b-8759-e869bedfd461-ceph\") pod \"ceph-client-edpm-deployment-openstack-edpm-ipam-dr2nb\" (UID: \"e8b7f56f-cdfd-483b-8759-e869bedfd461\") " pod="openstack/ceph-client-edpm-deployment-openstack-edpm-ipam-dr2nb" Jan 31 04:26:29 crc kubenswrapper[4827]: I0131 04:26:29.164035 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e8b7f56f-cdfd-483b-8759-e869bedfd461-inventory\") pod \"ceph-client-edpm-deployment-openstack-edpm-ipam-dr2nb\" (UID: \"e8b7f56f-cdfd-483b-8759-e869bedfd461\") " pod="openstack/ceph-client-edpm-deployment-openstack-edpm-ipam-dr2nb" Jan 31 04:26:29 crc kubenswrapper[4827]: I0131 04:26:29.164550 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: 
\"kubernetes.io/secret/e8b7f56f-cdfd-483b-8759-e869bedfd461-ssh-key-openstack-edpm-ipam\") pod \"ceph-client-edpm-deployment-openstack-edpm-ipam-dr2nb\" (UID: \"e8b7f56f-cdfd-483b-8759-e869bedfd461\") " pod="openstack/ceph-client-edpm-deployment-openstack-edpm-ipam-dr2nb" Jan 31 04:26:29 crc kubenswrapper[4827]: I0131 04:26:29.181526 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ptvxv\" (UniqueName: \"kubernetes.io/projected/e8b7f56f-cdfd-483b-8759-e869bedfd461-kube-api-access-ptvxv\") pod \"ceph-client-edpm-deployment-openstack-edpm-ipam-dr2nb\" (UID: \"e8b7f56f-cdfd-483b-8759-e869bedfd461\") " pod="openstack/ceph-client-edpm-deployment-openstack-edpm-ipam-dr2nb" Jan 31 04:26:29 crc kubenswrapper[4827]: I0131 04:26:29.275049 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceph-client-edpm-deployment-openstack-edpm-ipam-dr2nb" Jan 31 04:26:29 crc kubenswrapper[4827]: I0131 04:26:29.870260 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceph-client-edpm-deployment-openstack-edpm-ipam-dr2nb"] Jan 31 04:26:30 crc kubenswrapper[4827]: I0131 04:26:30.864787 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceph-client-edpm-deployment-openstack-edpm-ipam-dr2nb" event={"ID":"e8b7f56f-cdfd-483b-8759-e869bedfd461","Type":"ContainerStarted","Data":"555895f89483669fb75c92497557530b02dacc7724e28f39b525f44913b0e4f1"} Jan 31 04:26:30 crc kubenswrapper[4827]: I0131 04:26:30.865457 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceph-client-edpm-deployment-openstack-edpm-ipam-dr2nb" event={"ID":"e8b7f56f-cdfd-483b-8759-e869bedfd461","Type":"ContainerStarted","Data":"f683f82e2f4f3cec181f13dcb1f27c4ae166cb0a77459f72170185421e79969b"} Jan 31 04:26:30 crc kubenswrapper[4827]: I0131 04:26:30.897298 4827 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openstack/ceph-client-edpm-deployment-openstack-edpm-ipam-dr2nb" podStartSLOduration=2.318283871 podStartE2EDuration="2.897272979s" podCreationTimestamp="2026-01-31 04:26:28 +0000 UTC" firstStartedPulling="2026-01-31 04:26:29.87177848 +0000 UTC m=+2382.558858939" lastFinishedPulling="2026-01-31 04:26:30.450767558 +0000 UTC m=+2383.137848047" observedRunningTime="2026-01-31 04:26:30.887142572 +0000 UTC m=+2383.574223171" watchObservedRunningTime="2026-01-31 04:26:30.897272979 +0000 UTC m=+2383.584353468" Jan 31 04:26:34 crc kubenswrapper[4827]: I0131 04:26:34.109914 4827 scope.go:117] "RemoveContainer" containerID="86202cb505a10ccb4abc6da10b9dbc02e9e23b1f36d68806a25ab7c0fcf67d39" Jan 31 04:26:34 crc kubenswrapper[4827]: E0131 04:26:34.110632 4827 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jxh94_openshift-machine-config-operator(e63dbb73-e1a2-4796-83c5-2a88e55566b5)\"" pod="openshift-machine-config-operator/machine-config-daemon-jxh94" podUID="e63dbb73-e1a2-4796-83c5-2a88e55566b5" Jan 31 04:26:35 crc kubenswrapper[4827]: I0131 04:26:35.918459 4827 generic.go:334] "Generic (PLEG): container finished" podID="e8b7f56f-cdfd-483b-8759-e869bedfd461" containerID="555895f89483669fb75c92497557530b02dacc7724e28f39b525f44913b0e4f1" exitCode=0 Jan 31 04:26:35 crc kubenswrapper[4827]: I0131 04:26:35.918559 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceph-client-edpm-deployment-openstack-edpm-ipam-dr2nb" event={"ID":"e8b7f56f-cdfd-483b-8759-e869bedfd461","Type":"ContainerDied","Data":"555895f89483669fb75c92497557530b02dacc7724e28f39b525f44913b0e4f1"} Jan 31 04:26:37 crc kubenswrapper[4827]: I0131 04:26:37.307952 4827 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceph-client-edpm-deployment-openstack-edpm-ipam-dr2nb" Jan 31 04:26:37 crc kubenswrapper[4827]: I0131 04:26:37.428221 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e8b7f56f-cdfd-483b-8759-e869bedfd461-inventory\") pod \"e8b7f56f-cdfd-483b-8759-e869bedfd461\" (UID: \"e8b7f56f-cdfd-483b-8759-e869bedfd461\") " Jan 31 04:26:37 crc kubenswrapper[4827]: I0131 04:26:37.428509 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/e8b7f56f-cdfd-483b-8759-e869bedfd461-ceph\") pod \"e8b7f56f-cdfd-483b-8759-e869bedfd461\" (UID: \"e8b7f56f-cdfd-483b-8759-e869bedfd461\") " Jan 31 04:26:37 crc kubenswrapper[4827]: I0131 04:26:37.428657 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/e8b7f56f-cdfd-483b-8759-e869bedfd461-ssh-key-openstack-edpm-ipam\") pod \"e8b7f56f-cdfd-483b-8759-e869bedfd461\" (UID: \"e8b7f56f-cdfd-483b-8759-e869bedfd461\") " Jan 31 04:26:37 crc kubenswrapper[4827]: I0131 04:26:37.428706 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ptvxv\" (UniqueName: \"kubernetes.io/projected/e8b7f56f-cdfd-483b-8759-e869bedfd461-kube-api-access-ptvxv\") pod \"e8b7f56f-cdfd-483b-8759-e869bedfd461\" (UID: \"e8b7f56f-cdfd-483b-8759-e869bedfd461\") " Jan 31 04:26:37 crc kubenswrapper[4827]: I0131 04:26:37.433899 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e8b7f56f-cdfd-483b-8759-e869bedfd461-kube-api-access-ptvxv" (OuterVolumeSpecName: "kube-api-access-ptvxv") pod "e8b7f56f-cdfd-483b-8759-e869bedfd461" (UID: "e8b7f56f-cdfd-483b-8759-e869bedfd461"). InnerVolumeSpecName "kube-api-access-ptvxv". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 04:26:37 crc kubenswrapper[4827]: I0131 04:26:37.435697 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e8b7f56f-cdfd-483b-8759-e869bedfd461-ceph" (OuterVolumeSpecName: "ceph") pod "e8b7f56f-cdfd-483b-8759-e869bedfd461" (UID: "e8b7f56f-cdfd-483b-8759-e869bedfd461"). InnerVolumeSpecName "ceph". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 04:26:37 crc kubenswrapper[4827]: I0131 04:26:37.466625 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e8b7f56f-cdfd-483b-8759-e869bedfd461-inventory" (OuterVolumeSpecName: "inventory") pod "e8b7f56f-cdfd-483b-8759-e869bedfd461" (UID: "e8b7f56f-cdfd-483b-8759-e869bedfd461"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 04:26:37 crc kubenswrapper[4827]: I0131 04:26:37.469437 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e8b7f56f-cdfd-483b-8759-e869bedfd461-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "e8b7f56f-cdfd-483b-8759-e869bedfd461" (UID: "e8b7f56f-cdfd-483b-8759-e869bedfd461"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 04:26:37 crc kubenswrapper[4827]: I0131 04:26:37.531537 4827 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/e8b7f56f-cdfd-483b-8759-e869bedfd461-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 31 04:26:37 crc kubenswrapper[4827]: I0131 04:26:37.531608 4827 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ptvxv\" (UniqueName: \"kubernetes.io/projected/e8b7f56f-cdfd-483b-8759-e869bedfd461-kube-api-access-ptvxv\") on node \"crc\" DevicePath \"\"" Jan 31 04:26:37 crc kubenswrapper[4827]: I0131 04:26:37.531626 4827 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e8b7f56f-cdfd-483b-8759-e869bedfd461-inventory\") on node \"crc\" DevicePath \"\"" Jan 31 04:26:37 crc kubenswrapper[4827]: I0131 04:26:37.531644 4827 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/e8b7f56f-cdfd-483b-8759-e869bedfd461-ceph\") on node \"crc\" DevicePath \"\"" Jan 31 04:26:37 crc kubenswrapper[4827]: I0131 04:26:37.940141 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceph-client-edpm-deployment-openstack-edpm-ipam-dr2nb" event={"ID":"e8b7f56f-cdfd-483b-8759-e869bedfd461","Type":"ContainerDied","Data":"f683f82e2f4f3cec181f13dcb1f27c4ae166cb0a77459f72170185421e79969b"} Jan 31 04:26:37 crc kubenswrapper[4827]: I0131 04:26:37.940185 4827 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f683f82e2f4f3cec181f13dcb1f27c4ae166cb0a77459f72170185421e79969b" Jan 31 04:26:37 crc kubenswrapper[4827]: I0131 04:26:37.940193 4827 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceph-client-edpm-deployment-openstack-edpm-ipam-dr2nb" Jan 31 04:26:38 crc kubenswrapper[4827]: I0131 04:26:38.037805 4827 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-edpm-deployment-openstack-edpm-ipam-52fdv"] Jan 31 04:26:38 crc kubenswrapper[4827]: E0131 04:26:38.038139 4827 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e8b7f56f-cdfd-483b-8759-e869bedfd461" containerName="ceph-client-edpm-deployment-openstack-edpm-ipam" Jan 31 04:26:38 crc kubenswrapper[4827]: I0131 04:26:38.038155 4827 state_mem.go:107] "Deleted CPUSet assignment" podUID="e8b7f56f-cdfd-483b-8759-e869bedfd461" containerName="ceph-client-edpm-deployment-openstack-edpm-ipam" Jan 31 04:26:38 crc kubenswrapper[4827]: I0131 04:26:38.038328 4827 memory_manager.go:354] "RemoveStaleState removing state" podUID="e8b7f56f-cdfd-483b-8759-e869bedfd461" containerName="ceph-client-edpm-deployment-openstack-edpm-ipam" Jan 31 04:26:38 crc kubenswrapper[4827]: I0131 04:26:38.038856 4827 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-52fdv" Jan 31 04:26:38 crc kubenswrapper[4827]: I0131 04:26:38.040562 4827 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 31 04:26:38 crc kubenswrapper[4827]: I0131 04:26:38.040723 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 31 04:26:38 crc kubenswrapper[4827]: I0131 04:26:38.041029 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 31 04:26:38 crc kubenswrapper[4827]: I0131 04:26:38.041581 4827 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-config" Jan 31 04:26:38 crc kubenswrapper[4827]: I0131 04:26:38.041753 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceph-conf-files" Jan 31 04:26:38 crc kubenswrapper[4827]: I0131 04:26:38.041838 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-dklwg" Jan 31 04:26:38 crc kubenswrapper[4827]: I0131 04:26:38.056849 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-edpm-deployment-openstack-edpm-ipam-52fdv"] Jan 31 04:26:38 crc kubenswrapper[4827]: I0131 04:26:38.140003 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/d2764df8-6296-4363-9a0a-bad8253a8942-ceph\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-52fdv\" (UID: \"d2764df8-6296-4363-9a0a-bad8253a8942\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-52fdv" Jan 31 04:26:38 crc kubenswrapper[4827]: I0131 04:26:38.140110 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cqhx6\" (UniqueName: 
\"kubernetes.io/projected/d2764df8-6296-4363-9a0a-bad8253a8942-kube-api-access-cqhx6\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-52fdv\" (UID: \"d2764df8-6296-4363-9a0a-bad8253a8942\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-52fdv" Jan 31 04:26:38 crc kubenswrapper[4827]: I0131 04:26:38.140145 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/d2764df8-6296-4363-9a0a-bad8253a8942-inventory\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-52fdv\" (UID: \"d2764df8-6296-4363-9a0a-bad8253a8942\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-52fdv" Jan 31 04:26:38 crc kubenswrapper[4827]: I0131 04:26:38.140207 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d2764df8-6296-4363-9a0a-bad8253a8942-ovn-combined-ca-bundle\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-52fdv\" (UID: \"d2764df8-6296-4363-9a0a-bad8253a8942\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-52fdv" Jan 31 04:26:38 crc kubenswrapper[4827]: I0131 04:26:38.140231 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/d2764df8-6296-4363-9a0a-bad8253a8942-ovncontroller-config-0\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-52fdv\" (UID: \"d2764df8-6296-4363-9a0a-bad8253a8942\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-52fdv" Jan 31 04:26:38 crc kubenswrapper[4827]: I0131 04:26:38.140255 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/d2764df8-6296-4363-9a0a-bad8253a8942-ssh-key-openstack-edpm-ipam\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-52fdv\" (UID: \"d2764df8-6296-4363-9a0a-bad8253a8942\") " 
pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-52fdv" Jan 31 04:26:38 crc kubenswrapper[4827]: I0131 04:26:38.242169 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d2764df8-6296-4363-9a0a-bad8253a8942-ovn-combined-ca-bundle\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-52fdv\" (UID: \"d2764df8-6296-4363-9a0a-bad8253a8942\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-52fdv" Jan 31 04:26:38 crc kubenswrapper[4827]: I0131 04:26:38.242224 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/d2764df8-6296-4363-9a0a-bad8253a8942-ovncontroller-config-0\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-52fdv\" (UID: \"d2764df8-6296-4363-9a0a-bad8253a8942\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-52fdv" Jan 31 04:26:38 crc kubenswrapper[4827]: I0131 04:26:38.242267 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/d2764df8-6296-4363-9a0a-bad8253a8942-ssh-key-openstack-edpm-ipam\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-52fdv\" (UID: \"d2764df8-6296-4363-9a0a-bad8253a8942\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-52fdv" Jan 31 04:26:38 crc kubenswrapper[4827]: I0131 04:26:38.242296 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/d2764df8-6296-4363-9a0a-bad8253a8942-ceph\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-52fdv\" (UID: \"d2764df8-6296-4363-9a0a-bad8253a8942\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-52fdv" Jan 31 04:26:38 crc kubenswrapper[4827]: I0131 04:26:38.242411 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqhx6\" (UniqueName: 
\"kubernetes.io/projected/d2764df8-6296-4363-9a0a-bad8253a8942-kube-api-access-cqhx6\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-52fdv\" (UID: \"d2764df8-6296-4363-9a0a-bad8253a8942\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-52fdv" Jan 31 04:26:38 crc kubenswrapper[4827]: I0131 04:26:38.242439 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/d2764df8-6296-4363-9a0a-bad8253a8942-inventory\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-52fdv\" (UID: \"d2764df8-6296-4363-9a0a-bad8253a8942\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-52fdv" Jan 31 04:26:38 crc kubenswrapper[4827]: I0131 04:26:38.245271 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/d2764df8-6296-4363-9a0a-bad8253a8942-ovncontroller-config-0\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-52fdv\" (UID: \"d2764df8-6296-4363-9a0a-bad8253a8942\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-52fdv" Jan 31 04:26:38 crc kubenswrapper[4827]: I0131 04:26:38.246239 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d2764df8-6296-4363-9a0a-bad8253a8942-ovn-combined-ca-bundle\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-52fdv\" (UID: \"d2764df8-6296-4363-9a0a-bad8253a8942\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-52fdv" Jan 31 04:26:38 crc kubenswrapper[4827]: I0131 04:26:38.247503 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/d2764df8-6296-4363-9a0a-bad8253a8942-ceph\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-52fdv\" (UID: \"d2764df8-6296-4363-9a0a-bad8253a8942\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-52fdv" Jan 31 04:26:38 crc kubenswrapper[4827]: I0131 04:26:38.248417 4827 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/d2764df8-6296-4363-9a0a-bad8253a8942-ssh-key-openstack-edpm-ipam\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-52fdv\" (UID: \"d2764df8-6296-4363-9a0a-bad8253a8942\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-52fdv" Jan 31 04:26:38 crc kubenswrapper[4827]: I0131 04:26:38.249771 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/d2764df8-6296-4363-9a0a-bad8253a8942-inventory\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-52fdv\" (UID: \"d2764df8-6296-4363-9a0a-bad8253a8942\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-52fdv" Jan 31 04:26:38 crc kubenswrapper[4827]: I0131 04:26:38.273973 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cqhx6\" (UniqueName: \"kubernetes.io/projected/d2764df8-6296-4363-9a0a-bad8253a8942-kube-api-access-cqhx6\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-52fdv\" (UID: \"d2764df8-6296-4363-9a0a-bad8253a8942\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-52fdv" Jan 31 04:26:38 crc kubenswrapper[4827]: I0131 04:26:38.365754 4827 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-52fdv" Jan 31 04:26:38 crc kubenswrapper[4827]: I0131 04:26:38.698053 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-edpm-deployment-openstack-edpm-ipam-52fdv"] Jan 31 04:26:38 crc kubenswrapper[4827]: I0131 04:26:38.948385 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-52fdv" event={"ID":"d2764df8-6296-4363-9a0a-bad8253a8942","Type":"ContainerStarted","Data":"42c2220a10139c7afde1c0f5a021ca45f7623ef10bb4b5438128c064bef84d0b"} Jan 31 04:26:39 crc kubenswrapper[4827]: I0131 04:26:39.963212 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-52fdv" event={"ID":"d2764df8-6296-4363-9a0a-bad8253a8942","Type":"ContainerStarted","Data":"023cfa3eb1465bcb428159b7cf8f71d6d9606397c059987df24bf85359ae41bd"} Jan 31 04:26:40 crc kubenswrapper[4827]: I0131 04:26:40.008968 4827 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-52fdv" podStartSLOduration=1.5952780519999998 podStartE2EDuration="2.008949142s" podCreationTimestamp="2026-01-31 04:26:38 +0000 UTC" firstStartedPulling="2026-01-31 04:26:38.699039129 +0000 UTC m=+2391.386119578" lastFinishedPulling="2026-01-31 04:26:39.112710219 +0000 UTC m=+2391.799790668" observedRunningTime="2026-01-31 04:26:40.002254543 +0000 UTC m=+2392.689334992" watchObservedRunningTime="2026-01-31 04:26:40.008949142 +0000 UTC m=+2392.696029591" Jan 31 04:26:47 crc kubenswrapper[4827]: I0131 04:26:47.110418 4827 scope.go:117] "RemoveContainer" containerID="86202cb505a10ccb4abc6da10b9dbc02e9e23b1f36d68806a25ab7c0fcf67d39" Jan 31 04:26:47 crc kubenswrapper[4827]: E0131 04:26:47.111351 4827 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting 
failed container=machine-config-daemon pod=machine-config-daemon-jxh94_openshift-machine-config-operator(e63dbb73-e1a2-4796-83c5-2a88e55566b5)\"" pod="openshift-machine-config-operator/machine-config-daemon-jxh94" podUID="e63dbb73-e1a2-4796-83c5-2a88e55566b5" Jan 31 04:27:01 crc kubenswrapper[4827]: I0131 04:27:01.110162 4827 scope.go:117] "RemoveContainer" containerID="86202cb505a10ccb4abc6da10b9dbc02e9e23b1f36d68806a25ab7c0fcf67d39" Jan 31 04:27:01 crc kubenswrapper[4827]: E0131 04:27:01.111211 4827 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jxh94_openshift-machine-config-operator(e63dbb73-e1a2-4796-83c5-2a88e55566b5)\"" pod="openshift-machine-config-operator/machine-config-daemon-jxh94" podUID="e63dbb73-e1a2-4796-83c5-2a88e55566b5" Jan 31 04:27:14 crc kubenswrapper[4827]: I0131 04:27:14.111442 4827 scope.go:117] "RemoveContainer" containerID="86202cb505a10ccb4abc6da10b9dbc02e9e23b1f36d68806a25ab7c0fcf67d39" Jan 31 04:27:14 crc kubenswrapper[4827]: E0131 04:27:14.112710 4827 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jxh94_openshift-machine-config-operator(e63dbb73-e1a2-4796-83c5-2a88e55566b5)\"" pod="openshift-machine-config-operator/machine-config-daemon-jxh94" podUID="e63dbb73-e1a2-4796-83c5-2a88e55566b5" Jan 31 04:27:25 crc kubenswrapper[4827]: I0131 04:27:25.111184 4827 scope.go:117] "RemoveContainer" containerID="86202cb505a10ccb4abc6da10b9dbc02e9e23b1f36d68806a25ab7c0fcf67d39" Jan 31 04:27:25 crc kubenswrapper[4827]: E0131 04:27:25.112543 4827 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 
5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jxh94_openshift-machine-config-operator(e63dbb73-e1a2-4796-83c5-2a88e55566b5)\"" pod="openshift-machine-config-operator/machine-config-daemon-jxh94" podUID="e63dbb73-e1a2-4796-83c5-2a88e55566b5" Jan 31 04:27:37 crc kubenswrapper[4827]: I0131 04:27:37.110765 4827 scope.go:117] "RemoveContainer" containerID="86202cb505a10ccb4abc6da10b9dbc02e9e23b1f36d68806a25ab7c0fcf67d39" Jan 31 04:27:37 crc kubenswrapper[4827]: E0131 04:27:37.111812 4827 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jxh94_openshift-machine-config-operator(e63dbb73-e1a2-4796-83c5-2a88e55566b5)\"" pod="openshift-machine-config-operator/machine-config-daemon-jxh94" podUID="e63dbb73-e1a2-4796-83c5-2a88e55566b5" Jan 31 04:27:51 crc kubenswrapper[4827]: I0131 04:27:51.644851 4827 generic.go:334] "Generic (PLEG): container finished" podID="d2764df8-6296-4363-9a0a-bad8253a8942" containerID="023cfa3eb1465bcb428159b7cf8f71d6d9606397c059987df24bf85359ae41bd" exitCode=0 Jan 31 04:27:51 crc kubenswrapper[4827]: I0131 04:27:51.644964 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-52fdv" event={"ID":"d2764df8-6296-4363-9a0a-bad8253a8942","Type":"ContainerDied","Data":"023cfa3eb1465bcb428159b7cf8f71d6d9606397c059987df24bf85359ae41bd"} Jan 31 04:27:52 crc kubenswrapper[4827]: I0131 04:27:52.110215 4827 scope.go:117] "RemoveContainer" containerID="86202cb505a10ccb4abc6da10b9dbc02e9e23b1f36d68806a25ab7c0fcf67d39" Jan 31 04:27:52 crc kubenswrapper[4827]: E0131 04:27:52.110655 4827 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-jxh94_openshift-machine-config-operator(e63dbb73-e1a2-4796-83c5-2a88e55566b5)\"" pod="openshift-machine-config-operator/machine-config-daemon-jxh94" podUID="e63dbb73-e1a2-4796-83c5-2a88e55566b5" Jan 31 04:27:53 crc kubenswrapper[4827]: I0131 04:27:53.651833 4827 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-52fdv" Jan 31 04:27:53 crc kubenswrapper[4827]: I0131 04:27:53.663270 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-52fdv" event={"ID":"d2764df8-6296-4363-9a0a-bad8253a8942","Type":"ContainerDied","Data":"42c2220a10139c7afde1c0f5a021ca45f7623ef10bb4b5438128c064bef84d0b"} Jan 31 04:27:53 crc kubenswrapper[4827]: I0131 04:27:53.663316 4827 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="42c2220a10139c7afde1c0f5a021ca45f7623ef10bb4b5438128c064bef84d0b" Jan 31 04:27:53 crc kubenswrapper[4827]: I0131 04:27:53.663361 4827 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-52fdv" Jan 31 04:27:53 crc kubenswrapper[4827]: I0131 04:27:53.836840 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cqhx6\" (UniqueName: \"kubernetes.io/projected/d2764df8-6296-4363-9a0a-bad8253a8942-kube-api-access-cqhx6\") pod \"d2764df8-6296-4363-9a0a-bad8253a8942\" (UID: \"d2764df8-6296-4363-9a0a-bad8253a8942\") " Jan 31 04:27:53 crc kubenswrapper[4827]: I0131 04:27:53.837754 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/d2764df8-6296-4363-9a0a-bad8253a8942-ceph\") pod \"d2764df8-6296-4363-9a0a-bad8253a8942\" (UID: \"d2764df8-6296-4363-9a0a-bad8253a8942\") " Jan 31 04:27:53 crc kubenswrapper[4827]: I0131 04:27:53.837782 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d2764df8-6296-4363-9a0a-bad8253a8942-ovn-combined-ca-bundle\") pod \"d2764df8-6296-4363-9a0a-bad8253a8942\" (UID: \"d2764df8-6296-4363-9a0a-bad8253a8942\") " Jan 31 04:27:53 crc kubenswrapper[4827]: I0131 04:27:53.837806 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/d2764df8-6296-4363-9a0a-bad8253a8942-ovncontroller-config-0\") pod \"d2764df8-6296-4363-9a0a-bad8253a8942\" (UID: \"d2764df8-6296-4363-9a0a-bad8253a8942\") " Jan 31 04:27:53 crc kubenswrapper[4827]: I0131 04:27:53.837824 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/d2764df8-6296-4363-9a0a-bad8253a8942-inventory\") pod \"d2764df8-6296-4363-9a0a-bad8253a8942\" (UID: \"d2764df8-6296-4363-9a0a-bad8253a8942\") " Jan 31 04:27:53 crc kubenswrapper[4827]: I0131 04:27:53.837901 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started 
for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/d2764df8-6296-4363-9a0a-bad8253a8942-ssh-key-openstack-edpm-ipam\") pod \"d2764df8-6296-4363-9a0a-bad8253a8942\" (UID: \"d2764df8-6296-4363-9a0a-bad8253a8942\") " Jan 31 04:27:53 crc kubenswrapper[4827]: I0131 04:27:53.844026 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d2764df8-6296-4363-9a0a-bad8253a8942-kube-api-access-cqhx6" (OuterVolumeSpecName: "kube-api-access-cqhx6") pod "d2764df8-6296-4363-9a0a-bad8253a8942" (UID: "d2764df8-6296-4363-9a0a-bad8253a8942"). InnerVolumeSpecName "kube-api-access-cqhx6". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 04:27:53 crc kubenswrapper[4827]: I0131 04:27:53.845303 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d2764df8-6296-4363-9a0a-bad8253a8942-ceph" (OuterVolumeSpecName: "ceph") pod "d2764df8-6296-4363-9a0a-bad8253a8942" (UID: "d2764df8-6296-4363-9a0a-bad8253a8942"). InnerVolumeSpecName "ceph". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 04:27:53 crc kubenswrapper[4827]: I0131 04:27:53.845318 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d2764df8-6296-4363-9a0a-bad8253a8942-ovn-combined-ca-bundle" (OuterVolumeSpecName: "ovn-combined-ca-bundle") pod "d2764df8-6296-4363-9a0a-bad8253a8942" (UID: "d2764df8-6296-4363-9a0a-bad8253a8942"). InnerVolumeSpecName "ovn-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 04:27:53 crc kubenswrapper[4827]: I0131 04:27:53.870559 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d2764df8-6296-4363-9a0a-bad8253a8942-inventory" (OuterVolumeSpecName: "inventory") pod "d2764df8-6296-4363-9a0a-bad8253a8942" (UID: "d2764df8-6296-4363-9a0a-bad8253a8942"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 04:27:53 crc kubenswrapper[4827]: I0131 04:27:53.872943 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d2764df8-6296-4363-9a0a-bad8253a8942-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "d2764df8-6296-4363-9a0a-bad8253a8942" (UID: "d2764df8-6296-4363-9a0a-bad8253a8942"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 04:27:53 crc kubenswrapper[4827]: I0131 04:27:53.876569 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d2764df8-6296-4363-9a0a-bad8253a8942-ovncontroller-config-0" (OuterVolumeSpecName: "ovncontroller-config-0") pod "d2764df8-6296-4363-9a0a-bad8253a8942" (UID: "d2764df8-6296-4363-9a0a-bad8253a8942"). InnerVolumeSpecName "ovncontroller-config-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 04:27:53 crc kubenswrapper[4827]: I0131 04:27:53.940038 4827 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/d2764df8-6296-4363-9a0a-bad8253a8942-ceph\") on node \"crc\" DevicePath \"\"" Jan 31 04:27:53 crc kubenswrapper[4827]: I0131 04:27:53.940079 4827 reconciler_common.go:293] "Volume detached for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d2764df8-6296-4363-9a0a-bad8253a8942-ovn-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 31 04:27:53 crc kubenswrapper[4827]: I0131 04:27:53.940092 4827 reconciler_common.go:293] "Volume detached for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/d2764df8-6296-4363-9a0a-bad8253a8942-ovncontroller-config-0\") on node \"crc\" DevicePath \"\"" Jan 31 04:27:53 crc kubenswrapper[4827]: I0131 04:27:53.940101 4827 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: 
\"kubernetes.io/secret/d2764df8-6296-4363-9a0a-bad8253a8942-inventory\") on node \"crc\" DevicePath \"\"" Jan 31 04:27:53 crc kubenswrapper[4827]: I0131 04:27:53.940111 4827 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/d2764df8-6296-4363-9a0a-bad8253a8942-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 31 04:27:53 crc kubenswrapper[4827]: I0131 04:27:53.940121 4827 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cqhx6\" (UniqueName: \"kubernetes.io/projected/d2764df8-6296-4363-9a0a-bad8253a8942-kube-api-access-cqhx6\") on node \"crc\" DevicePath \"\"" Jan 31 04:27:54 crc kubenswrapper[4827]: I0131 04:27:54.757339 4827 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-ttdms"] Jan 31 04:27:54 crc kubenswrapper[4827]: E0131 04:27:54.758305 4827 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d2764df8-6296-4363-9a0a-bad8253a8942" containerName="ovn-edpm-deployment-openstack-edpm-ipam" Jan 31 04:27:54 crc kubenswrapper[4827]: I0131 04:27:54.758327 4827 state_mem.go:107] "Deleted CPUSet assignment" podUID="d2764df8-6296-4363-9a0a-bad8253a8942" containerName="ovn-edpm-deployment-openstack-edpm-ipam" Jan 31 04:27:54 crc kubenswrapper[4827]: I0131 04:27:54.758512 4827 memory_manager.go:354] "RemoveStaleState removing state" podUID="d2764df8-6296-4363-9a0a-bad8253a8942" containerName="ovn-edpm-deployment-openstack-edpm-ipam" Jan 31 04:27:54 crc kubenswrapper[4827]: I0131 04:27:54.759057 4827 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-ttdms" Jan 31 04:27:54 crc kubenswrapper[4827]: I0131 04:27:54.761667 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceph-conf-files" Jan 31 04:27:54 crc kubenswrapper[4827]: I0131 04:27:54.761763 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-ovn-metadata-agent-neutron-config" Jan 31 04:27:54 crc kubenswrapper[4827]: I0131 04:27:54.761934 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-dklwg" Jan 31 04:27:54 crc kubenswrapper[4827]: I0131 04:27:54.761783 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 31 04:27:54 crc kubenswrapper[4827]: I0131 04:27:54.762631 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 31 04:27:54 crc kubenswrapper[4827]: I0131 04:27:54.762814 4827 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 31 04:27:54 crc kubenswrapper[4827]: I0131 04:27:54.763634 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-neutron-config" Jan 31 04:27:54 crc kubenswrapper[4827]: I0131 04:27:54.775047 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-ttdms"] Jan 31 04:27:54 crc kubenswrapper[4827]: I0131 04:27:54.854320 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/4aeddc0e-5ddf-42a0-8c89-e840171e5c7b-ssh-key-openstack-edpm-ipam\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-ttdms\" (UID: \"4aeddc0e-5ddf-42a0-8c89-e840171e5c7b\") " 
pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-ttdms" Jan 31 04:27:54 crc kubenswrapper[4827]: I0131 04:27:54.854619 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9nh5n\" (UniqueName: \"kubernetes.io/projected/4aeddc0e-5ddf-42a0-8c89-e840171e5c7b-kube-api-access-9nh5n\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-ttdms\" (UID: \"4aeddc0e-5ddf-42a0-8c89-e840171e5c7b\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-ttdms" Jan 31 04:27:54 crc kubenswrapper[4827]: I0131 04:27:54.854754 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/4aeddc0e-5ddf-42a0-8c89-e840171e5c7b-neutron-ovn-metadata-agent-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-ttdms\" (UID: \"4aeddc0e-5ddf-42a0-8c89-e840171e5c7b\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-ttdms" Jan 31 04:27:54 crc kubenswrapper[4827]: I0131 04:27:54.854977 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/4aeddc0e-5ddf-42a0-8c89-e840171e5c7b-nova-metadata-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-ttdms\" (UID: \"4aeddc0e-5ddf-42a0-8c89-e840171e5c7b\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-ttdms" Jan 31 04:27:54 crc kubenswrapper[4827]: I0131 04:27:54.855069 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/4aeddc0e-5ddf-42a0-8c89-e840171e5c7b-ceph\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-ttdms\" (UID: \"4aeddc0e-5ddf-42a0-8c89-e840171e5c7b\") " 
pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-ttdms" Jan 31 04:27:54 crc kubenswrapper[4827]: I0131 04:27:54.855104 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4aeddc0e-5ddf-42a0-8c89-e840171e5c7b-neutron-metadata-combined-ca-bundle\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-ttdms\" (UID: \"4aeddc0e-5ddf-42a0-8c89-e840171e5c7b\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-ttdms" Jan 31 04:27:54 crc kubenswrapper[4827]: I0131 04:27:54.855270 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/4aeddc0e-5ddf-42a0-8c89-e840171e5c7b-inventory\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-ttdms\" (UID: \"4aeddc0e-5ddf-42a0-8c89-e840171e5c7b\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-ttdms" Jan 31 04:27:54 crc kubenswrapper[4827]: I0131 04:27:54.957457 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/4aeddc0e-5ddf-42a0-8c89-e840171e5c7b-ssh-key-openstack-edpm-ipam\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-ttdms\" (UID: \"4aeddc0e-5ddf-42a0-8c89-e840171e5c7b\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-ttdms" Jan 31 04:27:54 crc kubenswrapper[4827]: I0131 04:27:54.957507 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9nh5n\" (UniqueName: \"kubernetes.io/projected/4aeddc0e-5ddf-42a0-8c89-e840171e5c7b-kube-api-access-9nh5n\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-ttdms\" (UID: \"4aeddc0e-5ddf-42a0-8c89-e840171e5c7b\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-ttdms" Jan 31 04:27:54 crc 
kubenswrapper[4827]: I0131 04:27:54.957537 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/4aeddc0e-5ddf-42a0-8c89-e840171e5c7b-neutron-ovn-metadata-agent-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-ttdms\" (UID: \"4aeddc0e-5ddf-42a0-8c89-e840171e5c7b\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-ttdms" Jan 31 04:27:54 crc kubenswrapper[4827]: I0131 04:27:54.957572 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/4aeddc0e-5ddf-42a0-8c89-e840171e5c7b-nova-metadata-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-ttdms\" (UID: \"4aeddc0e-5ddf-42a0-8c89-e840171e5c7b\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-ttdms" Jan 31 04:27:54 crc kubenswrapper[4827]: I0131 04:27:54.957600 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/4aeddc0e-5ddf-42a0-8c89-e840171e5c7b-ceph\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-ttdms\" (UID: \"4aeddc0e-5ddf-42a0-8c89-e840171e5c7b\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-ttdms" Jan 31 04:27:54 crc kubenswrapper[4827]: I0131 04:27:54.957618 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4aeddc0e-5ddf-42a0-8c89-e840171e5c7b-neutron-metadata-combined-ca-bundle\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-ttdms\" (UID: \"4aeddc0e-5ddf-42a0-8c89-e840171e5c7b\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-ttdms" Jan 31 04:27:54 crc kubenswrapper[4827]: I0131 04:27:54.957676 4827 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/4aeddc0e-5ddf-42a0-8c89-e840171e5c7b-inventory\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-ttdms\" (UID: \"4aeddc0e-5ddf-42a0-8c89-e840171e5c7b\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-ttdms" Jan 31 04:27:54 crc kubenswrapper[4827]: I0131 04:27:54.962697 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/4aeddc0e-5ddf-42a0-8c89-e840171e5c7b-ceph\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-ttdms\" (UID: \"4aeddc0e-5ddf-42a0-8c89-e840171e5c7b\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-ttdms" Jan 31 04:27:54 crc kubenswrapper[4827]: I0131 04:27:54.965392 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/4aeddc0e-5ddf-42a0-8c89-e840171e5c7b-inventory\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-ttdms\" (UID: \"4aeddc0e-5ddf-42a0-8c89-e840171e5c7b\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-ttdms" Jan 31 04:27:54 crc kubenswrapper[4827]: I0131 04:27:54.967143 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/4aeddc0e-5ddf-42a0-8c89-e840171e5c7b-nova-metadata-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-ttdms\" (UID: \"4aeddc0e-5ddf-42a0-8c89-e840171e5c7b\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-ttdms" Jan 31 04:27:54 crc kubenswrapper[4827]: I0131 04:27:54.967372 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/4aeddc0e-5ddf-42a0-8c89-e840171e5c7b-neutron-ovn-metadata-agent-neutron-config-0\") pod 
\"neutron-metadata-edpm-deployment-openstack-edpm-ipam-ttdms\" (UID: \"4aeddc0e-5ddf-42a0-8c89-e840171e5c7b\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-ttdms" Jan 31 04:27:54 crc kubenswrapper[4827]: I0131 04:27:54.969090 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4aeddc0e-5ddf-42a0-8c89-e840171e5c7b-neutron-metadata-combined-ca-bundle\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-ttdms\" (UID: \"4aeddc0e-5ddf-42a0-8c89-e840171e5c7b\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-ttdms" Jan 31 04:27:54 crc kubenswrapper[4827]: I0131 04:27:54.977618 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/4aeddc0e-5ddf-42a0-8c89-e840171e5c7b-ssh-key-openstack-edpm-ipam\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-ttdms\" (UID: \"4aeddc0e-5ddf-42a0-8c89-e840171e5c7b\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-ttdms" Jan 31 04:27:54 crc kubenswrapper[4827]: I0131 04:27:54.984155 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9nh5n\" (UniqueName: \"kubernetes.io/projected/4aeddc0e-5ddf-42a0-8c89-e840171e5c7b-kube-api-access-9nh5n\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-ttdms\" (UID: \"4aeddc0e-5ddf-42a0-8c89-e840171e5c7b\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-ttdms" Jan 31 04:27:55 crc kubenswrapper[4827]: I0131 04:27:55.075580 4827 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-ttdms" Jan 31 04:27:55 crc kubenswrapper[4827]: I0131 04:27:55.623103 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-ttdms"] Jan 31 04:27:55 crc kubenswrapper[4827]: I0131 04:27:55.679744 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-ttdms" event={"ID":"4aeddc0e-5ddf-42a0-8c89-e840171e5c7b","Type":"ContainerStarted","Data":"83e871bdd7822aa382a92103c90e8735f1ed0f1191b57cc93bdd00bf531b1438"} Jan 31 04:27:56 crc kubenswrapper[4827]: I0131 04:27:56.691182 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-ttdms" event={"ID":"4aeddc0e-5ddf-42a0-8c89-e840171e5c7b","Type":"ContainerStarted","Data":"ccd946d86aff262158a9ab3804d7f2c011e919d20ecf523a1073ea6eedf2e511"} Jan 31 04:27:56 crc kubenswrapper[4827]: I0131 04:27:56.723947 4827 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-ttdms" podStartSLOduration=2.226853572 podStartE2EDuration="2.723904989s" podCreationTimestamp="2026-01-31 04:27:54 +0000 UTC" firstStartedPulling="2026-01-31 04:27:55.628977324 +0000 UTC m=+2468.316057783" lastFinishedPulling="2026-01-31 04:27:56.126028711 +0000 UTC m=+2468.813109200" observedRunningTime="2026-01-31 04:27:56.708561269 +0000 UTC m=+2469.395641798" watchObservedRunningTime="2026-01-31 04:27:56.723904989 +0000 UTC m=+2469.410985468" Jan 31 04:28:04 crc kubenswrapper[4827]: I0131 04:28:04.109869 4827 scope.go:117] "RemoveContainer" containerID="86202cb505a10ccb4abc6da10b9dbc02e9e23b1f36d68806a25ab7c0fcf67d39" Jan 31 04:28:04 crc kubenswrapper[4827]: E0131 04:28:04.111179 4827 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jxh94_openshift-machine-config-operator(e63dbb73-e1a2-4796-83c5-2a88e55566b5)\"" pod="openshift-machine-config-operator/machine-config-daemon-jxh94" podUID="e63dbb73-e1a2-4796-83c5-2a88e55566b5" Jan 31 04:28:19 crc kubenswrapper[4827]: I0131 04:28:19.110285 4827 scope.go:117] "RemoveContainer" containerID="86202cb505a10ccb4abc6da10b9dbc02e9e23b1f36d68806a25ab7c0fcf67d39" Jan 31 04:28:19 crc kubenswrapper[4827]: E0131 04:28:19.111061 4827 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jxh94_openshift-machine-config-operator(e63dbb73-e1a2-4796-83c5-2a88e55566b5)\"" pod="openshift-machine-config-operator/machine-config-daemon-jxh94" podUID="e63dbb73-e1a2-4796-83c5-2a88e55566b5" Jan 31 04:28:33 crc kubenswrapper[4827]: I0131 04:28:33.110312 4827 scope.go:117] "RemoveContainer" containerID="86202cb505a10ccb4abc6da10b9dbc02e9e23b1f36d68806a25ab7c0fcf67d39" Jan 31 04:28:33 crc kubenswrapper[4827]: E0131 04:28:33.111103 4827 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jxh94_openshift-machine-config-operator(e63dbb73-e1a2-4796-83c5-2a88e55566b5)\"" pod="openshift-machine-config-operator/machine-config-daemon-jxh94" podUID="e63dbb73-e1a2-4796-83c5-2a88e55566b5" Jan 31 04:28:45 crc kubenswrapper[4827]: I0131 04:28:45.111106 4827 scope.go:117] "RemoveContainer" containerID="86202cb505a10ccb4abc6da10b9dbc02e9e23b1f36d68806a25ab7c0fcf67d39" Jan 31 04:28:45 crc kubenswrapper[4827]: E0131 04:28:45.112164 4827 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jxh94_openshift-machine-config-operator(e63dbb73-e1a2-4796-83c5-2a88e55566b5)\"" pod="openshift-machine-config-operator/machine-config-daemon-jxh94" podUID="e63dbb73-e1a2-4796-83c5-2a88e55566b5" Jan 31 04:28:57 crc kubenswrapper[4827]: I0131 04:28:57.234132 4827 generic.go:334] "Generic (PLEG): container finished" podID="4aeddc0e-5ddf-42a0-8c89-e840171e5c7b" containerID="ccd946d86aff262158a9ab3804d7f2c011e919d20ecf523a1073ea6eedf2e511" exitCode=0 Jan 31 04:28:57 crc kubenswrapper[4827]: I0131 04:28:57.234351 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-ttdms" event={"ID":"4aeddc0e-5ddf-42a0-8c89-e840171e5c7b","Type":"ContainerDied","Data":"ccd946d86aff262158a9ab3804d7f2c011e919d20ecf523a1073ea6eedf2e511"} Jan 31 04:28:58 crc kubenswrapper[4827]: I0131 04:28:58.118534 4827 scope.go:117] "RemoveContainer" containerID="86202cb505a10ccb4abc6da10b9dbc02e9e23b1f36d68806a25ab7c0fcf67d39" Jan 31 04:28:58 crc kubenswrapper[4827]: E0131 04:28:58.119188 4827 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jxh94_openshift-machine-config-operator(e63dbb73-e1a2-4796-83c5-2a88e55566b5)\"" pod="openshift-machine-config-operator/machine-config-daemon-jxh94" podUID="e63dbb73-e1a2-4796-83c5-2a88e55566b5" Jan 31 04:28:58 crc kubenswrapper[4827]: I0131 04:28:58.739666 4827 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-ttdms" Jan 31 04:28:58 crc kubenswrapper[4827]: I0131 04:28:58.839074 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/4aeddc0e-5ddf-42a0-8c89-e840171e5c7b-ceph\") pod \"4aeddc0e-5ddf-42a0-8c89-e840171e5c7b\" (UID: \"4aeddc0e-5ddf-42a0-8c89-e840171e5c7b\") " Jan 31 04:28:58 crc kubenswrapper[4827]: I0131 04:28:58.839241 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9nh5n\" (UniqueName: \"kubernetes.io/projected/4aeddc0e-5ddf-42a0-8c89-e840171e5c7b-kube-api-access-9nh5n\") pod \"4aeddc0e-5ddf-42a0-8c89-e840171e5c7b\" (UID: \"4aeddc0e-5ddf-42a0-8c89-e840171e5c7b\") " Jan 31 04:28:58 crc kubenswrapper[4827]: I0131 04:28:58.839321 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/4aeddc0e-5ddf-42a0-8c89-e840171e5c7b-neutron-ovn-metadata-agent-neutron-config-0\") pod \"4aeddc0e-5ddf-42a0-8c89-e840171e5c7b\" (UID: \"4aeddc0e-5ddf-42a0-8c89-e840171e5c7b\") " Jan 31 04:28:58 crc kubenswrapper[4827]: I0131 04:28:58.839356 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/4aeddc0e-5ddf-42a0-8c89-e840171e5c7b-ssh-key-openstack-edpm-ipam\") pod \"4aeddc0e-5ddf-42a0-8c89-e840171e5c7b\" (UID: \"4aeddc0e-5ddf-42a0-8c89-e840171e5c7b\") " Jan 31 04:28:58 crc kubenswrapper[4827]: I0131 04:28:58.839469 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4aeddc0e-5ddf-42a0-8c89-e840171e5c7b-neutron-metadata-combined-ca-bundle\") pod \"4aeddc0e-5ddf-42a0-8c89-e840171e5c7b\" (UID: \"4aeddc0e-5ddf-42a0-8c89-e840171e5c7b\") " Jan 31 04:28:58 crc 
kubenswrapper[4827]: I0131 04:28:58.839512 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/4aeddc0e-5ddf-42a0-8c89-e840171e5c7b-inventory\") pod \"4aeddc0e-5ddf-42a0-8c89-e840171e5c7b\" (UID: \"4aeddc0e-5ddf-42a0-8c89-e840171e5c7b\") " Jan 31 04:28:58 crc kubenswrapper[4827]: I0131 04:28:58.839541 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/4aeddc0e-5ddf-42a0-8c89-e840171e5c7b-nova-metadata-neutron-config-0\") pod \"4aeddc0e-5ddf-42a0-8c89-e840171e5c7b\" (UID: \"4aeddc0e-5ddf-42a0-8c89-e840171e5c7b\") " Jan 31 04:28:58 crc kubenswrapper[4827]: I0131 04:28:58.845852 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4aeddc0e-5ddf-42a0-8c89-e840171e5c7b-neutron-metadata-combined-ca-bundle" (OuterVolumeSpecName: "neutron-metadata-combined-ca-bundle") pod "4aeddc0e-5ddf-42a0-8c89-e840171e5c7b" (UID: "4aeddc0e-5ddf-42a0-8c89-e840171e5c7b"). InnerVolumeSpecName "neutron-metadata-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 04:28:58 crc kubenswrapper[4827]: I0131 04:28:58.856844 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4aeddc0e-5ddf-42a0-8c89-e840171e5c7b-ceph" (OuterVolumeSpecName: "ceph") pod "4aeddc0e-5ddf-42a0-8c89-e840171e5c7b" (UID: "4aeddc0e-5ddf-42a0-8c89-e840171e5c7b"). InnerVolumeSpecName "ceph". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 04:28:58 crc kubenswrapper[4827]: I0131 04:28:58.856944 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4aeddc0e-5ddf-42a0-8c89-e840171e5c7b-kube-api-access-9nh5n" (OuterVolumeSpecName: "kube-api-access-9nh5n") pod "4aeddc0e-5ddf-42a0-8c89-e840171e5c7b" (UID: "4aeddc0e-5ddf-42a0-8c89-e840171e5c7b"). 
InnerVolumeSpecName "kube-api-access-9nh5n". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 04:28:58 crc kubenswrapper[4827]: I0131 04:28:58.874627 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4aeddc0e-5ddf-42a0-8c89-e840171e5c7b-inventory" (OuterVolumeSpecName: "inventory") pod "4aeddc0e-5ddf-42a0-8c89-e840171e5c7b" (UID: "4aeddc0e-5ddf-42a0-8c89-e840171e5c7b"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 04:28:58 crc kubenswrapper[4827]: I0131 04:28:58.876233 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4aeddc0e-5ddf-42a0-8c89-e840171e5c7b-neutron-ovn-metadata-agent-neutron-config-0" (OuterVolumeSpecName: "neutron-ovn-metadata-agent-neutron-config-0") pod "4aeddc0e-5ddf-42a0-8c89-e840171e5c7b" (UID: "4aeddc0e-5ddf-42a0-8c89-e840171e5c7b"). InnerVolumeSpecName "neutron-ovn-metadata-agent-neutron-config-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 04:28:58 crc kubenswrapper[4827]: I0131 04:28:58.884933 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4aeddc0e-5ddf-42a0-8c89-e840171e5c7b-nova-metadata-neutron-config-0" (OuterVolumeSpecName: "nova-metadata-neutron-config-0") pod "4aeddc0e-5ddf-42a0-8c89-e840171e5c7b" (UID: "4aeddc0e-5ddf-42a0-8c89-e840171e5c7b"). InnerVolumeSpecName "nova-metadata-neutron-config-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 04:28:58 crc kubenswrapper[4827]: I0131 04:28:58.897933 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4aeddc0e-5ddf-42a0-8c89-e840171e5c7b-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "4aeddc0e-5ddf-42a0-8c89-e840171e5c7b" (UID: "4aeddc0e-5ddf-42a0-8c89-e840171e5c7b"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 04:28:58 crc kubenswrapper[4827]: I0131 04:28:58.941587 4827 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9nh5n\" (UniqueName: \"kubernetes.io/projected/4aeddc0e-5ddf-42a0-8c89-e840171e5c7b-kube-api-access-9nh5n\") on node \"crc\" DevicePath \"\"" Jan 31 04:28:58 crc kubenswrapper[4827]: I0131 04:28:58.941632 4827 reconciler_common.go:293] "Volume detached for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/4aeddc0e-5ddf-42a0-8c89-e840171e5c7b-neutron-ovn-metadata-agent-neutron-config-0\") on node \"crc\" DevicePath \"\"" Jan 31 04:28:58 crc kubenswrapper[4827]: I0131 04:28:58.941647 4827 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/4aeddc0e-5ddf-42a0-8c89-e840171e5c7b-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 31 04:28:58 crc kubenswrapper[4827]: I0131 04:28:58.941660 4827 reconciler_common.go:293] "Volume detached for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4aeddc0e-5ddf-42a0-8c89-e840171e5c7b-neutron-metadata-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 31 04:28:58 crc kubenswrapper[4827]: I0131 04:28:58.941674 4827 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/4aeddc0e-5ddf-42a0-8c89-e840171e5c7b-inventory\") on node \"crc\" DevicePath \"\"" Jan 31 04:28:58 crc kubenswrapper[4827]: I0131 04:28:58.941686 4827 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/4aeddc0e-5ddf-42a0-8c89-e840171e5c7b-nova-metadata-neutron-config-0\") on node \"crc\" DevicePath \"\"" Jan 31 04:28:58 crc kubenswrapper[4827]: I0131 04:28:58.941698 4827 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: 
\"kubernetes.io/secret/4aeddc0e-5ddf-42a0-8c89-e840171e5c7b-ceph\") on node \"crc\" DevicePath \"\"" Jan 31 04:28:59 crc kubenswrapper[4827]: I0131 04:28:59.258637 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-ttdms" event={"ID":"4aeddc0e-5ddf-42a0-8c89-e840171e5c7b","Type":"ContainerDied","Data":"83e871bdd7822aa382a92103c90e8735f1ed0f1191b57cc93bdd00bf531b1438"} Jan 31 04:28:59 crc kubenswrapper[4827]: I0131 04:28:59.258684 4827 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="83e871bdd7822aa382a92103c90e8735f1ed0f1191b57cc93bdd00bf531b1438" Jan 31 04:28:59 crc kubenswrapper[4827]: I0131 04:28:59.258686 4827 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-ttdms" Jan 31 04:28:59 crc kubenswrapper[4827]: I0131 04:28:59.380078 4827 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/libvirt-edpm-deployment-openstack-edpm-ipam-hmq9l"] Jan 31 04:28:59 crc kubenswrapper[4827]: E0131 04:28:59.380526 4827 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4aeddc0e-5ddf-42a0-8c89-e840171e5c7b" containerName="neutron-metadata-edpm-deployment-openstack-edpm-ipam" Jan 31 04:28:59 crc kubenswrapper[4827]: I0131 04:28:59.380544 4827 state_mem.go:107] "Deleted CPUSet assignment" podUID="4aeddc0e-5ddf-42a0-8c89-e840171e5c7b" containerName="neutron-metadata-edpm-deployment-openstack-edpm-ipam" Jan 31 04:28:59 crc kubenswrapper[4827]: I0131 04:28:59.380739 4827 memory_manager.go:354] "RemoveStaleState removing state" podUID="4aeddc0e-5ddf-42a0-8c89-e840171e5c7b" containerName="neutron-metadata-edpm-deployment-openstack-edpm-ipam" Jan 31 04:28:59 crc kubenswrapper[4827]: I0131 04:28:59.381659 4827 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-hmq9l" Jan 31 04:28:59 crc kubenswrapper[4827]: I0131 04:28:59.385055 4827 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 31 04:28:59 crc kubenswrapper[4827]: I0131 04:28:59.385055 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"libvirt-secret" Jan 31 04:28:59 crc kubenswrapper[4827]: I0131 04:28:59.385290 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 31 04:28:59 crc kubenswrapper[4827]: I0131 04:28:59.388528 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-dklwg" Jan 31 04:28:59 crc kubenswrapper[4827]: I0131 04:28:59.388573 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 31 04:28:59 crc kubenswrapper[4827]: I0131 04:28:59.389024 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceph-conf-files" Jan 31 04:28:59 crc kubenswrapper[4827]: I0131 04:28:59.412526 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/libvirt-edpm-deployment-openstack-edpm-ipam-hmq9l"] Jan 31 04:28:59 crc kubenswrapper[4827]: I0131 04:28:59.452150 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/7ae3b57a-e575-49ac-b40f-276d244a1855-ceph\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-hmq9l\" (UID: \"7ae3b57a-e575-49ac-b40f-276d244a1855\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-hmq9l" Jan 31 04:28:59 crc kubenswrapper[4827]: I0131 04:28:59.452207 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"libvirt-secret-0\" (UniqueName: 
\"kubernetes.io/secret/7ae3b57a-e575-49ac-b40f-276d244a1855-libvirt-secret-0\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-hmq9l\" (UID: \"7ae3b57a-e575-49ac-b40f-276d244a1855\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-hmq9l" Jan 31 04:28:59 crc kubenswrapper[4827]: I0131 04:28:59.452392 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/7ae3b57a-e575-49ac-b40f-276d244a1855-inventory\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-hmq9l\" (UID: \"7ae3b57a-e575-49ac-b40f-276d244a1855\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-hmq9l" Jan 31 04:28:59 crc kubenswrapper[4827]: I0131 04:28:59.452509 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j9lp9\" (UniqueName: \"kubernetes.io/projected/7ae3b57a-e575-49ac-b40f-276d244a1855-kube-api-access-j9lp9\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-hmq9l\" (UID: \"7ae3b57a-e575-49ac-b40f-276d244a1855\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-hmq9l" Jan 31 04:28:59 crc kubenswrapper[4827]: I0131 04:28:59.452630 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/7ae3b57a-e575-49ac-b40f-276d244a1855-ssh-key-openstack-edpm-ipam\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-hmq9l\" (UID: \"7ae3b57a-e575-49ac-b40f-276d244a1855\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-hmq9l" Jan 31 04:28:59 crc kubenswrapper[4827]: I0131 04:28:59.452723 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7ae3b57a-e575-49ac-b40f-276d244a1855-libvirt-combined-ca-bundle\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-hmq9l\" (UID: 
\"7ae3b57a-e575-49ac-b40f-276d244a1855\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-hmq9l" Jan 31 04:28:59 crc kubenswrapper[4827]: I0131 04:28:59.554508 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/7ae3b57a-e575-49ac-b40f-276d244a1855-ceph\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-hmq9l\" (UID: \"7ae3b57a-e575-49ac-b40f-276d244a1855\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-hmq9l" Jan 31 04:28:59 crc kubenswrapper[4827]: I0131 04:28:59.554553 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/7ae3b57a-e575-49ac-b40f-276d244a1855-libvirt-secret-0\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-hmq9l\" (UID: \"7ae3b57a-e575-49ac-b40f-276d244a1855\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-hmq9l" Jan 31 04:28:59 crc kubenswrapper[4827]: I0131 04:28:59.554618 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/7ae3b57a-e575-49ac-b40f-276d244a1855-inventory\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-hmq9l\" (UID: \"7ae3b57a-e575-49ac-b40f-276d244a1855\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-hmq9l" Jan 31 04:28:59 crc kubenswrapper[4827]: I0131 04:28:59.554659 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j9lp9\" (UniqueName: \"kubernetes.io/projected/7ae3b57a-e575-49ac-b40f-276d244a1855-kube-api-access-j9lp9\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-hmq9l\" (UID: \"7ae3b57a-e575-49ac-b40f-276d244a1855\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-hmq9l" Jan 31 04:28:59 crc kubenswrapper[4827]: I0131 04:28:59.554701 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" 
(UniqueName: \"kubernetes.io/secret/7ae3b57a-e575-49ac-b40f-276d244a1855-ssh-key-openstack-edpm-ipam\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-hmq9l\" (UID: \"7ae3b57a-e575-49ac-b40f-276d244a1855\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-hmq9l" Jan 31 04:28:59 crc kubenswrapper[4827]: I0131 04:28:59.554728 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7ae3b57a-e575-49ac-b40f-276d244a1855-libvirt-combined-ca-bundle\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-hmq9l\" (UID: \"7ae3b57a-e575-49ac-b40f-276d244a1855\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-hmq9l" Jan 31 04:28:59 crc kubenswrapper[4827]: I0131 04:28:59.558462 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/7ae3b57a-e575-49ac-b40f-276d244a1855-inventory\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-hmq9l\" (UID: \"7ae3b57a-e575-49ac-b40f-276d244a1855\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-hmq9l" Jan 31 04:28:59 crc kubenswrapper[4827]: I0131 04:28:59.558727 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/7ae3b57a-e575-49ac-b40f-276d244a1855-ceph\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-hmq9l\" (UID: \"7ae3b57a-e575-49ac-b40f-276d244a1855\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-hmq9l" Jan 31 04:28:59 crc kubenswrapper[4827]: I0131 04:28:59.558791 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/7ae3b57a-e575-49ac-b40f-276d244a1855-libvirt-secret-0\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-hmq9l\" (UID: \"7ae3b57a-e575-49ac-b40f-276d244a1855\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-hmq9l" Jan 31 04:28:59 crc 
kubenswrapper[4827]: I0131 04:28:59.559304 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/7ae3b57a-e575-49ac-b40f-276d244a1855-ssh-key-openstack-edpm-ipam\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-hmq9l\" (UID: \"7ae3b57a-e575-49ac-b40f-276d244a1855\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-hmq9l" Jan 31 04:28:59 crc kubenswrapper[4827]: I0131 04:28:59.559544 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7ae3b57a-e575-49ac-b40f-276d244a1855-libvirt-combined-ca-bundle\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-hmq9l\" (UID: \"7ae3b57a-e575-49ac-b40f-276d244a1855\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-hmq9l" Jan 31 04:28:59 crc kubenswrapper[4827]: I0131 04:28:59.572683 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j9lp9\" (UniqueName: \"kubernetes.io/projected/7ae3b57a-e575-49ac-b40f-276d244a1855-kube-api-access-j9lp9\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-hmq9l\" (UID: \"7ae3b57a-e575-49ac-b40f-276d244a1855\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-hmq9l" Jan 31 04:28:59 crc kubenswrapper[4827]: I0131 04:28:59.698367 4827 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-hmq9l" Jan 31 04:29:00 crc kubenswrapper[4827]: I0131 04:29:00.195020 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/libvirt-edpm-deployment-openstack-edpm-ipam-hmq9l"] Jan 31 04:29:00 crc kubenswrapper[4827]: I0131 04:29:00.267962 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-hmq9l" event={"ID":"7ae3b57a-e575-49ac-b40f-276d244a1855","Type":"ContainerStarted","Data":"2c417eee7b5ff83b9f68e210e008fef7adfee181d46c9f80324297e4075e074e"} Jan 31 04:29:01 crc kubenswrapper[4827]: I0131 04:29:01.283372 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-hmq9l" event={"ID":"7ae3b57a-e575-49ac-b40f-276d244a1855","Type":"ContainerStarted","Data":"e0cb675dc1e5ab845f2925a3c0b768c17327bb58aa99cb5715fdf729597a10bf"} Jan 31 04:29:01 crc kubenswrapper[4827]: I0131 04:29:01.329895 4827 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-hmq9l" podStartSLOduration=1.645415495 podStartE2EDuration="2.329869261s" podCreationTimestamp="2026-01-31 04:28:59 +0000 UTC" firstStartedPulling="2026-01-31 04:29:00.223343369 +0000 UTC m=+2532.910423868" lastFinishedPulling="2026-01-31 04:29:00.907797175 +0000 UTC m=+2533.594877634" observedRunningTime="2026-01-31 04:29:01.323044392 +0000 UTC m=+2534.010124871" watchObservedRunningTime="2026-01-31 04:29:01.329869261 +0000 UTC m=+2534.016949710" Jan 31 04:29:12 crc kubenswrapper[4827]: I0131 04:29:12.110596 4827 scope.go:117] "RemoveContainer" containerID="86202cb505a10ccb4abc6da10b9dbc02e9e23b1f36d68806a25ab7c0fcf67d39" Jan 31 04:29:12 crc kubenswrapper[4827]: E0131 04:29:12.111441 4827 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s 
restarting failed container=machine-config-daemon pod=machine-config-daemon-jxh94_openshift-machine-config-operator(e63dbb73-e1a2-4796-83c5-2a88e55566b5)\"" pod="openshift-machine-config-operator/machine-config-daemon-jxh94" podUID="e63dbb73-e1a2-4796-83c5-2a88e55566b5" Jan 31 04:29:26 crc kubenswrapper[4827]: I0131 04:29:26.110080 4827 scope.go:117] "RemoveContainer" containerID="86202cb505a10ccb4abc6da10b9dbc02e9e23b1f36d68806a25ab7c0fcf67d39" Jan 31 04:29:26 crc kubenswrapper[4827]: I0131 04:29:26.540817 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-jxh94" event={"ID":"e63dbb73-e1a2-4796-83c5-2a88e55566b5","Type":"ContainerStarted","Data":"68e46ab5a80563722c5db0c1b8e9c72a18f8d8548427ec6f07e747249880a2f3"} Jan 31 04:30:00 crc kubenswrapper[4827]: I0131 04:30:00.152201 4827 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29497230-4bmhm"] Jan 31 04:30:00 crc kubenswrapper[4827]: I0131 04:30:00.155240 4827 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29497230-4bmhm" Jan 31 04:30:00 crc kubenswrapper[4827]: I0131 04:30:00.158641 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Jan 31 04:30:00 crc kubenswrapper[4827]: I0131 04:30:00.161057 4827 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Jan 31 04:30:00 crc kubenswrapper[4827]: I0131 04:30:00.182832 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29497230-4bmhm"] Jan 31 04:30:00 crc kubenswrapper[4827]: I0131 04:30:00.298976 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/ae9c3544-e5bd-47ab-8cd7-4aadd8e2e881-secret-volume\") pod \"collect-profiles-29497230-4bmhm\" (UID: \"ae9c3544-e5bd-47ab-8cd7-4aadd8e2e881\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29497230-4bmhm" Jan 31 04:30:00 crc kubenswrapper[4827]: I0131 04:30:00.299071 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ae9c3544-e5bd-47ab-8cd7-4aadd8e2e881-config-volume\") pod \"collect-profiles-29497230-4bmhm\" (UID: \"ae9c3544-e5bd-47ab-8cd7-4aadd8e2e881\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29497230-4bmhm" Jan 31 04:30:00 crc kubenswrapper[4827]: I0131 04:30:00.299133 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rgr7w\" (UniqueName: \"kubernetes.io/projected/ae9c3544-e5bd-47ab-8cd7-4aadd8e2e881-kube-api-access-rgr7w\") pod \"collect-profiles-29497230-4bmhm\" (UID: \"ae9c3544-e5bd-47ab-8cd7-4aadd8e2e881\") " 
pod="openshift-operator-lifecycle-manager/collect-profiles-29497230-4bmhm" Jan 31 04:30:00 crc kubenswrapper[4827]: I0131 04:30:00.400396 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/ae9c3544-e5bd-47ab-8cd7-4aadd8e2e881-secret-volume\") pod \"collect-profiles-29497230-4bmhm\" (UID: \"ae9c3544-e5bd-47ab-8cd7-4aadd8e2e881\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29497230-4bmhm" Jan 31 04:30:00 crc kubenswrapper[4827]: I0131 04:30:00.400537 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ae9c3544-e5bd-47ab-8cd7-4aadd8e2e881-config-volume\") pod \"collect-profiles-29497230-4bmhm\" (UID: \"ae9c3544-e5bd-47ab-8cd7-4aadd8e2e881\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29497230-4bmhm" Jan 31 04:30:00 crc kubenswrapper[4827]: I0131 04:30:00.400635 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rgr7w\" (UniqueName: \"kubernetes.io/projected/ae9c3544-e5bd-47ab-8cd7-4aadd8e2e881-kube-api-access-rgr7w\") pod \"collect-profiles-29497230-4bmhm\" (UID: \"ae9c3544-e5bd-47ab-8cd7-4aadd8e2e881\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29497230-4bmhm" Jan 31 04:30:00 crc kubenswrapper[4827]: I0131 04:30:00.402044 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ae9c3544-e5bd-47ab-8cd7-4aadd8e2e881-config-volume\") pod \"collect-profiles-29497230-4bmhm\" (UID: \"ae9c3544-e5bd-47ab-8cd7-4aadd8e2e881\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29497230-4bmhm" Jan 31 04:30:00 crc kubenswrapper[4827]: I0131 04:30:00.407954 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: 
\"kubernetes.io/secret/ae9c3544-e5bd-47ab-8cd7-4aadd8e2e881-secret-volume\") pod \"collect-profiles-29497230-4bmhm\" (UID: \"ae9c3544-e5bd-47ab-8cd7-4aadd8e2e881\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29497230-4bmhm" Jan 31 04:30:00 crc kubenswrapper[4827]: I0131 04:30:00.424852 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rgr7w\" (UniqueName: \"kubernetes.io/projected/ae9c3544-e5bd-47ab-8cd7-4aadd8e2e881-kube-api-access-rgr7w\") pod \"collect-profiles-29497230-4bmhm\" (UID: \"ae9c3544-e5bd-47ab-8cd7-4aadd8e2e881\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29497230-4bmhm" Jan 31 04:30:00 crc kubenswrapper[4827]: I0131 04:30:00.485663 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29497230-4bmhm" Jan 31 04:30:00 crc kubenswrapper[4827]: I0131 04:30:00.946193 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29497230-4bmhm"] Jan 31 04:30:01 crc kubenswrapper[4827]: I0131 04:30:01.898296 4827 generic.go:334] "Generic (PLEG): container finished" podID="ae9c3544-e5bd-47ab-8cd7-4aadd8e2e881" containerID="ca399045f31bac40cc11169cad8734344a05eab29f6c62a4bfd12f4184be8a01" exitCode=0 Jan 31 04:30:01 crc kubenswrapper[4827]: I0131 04:30:01.898374 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29497230-4bmhm" event={"ID":"ae9c3544-e5bd-47ab-8cd7-4aadd8e2e881","Type":"ContainerDied","Data":"ca399045f31bac40cc11169cad8734344a05eab29f6c62a4bfd12f4184be8a01"} Jan 31 04:30:01 crc kubenswrapper[4827]: I0131 04:30:01.898617 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29497230-4bmhm" 
event={"ID":"ae9c3544-e5bd-47ab-8cd7-4aadd8e2e881","Type":"ContainerStarted","Data":"773de78a09e7aa77a94050ba03912424cf4fa48f73e49f7b82729e5f855f2821"} Jan 31 04:30:03 crc kubenswrapper[4827]: I0131 04:30:03.309753 4827 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29497230-4bmhm" Jan 31 04:30:03 crc kubenswrapper[4827]: I0131 04:30:03.365247 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rgr7w\" (UniqueName: \"kubernetes.io/projected/ae9c3544-e5bd-47ab-8cd7-4aadd8e2e881-kube-api-access-rgr7w\") pod \"ae9c3544-e5bd-47ab-8cd7-4aadd8e2e881\" (UID: \"ae9c3544-e5bd-47ab-8cd7-4aadd8e2e881\") " Jan 31 04:30:03 crc kubenswrapper[4827]: I0131 04:30:03.365397 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/ae9c3544-e5bd-47ab-8cd7-4aadd8e2e881-secret-volume\") pod \"ae9c3544-e5bd-47ab-8cd7-4aadd8e2e881\" (UID: \"ae9c3544-e5bd-47ab-8cd7-4aadd8e2e881\") " Jan 31 04:30:03 crc kubenswrapper[4827]: I0131 04:30:03.365541 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ae9c3544-e5bd-47ab-8cd7-4aadd8e2e881-config-volume\") pod \"ae9c3544-e5bd-47ab-8cd7-4aadd8e2e881\" (UID: \"ae9c3544-e5bd-47ab-8cd7-4aadd8e2e881\") " Jan 31 04:30:03 crc kubenswrapper[4827]: I0131 04:30:03.366984 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ae9c3544-e5bd-47ab-8cd7-4aadd8e2e881-config-volume" (OuterVolumeSpecName: "config-volume") pod "ae9c3544-e5bd-47ab-8cd7-4aadd8e2e881" (UID: "ae9c3544-e5bd-47ab-8cd7-4aadd8e2e881"). InnerVolumeSpecName "config-volume". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 04:30:03 crc kubenswrapper[4827]: I0131 04:30:03.373790 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ae9c3544-e5bd-47ab-8cd7-4aadd8e2e881-kube-api-access-rgr7w" (OuterVolumeSpecName: "kube-api-access-rgr7w") pod "ae9c3544-e5bd-47ab-8cd7-4aadd8e2e881" (UID: "ae9c3544-e5bd-47ab-8cd7-4aadd8e2e881"). InnerVolumeSpecName "kube-api-access-rgr7w". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 04:30:03 crc kubenswrapper[4827]: I0131 04:30:03.374141 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ae9c3544-e5bd-47ab-8cd7-4aadd8e2e881-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "ae9c3544-e5bd-47ab-8cd7-4aadd8e2e881" (UID: "ae9c3544-e5bd-47ab-8cd7-4aadd8e2e881"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 04:30:03 crc kubenswrapper[4827]: I0131 04:30:03.467515 4827 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rgr7w\" (UniqueName: \"kubernetes.io/projected/ae9c3544-e5bd-47ab-8cd7-4aadd8e2e881-kube-api-access-rgr7w\") on node \"crc\" DevicePath \"\"" Jan 31 04:30:03 crc kubenswrapper[4827]: I0131 04:30:03.467556 4827 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/ae9c3544-e5bd-47ab-8cd7-4aadd8e2e881-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 31 04:30:03 crc kubenswrapper[4827]: I0131 04:30:03.467575 4827 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ae9c3544-e5bd-47ab-8cd7-4aadd8e2e881-config-volume\") on node \"crc\" DevicePath \"\"" Jan 31 04:30:03 crc kubenswrapper[4827]: I0131 04:30:03.921575 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29497230-4bmhm" 
event={"ID":"ae9c3544-e5bd-47ab-8cd7-4aadd8e2e881","Type":"ContainerDied","Data":"773de78a09e7aa77a94050ba03912424cf4fa48f73e49f7b82729e5f855f2821"} Jan 31 04:30:03 crc kubenswrapper[4827]: I0131 04:30:03.922126 4827 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="773de78a09e7aa77a94050ba03912424cf4fa48f73e49f7b82729e5f855f2821" Jan 31 04:30:03 crc kubenswrapper[4827]: I0131 04:30:03.921627 4827 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29497230-4bmhm" Jan 31 04:30:04 crc kubenswrapper[4827]: I0131 04:30:04.401762 4827 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29497185-4tnfv"] Jan 31 04:30:04 crc kubenswrapper[4827]: I0131 04:30:04.412128 4827 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29497185-4tnfv"] Jan 31 04:30:06 crc kubenswrapper[4827]: I0131 04:30:06.127210 4827 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="85e64cbb-2da9-4b05-b074-fabf16790f49" path="/var/lib/kubelet/pods/85e64cbb-2da9-4b05-b074-fabf16790f49/volumes" Jan 31 04:30:15 crc kubenswrapper[4827]: I0131 04:30:15.731124 4827 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-g8nzm"] Jan 31 04:30:15 crc kubenswrapper[4827]: E0131 04:30:15.732106 4827 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ae9c3544-e5bd-47ab-8cd7-4aadd8e2e881" containerName="collect-profiles" Jan 31 04:30:15 crc kubenswrapper[4827]: I0131 04:30:15.732121 4827 state_mem.go:107] "Deleted CPUSet assignment" podUID="ae9c3544-e5bd-47ab-8cd7-4aadd8e2e881" containerName="collect-profiles" Jan 31 04:30:15 crc kubenswrapper[4827]: I0131 04:30:15.732524 4827 memory_manager.go:354] "RemoveStaleState removing state" podUID="ae9c3544-e5bd-47ab-8cd7-4aadd8e2e881" containerName="collect-profiles" 
Jan 31 04:30:15 crc kubenswrapper[4827]: I0131 04:30:15.734920 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-g8nzm" Jan 31 04:30:15 crc kubenswrapper[4827]: I0131 04:30:15.751069 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-g8nzm"] Jan 31 04:30:15 crc kubenswrapper[4827]: I0131 04:30:15.870789 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kv5nt\" (UniqueName: \"kubernetes.io/projected/a71e8868-4af0-4e5c-abfb-a22e81fd612d-kube-api-access-kv5nt\") pod \"community-operators-g8nzm\" (UID: \"a71e8868-4af0-4e5c-abfb-a22e81fd612d\") " pod="openshift-marketplace/community-operators-g8nzm" Jan 31 04:30:15 crc kubenswrapper[4827]: I0131 04:30:15.871176 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a71e8868-4af0-4e5c-abfb-a22e81fd612d-utilities\") pod \"community-operators-g8nzm\" (UID: \"a71e8868-4af0-4e5c-abfb-a22e81fd612d\") " pod="openshift-marketplace/community-operators-g8nzm" Jan 31 04:30:15 crc kubenswrapper[4827]: I0131 04:30:15.871273 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a71e8868-4af0-4e5c-abfb-a22e81fd612d-catalog-content\") pod \"community-operators-g8nzm\" (UID: \"a71e8868-4af0-4e5c-abfb-a22e81fd612d\") " pod="openshift-marketplace/community-operators-g8nzm" Jan 31 04:30:15 crc kubenswrapper[4827]: I0131 04:30:15.973221 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kv5nt\" (UniqueName: \"kubernetes.io/projected/a71e8868-4af0-4e5c-abfb-a22e81fd612d-kube-api-access-kv5nt\") pod \"community-operators-g8nzm\" (UID: \"a71e8868-4af0-4e5c-abfb-a22e81fd612d\") " 
pod="openshift-marketplace/community-operators-g8nzm" Jan 31 04:30:15 crc kubenswrapper[4827]: I0131 04:30:15.973310 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a71e8868-4af0-4e5c-abfb-a22e81fd612d-utilities\") pod \"community-operators-g8nzm\" (UID: \"a71e8868-4af0-4e5c-abfb-a22e81fd612d\") " pod="openshift-marketplace/community-operators-g8nzm" Jan 31 04:30:15 crc kubenswrapper[4827]: I0131 04:30:15.973332 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a71e8868-4af0-4e5c-abfb-a22e81fd612d-catalog-content\") pod \"community-operators-g8nzm\" (UID: \"a71e8868-4af0-4e5c-abfb-a22e81fd612d\") " pod="openshift-marketplace/community-operators-g8nzm" Jan 31 04:30:15 crc kubenswrapper[4827]: I0131 04:30:15.973907 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a71e8868-4af0-4e5c-abfb-a22e81fd612d-catalog-content\") pod \"community-operators-g8nzm\" (UID: \"a71e8868-4af0-4e5c-abfb-a22e81fd612d\") " pod="openshift-marketplace/community-operators-g8nzm" Jan 31 04:30:15 crc kubenswrapper[4827]: I0131 04:30:15.974447 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a71e8868-4af0-4e5c-abfb-a22e81fd612d-utilities\") pod \"community-operators-g8nzm\" (UID: \"a71e8868-4af0-4e5c-abfb-a22e81fd612d\") " pod="openshift-marketplace/community-operators-g8nzm" Jan 31 04:30:16 crc kubenswrapper[4827]: I0131 04:30:16.008056 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kv5nt\" (UniqueName: \"kubernetes.io/projected/a71e8868-4af0-4e5c-abfb-a22e81fd612d-kube-api-access-kv5nt\") pod \"community-operators-g8nzm\" (UID: \"a71e8868-4af0-4e5c-abfb-a22e81fd612d\") " 
pod="openshift-marketplace/community-operators-g8nzm" Jan 31 04:30:16 crc kubenswrapper[4827]: I0131 04:30:16.062317 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-g8nzm" Jan 31 04:30:16 crc kubenswrapper[4827]: I0131 04:30:16.610493 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-g8nzm"] Jan 31 04:30:17 crc kubenswrapper[4827]: I0131 04:30:17.056607 4827 generic.go:334] "Generic (PLEG): container finished" podID="a71e8868-4af0-4e5c-abfb-a22e81fd612d" containerID="c73c21115bbffe7bb7617c6ef837329b22cae2f7d3620994f0033d1c41da76cd" exitCode=0 Jan 31 04:30:17 crc kubenswrapper[4827]: I0131 04:30:17.056743 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-g8nzm" event={"ID":"a71e8868-4af0-4e5c-abfb-a22e81fd612d","Type":"ContainerDied","Data":"c73c21115bbffe7bb7617c6ef837329b22cae2f7d3620994f0033d1c41da76cd"} Jan 31 04:30:17 crc kubenswrapper[4827]: I0131 04:30:17.057148 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-g8nzm" event={"ID":"a71e8868-4af0-4e5c-abfb-a22e81fd612d","Type":"ContainerStarted","Data":"b57868249d5630cd6243170f163bd62cffda919eec735dc79a38f99ee440f2d1"} Jan 31 04:30:17 crc kubenswrapper[4827]: E0131 04:30:17.146553 4827 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda71e8868_4af0_4e5c_abfb_a22e81fd612d.slice/crio-c73c21115bbffe7bb7617c6ef837329b22cae2f7d3620994f0033d1c41da76cd.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda71e8868_4af0_4e5c_abfb_a22e81fd612d.slice/crio-conmon-c73c21115bbffe7bb7617c6ef837329b22cae2f7d3620994f0033d1c41da76cd.scope\": RecentStats: unable to find data in memory cache]" Jan 31 04:30:18 crc 
kubenswrapper[4827]: I0131 04:30:18.069943 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-g8nzm" event={"ID":"a71e8868-4af0-4e5c-abfb-a22e81fd612d","Type":"ContainerStarted","Data":"3488f783252bf3cdf120a2dbe2ebd507bb78d813cc31e71883e7122b617fa2af"} Jan 31 04:30:19 crc kubenswrapper[4827]: I0131 04:30:19.080393 4827 generic.go:334] "Generic (PLEG): container finished" podID="a71e8868-4af0-4e5c-abfb-a22e81fd612d" containerID="3488f783252bf3cdf120a2dbe2ebd507bb78d813cc31e71883e7122b617fa2af" exitCode=0 Jan 31 04:30:19 crc kubenswrapper[4827]: I0131 04:30:19.080474 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-g8nzm" event={"ID":"a71e8868-4af0-4e5c-abfb-a22e81fd612d","Type":"ContainerDied","Data":"3488f783252bf3cdf120a2dbe2ebd507bb78d813cc31e71883e7122b617fa2af"} Jan 31 04:30:20 crc kubenswrapper[4827]: I0131 04:30:20.099378 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-g8nzm" event={"ID":"a71e8868-4af0-4e5c-abfb-a22e81fd612d","Type":"ContainerStarted","Data":"c754858b9ba785b687ec444540057402e3ad5cb59faf824b8ce311da44aa3c46"} Jan 31 04:30:20 crc kubenswrapper[4827]: I0131 04:30:20.126589 4827 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-g8nzm" podStartSLOduration=2.651558194 podStartE2EDuration="5.126566505s" podCreationTimestamp="2026-01-31 04:30:15 +0000 UTC" firstStartedPulling="2026-01-31 04:30:17.061101265 +0000 UTC m=+2609.748181754" lastFinishedPulling="2026-01-31 04:30:19.536109596 +0000 UTC m=+2612.223190065" observedRunningTime="2026-01-31 04:30:20.119687344 +0000 UTC m=+2612.806767803" watchObservedRunningTime="2026-01-31 04:30:20.126566505 +0000 UTC m=+2612.813646994" Jan 31 04:30:26 crc kubenswrapper[4827]: I0131 04:30:26.062632 4827 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" 
pod="openshift-marketplace/community-operators-g8nzm" Jan 31 04:30:26 crc kubenswrapper[4827]: I0131 04:30:26.064815 4827 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-g8nzm" Jan 31 04:30:26 crc kubenswrapper[4827]: I0131 04:30:26.147065 4827 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-g8nzm" Jan 31 04:30:26 crc kubenswrapper[4827]: I0131 04:30:26.241210 4827 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-g8nzm" Jan 31 04:30:26 crc kubenswrapper[4827]: I0131 04:30:26.411928 4827 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-g8nzm"] Jan 31 04:30:28 crc kubenswrapper[4827]: I0131 04:30:28.180198 4827 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-g8nzm" podUID="a71e8868-4af0-4e5c-abfb-a22e81fd612d" containerName="registry-server" containerID="cri-o://c754858b9ba785b687ec444540057402e3ad5cb59faf824b8ce311da44aa3c46" gracePeriod=2 Jan 31 04:30:28 crc kubenswrapper[4827]: I0131 04:30:28.622676 4827 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-g8nzm" Jan 31 04:30:28 crc kubenswrapper[4827]: I0131 04:30:28.690496 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kv5nt\" (UniqueName: \"kubernetes.io/projected/a71e8868-4af0-4e5c-abfb-a22e81fd612d-kube-api-access-kv5nt\") pod \"a71e8868-4af0-4e5c-abfb-a22e81fd612d\" (UID: \"a71e8868-4af0-4e5c-abfb-a22e81fd612d\") " Jan 31 04:30:28 crc kubenswrapper[4827]: I0131 04:30:28.690597 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a71e8868-4af0-4e5c-abfb-a22e81fd612d-utilities\") pod \"a71e8868-4af0-4e5c-abfb-a22e81fd612d\" (UID: \"a71e8868-4af0-4e5c-abfb-a22e81fd612d\") " Jan 31 04:30:28 crc kubenswrapper[4827]: I0131 04:30:28.690702 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a71e8868-4af0-4e5c-abfb-a22e81fd612d-catalog-content\") pod \"a71e8868-4af0-4e5c-abfb-a22e81fd612d\" (UID: \"a71e8868-4af0-4e5c-abfb-a22e81fd612d\") " Jan 31 04:30:28 crc kubenswrapper[4827]: I0131 04:30:28.691917 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a71e8868-4af0-4e5c-abfb-a22e81fd612d-utilities" (OuterVolumeSpecName: "utilities") pod "a71e8868-4af0-4e5c-abfb-a22e81fd612d" (UID: "a71e8868-4af0-4e5c-abfb-a22e81fd612d"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 04:30:28 crc kubenswrapper[4827]: I0131 04:30:28.697421 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a71e8868-4af0-4e5c-abfb-a22e81fd612d-kube-api-access-kv5nt" (OuterVolumeSpecName: "kube-api-access-kv5nt") pod "a71e8868-4af0-4e5c-abfb-a22e81fd612d" (UID: "a71e8868-4af0-4e5c-abfb-a22e81fd612d"). InnerVolumeSpecName "kube-api-access-kv5nt". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 04:30:28 crc kubenswrapper[4827]: I0131 04:30:28.751352 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a71e8868-4af0-4e5c-abfb-a22e81fd612d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "a71e8868-4af0-4e5c-abfb-a22e81fd612d" (UID: "a71e8868-4af0-4e5c-abfb-a22e81fd612d"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 04:30:28 crc kubenswrapper[4827]: I0131 04:30:28.793180 4827 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a71e8868-4af0-4e5c-abfb-a22e81fd612d-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 31 04:30:28 crc kubenswrapper[4827]: I0131 04:30:28.793213 4827 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kv5nt\" (UniqueName: \"kubernetes.io/projected/a71e8868-4af0-4e5c-abfb-a22e81fd612d-kube-api-access-kv5nt\") on node \"crc\" DevicePath \"\"" Jan 31 04:30:28 crc kubenswrapper[4827]: I0131 04:30:28.793229 4827 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a71e8868-4af0-4e5c-abfb-a22e81fd612d-utilities\") on node \"crc\" DevicePath \"\"" Jan 31 04:30:29 crc kubenswrapper[4827]: I0131 04:30:29.203982 4827 generic.go:334] "Generic (PLEG): container finished" podID="a71e8868-4af0-4e5c-abfb-a22e81fd612d" containerID="c754858b9ba785b687ec444540057402e3ad5cb59faf824b8ce311da44aa3c46" exitCode=0 Jan 31 04:30:29 crc kubenswrapper[4827]: I0131 04:30:29.204044 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-g8nzm" event={"ID":"a71e8868-4af0-4e5c-abfb-a22e81fd612d","Type":"ContainerDied","Data":"c754858b9ba785b687ec444540057402e3ad5cb59faf824b8ce311da44aa3c46"} Jan 31 04:30:29 crc kubenswrapper[4827]: I0131 04:30:29.204084 4827 kubelet.go:2453] "SyncLoop (PLEG): event 
for pod" pod="openshift-marketplace/community-operators-g8nzm" event={"ID":"a71e8868-4af0-4e5c-abfb-a22e81fd612d","Type":"ContainerDied","Data":"b57868249d5630cd6243170f163bd62cffda919eec735dc79a38f99ee440f2d1"} Jan 31 04:30:29 crc kubenswrapper[4827]: I0131 04:30:29.204116 4827 scope.go:117] "RemoveContainer" containerID="c754858b9ba785b687ec444540057402e3ad5cb59faf824b8ce311da44aa3c46" Jan 31 04:30:29 crc kubenswrapper[4827]: I0131 04:30:29.204437 4827 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-g8nzm" Jan 31 04:30:29 crc kubenswrapper[4827]: I0131 04:30:29.238802 4827 scope.go:117] "RemoveContainer" containerID="3488f783252bf3cdf120a2dbe2ebd507bb78d813cc31e71883e7122b617fa2af" Jan 31 04:30:29 crc kubenswrapper[4827]: I0131 04:30:29.267518 4827 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-g8nzm"] Jan 31 04:30:29 crc kubenswrapper[4827]: I0131 04:30:29.283093 4827 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-g8nzm"] Jan 31 04:30:29 crc kubenswrapper[4827]: I0131 04:30:29.291721 4827 scope.go:117] "RemoveContainer" containerID="c73c21115bbffe7bb7617c6ef837329b22cae2f7d3620994f0033d1c41da76cd" Jan 31 04:30:29 crc kubenswrapper[4827]: I0131 04:30:29.337579 4827 scope.go:117] "RemoveContainer" containerID="c754858b9ba785b687ec444540057402e3ad5cb59faf824b8ce311da44aa3c46" Jan 31 04:30:29 crc kubenswrapper[4827]: E0131 04:30:29.338462 4827 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c754858b9ba785b687ec444540057402e3ad5cb59faf824b8ce311da44aa3c46\": container with ID starting with c754858b9ba785b687ec444540057402e3ad5cb59faf824b8ce311da44aa3c46 not found: ID does not exist" containerID="c754858b9ba785b687ec444540057402e3ad5cb59faf824b8ce311da44aa3c46" Jan 31 04:30:29 crc kubenswrapper[4827]: I0131 
04:30:29.338518 4827 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c754858b9ba785b687ec444540057402e3ad5cb59faf824b8ce311da44aa3c46"} err="failed to get container status \"c754858b9ba785b687ec444540057402e3ad5cb59faf824b8ce311da44aa3c46\": rpc error: code = NotFound desc = could not find container \"c754858b9ba785b687ec444540057402e3ad5cb59faf824b8ce311da44aa3c46\": container with ID starting with c754858b9ba785b687ec444540057402e3ad5cb59faf824b8ce311da44aa3c46 not found: ID does not exist" Jan 31 04:30:29 crc kubenswrapper[4827]: I0131 04:30:29.338551 4827 scope.go:117] "RemoveContainer" containerID="3488f783252bf3cdf120a2dbe2ebd507bb78d813cc31e71883e7122b617fa2af" Jan 31 04:30:29 crc kubenswrapper[4827]: E0131 04:30:29.339272 4827 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3488f783252bf3cdf120a2dbe2ebd507bb78d813cc31e71883e7122b617fa2af\": container with ID starting with 3488f783252bf3cdf120a2dbe2ebd507bb78d813cc31e71883e7122b617fa2af not found: ID does not exist" containerID="3488f783252bf3cdf120a2dbe2ebd507bb78d813cc31e71883e7122b617fa2af" Jan 31 04:30:29 crc kubenswrapper[4827]: I0131 04:30:29.339332 4827 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3488f783252bf3cdf120a2dbe2ebd507bb78d813cc31e71883e7122b617fa2af"} err="failed to get container status \"3488f783252bf3cdf120a2dbe2ebd507bb78d813cc31e71883e7122b617fa2af\": rpc error: code = NotFound desc = could not find container \"3488f783252bf3cdf120a2dbe2ebd507bb78d813cc31e71883e7122b617fa2af\": container with ID starting with 3488f783252bf3cdf120a2dbe2ebd507bb78d813cc31e71883e7122b617fa2af not found: ID does not exist" Jan 31 04:30:29 crc kubenswrapper[4827]: I0131 04:30:29.339373 4827 scope.go:117] "RemoveContainer" containerID="c73c21115bbffe7bb7617c6ef837329b22cae2f7d3620994f0033d1c41da76cd" Jan 31 04:30:29 crc 
kubenswrapper[4827]: E0131 04:30:29.340004 4827 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c73c21115bbffe7bb7617c6ef837329b22cae2f7d3620994f0033d1c41da76cd\": container with ID starting with c73c21115bbffe7bb7617c6ef837329b22cae2f7d3620994f0033d1c41da76cd not found: ID does not exist" containerID="c73c21115bbffe7bb7617c6ef837329b22cae2f7d3620994f0033d1c41da76cd" Jan 31 04:30:29 crc kubenswrapper[4827]: I0131 04:30:29.340076 4827 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c73c21115bbffe7bb7617c6ef837329b22cae2f7d3620994f0033d1c41da76cd"} err="failed to get container status \"c73c21115bbffe7bb7617c6ef837329b22cae2f7d3620994f0033d1c41da76cd\": rpc error: code = NotFound desc = could not find container \"c73c21115bbffe7bb7617c6ef837329b22cae2f7d3620994f0033d1c41da76cd\": container with ID starting with c73c21115bbffe7bb7617c6ef837329b22cae2f7d3620994f0033d1c41da76cd not found: ID does not exist" Jan 31 04:30:30 crc kubenswrapper[4827]: I0131 04:30:30.130101 4827 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a71e8868-4af0-4e5c-abfb-a22e81fd612d" path="/var/lib/kubelet/pods/a71e8868-4af0-4e5c-abfb-a22e81fd612d/volumes" Jan 31 04:31:01 crc kubenswrapper[4827]: I0131 04:31:01.748581 4827 scope.go:117] "RemoveContainer" containerID="1305adac46bdd213152462f696bfa7537301d512e0d4472f686e0ec77334c02d" Jan 31 04:31:01 crc kubenswrapper[4827]: I0131 04:31:01.812792 4827 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-8l97f"] Jan 31 04:31:01 crc kubenswrapper[4827]: E0131 04:31:01.813201 4827 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a71e8868-4af0-4e5c-abfb-a22e81fd612d" containerName="registry-server" Jan 31 04:31:01 crc kubenswrapper[4827]: I0131 04:31:01.813215 4827 state_mem.go:107] "Deleted CPUSet assignment" podUID="a71e8868-4af0-4e5c-abfb-a22e81fd612d" 
containerName="registry-server" Jan 31 04:31:01 crc kubenswrapper[4827]: E0131 04:31:01.813236 4827 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a71e8868-4af0-4e5c-abfb-a22e81fd612d" containerName="extract-content" Jan 31 04:31:01 crc kubenswrapper[4827]: I0131 04:31:01.813244 4827 state_mem.go:107] "Deleted CPUSet assignment" podUID="a71e8868-4af0-4e5c-abfb-a22e81fd612d" containerName="extract-content" Jan 31 04:31:01 crc kubenswrapper[4827]: E0131 04:31:01.813280 4827 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a71e8868-4af0-4e5c-abfb-a22e81fd612d" containerName="extract-utilities" Jan 31 04:31:01 crc kubenswrapper[4827]: I0131 04:31:01.813288 4827 state_mem.go:107] "Deleted CPUSet assignment" podUID="a71e8868-4af0-4e5c-abfb-a22e81fd612d" containerName="extract-utilities" Jan 31 04:31:01 crc kubenswrapper[4827]: I0131 04:31:01.813494 4827 memory_manager.go:354] "RemoveStaleState removing state" podUID="a71e8868-4af0-4e5c-abfb-a22e81fd612d" containerName="registry-server" Jan 31 04:31:01 crc kubenswrapper[4827]: I0131 04:31:01.814947 4827 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-8l97f" Jan 31 04:31:01 crc kubenswrapper[4827]: I0131 04:31:01.828377 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-8l97f"] Jan 31 04:31:01 crc kubenswrapper[4827]: I0131 04:31:01.979372 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4vbxm\" (UniqueName: \"kubernetes.io/projected/260e6659-75d7-41e3-af42-e378324a7785-kube-api-access-4vbxm\") pod \"redhat-operators-8l97f\" (UID: \"260e6659-75d7-41e3-af42-e378324a7785\") " pod="openshift-marketplace/redhat-operators-8l97f" Jan 31 04:31:01 crc kubenswrapper[4827]: I0131 04:31:01.979441 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/260e6659-75d7-41e3-af42-e378324a7785-utilities\") pod \"redhat-operators-8l97f\" (UID: \"260e6659-75d7-41e3-af42-e378324a7785\") " pod="openshift-marketplace/redhat-operators-8l97f" Jan 31 04:31:01 crc kubenswrapper[4827]: I0131 04:31:01.979866 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/260e6659-75d7-41e3-af42-e378324a7785-catalog-content\") pod \"redhat-operators-8l97f\" (UID: \"260e6659-75d7-41e3-af42-e378324a7785\") " pod="openshift-marketplace/redhat-operators-8l97f" Jan 31 04:31:02 crc kubenswrapper[4827]: I0131 04:31:02.082328 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/260e6659-75d7-41e3-af42-e378324a7785-catalog-content\") pod \"redhat-operators-8l97f\" (UID: \"260e6659-75d7-41e3-af42-e378324a7785\") " pod="openshift-marketplace/redhat-operators-8l97f" Jan 31 04:31:02 crc kubenswrapper[4827]: I0131 04:31:02.082419 4827 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"kube-api-access-4vbxm\" (UniqueName: \"kubernetes.io/projected/260e6659-75d7-41e3-af42-e378324a7785-kube-api-access-4vbxm\") pod \"redhat-operators-8l97f\" (UID: \"260e6659-75d7-41e3-af42-e378324a7785\") " pod="openshift-marketplace/redhat-operators-8l97f" Jan 31 04:31:02 crc kubenswrapper[4827]: I0131 04:31:02.082461 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/260e6659-75d7-41e3-af42-e378324a7785-utilities\") pod \"redhat-operators-8l97f\" (UID: \"260e6659-75d7-41e3-af42-e378324a7785\") " pod="openshift-marketplace/redhat-operators-8l97f" Jan 31 04:31:02 crc kubenswrapper[4827]: I0131 04:31:02.082925 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/260e6659-75d7-41e3-af42-e378324a7785-catalog-content\") pod \"redhat-operators-8l97f\" (UID: \"260e6659-75d7-41e3-af42-e378324a7785\") " pod="openshift-marketplace/redhat-operators-8l97f" Jan 31 04:31:02 crc kubenswrapper[4827]: I0131 04:31:02.083099 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/260e6659-75d7-41e3-af42-e378324a7785-utilities\") pod \"redhat-operators-8l97f\" (UID: \"260e6659-75d7-41e3-af42-e378324a7785\") " pod="openshift-marketplace/redhat-operators-8l97f" Jan 31 04:31:02 crc kubenswrapper[4827]: I0131 04:31:02.107451 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4vbxm\" (UniqueName: \"kubernetes.io/projected/260e6659-75d7-41e3-af42-e378324a7785-kube-api-access-4vbxm\") pod \"redhat-operators-8l97f\" (UID: \"260e6659-75d7-41e3-af42-e378324a7785\") " pod="openshift-marketplace/redhat-operators-8l97f" Jan 31 04:31:02 crc kubenswrapper[4827]: I0131 04:31:02.182696 4827 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-8l97f" Jan 31 04:31:02 crc kubenswrapper[4827]: I0131 04:31:02.651957 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-8l97f"] Jan 31 04:31:02 crc kubenswrapper[4827]: I0131 04:31:02.798612 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-8l97f" event={"ID":"260e6659-75d7-41e3-af42-e378324a7785","Type":"ContainerStarted","Data":"5d2aba7647887f5416d467f1a646f0e49b9110713facc37d23ca1609a3a7c4a1"} Jan 31 04:31:02 crc kubenswrapper[4827]: I0131 04:31:02.799002 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-8l97f" event={"ID":"260e6659-75d7-41e3-af42-e378324a7785","Type":"ContainerStarted","Data":"401251d9bb9a8e2238f7cca38d6eee6b6f251fc8bcde4a786734f2d58772d347"} Jan 31 04:31:03 crc kubenswrapper[4827]: I0131 04:31:03.808841 4827 generic.go:334] "Generic (PLEG): container finished" podID="260e6659-75d7-41e3-af42-e378324a7785" containerID="5d2aba7647887f5416d467f1a646f0e49b9110713facc37d23ca1609a3a7c4a1" exitCode=0 Jan 31 04:31:03 crc kubenswrapper[4827]: I0131 04:31:03.808936 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-8l97f" event={"ID":"260e6659-75d7-41e3-af42-e378324a7785","Type":"ContainerDied","Data":"5d2aba7647887f5416d467f1a646f0e49b9110713facc37d23ca1609a3a7c4a1"} Jan 31 04:31:03 crc kubenswrapper[4827]: I0131 04:31:03.811766 4827 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 31 04:31:04 crc kubenswrapper[4827]: I0131 04:31:04.821133 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-8l97f" event={"ID":"260e6659-75d7-41e3-af42-e378324a7785","Type":"ContainerStarted","Data":"c1e47a8c8bb9939aff3a6545d8989bafd7d5d687a656b9f4cb99ba1da66c2fb0"} Jan 31 04:31:05 crc kubenswrapper[4827]: I0131 
04:31:05.831247 4827 generic.go:334] "Generic (PLEG): container finished" podID="260e6659-75d7-41e3-af42-e378324a7785" containerID="c1e47a8c8bb9939aff3a6545d8989bafd7d5d687a656b9f4cb99ba1da66c2fb0" exitCode=0 Jan 31 04:31:05 crc kubenswrapper[4827]: I0131 04:31:05.831298 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-8l97f" event={"ID":"260e6659-75d7-41e3-af42-e378324a7785","Type":"ContainerDied","Data":"c1e47a8c8bb9939aff3a6545d8989bafd7d5d687a656b9f4cb99ba1da66c2fb0"} Jan 31 04:31:06 crc kubenswrapper[4827]: I0131 04:31:06.844778 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-8l97f" event={"ID":"260e6659-75d7-41e3-af42-e378324a7785","Type":"ContainerStarted","Data":"7da7b82dbcb760d5511ef7b123bd2eb24f99cbff89392000ff76a5d94216b4ae"} Jan 31 04:31:06 crc kubenswrapper[4827]: I0131 04:31:06.875501 4827 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-8l97f" podStartSLOduration=3.445039201 podStartE2EDuration="5.875473329s" podCreationTimestamp="2026-01-31 04:31:01 +0000 UTC" firstStartedPulling="2026-01-31 04:31:03.811544556 +0000 UTC m=+2656.498625005" lastFinishedPulling="2026-01-31 04:31:06.241978684 +0000 UTC m=+2658.929059133" observedRunningTime="2026-01-31 04:31:06.865405214 +0000 UTC m=+2659.552485733" watchObservedRunningTime="2026-01-31 04:31:06.875473329 +0000 UTC m=+2659.562553788" Jan 31 04:31:12 crc kubenswrapper[4827]: I0131 04:31:12.183209 4827 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-8l97f" Jan 31 04:31:12 crc kubenswrapper[4827]: I0131 04:31:12.183854 4827 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-8l97f" Jan 31 04:31:13 crc kubenswrapper[4827]: I0131 04:31:13.240513 4827 prober.go:107] "Probe failed" probeType="Startup" 
pod="openshift-marketplace/redhat-operators-8l97f" podUID="260e6659-75d7-41e3-af42-e378324a7785" containerName="registry-server" probeResult="failure" output=< Jan 31 04:31:13 crc kubenswrapper[4827]: timeout: failed to connect service ":50051" within 1s Jan 31 04:31:13 crc kubenswrapper[4827]: > Jan 31 04:31:22 crc kubenswrapper[4827]: I0131 04:31:22.264759 4827 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-8l97f" Jan 31 04:31:22 crc kubenswrapper[4827]: I0131 04:31:22.367979 4827 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-8l97f" Jan 31 04:31:22 crc kubenswrapper[4827]: I0131 04:31:22.508660 4827 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-8l97f"] Jan 31 04:31:24 crc kubenswrapper[4827]: I0131 04:31:24.024479 4827 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-8l97f" podUID="260e6659-75d7-41e3-af42-e378324a7785" containerName="registry-server" containerID="cri-o://7da7b82dbcb760d5511ef7b123bd2eb24f99cbff89392000ff76a5d94216b4ae" gracePeriod=2 Jan 31 04:31:24 crc kubenswrapper[4827]: I0131 04:31:24.510369 4827 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-8l97f" Jan 31 04:31:24 crc kubenswrapper[4827]: I0131 04:31:24.644818 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4vbxm\" (UniqueName: \"kubernetes.io/projected/260e6659-75d7-41e3-af42-e378324a7785-kube-api-access-4vbxm\") pod \"260e6659-75d7-41e3-af42-e378324a7785\" (UID: \"260e6659-75d7-41e3-af42-e378324a7785\") " Jan 31 04:31:24 crc kubenswrapper[4827]: I0131 04:31:24.645218 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/260e6659-75d7-41e3-af42-e378324a7785-catalog-content\") pod \"260e6659-75d7-41e3-af42-e378324a7785\" (UID: \"260e6659-75d7-41e3-af42-e378324a7785\") " Jan 31 04:31:24 crc kubenswrapper[4827]: I0131 04:31:24.645341 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/260e6659-75d7-41e3-af42-e378324a7785-utilities\") pod \"260e6659-75d7-41e3-af42-e378324a7785\" (UID: \"260e6659-75d7-41e3-af42-e378324a7785\") " Jan 31 04:31:24 crc kubenswrapper[4827]: I0131 04:31:24.646106 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/260e6659-75d7-41e3-af42-e378324a7785-utilities" (OuterVolumeSpecName: "utilities") pod "260e6659-75d7-41e3-af42-e378324a7785" (UID: "260e6659-75d7-41e3-af42-e378324a7785"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 04:31:24 crc kubenswrapper[4827]: I0131 04:31:24.651935 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/260e6659-75d7-41e3-af42-e378324a7785-kube-api-access-4vbxm" (OuterVolumeSpecName: "kube-api-access-4vbxm") pod "260e6659-75d7-41e3-af42-e378324a7785" (UID: "260e6659-75d7-41e3-af42-e378324a7785"). InnerVolumeSpecName "kube-api-access-4vbxm". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 04:31:24 crc kubenswrapper[4827]: I0131 04:31:24.748581 4827 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/260e6659-75d7-41e3-af42-e378324a7785-utilities\") on node \"crc\" DevicePath \"\"" Jan 31 04:31:24 crc kubenswrapper[4827]: I0131 04:31:24.748903 4827 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4vbxm\" (UniqueName: \"kubernetes.io/projected/260e6659-75d7-41e3-af42-e378324a7785-kube-api-access-4vbxm\") on node \"crc\" DevicePath \"\"" Jan 31 04:31:24 crc kubenswrapper[4827]: I0131 04:31:24.791313 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/260e6659-75d7-41e3-af42-e378324a7785-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "260e6659-75d7-41e3-af42-e378324a7785" (UID: "260e6659-75d7-41e3-af42-e378324a7785"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 04:31:24 crc kubenswrapper[4827]: I0131 04:31:24.850518 4827 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/260e6659-75d7-41e3-af42-e378324a7785-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 31 04:31:25 crc kubenswrapper[4827]: I0131 04:31:25.034449 4827 generic.go:334] "Generic (PLEG): container finished" podID="260e6659-75d7-41e3-af42-e378324a7785" containerID="7da7b82dbcb760d5511ef7b123bd2eb24f99cbff89392000ff76a5d94216b4ae" exitCode=0 Jan 31 04:31:25 crc kubenswrapper[4827]: I0131 04:31:25.034491 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-8l97f" event={"ID":"260e6659-75d7-41e3-af42-e378324a7785","Type":"ContainerDied","Data":"7da7b82dbcb760d5511ef7b123bd2eb24f99cbff89392000ff76a5d94216b4ae"} Jan 31 04:31:25 crc kubenswrapper[4827]: I0131 04:31:25.034533 4827 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openshift-marketplace/redhat-operators-8l97f" event={"ID":"260e6659-75d7-41e3-af42-e378324a7785","Type":"ContainerDied","Data":"401251d9bb9a8e2238f7cca38d6eee6b6f251fc8bcde4a786734f2d58772d347"} Jan 31 04:31:25 crc kubenswrapper[4827]: I0131 04:31:25.034550 4827 scope.go:117] "RemoveContainer" containerID="7da7b82dbcb760d5511ef7b123bd2eb24f99cbff89392000ff76a5d94216b4ae" Jan 31 04:31:25 crc kubenswrapper[4827]: I0131 04:31:25.034693 4827 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-8l97f" Jan 31 04:31:25 crc kubenswrapper[4827]: I0131 04:31:25.060197 4827 scope.go:117] "RemoveContainer" containerID="c1e47a8c8bb9939aff3a6545d8989bafd7d5d687a656b9f4cb99ba1da66c2fb0" Jan 31 04:31:25 crc kubenswrapper[4827]: I0131 04:31:25.071558 4827 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-8l97f"] Jan 31 04:31:25 crc kubenswrapper[4827]: I0131 04:31:25.083005 4827 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-8l97f"] Jan 31 04:31:25 crc kubenswrapper[4827]: I0131 04:31:25.097507 4827 scope.go:117] "RemoveContainer" containerID="5d2aba7647887f5416d467f1a646f0e49b9110713facc37d23ca1609a3a7c4a1" Jan 31 04:31:25 crc kubenswrapper[4827]: I0131 04:31:25.126335 4827 scope.go:117] "RemoveContainer" containerID="7da7b82dbcb760d5511ef7b123bd2eb24f99cbff89392000ff76a5d94216b4ae" Jan 31 04:31:25 crc kubenswrapper[4827]: E0131 04:31:25.126792 4827 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7da7b82dbcb760d5511ef7b123bd2eb24f99cbff89392000ff76a5d94216b4ae\": container with ID starting with 7da7b82dbcb760d5511ef7b123bd2eb24f99cbff89392000ff76a5d94216b4ae not found: ID does not exist" containerID="7da7b82dbcb760d5511ef7b123bd2eb24f99cbff89392000ff76a5d94216b4ae" Jan 31 04:31:25 crc kubenswrapper[4827]: I0131 04:31:25.126820 4827 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7da7b82dbcb760d5511ef7b123bd2eb24f99cbff89392000ff76a5d94216b4ae"} err="failed to get container status \"7da7b82dbcb760d5511ef7b123bd2eb24f99cbff89392000ff76a5d94216b4ae\": rpc error: code = NotFound desc = could not find container \"7da7b82dbcb760d5511ef7b123bd2eb24f99cbff89392000ff76a5d94216b4ae\": container with ID starting with 7da7b82dbcb760d5511ef7b123bd2eb24f99cbff89392000ff76a5d94216b4ae not found: ID does not exist" Jan 31 04:31:25 crc kubenswrapper[4827]: I0131 04:31:25.126842 4827 scope.go:117] "RemoveContainer" containerID="c1e47a8c8bb9939aff3a6545d8989bafd7d5d687a656b9f4cb99ba1da66c2fb0" Jan 31 04:31:25 crc kubenswrapper[4827]: E0131 04:31:25.129125 4827 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c1e47a8c8bb9939aff3a6545d8989bafd7d5d687a656b9f4cb99ba1da66c2fb0\": container with ID starting with c1e47a8c8bb9939aff3a6545d8989bafd7d5d687a656b9f4cb99ba1da66c2fb0 not found: ID does not exist" containerID="c1e47a8c8bb9939aff3a6545d8989bafd7d5d687a656b9f4cb99ba1da66c2fb0" Jan 31 04:31:25 crc kubenswrapper[4827]: I0131 04:31:25.129185 4827 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c1e47a8c8bb9939aff3a6545d8989bafd7d5d687a656b9f4cb99ba1da66c2fb0"} err="failed to get container status \"c1e47a8c8bb9939aff3a6545d8989bafd7d5d687a656b9f4cb99ba1da66c2fb0\": rpc error: code = NotFound desc = could not find container \"c1e47a8c8bb9939aff3a6545d8989bafd7d5d687a656b9f4cb99ba1da66c2fb0\": container with ID starting with c1e47a8c8bb9939aff3a6545d8989bafd7d5d687a656b9f4cb99ba1da66c2fb0 not found: ID does not exist" Jan 31 04:31:25 crc kubenswrapper[4827]: I0131 04:31:25.129213 4827 scope.go:117] "RemoveContainer" containerID="5d2aba7647887f5416d467f1a646f0e49b9110713facc37d23ca1609a3a7c4a1" Jan 31 04:31:25 crc kubenswrapper[4827]: E0131 
04:31:25.130011 4827 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5d2aba7647887f5416d467f1a646f0e49b9110713facc37d23ca1609a3a7c4a1\": container with ID starting with 5d2aba7647887f5416d467f1a646f0e49b9110713facc37d23ca1609a3a7c4a1 not found: ID does not exist" containerID="5d2aba7647887f5416d467f1a646f0e49b9110713facc37d23ca1609a3a7c4a1" Jan 31 04:31:25 crc kubenswrapper[4827]: I0131 04:31:25.130217 4827 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5d2aba7647887f5416d467f1a646f0e49b9110713facc37d23ca1609a3a7c4a1"} err="failed to get container status \"5d2aba7647887f5416d467f1a646f0e49b9110713facc37d23ca1609a3a7c4a1\": rpc error: code = NotFound desc = could not find container \"5d2aba7647887f5416d467f1a646f0e49b9110713facc37d23ca1609a3a7c4a1\": container with ID starting with 5d2aba7647887f5416d467f1a646f0e49b9110713facc37d23ca1609a3a7c4a1 not found: ID does not exist" Jan 31 04:31:26 crc kubenswrapper[4827]: I0131 04:31:26.123328 4827 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="260e6659-75d7-41e3-af42-e378324a7785" path="/var/lib/kubelet/pods/260e6659-75d7-41e3-af42-e378324a7785/volumes" Jan 31 04:31:47 crc kubenswrapper[4827]: I0131 04:31:47.371006 4827 patch_prober.go:28] interesting pod/machine-config-daemon-jxh94 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 31 04:31:47 crc kubenswrapper[4827]: I0131 04:31:47.371635 4827 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-jxh94" podUID="e63dbb73-e1a2-4796-83c5-2a88e55566b5" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection 
refused" Jan 31 04:32:17 crc kubenswrapper[4827]: I0131 04:32:17.371264 4827 patch_prober.go:28] interesting pod/machine-config-daemon-jxh94 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 31 04:32:17 crc kubenswrapper[4827]: I0131 04:32:17.372357 4827 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-jxh94" podUID="e63dbb73-e1a2-4796-83c5-2a88e55566b5" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 31 04:32:47 crc kubenswrapper[4827]: I0131 04:32:47.371050 4827 patch_prober.go:28] interesting pod/machine-config-daemon-jxh94 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 31 04:32:47 crc kubenswrapper[4827]: I0131 04:32:47.371701 4827 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-jxh94" podUID="e63dbb73-e1a2-4796-83c5-2a88e55566b5" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 31 04:32:47 crc kubenswrapper[4827]: I0131 04:32:47.371775 4827 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-jxh94" Jan 31 04:32:47 crc kubenswrapper[4827]: I0131 04:32:47.372878 4827 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"68e46ab5a80563722c5db0c1b8e9c72a18f8d8548427ec6f07e747249880a2f3"} 
pod="openshift-machine-config-operator/machine-config-daemon-jxh94" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 31 04:32:47 crc kubenswrapper[4827]: I0131 04:32:47.373005 4827 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-jxh94" podUID="e63dbb73-e1a2-4796-83c5-2a88e55566b5" containerName="machine-config-daemon" containerID="cri-o://68e46ab5a80563722c5db0c1b8e9c72a18f8d8548427ec6f07e747249880a2f3" gracePeriod=600 Jan 31 04:32:47 crc kubenswrapper[4827]: I0131 04:32:47.843158 4827 generic.go:334] "Generic (PLEG): container finished" podID="e63dbb73-e1a2-4796-83c5-2a88e55566b5" containerID="68e46ab5a80563722c5db0c1b8e9c72a18f8d8548427ec6f07e747249880a2f3" exitCode=0 Jan 31 04:32:47 crc kubenswrapper[4827]: I0131 04:32:47.843207 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-jxh94" event={"ID":"e63dbb73-e1a2-4796-83c5-2a88e55566b5","Type":"ContainerDied","Data":"68e46ab5a80563722c5db0c1b8e9c72a18f8d8548427ec6f07e747249880a2f3"} Jan 31 04:32:47 crc kubenswrapper[4827]: I0131 04:32:47.843430 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-jxh94" event={"ID":"e63dbb73-e1a2-4796-83c5-2a88e55566b5","Type":"ContainerStarted","Data":"b7f195383b718c3f3a1651e752e04a9c322b656ce5c7aba0c9ad40630165c57d"} Jan 31 04:32:47 crc kubenswrapper[4827]: I0131 04:32:47.843452 4827 scope.go:117] "RemoveContainer" containerID="86202cb505a10ccb4abc6da10b9dbc02e9e23b1f36d68806a25ab7c0fcf67d39" Jan 31 04:33:21 crc kubenswrapper[4827]: I0131 04:33:21.188217 4827 generic.go:334] "Generic (PLEG): container finished" podID="7ae3b57a-e575-49ac-b40f-276d244a1855" containerID="e0cb675dc1e5ab845f2925a3c0b768c17327bb58aa99cb5715fdf729597a10bf" exitCode=0 Jan 31 04:33:21 crc kubenswrapper[4827]: I0131 04:33:21.188284 4827 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-hmq9l" event={"ID":"7ae3b57a-e575-49ac-b40f-276d244a1855","Type":"ContainerDied","Data":"e0cb675dc1e5ab845f2925a3c0b768c17327bb58aa99cb5715fdf729597a10bf"} Jan 31 04:33:22 crc kubenswrapper[4827]: I0131 04:33:22.689637 4827 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-hmq9l" Jan 31 04:33:22 crc kubenswrapper[4827]: I0131 04:33:22.883182 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/7ae3b57a-e575-49ac-b40f-276d244a1855-inventory\") pod \"7ae3b57a-e575-49ac-b40f-276d244a1855\" (UID: \"7ae3b57a-e575-49ac-b40f-276d244a1855\") " Jan 31 04:33:22 crc kubenswrapper[4827]: I0131 04:33:22.883319 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7ae3b57a-e575-49ac-b40f-276d244a1855-libvirt-combined-ca-bundle\") pod \"7ae3b57a-e575-49ac-b40f-276d244a1855\" (UID: \"7ae3b57a-e575-49ac-b40f-276d244a1855\") " Jan 31 04:33:22 crc kubenswrapper[4827]: I0131 04:33:22.883467 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/7ae3b57a-e575-49ac-b40f-276d244a1855-libvirt-secret-0\") pod \"7ae3b57a-e575-49ac-b40f-276d244a1855\" (UID: \"7ae3b57a-e575-49ac-b40f-276d244a1855\") " Jan 31 04:33:22 crc kubenswrapper[4827]: I0131 04:33:22.883517 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/7ae3b57a-e575-49ac-b40f-276d244a1855-ssh-key-openstack-edpm-ipam\") pod \"7ae3b57a-e575-49ac-b40f-276d244a1855\" (UID: \"7ae3b57a-e575-49ac-b40f-276d244a1855\") " Jan 31 04:33:22 crc kubenswrapper[4827]: I0131 04:33:22.883569 4827 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/7ae3b57a-e575-49ac-b40f-276d244a1855-ceph\") pod \"7ae3b57a-e575-49ac-b40f-276d244a1855\" (UID: \"7ae3b57a-e575-49ac-b40f-276d244a1855\") " Jan 31 04:33:22 crc kubenswrapper[4827]: I0131 04:33:22.883678 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-j9lp9\" (UniqueName: \"kubernetes.io/projected/7ae3b57a-e575-49ac-b40f-276d244a1855-kube-api-access-j9lp9\") pod \"7ae3b57a-e575-49ac-b40f-276d244a1855\" (UID: \"7ae3b57a-e575-49ac-b40f-276d244a1855\") " Jan 31 04:33:22 crc kubenswrapper[4827]: I0131 04:33:22.889447 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7ae3b57a-e575-49ac-b40f-276d244a1855-ceph" (OuterVolumeSpecName: "ceph") pod "7ae3b57a-e575-49ac-b40f-276d244a1855" (UID: "7ae3b57a-e575-49ac-b40f-276d244a1855"). InnerVolumeSpecName "ceph". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 04:33:22 crc kubenswrapper[4827]: I0131 04:33:22.890408 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7ae3b57a-e575-49ac-b40f-276d244a1855-libvirt-combined-ca-bundle" (OuterVolumeSpecName: "libvirt-combined-ca-bundle") pod "7ae3b57a-e575-49ac-b40f-276d244a1855" (UID: "7ae3b57a-e575-49ac-b40f-276d244a1855"). InnerVolumeSpecName "libvirt-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 04:33:22 crc kubenswrapper[4827]: I0131 04:33:22.891297 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7ae3b57a-e575-49ac-b40f-276d244a1855-kube-api-access-j9lp9" (OuterVolumeSpecName: "kube-api-access-j9lp9") pod "7ae3b57a-e575-49ac-b40f-276d244a1855" (UID: "7ae3b57a-e575-49ac-b40f-276d244a1855"). InnerVolumeSpecName "kube-api-access-j9lp9". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 04:33:22 crc kubenswrapper[4827]: I0131 04:33:22.927174 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7ae3b57a-e575-49ac-b40f-276d244a1855-inventory" (OuterVolumeSpecName: "inventory") pod "7ae3b57a-e575-49ac-b40f-276d244a1855" (UID: "7ae3b57a-e575-49ac-b40f-276d244a1855"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 04:33:22 crc kubenswrapper[4827]: I0131 04:33:22.931517 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7ae3b57a-e575-49ac-b40f-276d244a1855-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "7ae3b57a-e575-49ac-b40f-276d244a1855" (UID: "7ae3b57a-e575-49ac-b40f-276d244a1855"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 04:33:22 crc kubenswrapper[4827]: I0131 04:33:22.937434 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7ae3b57a-e575-49ac-b40f-276d244a1855-libvirt-secret-0" (OuterVolumeSpecName: "libvirt-secret-0") pod "7ae3b57a-e575-49ac-b40f-276d244a1855" (UID: "7ae3b57a-e575-49ac-b40f-276d244a1855"). InnerVolumeSpecName "libvirt-secret-0". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 04:33:22 crc kubenswrapper[4827]: I0131 04:33:22.986236 4827 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/7ae3b57a-e575-49ac-b40f-276d244a1855-inventory\") on node \"crc\" DevicePath \"\"" Jan 31 04:33:22 crc kubenswrapper[4827]: I0131 04:33:22.986289 4827 reconciler_common.go:293] "Volume detached for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7ae3b57a-e575-49ac-b40f-276d244a1855-libvirt-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 31 04:33:22 crc kubenswrapper[4827]: I0131 04:33:22.986314 4827 reconciler_common.go:293] "Volume detached for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/7ae3b57a-e575-49ac-b40f-276d244a1855-libvirt-secret-0\") on node \"crc\" DevicePath \"\"" Jan 31 04:33:22 crc kubenswrapper[4827]: I0131 04:33:22.986335 4827 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/7ae3b57a-e575-49ac-b40f-276d244a1855-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 31 04:33:22 crc kubenswrapper[4827]: I0131 04:33:22.986355 4827 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/7ae3b57a-e575-49ac-b40f-276d244a1855-ceph\") on node \"crc\" DevicePath \"\"" Jan 31 04:33:22 crc kubenswrapper[4827]: I0131 04:33:22.986375 4827 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-j9lp9\" (UniqueName: \"kubernetes.io/projected/7ae3b57a-e575-49ac-b40f-276d244a1855-kube-api-access-j9lp9\") on node \"crc\" DevicePath \"\"" Jan 31 04:33:23 crc kubenswrapper[4827]: I0131 04:33:23.210385 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-hmq9l" 
event={"ID":"7ae3b57a-e575-49ac-b40f-276d244a1855","Type":"ContainerDied","Data":"2c417eee7b5ff83b9f68e210e008fef7adfee181d46c9f80324297e4075e074e"} Jan 31 04:33:23 crc kubenswrapper[4827]: I0131 04:33:23.210441 4827 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2c417eee7b5ff83b9f68e210e008fef7adfee181d46c9f80324297e4075e074e" Jan 31 04:33:23 crc kubenswrapper[4827]: I0131 04:33:23.210466 4827 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-hmq9l" Jan 31 04:33:23 crc kubenswrapper[4827]: I0131 04:33:23.364019 4827 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-rb9w9"] Jan 31 04:33:23 crc kubenswrapper[4827]: E0131 04:33:23.364403 4827 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="260e6659-75d7-41e3-af42-e378324a7785" containerName="registry-server" Jan 31 04:33:23 crc kubenswrapper[4827]: I0131 04:33:23.364422 4827 state_mem.go:107] "Deleted CPUSet assignment" podUID="260e6659-75d7-41e3-af42-e378324a7785" containerName="registry-server" Jan 31 04:33:23 crc kubenswrapper[4827]: E0131 04:33:23.364450 4827 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="260e6659-75d7-41e3-af42-e378324a7785" containerName="extract-utilities" Jan 31 04:33:23 crc kubenswrapper[4827]: I0131 04:33:23.364459 4827 state_mem.go:107] "Deleted CPUSet assignment" podUID="260e6659-75d7-41e3-af42-e378324a7785" containerName="extract-utilities" Jan 31 04:33:23 crc kubenswrapper[4827]: E0131 04:33:23.364478 4827 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7ae3b57a-e575-49ac-b40f-276d244a1855" containerName="libvirt-edpm-deployment-openstack-edpm-ipam" Jan 31 04:33:23 crc kubenswrapper[4827]: I0131 04:33:23.364489 4827 state_mem.go:107] "Deleted CPUSet assignment" podUID="7ae3b57a-e575-49ac-b40f-276d244a1855" 
containerName="libvirt-edpm-deployment-openstack-edpm-ipam" Jan 31 04:33:23 crc kubenswrapper[4827]: E0131 04:33:23.364506 4827 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="260e6659-75d7-41e3-af42-e378324a7785" containerName="extract-content" Jan 31 04:33:23 crc kubenswrapper[4827]: I0131 04:33:23.364514 4827 state_mem.go:107] "Deleted CPUSet assignment" podUID="260e6659-75d7-41e3-af42-e378324a7785" containerName="extract-content" Jan 31 04:33:23 crc kubenswrapper[4827]: I0131 04:33:23.364708 4827 memory_manager.go:354] "RemoveStaleState removing state" podUID="7ae3b57a-e575-49ac-b40f-276d244a1855" containerName="libvirt-edpm-deployment-openstack-edpm-ipam" Jan 31 04:33:23 crc kubenswrapper[4827]: I0131 04:33:23.364729 4827 memory_manager.go:354] "RemoveStaleState removing state" podUID="260e6659-75d7-41e3-af42-e378324a7785" containerName="registry-server" Jan 31 04:33:23 crc kubenswrapper[4827]: I0131 04:33:23.365498 4827 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-rb9w9" Jan 31 04:33:23 crc kubenswrapper[4827]: I0131 04:33:23.367943 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 31 04:33:23 crc kubenswrapper[4827]: I0131 04:33:23.369358 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-migration-ssh-key" Jan 31 04:33:23 crc kubenswrapper[4827]: I0131 04:33:23.369486 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceph-conf-files" Jan 31 04:33:23 crc kubenswrapper[4827]: I0131 04:33:23.370304 4827 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"nova-extra-config" Jan 31 04:33:23 crc kubenswrapper[4827]: I0131 04:33:23.370351 4827 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 31 04:33:23 crc kubenswrapper[4827]: I0131 04:33:23.370444 4827 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ceph-nova" Jan 31 04:33:23 crc kubenswrapper[4827]: I0131 04:33:23.370958 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-compute-config" Jan 31 04:33:23 crc kubenswrapper[4827]: I0131 04:33:23.371672 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 31 04:33:23 crc kubenswrapper[4827]: I0131 04:33:23.373490 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-dklwg" Jan 31 04:33:23 crc kubenswrapper[4827]: I0131 04:33:23.391619 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-rb9w9"] Jan 31 04:33:23 crc kubenswrapper[4827]: I0131 04:33:23.501325 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"ceph-nova-0\" (UniqueName: \"kubernetes.io/configmap/84a5bf6e-ec6d-457f-a76f-5566a8f4f2f8-ceph-nova-0\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-rb9w9\" (UID: \"84a5bf6e-ec6d-457f-a76f-5566a8f4f2f8\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-rb9w9" Jan 31 04:33:23 crc kubenswrapper[4827]: I0131 04:33:23.501687 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/84a5bf6e-ec6d-457f-a76f-5566a8f4f2f8-nova-cell1-compute-config-1\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-rb9w9\" (UID: \"84a5bf6e-ec6d-457f-a76f-5566a8f4f2f8\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-rb9w9" Jan 31 04:33:23 crc kubenswrapper[4827]: I0131 04:33:23.501794 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/84a5bf6e-ec6d-457f-a76f-5566a8f4f2f8-nova-extra-config-0\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-rb9w9\" (UID: \"84a5bf6e-ec6d-457f-a76f-5566a8f4f2f8\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-rb9w9" Jan 31 04:33:23 crc kubenswrapper[4827]: I0131 04:33:23.502003 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/84a5bf6e-ec6d-457f-a76f-5566a8f4f2f8-ssh-key-openstack-edpm-ipam\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-rb9w9\" (UID: \"84a5bf6e-ec6d-457f-a76f-5566a8f4f2f8\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-rb9w9" Jan 31 04:33:23 crc kubenswrapper[4827]: I0131 04:33:23.502134 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qf89h\" (UniqueName: 
\"kubernetes.io/projected/84a5bf6e-ec6d-457f-a76f-5566a8f4f2f8-kube-api-access-qf89h\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-rb9w9\" (UID: \"84a5bf6e-ec6d-457f-a76f-5566a8f4f2f8\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-rb9w9" Jan 31 04:33:23 crc kubenswrapper[4827]: I0131 04:33:23.502260 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/84a5bf6e-ec6d-457f-a76f-5566a8f4f2f8-inventory\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-rb9w9\" (UID: \"84a5bf6e-ec6d-457f-a76f-5566a8f4f2f8\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-rb9w9" Jan 31 04:33:23 crc kubenswrapper[4827]: I0131 04:33:23.502360 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/84a5bf6e-ec6d-457f-a76f-5566a8f4f2f8-nova-cell1-compute-config-0\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-rb9w9\" (UID: \"84a5bf6e-ec6d-457f-a76f-5566a8f4f2f8\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-rb9w9" Jan 31 04:33:23 crc kubenswrapper[4827]: I0131 04:33:23.502469 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/84a5bf6e-ec6d-457f-a76f-5566a8f4f2f8-ceph\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-rb9w9\" (UID: \"84a5bf6e-ec6d-457f-a76f-5566a8f4f2f8\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-rb9w9" Jan 31 04:33:23 crc kubenswrapper[4827]: I0131 04:33:23.502566 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/84a5bf6e-ec6d-457f-a76f-5566a8f4f2f8-nova-migration-ssh-key-1\") pod 
\"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-rb9w9\" (UID: \"84a5bf6e-ec6d-457f-a76f-5566a8f4f2f8\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-rb9w9" Jan 31 04:33:23 crc kubenswrapper[4827]: I0131 04:33:23.502661 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-custom-ceph-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/84a5bf6e-ec6d-457f-a76f-5566a8f4f2f8-nova-custom-ceph-combined-ca-bundle\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-rb9w9\" (UID: \"84a5bf6e-ec6d-457f-a76f-5566a8f4f2f8\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-rb9w9" Jan 31 04:33:23 crc kubenswrapper[4827]: I0131 04:33:23.502766 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/84a5bf6e-ec6d-457f-a76f-5566a8f4f2f8-nova-migration-ssh-key-0\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-rb9w9\" (UID: \"84a5bf6e-ec6d-457f-a76f-5566a8f4f2f8\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-rb9w9" Jan 31 04:33:23 crc kubenswrapper[4827]: I0131 04:33:23.604507 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/84a5bf6e-ec6d-457f-a76f-5566a8f4f2f8-nova-migration-ssh-key-0\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-rb9w9\" (UID: \"84a5bf6e-ec6d-457f-a76f-5566a8f4f2f8\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-rb9w9" Jan 31 04:33:23 crc kubenswrapper[4827]: I0131 04:33:23.604681 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph-nova-0\" (UniqueName: \"kubernetes.io/configmap/84a5bf6e-ec6d-457f-a76f-5566a8f4f2f8-ceph-nova-0\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-rb9w9\" (UID: 
\"84a5bf6e-ec6d-457f-a76f-5566a8f4f2f8\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-rb9w9" Jan 31 04:33:23 crc kubenswrapper[4827]: I0131 04:33:23.604730 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/84a5bf6e-ec6d-457f-a76f-5566a8f4f2f8-nova-cell1-compute-config-1\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-rb9w9\" (UID: \"84a5bf6e-ec6d-457f-a76f-5566a8f4f2f8\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-rb9w9" Jan 31 04:33:23 crc kubenswrapper[4827]: I0131 04:33:23.604770 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/84a5bf6e-ec6d-457f-a76f-5566a8f4f2f8-nova-extra-config-0\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-rb9w9\" (UID: \"84a5bf6e-ec6d-457f-a76f-5566a8f4f2f8\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-rb9w9" Jan 31 04:33:23 crc kubenswrapper[4827]: I0131 04:33:23.604901 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/84a5bf6e-ec6d-457f-a76f-5566a8f4f2f8-ssh-key-openstack-edpm-ipam\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-rb9w9\" (UID: \"84a5bf6e-ec6d-457f-a76f-5566a8f4f2f8\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-rb9w9" Jan 31 04:33:23 crc kubenswrapper[4827]: I0131 04:33:23.604965 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qf89h\" (UniqueName: \"kubernetes.io/projected/84a5bf6e-ec6d-457f-a76f-5566a8f4f2f8-kube-api-access-qf89h\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-rb9w9\" (UID: \"84a5bf6e-ec6d-457f-a76f-5566a8f4f2f8\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-rb9w9" Jan 31 
04:33:23 crc kubenswrapper[4827]: I0131 04:33:23.605048 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/84a5bf6e-ec6d-457f-a76f-5566a8f4f2f8-inventory\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-rb9w9\" (UID: \"84a5bf6e-ec6d-457f-a76f-5566a8f4f2f8\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-rb9w9" Jan 31 04:33:23 crc kubenswrapper[4827]: I0131 04:33:23.605106 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/84a5bf6e-ec6d-457f-a76f-5566a8f4f2f8-nova-cell1-compute-config-0\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-rb9w9\" (UID: \"84a5bf6e-ec6d-457f-a76f-5566a8f4f2f8\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-rb9w9" Jan 31 04:33:23 crc kubenswrapper[4827]: I0131 04:33:23.605160 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/84a5bf6e-ec6d-457f-a76f-5566a8f4f2f8-ceph\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-rb9w9\" (UID: \"84a5bf6e-ec6d-457f-a76f-5566a8f4f2f8\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-rb9w9" Jan 31 04:33:23 crc kubenswrapper[4827]: I0131 04:33:23.605214 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/84a5bf6e-ec6d-457f-a76f-5566a8f4f2f8-nova-migration-ssh-key-1\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-rb9w9\" (UID: \"84a5bf6e-ec6d-457f-a76f-5566a8f4f2f8\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-rb9w9" Jan 31 04:33:23 crc kubenswrapper[4827]: I0131 04:33:23.605249 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-custom-ceph-combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/84a5bf6e-ec6d-457f-a76f-5566a8f4f2f8-nova-custom-ceph-combined-ca-bundle\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-rb9w9\" (UID: \"84a5bf6e-ec6d-457f-a76f-5566a8f4f2f8\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-rb9w9" Jan 31 04:33:23 crc kubenswrapper[4827]: I0131 04:33:23.607073 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph-nova-0\" (UniqueName: \"kubernetes.io/configmap/84a5bf6e-ec6d-457f-a76f-5566a8f4f2f8-ceph-nova-0\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-rb9w9\" (UID: \"84a5bf6e-ec6d-457f-a76f-5566a8f4f2f8\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-rb9w9" Jan 31 04:33:23 crc kubenswrapper[4827]: I0131 04:33:23.607439 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/84a5bf6e-ec6d-457f-a76f-5566a8f4f2f8-nova-extra-config-0\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-rb9w9\" (UID: \"84a5bf6e-ec6d-457f-a76f-5566a8f4f2f8\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-rb9w9" Jan 31 04:33:23 crc kubenswrapper[4827]: I0131 04:33:23.610647 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/84a5bf6e-ec6d-457f-a76f-5566a8f4f2f8-nova-cell1-compute-config-1\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-rb9w9\" (UID: \"84a5bf6e-ec6d-457f-a76f-5566a8f4f2f8\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-rb9w9" Jan 31 04:33:23 crc kubenswrapper[4827]: I0131 04:33:23.610828 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/84a5bf6e-ec6d-457f-a76f-5566a8f4f2f8-inventory\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-rb9w9\" (UID: \"84a5bf6e-ec6d-457f-a76f-5566a8f4f2f8\") " 
pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-rb9w9" Jan 31 04:33:23 crc kubenswrapper[4827]: I0131 04:33:23.611706 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/84a5bf6e-ec6d-457f-a76f-5566a8f4f2f8-nova-cell1-compute-config-0\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-rb9w9\" (UID: \"84a5bf6e-ec6d-457f-a76f-5566a8f4f2f8\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-rb9w9" Jan 31 04:33:23 crc kubenswrapper[4827]: I0131 04:33:23.611765 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/84a5bf6e-ec6d-457f-a76f-5566a8f4f2f8-nova-migration-ssh-key-0\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-rb9w9\" (UID: \"84a5bf6e-ec6d-457f-a76f-5566a8f4f2f8\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-rb9w9" Jan 31 04:33:23 crc kubenswrapper[4827]: I0131 04:33:23.612709 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/84a5bf6e-ec6d-457f-a76f-5566a8f4f2f8-ssh-key-openstack-edpm-ipam\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-rb9w9\" (UID: \"84a5bf6e-ec6d-457f-a76f-5566a8f4f2f8\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-rb9w9" Jan 31 04:33:23 crc kubenswrapper[4827]: I0131 04:33:23.613403 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/84a5bf6e-ec6d-457f-a76f-5566a8f4f2f8-nova-migration-ssh-key-1\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-rb9w9\" (UID: \"84a5bf6e-ec6d-457f-a76f-5566a8f4f2f8\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-rb9w9" Jan 31 04:33:23 crc kubenswrapper[4827]: I0131 04:33:23.616177 4827 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/84a5bf6e-ec6d-457f-a76f-5566a8f4f2f8-ceph\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-rb9w9\" (UID: \"84a5bf6e-ec6d-457f-a76f-5566a8f4f2f8\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-rb9w9" Jan 31 04:33:23 crc kubenswrapper[4827]: I0131 04:33:23.613136 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-custom-ceph-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/84a5bf6e-ec6d-457f-a76f-5566a8f4f2f8-nova-custom-ceph-combined-ca-bundle\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-rb9w9\" (UID: \"84a5bf6e-ec6d-457f-a76f-5566a8f4f2f8\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-rb9w9" Jan 31 04:33:23 crc kubenswrapper[4827]: I0131 04:33:23.638140 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qf89h\" (UniqueName: \"kubernetes.io/projected/84a5bf6e-ec6d-457f-a76f-5566a8f4f2f8-kube-api-access-qf89h\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-rb9w9\" (UID: \"84a5bf6e-ec6d-457f-a76f-5566a8f4f2f8\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-rb9w9" Jan 31 04:33:23 crc kubenswrapper[4827]: I0131 04:33:23.687531 4827 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-rb9w9" Jan 31 04:33:24 crc kubenswrapper[4827]: W0131 04:33:24.343222 4827 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod84a5bf6e_ec6d_457f_a76f_5566a8f4f2f8.slice/crio-c7bdeb740b6233bfb90a0378f212e75bf0059530a6f5909fb044144fce389bb7 WatchSource:0}: Error finding container c7bdeb740b6233bfb90a0378f212e75bf0059530a6f5909fb044144fce389bb7: Status 404 returned error can't find the container with id c7bdeb740b6233bfb90a0378f212e75bf0059530a6f5909fb044144fce389bb7 Jan 31 04:33:24 crc kubenswrapper[4827]: I0131 04:33:24.346176 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-rb9w9"] Jan 31 04:33:25 crc kubenswrapper[4827]: I0131 04:33:25.238722 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-rb9w9" event={"ID":"84a5bf6e-ec6d-457f-a76f-5566a8f4f2f8","Type":"ContainerStarted","Data":"aea22bad7323ad885c6aa82229d93cb59e88c6a4a9c9cf02a3f12831d129806f"} Jan 31 04:33:25 crc kubenswrapper[4827]: I0131 04:33:25.239274 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-rb9w9" event={"ID":"84a5bf6e-ec6d-457f-a76f-5566a8f4f2f8","Type":"ContainerStarted","Data":"c7bdeb740b6233bfb90a0378f212e75bf0059530a6f5909fb044144fce389bb7"} Jan 31 04:33:25 crc kubenswrapper[4827]: I0131 04:33:25.277130 4827 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-rb9w9" podStartSLOduration=1.805430973 podStartE2EDuration="2.277109607s" podCreationTimestamp="2026-01-31 04:33:23 +0000 UTC" firstStartedPulling="2026-01-31 04:33:24.346528217 +0000 UTC m=+2797.033608676" lastFinishedPulling="2026-01-31 04:33:24.818206821 +0000 UTC 
m=+2797.505287310" observedRunningTime="2026-01-31 04:33:25.267121546 +0000 UTC m=+2797.954202045" watchObservedRunningTime="2026-01-31 04:33:25.277109607 +0000 UTC m=+2797.964190056" Jan 31 04:34:47 crc kubenswrapper[4827]: I0131 04:34:47.071484 4827 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-l7tln"] Jan 31 04:34:47 crc kubenswrapper[4827]: I0131 04:34:47.074574 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-l7tln" Jan 31 04:34:47 crc kubenswrapper[4827]: I0131 04:34:47.088286 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-l7tln"] Jan 31 04:34:47 crc kubenswrapper[4827]: I0131 04:34:47.174247 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2898e4d3-09a6-41a1-8d41-bd0fae99a4c6-utilities\") pod \"certified-operators-l7tln\" (UID: \"2898e4d3-09a6-41a1-8d41-bd0fae99a4c6\") " pod="openshift-marketplace/certified-operators-l7tln" Jan 31 04:34:47 crc kubenswrapper[4827]: I0131 04:34:47.174412 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dw74z\" (UniqueName: \"kubernetes.io/projected/2898e4d3-09a6-41a1-8d41-bd0fae99a4c6-kube-api-access-dw74z\") pod \"certified-operators-l7tln\" (UID: \"2898e4d3-09a6-41a1-8d41-bd0fae99a4c6\") " pod="openshift-marketplace/certified-operators-l7tln" Jan 31 04:34:47 crc kubenswrapper[4827]: I0131 04:34:47.174482 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2898e4d3-09a6-41a1-8d41-bd0fae99a4c6-catalog-content\") pod \"certified-operators-l7tln\" (UID: \"2898e4d3-09a6-41a1-8d41-bd0fae99a4c6\") " pod="openshift-marketplace/certified-operators-l7tln" Jan 31 04:34:47 crc 
kubenswrapper[4827]: I0131 04:34:47.275709 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2898e4d3-09a6-41a1-8d41-bd0fae99a4c6-catalog-content\") pod \"certified-operators-l7tln\" (UID: \"2898e4d3-09a6-41a1-8d41-bd0fae99a4c6\") " pod="openshift-marketplace/certified-operators-l7tln" Jan 31 04:34:47 crc kubenswrapper[4827]: I0131 04:34:47.275768 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2898e4d3-09a6-41a1-8d41-bd0fae99a4c6-utilities\") pod \"certified-operators-l7tln\" (UID: \"2898e4d3-09a6-41a1-8d41-bd0fae99a4c6\") " pod="openshift-marketplace/certified-operators-l7tln" Jan 31 04:34:47 crc kubenswrapper[4827]: I0131 04:34:47.275955 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dw74z\" (UniqueName: \"kubernetes.io/projected/2898e4d3-09a6-41a1-8d41-bd0fae99a4c6-kube-api-access-dw74z\") pod \"certified-operators-l7tln\" (UID: \"2898e4d3-09a6-41a1-8d41-bd0fae99a4c6\") " pod="openshift-marketplace/certified-operators-l7tln" Jan 31 04:34:47 crc kubenswrapper[4827]: I0131 04:34:47.277109 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2898e4d3-09a6-41a1-8d41-bd0fae99a4c6-catalog-content\") pod \"certified-operators-l7tln\" (UID: \"2898e4d3-09a6-41a1-8d41-bd0fae99a4c6\") " pod="openshift-marketplace/certified-operators-l7tln" Jan 31 04:34:47 crc kubenswrapper[4827]: I0131 04:34:47.277399 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2898e4d3-09a6-41a1-8d41-bd0fae99a4c6-utilities\") pod \"certified-operators-l7tln\" (UID: \"2898e4d3-09a6-41a1-8d41-bd0fae99a4c6\") " pod="openshift-marketplace/certified-operators-l7tln" Jan 31 04:34:47 crc kubenswrapper[4827]: I0131 04:34:47.300078 
4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dw74z\" (UniqueName: \"kubernetes.io/projected/2898e4d3-09a6-41a1-8d41-bd0fae99a4c6-kube-api-access-dw74z\") pod \"certified-operators-l7tln\" (UID: \"2898e4d3-09a6-41a1-8d41-bd0fae99a4c6\") " pod="openshift-marketplace/certified-operators-l7tln" Jan 31 04:34:47 crc kubenswrapper[4827]: I0131 04:34:47.371382 4827 patch_prober.go:28] interesting pod/machine-config-daemon-jxh94 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 31 04:34:47 crc kubenswrapper[4827]: I0131 04:34:47.371439 4827 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-jxh94" podUID="e63dbb73-e1a2-4796-83c5-2a88e55566b5" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 31 04:34:47 crc kubenswrapper[4827]: I0131 04:34:47.454569 4827 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-l7tln" Jan 31 04:34:47 crc kubenswrapper[4827]: I0131 04:34:47.965942 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-l7tln"] Jan 31 04:34:48 crc kubenswrapper[4827]: I0131 04:34:48.066128 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-l7tln" event={"ID":"2898e4d3-09a6-41a1-8d41-bd0fae99a4c6","Type":"ContainerStarted","Data":"8e60756d14b15fcc81ca35276acdf6800b725464c5f35ccb89cddfbcb9d26f55"} Jan 31 04:34:49 crc kubenswrapper[4827]: I0131 04:34:49.077758 4827 generic.go:334] "Generic (PLEG): container finished" podID="2898e4d3-09a6-41a1-8d41-bd0fae99a4c6" containerID="dc9f9bf36f21a084ed74d0b3b95ce05ef316369512c2dd43bfa10e0ad94d8be9" exitCode=0 Jan 31 04:34:49 crc kubenswrapper[4827]: I0131 04:34:49.077905 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-l7tln" event={"ID":"2898e4d3-09a6-41a1-8d41-bd0fae99a4c6","Type":"ContainerDied","Data":"dc9f9bf36f21a084ed74d0b3b95ce05ef316369512c2dd43bfa10e0ad94d8be9"} Jan 31 04:34:50 crc kubenswrapper[4827]: I0131 04:34:50.087676 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-l7tln" event={"ID":"2898e4d3-09a6-41a1-8d41-bd0fae99a4c6","Type":"ContainerStarted","Data":"f50acf0805c04a9b0205dcb8634ead29f54684453c1c7ce49f3f64036d93c65d"} Jan 31 04:34:51 crc kubenswrapper[4827]: I0131 04:34:51.100944 4827 generic.go:334] "Generic (PLEG): container finished" podID="2898e4d3-09a6-41a1-8d41-bd0fae99a4c6" containerID="f50acf0805c04a9b0205dcb8634ead29f54684453c1c7ce49f3f64036d93c65d" exitCode=0 Jan 31 04:34:51 crc kubenswrapper[4827]: I0131 04:34:51.101085 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-l7tln" 
event={"ID":"2898e4d3-09a6-41a1-8d41-bd0fae99a4c6","Type":"ContainerDied","Data":"f50acf0805c04a9b0205dcb8634ead29f54684453c1c7ce49f3f64036d93c65d"} Jan 31 04:34:51 crc kubenswrapper[4827]: I0131 04:34:51.470255 4827 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-z5svf"] Jan 31 04:34:51 crc kubenswrapper[4827]: I0131 04:34:51.472781 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-z5svf" Jan 31 04:34:51 crc kubenswrapper[4827]: I0131 04:34:51.492710 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-z5svf"] Jan 31 04:34:51 crc kubenswrapper[4827]: I0131 04:34:51.659782 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-thsc8\" (UniqueName: \"kubernetes.io/projected/6079b594-a263-4e4b-a6da-b3cbcd56a091-kube-api-access-thsc8\") pod \"redhat-marketplace-z5svf\" (UID: \"6079b594-a263-4e4b-a6da-b3cbcd56a091\") " pod="openshift-marketplace/redhat-marketplace-z5svf" Jan 31 04:34:51 crc kubenswrapper[4827]: I0131 04:34:51.659852 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6079b594-a263-4e4b-a6da-b3cbcd56a091-catalog-content\") pod \"redhat-marketplace-z5svf\" (UID: \"6079b594-a263-4e4b-a6da-b3cbcd56a091\") " pod="openshift-marketplace/redhat-marketplace-z5svf" Jan 31 04:34:51 crc kubenswrapper[4827]: I0131 04:34:51.660052 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6079b594-a263-4e4b-a6da-b3cbcd56a091-utilities\") pod \"redhat-marketplace-z5svf\" (UID: \"6079b594-a263-4e4b-a6da-b3cbcd56a091\") " pod="openshift-marketplace/redhat-marketplace-z5svf" Jan 31 04:34:51 crc kubenswrapper[4827]: I0131 04:34:51.761741 4827 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-thsc8\" (UniqueName: \"kubernetes.io/projected/6079b594-a263-4e4b-a6da-b3cbcd56a091-kube-api-access-thsc8\") pod \"redhat-marketplace-z5svf\" (UID: \"6079b594-a263-4e4b-a6da-b3cbcd56a091\") " pod="openshift-marketplace/redhat-marketplace-z5svf" Jan 31 04:34:51 crc kubenswrapper[4827]: I0131 04:34:51.761800 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6079b594-a263-4e4b-a6da-b3cbcd56a091-catalog-content\") pod \"redhat-marketplace-z5svf\" (UID: \"6079b594-a263-4e4b-a6da-b3cbcd56a091\") " pod="openshift-marketplace/redhat-marketplace-z5svf" Jan 31 04:34:51 crc kubenswrapper[4827]: I0131 04:34:51.761925 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6079b594-a263-4e4b-a6da-b3cbcd56a091-utilities\") pod \"redhat-marketplace-z5svf\" (UID: \"6079b594-a263-4e4b-a6da-b3cbcd56a091\") " pod="openshift-marketplace/redhat-marketplace-z5svf" Jan 31 04:34:51 crc kubenswrapper[4827]: I0131 04:34:51.762462 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6079b594-a263-4e4b-a6da-b3cbcd56a091-utilities\") pod \"redhat-marketplace-z5svf\" (UID: \"6079b594-a263-4e4b-a6da-b3cbcd56a091\") " pod="openshift-marketplace/redhat-marketplace-z5svf" Jan 31 04:34:51 crc kubenswrapper[4827]: I0131 04:34:51.762690 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6079b594-a263-4e4b-a6da-b3cbcd56a091-catalog-content\") pod \"redhat-marketplace-z5svf\" (UID: \"6079b594-a263-4e4b-a6da-b3cbcd56a091\") " pod="openshift-marketplace/redhat-marketplace-z5svf" Jan 31 04:34:51 crc kubenswrapper[4827]: I0131 04:34:51.786290 4827 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-thsc8\" (UniqueName: \"kubernetes.io/projected/6079b594-a263-4e4b-a6da-b3cbcd56a091-kube-api-access-thsc8\") pod \"redhat-marketplace-z5svf\" (UID: \"6079b594-a263-4e4b-a6da-b3cbcd56a091\") " pod="openshift-marketplace/redhat-marketplace-z5svf" Jan 31 04:34:51 crc kubenswrapper[4827]: I0131 04:34:51.810503 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-z5svf" Jan 31 04:34:52 crc kubenswrapper[4827]: I0131 04:34:52.079593 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-z5svf"] Jan 31 04:34:52 crc kubenswrapper[4827]: W0131 04:34:52.084366 4827 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6079b594_a263_4e4b_a6da_b3cbcd56a091.slice/crio-a5c657c0c109f0f85cb021a45f73c042d0840a6eae84a6044fee6ca28634472b WatchSource:0}: Error finding container a5c657c0c109f0f85cb021a45f73c042d0840a6eae84a6044fee6ca28634472b: Status 404 returned error can't find the container with id a5c657c0c109f0f85cb021a45f73c042d0840a6eae84a6044fee6ca28634472b Jan 31 04:34:52 crc kubenswrapper[4827]: I0131 04:34:52.119110 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-l7tln" event={"ID":"2898e4d3-09a6-41a1-8d41-bd0fae99a4c6","Type":"ContainerStarted","Data":"75bfed44dd95ceed50afe7a930af636def537b9d8186dac10cbbae1693168033"} Jan 31 04:34:52 crc kubenswrapper[4827]: I0131 04:34:52.121953 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-z5svf" event={"ID":"6079b594-a263-4e4b-a6da-b3cbcd56a091","Type":"ContainerStarted","Data":"a5c657c0c109f0f85cb021a45f73c042d0840a6eae84a6044fee6ca28634472b"} Jan 31 04:34:52 crc kubenswrapper[4827]: I0131 04:34:52.141316 4827 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openshift-marketplace/certified-operators-l7tln" podStartSLOduration=2.7134356889999998 podStartE2EDuration="5.141298569s" podCreationTimestamp="2026-01-31 04:34:47 +0000 UTC" firstStartedPulling="2026-01-31 04:34:49.079865946 +0000 UTC m=+2881.766946425" lastFinishedPulling="2026-01-31 04:34:51.507728856 +0000 UTC m=+2884.194809305" observedRunningTime="2026-01-31 04:34:52.140249838 +0000 UTC m=+2884.827330297" watchObservedRunningTime="2026-01-31 04:34:52.141298569 +0000 UTC m=+2884.828379018" Jan 31 04:34:53 crc kubenswrapper[4827]: I0131 04:34:53.131302 4827 generic.go:334] "Generic (PLEG): container finished" podID="6079b594-a263-4e4b-a6da-b3cbcd56a091" containerID="970d900c998ef2dfc34ed9f23bb06b40bbbf227df3d5f2c222fde8cd81754cef" exitCode=0 Jan 31 04:34:53 crc kubenswrapper[4827]: I0131 04:34:53.134010 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-z5svf" event={"ID":"6079b594-a263-4e4b-a6da-b3cbcd56a091","Type":"ContainerDied","Data":"970d900c998ef2dfc34ed9f23bb06b40bbbf227df3d5f2c222fde8cd81754cef"} Jan 31 04:34:54 crc kubenswrapper[4827]: I0131 04:34:54.144338 4827 generic.go:334] "Generic (PLEG): container finished" podID="6079b594-a263-4e4b-a6da-b3cbcd56a091" containerID="fe3b5fd8f8620ef1e8f03d8b8e9ed975d77331b7923021266c89e6a09e04aa77" exitCode=0 Jan 31 04:34:54 crc kubenswrapper[4827]: I0131 04:34:54.144400 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-z5svf" event={"ID":"6079b594-a263-4e4b-a6da-b3cbcd56a091","Type":"ContainerDied","Data":"fe3b5fd8f8620ef1e8f03d8b8e9ed975d77331b7923021266c89e6a09e04aa77"} Jan 31 04:34:55 crc kubenswrapper[4827]: I0131 04:34:55.169288 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-z5svf" event={"ID":"6079b594-a263-4e4b-a6da-b3cbcd56a091","Type":"ContainerStarted","Data":"71df6e2069635c0bf01e8bc330b33ff7a1067047fe2d0b422e72ba13485f664d"} Jan 31 04:34:57 
crc kubenswrapper[4827]: I0131 04:34:57.455174 4827 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-l7tln" Jan 31 04:34:57 crc kubenswrapper[4827]: I0131 04:34:57.455740 4827 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-l7tln" Jan 31 04:34:57 crc kubenswrapper[4827]: I0131 04:34:57.513003 4827 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-l7tln" Jan 31 04:34:57 crc kubenswrapper[4827]: I0131 04:34:57.542542 4827 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-z5svf" podStartSLOduration=4.94829998 podStartE2EDuration="6.542524569s" podCreationTimestamp="2026-01-31 04:34:51 +0000 UTC" firstStartedPulling="2026-01-31 04:34:53.135192952 +0000 UTC m=+2885.822273441" lastFinishedPulling="2026-01-31 04:34:54.729417551 +0000 UTC m=+2887.416498030" observedRunningTime="2026-01-31 04:34:55.199418214 +0000 UTC m=+2887.886498713" watchObservedRunningTime="2026-01-31 04:34:57.542524569 +0000 UTC m=+2890.229605018" Jan 31 04:34:58 crc kubenswrapper[4827]: I0131 04:34:58.306540 4827 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-l7tln" Jan 31 04:35:00 crc kubenswrapper[4827]: I0131 04:35:00.443133 4827 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-l7tln"] Jan 31 04:35:00 crc kubenswrapper[4827]: I0131 04:35:00.443738 4827 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-l7tln" podUID="2898e4d3-09a6-41a1-8d41-bd0fae99a4c6" containerName="registry-server" containerID="cri-o://75bfed44dd95ceed50afe7a930af636def537b9d8186dac10cbbae1693168033" gracePeriod=2 Jan 31 04:35:00 crc kubenswrapper[4827]: I0131 04:35:00.912839 4827 
util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-l7tln" Jan 31 04:35:01 crc kubenswrapper[4827]: I0131 04:35:01.044531 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2898e4d3-09a6-41a1-8d41-bd0fae99a4c6-utilities\") pod \"2898e4d3-09a6-41a1-8d41-bd0fae99a4c6\" (UID: \"2898e4d3-09a6-41a1-8d41-bd0fae99a4c6\") " Jan 31 04:35:01 crc kubenswrapper[4827]: I0131 04:35:01.044712 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2898e4d3-09a6-41a1-8d41-bd0fae99a4c6-catalog-content\") pod \"2898e4d3-09a6-41a1-8d41-bd0fae99a4c6\" (UID: \"2898e4d3-09a6-41a1-8d41-bd0fae99a4c6\") " Jan 31 04:35:01 crc kubenswrapper[4827]: I0131 04:35:01.044783 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dw74z\" (UniqueName: \"kubernetes.io/projected/2898e4d3-09a6-41a1-8d41-bd0fae99a4c6-kube-api-access-dw74z\") pod \"2898e4d3-09a6-41a1-8d41-bd0fae99a4c6\" (UID: \"2898e4d3-09a6-41a1-8d41-bd0fae99a4c6\") " Jan 31 04:35:01 crc kubenswrapper[4827]: I0131 04:35:01.046324 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2898e4d3-09a6-41a1-8d41-bd0fae99a4c6-utilities" (OuterVolumeSpecName: "utilities") pod "2898e4d3-09a6-41a1-8d41-bd0fae99a4c6" (UID: "2898e4d3-09a6-41a1-8d41-bd0fae99a4c6"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 04:35:01 crc kubenswrapper[4827]: I0131 04:35:01.054085 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2898e4d3-09a6-41a1-8d41-bd0fae99a4c6-kube-api-access-dw74z" (OuterVolumeSpecName: "kube-api-access-dw74z") pod "2898e4d3-09a6-41a1-8d41-bd0fae99a4c6" (UID: "2898e4d3-09a6-41a1-8d41-bd0fae99a4c6"). 
InnerVolumeSpecName "kube-api-access-dw74z". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 04:35:01 crc kubenswrapper[4827]: I0131 04:35:01.146949 4827 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2898e4d3-09a6-41a1-8d41-bd0fae99a4c6-utilities\") on node \"crc\" DevicePath \"\"" Jan 31 04:35:01 crc kubenswrapper[4827]: I0131 04:35:01.146990 4827 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dw74z\" (UniqueName: \"kubernetes.io/projected/2898e4d3-09a6-41a1-8d41-bd0fae99a4c6-kube-api-access-dw74z\") on node \"crc\" DevicePath \"\"" Jan 31 04:35:01 crc kubenswrapper[4827]: I0131 04:35:01.226028 4827 generic.go:334] "Generic (PLEG): container finished" podID="2898e4d3-09a6-41a1-8d41-bd0fae99a4c6" containerID="75bfed44dd95ceed50afe7a930af636def537b9d8186dac10cbbae1693168033" exitCode=0 Jan 31 04:35:01 crc kubenswrapper[4827]: I0131 04:35:01.226404 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-l7tln" event={"ID":"2898e4d3-09a6-41a1-8d41-bd0fae99a4c6","Type":"ContainerDied","Data":"75bfed44dd95ceed50afe7a930af636def537b9d8186dac10cbbae1693168033"} Jan 31 04:35:01 crc kubenswrapper[4827]: I0131 04:35:01.226643 4827 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-l7tln" Jan 31 04:35:01 crc kubenswrapper[4827]: I0131 04:35:01.227126 4827 scope.go:117] "RemoveContainer" containerID="75bfed44dd95ceed50afe7a930af636def537b9d8186dac10cbbae1693168033" Jan 31 04:35:01 crc kubenswrapper[4827]: I0131 04:35:01.226974 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-l7tln" event={"ID":"2898e4d3-09a6-41a1-8d41-bd0fae99a4c6","Type":"ContainerDied","Data":"8e60756d14b15fcc81ca35276acdf6800b725464c5f35ccb89cddfbcb9d26f55"} Jan 31 04:35:01 crc kubenswrapper[4827]: I0131 04:35:01.249199 4827 scope.go:117] "RemoveContainer" containerID="f50acf0805c04a9b0205dcb8634ead29f54684453c1c7ce49f3f64036d93c65d" Jan 31 04:35:01 crc kubenswrapper[4827]: I0131 04:35:01.275307 4827 scope.go:117] "RemoveContainer" containerID="dc9f9bf36f21a084ed74d0b3b95ce05ef316369512c2dd43bfa10e0ad94d8be9" Jan 31 04:35:01 crc kubenswrapper[4827]: I0131 04:35:01.314693 4827 scope.go:117] "RemoveContainer" containerID="75bfed44dd95ceed50afe7a930af636def537b9d8186dac10cbbae1693168033" Jan 31 04:35:01 crc kubenswrapper[4827]: E0131 04:35:01.315321 4827 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"75bfed44dd95ceed50afe7a930af636def537b9d8186dac10cbbae1693168033\": container with ID starting with 75bfed44dd95ceed50afe7a930af636def537b9d8186dac10cbbae1693168033 not found: ID does not exist" containerID="75bfed44dd95ceed50afe7a930af636def537b9d8186dac10cbbae1693168033" Jan 31 04:35:01 crc kubenswrapper[4827]: I0131 04:35:01.315355 4827 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"75bfed44dd95ceed50afe7a930af636def537b9d8186dac10cbbae1693168033"} err="failed to get container status \"75bfed44dd95ceed50afe7a930af636def537b9d8186dac10cbbae1693168033\": rpc error: code = NotFound desc = could not find container 
\"75bfed44dd95ceed50afe7a930af636def537b9d8186dac10cbbae1693168033\": container with ID starting with 75bfed44dd95ceed50afe7a930af636def537b9d8186dac10cbbae1693168033 not found: ID does not exist" Jan 31 04:35:01 crc kubenswrapper[4827]: I0131 04:35:01.315374 4827 scope.go:117] "RemoveContainer" containerID="f50acf0805c04a9b0205dcb8634ead29f54684453c1c7ce49f3f64036d93c65d" Jan 31 04:35:01 crc kubenswrapper[4827]: E0131 04:35:01.315677 4827 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f50acf0805c04a9b0205dcb8634ead29f54684453c1c7ce49f3f64036d93c65d\": container with ID starting with f50acf0805c04a9b0205dcb8634ead29f54684453c1c7ce49f3f64036d93c65d not found: ID does not exist" containerID="f50acf0805c04a9b0205dcb8634ead29f54684453c1c7ce49f3f64036d93c65d" Jan 31 04:35:01 crc kubenswrapper[4827]: I0131 04:35:01.315734 4827 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f50acf0805c04a9b0205dcb8634ead29f54684453c1c7ce49f3f64036d93c65d"} err="failed to get container status \"f50acf0805c04a9b0205dcb8634ead29f54684453c1c7ce49f3f64036d93c65d\": rpc error: code = NotFound desc = could not find container \"f50acf0805c04a9b0205dcb8634ead29f54684453c1c7ce49f3f64036d93c65d\": container with ID starting with f50acf0805c04a9b0205dcb8634ead29f54684453c1c7ce49f3f64036d93c65d not found: ID does not exist" Jan 31 04:35:01 crc kubenswrapper[4827]: I0131 04:35:01.315760 4827 scope.go:117] "RemoveContainer" containerID="dc9f9bf36f21a084ed74d0b3b95ce05ef316369512c2dd43bfa10e0ad94d8be9" Jan 31 04:35:01 crc kubenswrapper[4827]: E0131 04:35:01.316132 4827 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"dc9f9bf36f21a084ed74d0b3b95ce05ef316369512c2dd43bfa10e0ad94d8be9\": container with ID starting with dc9f9bf36f21a084ed74d0b3b95ce05ef316369512c2dd43bfa10e0ad94d8be9 not found: ID does not exist" 
containerID="dc9f9bf36f21a084ed74d0b3b95ce05ef316369512c2dd43bfa10e0ad94d8be9" Jan 31 04:35:01 crc kubenswrapper[4827]: I0131 04:35:01.316165 4827 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"dc9f9bf36f21a084ed74d0b3b95ce05ef316369512c2dd43bfa10e0ad94d8be9"} err="failed to get container status \"dc9f9bf36f21a084ed74d0b3b95ce05ef316369512c2dd43bfa10e0ad94d8be9\": rpc error: code = NotFound desc = could not find container \"dc9f9bf36f21a084ed74d0b3b95ce05ef316369512c2dd43bfa10e0ad94d8be9\": container with ID starting with dc9f9bf36f21a084ed74d0b3b95ce05ef316369512c2dd43bfa10e0ad94d8be9 not found: ID does not exist" Jan 31 04:35:01 crc kubenswrapper[4827]: I0131 04:35:01.656970 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2898e4d3-09a6-41a1-8d41-bd0fae99a4c6-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "2898e4d3-09a6-41a1-8d41-bd0fae99a4c6" (UID: "2898e4d3-09a6-41a1-8d41-bd0fae99a4c6"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 04:35:01 crc kubenswrapper[4827]: I0131 04:35:01.657991 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2898e4d3-09a6-41a1-8d41-bd0fae99a4c6-catalog-content\") pod \"2898e4d3-09a6-41a1-8d41-bd0fae99a4c6\" (UID: \"2898e4d3-09a6-41a1-8d41-bd0fae99a4c6\") " Jan 31 04:35:01 crc kubenswrapper[4827]: W0131 04:35:01.658214 4827 empty_dir.go:500] Warning: Unmount skipped because path does not exist: /var/lib/kubelet/pods/2898e4d3-09a6-41a1-8d41-bd0fae99a4c6/volumes/kubernetes.io~empty-dir/catalog-content Jan 31 04:35:01 crc kubenswrapper[4827]: I0131 04:35:01.658256 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2898e4d3-09a6-41a1-8d41-bd0fae99a4c6-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "2898e4d3-09a6-41a1-8d41-bd0fae99a4c6" (UID: "2898e4d3-09a6-41a1-8d41-bd0fae99a4c6"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 04:35:01 crc kubenswrapper[4827]: I0131 04:35:01.658716 4827 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2898e4d3-09a6-41a1-8d41-bd0fae99a4c6-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 31 04:35:01 crc kubenswrapper[4827]: I0131 04:35:01.810863 4827 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-z5svf" Jan 31 04:35:01 crc kubenswrapper[4827]: I0131 04:35:01.811126 4827 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-z5svf" Jan 31 04:35:01 crc kubenswrapper[4827]: I0131 04:35:01.881671 4827 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-l7tln"] Jan 31 04:35:01 crc kubenswrapper[4827]: I0131 04:35:01.891605 4827 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-l7tln"] Jan 31 04:35:01 crc kubenswrapper[4827]: I0131 04:35:01.896575 4827 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-z5svf" Jan 31 04:35:02 crc kubenswrapper[4827]: I0131 04:35:02.124462 4827 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2898e4d3-09a6-41a1-8d41-bd0fae99a4c6" path="/var/lib/kubelet/pods/2898e4d3-09a6-41a1-8d41-bd0fae99a4c6/volumes" Jan 31 04:35:02 crc kubenswrapper[4827]: I0131 04:35:02.307715 4827 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-z5svf" Jan 31 04:35:04 crc kubenswrapper[4827]: I0131 04:35:04.247101 4827 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-z5svf"] Jan 31 04:35:04 crc kubenswrapper[4827]: I0131 04:35:04.260002 4827 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openshift-marketplace/redhat-marketplace-z5svf" podUID="6079b594-a263-4e4b-a6da-b3cbcd56a091" containerName="registry-server" containerID="cri-o://71df6e2069635c0bf01e8bc330b33ff7a1067047fe2d0b422e72ba13485f664d" gracePeriod=2 Jan 31 04:35:04 crc kubenswrapper[4827]: I0131 04:35:04.789107 4827 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-z5svf" Jan 31 04:35:04 crc kubenswrapper[4827]: I0131 04:35:04.922379 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6079b594-a263-4e4b-a6da-b3cbcd56a091-catalog-content\") pod \"6079b594-a263-4e4b-a6da-b3cbcd56a091\" (UID: \"6079b594-a263-4e4b-a6da-b3cbcd56a091\") " Jan 31 04:35:04 crc kubenswrapper[4827]: I0131 04:35:04.922735 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-thsc8\" (UniqueName: \"kubernetes.io/projected/6079b594-a263-4e4b-a6da-b3cbcd56a091-kube-api-access-thsc8\") pod \"6079b594-a263-4e4b-a6da-b3cbcd56a091\" (UID: \"6079b594-a263-4e4b-a6da-b3cbcd56a091\") " Jan 31 04:35:04 crc kubenswrapper[4827]: I0131 04:35:04.922995 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6079b594-a263-4e4b-a6da-b3cbcd56a091-utilities\") pod \"6079b594-a263-4e4b-a6da-b3cbcd56a091\" (UID: \"6079b594-a263-4e4b-a6da-b3cbcd56a091\") " Jan 31 04:35:04 crc kubenswrapper[4827]: I0131 04:35:04.925031 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6079b594-a263-4e4b-a6da-b3cbcd56a091-utilities" (OuterVolumeSpecName: "utilities") pod "6079b594-a263-4e4b-a6da-b3cbcd56a091" (UID: "6079b594-a263-4e4b-a6da-b3cbcd56a091"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 04:35:04 crc kubenswrapper[4827]: I0131 04:35:04.930451 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6079b594-a263-4e4b-a6da-b3cbcd56a091-kube-api-access-thsc8" (OuterVolumeSpecName: "kube-api-access-thsc8") pod "6079b594-a263-4e4b-a6da-b3cbcd56a091" (UID: "6079b594-a263-4e4b-a6da-b3cbcd56a091"). InnerVolumeSpecName "kube-api-access-thsc8". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 04:35:04 crc kubenswrapper[4827]: I0131 04:35:04.968726 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6079b594-a263-4e4b-a6da-b3cbcd56a091-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "6079b594-a263-4e4b-a6da-b3cbcd56a091" (UID: "6079b594-a263-4e4b-a6da-b3cbcd56a091"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 04:35:05 crc kubenswrapper[4827]: I0131 04:35:05.026374 4827 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6079b594-a263-4e4b-a6da-b3cbcd56a091-utilities\") on node \"crc\" DevicePath \"\"" Jan 31 04:35:05 crc kubenswrapper[4827]: I0131 04:35:05.026410 4827 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6079b594-a263-4e4b-a6da-b3cbcd56a091-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 31 04:35:05 crc kubenswrapper[4827]: I0131 04:35:05.026424 4827 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-thsc8\" (UniqueName: \"kubernetes.io/projected/6079b594-a263-4e4b-a6da-b3cbcd56a091-kube-api-access-thsc8\") on node \"crc\" DevicePath \"\"" Jan 31 04:35:05 crc kubenswrapper[4827]: I0131 04:35:05.272334 4827 generic.go:334] "Generic (PLEG): container finished" podID="6079b594-a263-4e4b-a6da-b3cbcd56a091" 
containerID="71df6e2069635c0bf01e8bc330b33ff7a1067047fe2d0b422e72ba13485f664d" exitCode=0 Jan 31 04:35:05 crc kubenswrapper[4827]: I0131 04:35:05.272399 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-z5svf" event={"ID":"6079b594-a263-4e4b-a6da-b3cbcd56a091","Type":"ContainerDied","Data":"71df6e2069635c0bf01e8bc330b33ff7a1067047fe2d0b422e72ba13485f664d"} Jan 31 04:35:05 crc kubenswrapper[4827]: I0131 04:35:05.272443 4827 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-z5svf" Jan 31 04:35:05 crc kubenswrapper[4827]: I0131 04:35:05.272489 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-z5svf" event={"ID":"6079b594-a263-4e4b-a6da-b3cbcd56a091","Type":"ContainerDied","Data":"a5c657c0c109f0f85cb021a45f73c042d0840a6eae84a6044fee6ca28634472b"} Jan 31 04:35:05 crc kubenswrapper[4827]: I0131 04:35:05.272531 4827 scope.go:117] "RemoveContainer" containerID="71df6e2069635c0bf01e8bc330b33ff7a1067047fe2d0b422e72ba13485f664d" Jan 31 04:35:05 crc kubenswrapper[4827]: I0131 04:35:05.311442 4827 scope.go:117] "RemoveContainer" containerID="fe3b5fd8f8620ef1e8f03d8b8e9ed975d77331b7923021266c89e6a09e04aa77" Jan 31 04:35:05 crc kubenswrapper[4827]: I0131 04:35:05.346402 4827 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-z5svf"] Jan 31 04:35:05 crc kubenswrapper[4827]: I0131 04:35:05.353562 4827 scope.go:117] "RemoveContainer" containerID="970d900c998ef2dfc34ed9f23bb06b40bbbf227df3d5f2c222fde8cd81754cef" Jan 31 04:35:05 crc kubenswrapper[4827]: I0131 04:35:05.362691 4827 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-z5svf"] Jan 31 04:35:05 crc kubenswrapper[4827]: I0131 04:35:05.401582 4827 scope.go:117] "RemoveContainer" containerID="71df6e2069635c0bf01e8bc330b33ff7a1067047fe2d0b422e72ba13485f664d" Jan 31 
04:35:05 crc kubenswrapper[4827]: E0131 04:35:05.402147 4827 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"71df6e2069635c0bf01e8bc330b33ff7a1067047fe2d0b422e72ba13485f664d\": container with ID starting with 71df6e2069635c0bf01e8bc330b33ff7a1067047fe2d0b422e72ba13485f664d not found: ID does not exist" containerID="71df6e2069635c0bf01e8bc330b33ff7a1067047fe2d0b422e72ba13485f664d" Jan 31 04:35:05 crc kubenswrapper[4827]: I0131 04:35:05.402193 4827 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"71df6e2069635c0bf01e8bc330b33ff7a1067047fe2d0b422e72ba13485f664d"} err="failed to get container status \"71df6e2069635c0bf01e8bc330b33ff7a1067047fe2d0b422e72ba13485f664d\": rpc error: code = NotFound desc = could not find container \"71df6e2069635c0bf01e8bc330b33ff7a1067047fe2d0b422e72ba13485f664d\": container with ID starting with 71df6e2069635c0bf01e8bc330b33ff7a1067047fe2d0b422e72ba13485f664d not found: ID does not exist" Jan 31 04:35:05 crc kubenswrapper[4827]: I0131 04:35:05.402288 4827 scope.go:117] "RemoveContainer" containerID="fe3b5fd8f8620ef1e8f03d8b8e9ed975d77331b7923021266c89e6a09e04aa77" Jan 31 04:35:05 crc kubenswrapper[4827]: E0131 04:35:05.402736 4827 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fe3b5fd8f8620ef1e8f03d8b8e9ed975d77331b7923021266c89e6a09e04aa77\": container with ID starting with fe3b5fd8f8620ef1e8f03d8b8e9ed975d77331b7923021266c89e6a09e04aa77 not found: ID does not exist" containerID="fe3b5fd8f8620ef1e8f03d8b8e9ed975d77331b7923021266c89e6a09e04aa77" Jan 31 04:35:05 crc kubenswrapper[4827]: I0131 04:35:05.402814 4827 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fe3b5fd8f8620ef1e8f03d8b8e9ed975d77331b7923021266c89e6a09e04aa77"} err="failed to get container status 
\"fe3b5fd8f8620ef1e8f03d8b8e9ed975d77331b7923021266c89e6a09e04aa77\": rpc error: code = NotFound desc = could not find container \"fe3b5fd8f8620ef1e8f03d8b8e9ed975d77331b7923021266c89e6a09e04aa77\": container with ID starting with fe3b5fd8f8620ef1e8f03d8b8e9ed975d77331b7923021266c89e6a09e04aa77 not found: ID does not exist" Jan 31 04:35:05 crc kubenswrapper[4827]: I0131 04:35:05.402852 4827 scope.go:117] "RemoveContainer" containerID="970d900c998ef2dfc34ed9f23bb06b40bbbf227df3d5f2c222fde8cd81754cef" Jan 31 04:35:05 crc kubenswrapper[4827]: E0131 04:35:05.403301 4827 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"970d900c998ef2dfc34ed9f23bb06b40bbbf227df3d5f2c222fde8cd81754cef\": container with ID starting with 970d900c998ef2dfc34ed9f23bb06b40bbbf227df3d5f2c222fde8cd81754cef not found: ID does not exist" containerID="970d900c998ef2dfc34ed9f23bb06b40bbbf227df3d5f2c222fde8cd81754cef" Jan 31 04:35:05 crc kubenswrapper[4827]: I0131 04:35:05.403339 4827 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"970d900c998ef2dfc34ed9f23bb06b40bbbf227df3d5f2c222fde8cd81754cef"} err="failed to get container status \"970d900c998ef2dfc34ed9f23bb06b40bbbf227df3d5f2c222fde8cd81754cef\": rpc error: code = NotFound desc = could not find container \"970d900c998ef2dfc34ed9f23bb06b40bbbf227df3d5f2c222fde8cd81754cef\": container with ID starting with 970d900c998ef2dfc34ed9f23bb06b40bbbf227df3d5f2c222fde8cd81754cef not found: ID does not exist" Jan 31 04:35:06 crc kubenswrapper[4827]: I0131 04:35:06.131038 4827 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6079b594-a263-4e4b-a6da-b3cbcd56a091" path="/var/lib/kubelet/pods/6079b594-a263-4e4b-a6da-b3cbcd56a091/volumes" Jan 31 04:35:17 crc kubenswrapper[4827]: I0131 04:35:17.371606 4827 patch_prober.go:28] interesting pod/machine-config-daemon-jxh94 container/machine-config-daemon 
namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 31 04:35:17 crc kubenswrapper[4827]: I0131 04:35:17.372300 4827 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-jxh94" podUID="e63dbb73-e1a2-4796-83c5-2a88e55566b5" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 31 04:35:47 crc kubenswrapper[4827]: I0131 04:35:47.371367 4827 patch_prober.go:28] interesting pod/machine-config-daemon-jxh94 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 31 04:35:47 crc kubenswrapper[4827]: I0131 04:35:47.372135 4827 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-jxh94" podUID="e63dbb73-e1a2-4796-83c5-2a88e55566b5" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 31 04:35:47 crc kubenswrapper[4827]: I0131 04:35:47.372212 4827 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-jxh94" Jan 31 04:35:47 crc kubenswrapper[4827]: I0131 04:35:47.373376 4827 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"b7f195383b718c3f3a1651e752e04a9c322b656ce5c7aba0c9ad40630165c57d"} pod="openshift-machine-config-operator/machine-config-daemon-jxh94" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 31 04:35:47 crc 
kubenswrapper[4827]: I0131 04:35:47.373476 4827 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-jxh94" podUID="e63dbb73-e1a2-4796-83c5-2a88e55566b5" containerName="machine-config-daemon" containerID="cri-o://b7f195383b718c3f3a1651e752e04a9c322b656ce5c7aba0c9ad40630165c57d" gracePeriod=600 Jan 31 04:35:47 crc kubenswrapper[4827]: E0131 04:35:47.499695 4827 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jxh94_openshift-machine-config-operator(e63dbb73-e1a2-4796-83c5-2a88e55566b5)\"" pod="openshift-machine-config-operator/machine-config-daemon-jxh94" podUID="e63dbb73-e1a2-4796-83c5-2a88e55566b5" Jan 31 04:35:47 crc kubenswrapper[4827]: I0131 04:35:47.730427 4827 generic.go:334] "Generic (PLEG): container finished" podID="e63dbb73-e1a2-4796-83c5-2a88e55566b5" containerID="b7f195383b718c3f3a1651e752e04a9c322b656ce5c7aba0c9ad40630165c57d" exitCode=0 Jan 31 04:35:47 crc kubenswrapper[4827]: I0131 04:35:47.730492 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-jxh94" event={"ID":"e63dbb73-e1a2-4796-83c5-2a88e55566b5","Type":"ContainerDied","Data":"b7f195383b718c3f3a1651e752e04a9c322b656ce5c7aba0c9ad40630165c57d"} Jan 31 04:35:47 crc kubenswrapper[4827]: I0131 04:35:47.730541 4827 scope.go:117] "RemoveContainer" containerID="68e46ab5a80563722c5db0c1b8e9c72a18f8d8548427ec6f07e747249880a2f3" Jan 31 04:35:47 crc kubenswrapper[4827]: I0131 04:35:47.731726 4827 scope.go:117] "RemoveContainer" containerID="b7f195383b718c3f3a1651e752e04a9c322b656ce5c7aba0c9ad40630165c57d" Jan 31 04:35:47 crc kubenswrapper[4827]: E0131 04:35:47.732197 4827 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" 
with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jxh94_openshift-machine-config-operator(e63dbb73-e1a2-4796-83c5-2a88e55566b5)\"" pod="openshift-machine-config-operator/machine-config-daemon-jxh94" podUID="e63dbb73-e1a2-4796-83c5-2a88e55566b5" Jan 31 04:36:01 crc kubenswrapper[4827]: I0131 04:36:01.110569 4827 scope.go:117] "RemoveContainer" containerID="b7f195383b718c3f3a1651e752e04a9c322b656ce5c7aba0c9ad40630165c57d" Jan 31 04:36:01 crc kubenswrapper[4827]: E0131 04:36:01.111632 4827 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jxh94_openshift-machine-config-operator(e63dbb73-e1a2-4796-83c5-2a88e55566b5)\"" pod="openshift-machine-config-operator/machine-config-daemon-jxh94" podUID="e63dbb73-e1a2-4796-83c5-2a88e55566b5" Jan 31 04:36:02 crc kubenswrapper[4827]: I0131 04:36:02.899365 4827 generic.go:334] "Generic (PLEG): container finished" podID="84a5bf6e-ec6d-457f-a76f-5566a8f4f2f8" containerID="aea22bad7323ad885c6aa82229d93cb59e88c6a4a9c9cf02a3f12831d129806f" exitCode=0 Jan 31 04:36:02 crc kubenswrapper[4827]: I0131 04:36:02.899432 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-rb9w9" event={"ID":"84a5bf6e-ec6d-457f-a76f-5566a8f4f2f8","Type":"ContainerDied","Data":"aea22bad7323ad885c6aa82229d93cb59e88c6a4a9c9cf02a3f12831d129806f"} Jan 31 04:36:04 crc kubenswrapper[4827]: I0131 04:36:04.393239 4827 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-rb9w9" Jan 31 04:36:04 crc kubenswrapper[4827]: I0131 04:36:04.580656 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qf89h\" (UniqueName: \"kubernetes.io/projected/84a5bf6e-ec6d-457f-a76f-5566a8f4f2f8-kube-api-access-qf89h\") pod \"84a5bf6e-ec6d-457f-a76f-5566a8f4f2f8\" (UID: \"84a5bf6e-ec6d-457f-a76f-5566a8f4f2f8\") " Jan 31 04:36:04 crc kubenswrapper[4827]: I0131 04:36:04.580971 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/84a5bf6e-ec6d-457f-a76f-5566a8f4f2f8-ssh-key-openstack-edpm-ipam\") pod \"84a5bf6e-ec6d-457f-a76f-5566a8f4f2f8\" (UID: \"84a5bf6e-ec6d-457f-a76f-5566a8f4f2f8\") " Jan 31 04:36:04 crc kubenswrapper[4827]: I0131 04:36:04.581000 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-custom-ceph-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/84a5bf6e-ec6d-457f-a76f-5566a8f4f2f8-nova-custom-ceph-combined-ca-bundle\") pod \"84a5bf6e-ec6d-457f-a76f-5566a8f4f2f8\" (UID: \"84a5bf6e-ec6d-457f-a76f-5566a8f4f2f8\") " Jan 31 04:36:04 crc kubenswrapper[4827]: I0131 04:36:04.581019 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph-nova-0\" (UniqueName: \"kubernetes.io/configmap/84a5bf6e-ec6d-457f-a76f-5566a8f4f2f8-ceph-nova-0\") pod \"84a5bf6e-ec6d-457f-a76f-5566a8f4f2f8\" (UID: \"84a5bf6e-ec6d-457f-a76f-5566a8f4f2f8\") " Jan 31 04:36:04 crc kubenswrapper[4827]: I0131 04:36:04.581045 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/84a5bf6e-ec6d-457f-a76f-5566a8f4f2f8-nova-cell1-compute-config-1\") pod \"84a5bf6e-ec6d-457f-a76f-5566a8f4f2f8\" (UID: \"84a5bf6e-ec6d-457f-a76f-5566a8f4f2f8\") " Jan 31 04:36:04 crc 
kubenswrapper[4827]: I0131 04:36:04.581078 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/84a5bf6e-ec6d-457f-a76f-5566a8f4f2f8-nova-cell1-compute-config-0\") pod \"84a5bf6e-ec6d-457f-a76f-5566a8f4f2f8\" (UID: \"84a5bf6e-ec6d-457f-a76f-5566a8f4f2f8\") " Jan 31 04:36:04 crc kubenswrapper[4827]: I0131 04:36:04.581103 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/84a5bf6e-ec6d-457f-a76f-5566a8f4f2f8-nova-migration-ssh-key-0\") pod \"84a5bf6e-ec6d-457f-a76f-5566a8f4f2f8\" (UID: \"84a5bf6e-ec6d-457f-a76f-5566a8f4f2f8\") " Jan 31 04:36:04 crc kubenswrapper[4827]: I0131 04:36:04.581121 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/84a5bf6e-ec6d-457f-a76f-5566a8f4f2f8-nova-migration-ssh-key-1\") pod \"84a5bf6e-ec6d-457f-a76f-5566a8f4f2f8\" (UID: \"84a5bf6e-ec6d-457f-a76f-5566a8f4f2f8\") " Jan 31 04:36:04 crc kubenswrapper[4827]: I0131 04:36:04.581142 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/84a5bf6e-ec6d-457f-a76f-5566a8f4f2f8-nova-extra-config-0\") pod \"84a5bf6e-ec6d-457f-a76f-5566a8f4f2f8\" (UID: \"84a5bf6e-ec6d-457f-a76f-5566a8f4f2f8\") " Jan 31 04:36:04 crc kubenswrapper[4827]: I0131 04:36:04.581185 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/84a5bf6e-ec6d-457f-a76f-5566a8f4f2f8-inventory\") pod \"84a5bf6e-ec6d-457f-a76f-5566a8f4f2f8\" (UID: \"84a5bf6e-ec6d-457f-a76f-5566a8f4f2f8\") " Jan 31 04:36:04 crc kubenswrapper[4827]: I0131 04:36:04.581244 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: 
\"kubernetes.io/secret/84a5bf6e-ec6d-457f-a76f-5566a8f4f2f8-ceph\") pod \"84a5bf6e-ec6d-457f-a76f-5566a8f4f2f8\" (UID: \"84a5bf6e-ec6d-457f-a76f-5566a8f4f2f8\") " Jan 31 04:36:04 crc kubenswrapper[4827]: I0131 04:36:04.600152 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/84a5bf6e-ec6d-457f-a76f-5566a8f4f2f8-ceph" (OuterVolumeSpecName: "ceph") pod "84a5bf6e-ec6d-457f-a76f-5566a8f4f2f8" (UID: "84a5bf6e-ec6d-457f-a76f-5566a8f4f2f8"). InnerVolumeSpecName "ceph". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 04:36:04 crc kubenswrapper[4827]: I0131 04:36:04.608062 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/84a5bf6e-ec6d-457f-a76f-5566a8f4f2f8-nova-custom-ceph-combined-ca-bundle" (OuterVolumeSpecName: "nova-custom-ceph-combined-ca-bundle") pod "84a5bf6e-ec6d-457f-a76f-5566a8f4f2f8" (UID: "84a5bf6e-ec6d-457f-a76f-5566a8f4f2f8"). InnerVolumeSpecName "nova-custom-ceph-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 04:36:04 crc kubenswrapper[4827]: I0131 04:36:04.622064 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/84a5bf6e-ec6d-457f-a76f-5566a8f4f2f8-kube-api-access-qf89h" (OuterVolumeSpecName: "kube-api-access-qf89h") pod "84a5bf6e-ec6d-457f-a76f-5566a8f4f2f8" (UID: "84a5bf6e-ec6d-457f-a76f-5566a8f4f2f8"). InnerVolumeSpecName "kube-api-access-qf89h". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 04:36:04 crc kubenswrapper[4827]: I0131 04:36:04.670097 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/84a5bf6e-ec6d-457f-a76f-5566a8f4f2f8-nova-migration-ssh-key-1" (OuterVolumeSpecName: "nova-migration-ssh-key-1") pod "84a5bf6e-ec6d-457f-a76f-5566a8f4f2f8" (UID: "84a5bf6e-ec6d-457f-a76f-5566a8f4f2f8"). InnerVolumeSpecName "nova-migration-ssh-key-1". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 04:36:04 crc kubenswrapper[4827]: I0131 04:36:04.674087 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/84a5bf6e-ec6d-457f-a76f-5566a8f4f2f8-nova-cell1-compute-config-0" (OuterVolumeSpecName: "nova-cell1-compute-config-0") pod "84a5bf6e-ec6d-457f-a76f-5566a8f4f2f8" (UID: "84a5bf6e-ec6d-457f-a76f-5566a8f4f2f8"). InnerVolumeSpecName "nova-cell1-compute-config-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 04:36:04 crc kubenswrapper[4827]: I0131 04:36:04.688192 4827 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qf89h\" (UniqueName: \"kubernetes.io/projected/84a5bf6e-ec6d-457f-a76f-5566a8f4f2f8-kube-api-access-qf89h\") on node \"crc\" DevicePath \"\"" Jan 31 04:36:04 crc kubenswrapper[4827]: I0131 04:36:04.688224 4827 reconciler_common.go:293] "Volume detached for volume \"nova-custom-ceph-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/84a5bf6e-ec6d-457f-a76f-5566a8f4f2f8-nova-custom-ceph-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 31 04:36:04 crc kubenswrapper[4827]: I0131 04:36:04.688234 4827 reconciler_common.go:293] "Volume detached for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/84a5bf6e-ec6d-457f-a76f-5566a8f4f2f8-nova-cell1-compute-config-0\") on node \"crc\" DevicePath \"\"" Jan 31 04:36:04 crc kubenswrapper[4827]: I0131 04:36:04.688244 4827 reconciler_common.go:293] "Volume detached for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/84a5bf6e-ec6d-457f-a76f-5566a8f4f2f8-nova-migration-ssh-key-1\") on node \"crc\" DevicePath \"\"" Jan 31 04:36:04 crc kubenswrapper[4827]: I0131 04:36:04.688254 4827 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/84a5bf6e-ec6d-457f-a76f-5566a8f4f2f8-ceph\") on node \"crc\" DevicePath \"\"" Jan 31 04:36:04 crc kubenswrapper[4827]: I0131 
04:36:04.688459 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/84a5bf6e-ec6d-457f-a76f-5566a8f4f2f8-nova-migration-ssh-key-0" (OuterVolumeSpecName: "nova-migration-ssh-key-0") pod "84a5bf6e-ec6d-457f-a76f-5566a8f4f2f8" (UID: "84a5bf6e-ec6d-457f-a76f-5566a8f4f2f8"). InnerVolumeSpecName "nova-migration-ssh-key-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 04:36:04 crc kubenswrapper[4827]: I0131 04:36:04.690911 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/84a5bf6e-ec6d-457f-a76f-5566a8f4f2f8-inventory" (OuterVolumeSpecName: "inventory") pod "84a5bf6e-ec6d-457f-a76f-5566a8f4f2f8" (UID: "84a5bf6e-ec6d-457f-a76f-5566a8f4f2f8"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 04:36:04 crc kubenswrapper[4827]: I0131 04:36:04.694486 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/84a5bf6e-ec6d-457f-a76f-5566a8f4f2f8-nova-extra-config-0" (OuterVolumeSpecName: "nova-extra-config-0") pod "84a5bf6e-ec6d-457f-a76f-5566a8f4f2f8" (UID: "84a5bf6e-ec6d-457f-a76f-5566a8f4f2f8"). InnerVolumeSpecName "nova-extra-config-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 04:36:04 crc kubenswrapper[4827]: I0131 04:36:04.705545 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/84a5bf6e-ec6d-457f-a76f-5566a8f4f2f8-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "84a5bf6e-ec6d-457f-a76f-5566a8f4f2f8" (UID: "84a5bf6e-ec6d-457f-a76f-5566a8f4f2f8"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 04:36:04 crc kubenswrapper[4827]: I0131 04:36:04.707273 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/84a5bf6e-ec6d-457f-a76f-5566a8f4f2f8-ceph-nova-0" (OuterVolumeSpecName: "ceph-nova-0") pod "84a5bf6e-ec6d-457f-a76f-5566a8f4f2f8" (UID: "84a5bf6e-ec6d-457f-a76f-5566a8f4f2f8"). InnerVolumeSpecName "ceph-nova-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 04:36:04 crc kubenswrapper[4827]: I0131 04:36:04.712912 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/84a5bf6e-ec6d-457f-a76f-5566a8f4f2f8-nova-cell1-compute-config-1" (OuterVolumeSpecName: "nova-cell1-compute-config-1") pod "84a5bf6e-ec6d-457f-a76f-5566a8f4f2f8" (UID: "84a5bf6e-ec6d-457f-a76f-5566a8f4f2f8"). InnerVolumeSpecName "nova-cell1-compute-config-1". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 04:36:04 crc kubenswrapper[4827]: I0131 04:36:04.790504 4827 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/84a5bf6e-ec6d-457f-a76f-5566a8f4f2f8-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 31 04:36:04 crc kubenswrapper[4827]: I0131 04:36:04.790547 4827 reconciler_common.go:293] "Volume detached for volume \"ceph-nova-0\" (UniqueName: \"kubernetes.io/configmap/84a5bf6e-ec6d-457f-a76f-5566a8f4f2f8-ceph-nova-0\") on node \"crc\" DevicePath \"\"" Jan 31 04:36:04 crc kubenswrapper[4827]: I0131 04:36:04.790563 4827 reconciler_common.go:293] "Volume detached for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/84a5bf6e-ec6d-457f-a76f-5566a8f4f2f8-nova-cell1-compute-config-1\") on node \"crc\" DevicePath \"\"" Jan 31 04:36:04 crc kubenswrapper[4827]: I0131 04:36:04.790575 4827 reconciler_common.go:293] "Volume detached for volume \"nova-migration-ssh-key-0\" (UniqueName: 
\"kubernetes.io/secret/84a5bf6e-ec6d-457f-a76f-5566a8f4f2f8-nova-migration-ssh-key-0\") on node \"crc\" DevicePath \"\"" Jan 31 04:36:04 crc kubenswrapper[4827]: I0131 04:36:04.790589 4827 reconciler_common.go:293] "Volume detached for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/84a5bf6e-ec6d-457f-a76f-5566a8f4f2f8-nova-extra-config-0\") on node \"crc\" DevicePath \"\"" Jan 31 04:36:04 crc kubenswrapper[4827]: I0131 04:36:04.790602 4827 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/84a5bf6e-ec6d-457f-a76f-5566a8f4f2f8-inventory\") on node \"crc\" DevicePath \"\"" Jan 31 04:36:04 crc kubenswrapper[4827]: I0131 04:36:04.920843 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-rb9w9" event={"ID":"84a5bf6e-ec6d-457f-a76f-5566a8f4f2f8","Type":"ContainerDied","Data":"c7bdeb740b6233bfb90a0378f212e75bf0059530a6f5909fb044144fce389bb7"} Jan 31 04:36:04 crc kubenswrapper[4827]: I0131 04:36:04.920893 4827 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c7bdeb740b6233bfb90a0378f212e75bf0059530a6f5909fb044144fce389bb7" Jan 31 04:36:04 crc kubenswrapper[4827]: I0131 04:36:04.920926 4827 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-rb9w9" Jan 31 04:36:16 crc kubenswrapper[4827]: I0131 04:36:16.110006 4827 scope.go:117] "RemoveContainer" containerID="b7f195383b718c3f3a1651e752e04a9c322b656ce5c7aba0c9ad40630165c57d" Jan 31 04:36:16 crc kubenswrapper[4827]: E0131 04:36:16.111035 4827 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jxh94_openshift-machine-config-operator(e63dbb73-e1a2-4796-83c5-2a88e55566b5)\"" pod="openshift-machine-config-operator/machine-config-daemon-jxh94" podUID="e63dbb73-e1a2-4796-83c5-2a88e55566b5" Jan 31 04:36:17 crc kubenswrapper[4827]: I0131 04:36:17.957176 4827 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-volume-volume1-0"] Jan 31 04:36:17 crc kubenswrapper[4827]: E0131 04:36:17.957711 4827 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2898e4d3-09a6-41a1-8d41-bd0fae99a4c6" containerName="registry-server" Jan 31 04:36:17 crc kubenswrapper[4827]: I0131 04:36:17.957724 4827 state_mem.go:107] "Deleted CPUSet assignment" podUID="2898e4d3-09a6-41a1-8d41-bd0fae99a4c6" containerName="registry-server" Jan 31 04:36:17 crc kubenswrapper[4827]: E0131 04:36:17.957741 4827 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="84a5bf6e-ec6d-457f-a76f-5566a8f4f2f8" containerName="nova-custom-ceph-edpm-deployment-openstack-edpm-ipam" Jan 31 04:36:17 crc kubenswrapper[4827]: I0131 04:36:17.957749 4827 state_mem.go:107] "Deleted CPUSet assignment" podUID="84a5bf6e-ec6d-457f-a76f-5566a8f4f2f8" containerName="nova-custom-ceph-edpm-deployment-openstack-edpm-ipam" Jan 31 04:36:17 crc kubenswrapper[4827]: E0131 04:36:17.957763 4827 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2898e4d3-09a6-41a1-8d41-bd0fae99a4c6" 
containerName="extract-utilities" Jan 31 04:36:17 crc kubenswrapper[4827]: I0131 04:36:17.957768 4827 state_mem.go:107] "Deleted CPUSet assignment" podUID="2898e4d3-09a6-41a1-8d41-bd0fae99a4c6" containerName="extract-utilities" Jan 31 04:36:17 crc kubenswrapper[4827]: E0131 04:36:17.957777 4827 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6079b594-a263-4e4b-a6da-b3cbcd56a091" containerName="extract-content" Jan 31 04:36:17 crc kubenswrapper[4827]: I0131 04:36:17.957783 4827 state_mem.go:107] "Deleted CPUSet assignment" podUID="6079b594-a263-4e4b-a6da-b3cbcd56a091" containerName="extract-content" Jan 31 04:36:17 crc kubenswrapper[4827]: E0131 04:36:17.957792 4827 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2898e4d3-09a6-41a1-8d41-bd0fae99a4c6" containerName="extract-content" Jan 31 04:36:17 crc kubenswrapper[4827]: I0131 04:36:17.957797 4827 state_mem.go:107] "Deleted CPUSet assignment" podUID="2898e4d3-09a6-41a1-8d41-bd0fae99a4c6" containerName="extract-content" Jan 31 04:36:17 crc kubenswrapper[4827]: E0131 04:36:17.957808 4827 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6079b594-a263-4e4b-a6da-b3cbcd56a091" containerName="extract-utilities" Jan 31 04:36:17 crc kubenswrapper[4827]: I0131 04:36:17.957813 4827 state_mem.go:107] "Deleted CPUSet assignment" podUID="6079b594-a263-4e4b-a6da-b3cbcd56a091" containerName="extract-utilities" Jan 31 04:36:17 crc kubenswrapper[4827]: E0131 04:36:17.957826 4827 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6079b594-a263-4e4b-a6da-b3cbcd56a091" containerName="registry-server" Jan 31 04:36:17 crc kubenswrapper[4827]: I0131 04:36:17.957831 4827 state_mem.go:107] "Deleted CPUSet assignment" podUID="6079b594-a263-4e4b-a6da-b3cbcd56a091" containerName="registry-server" Jan 31 04:36:17 crc kubenswrapper[4827]: I0131 04:36:17.958003 4827 memory_manager.go:354] "RemoveStaleState removing state" podUID="2898e4d3-09a6-41a1-8d41-bd0fae99a4c6" 
containerName="registry-server" Jan 31 04:36:17 crc kubenswrapper[4827]: I0131 04:36:17.958016 4827 memory_manager.go:354] "RemoveStaleState removing state" podUID="84a5bf6e-ec6d-457f-a76f-5566a8f4f2f8" containerName="nova-custom-ceph-edpm-deployment-openstack-edpm-ipam" Jan 31 04:36:17 crc kubenswrapper[4827]: I0131 04:36:17.958027 4827 memory_manager.go:354] "RemoveStaleState removing state" podUID="6079b594-a263-4e4b-a6da-b3cbcd56a091" containerName="registry-server" Jan 31 04:36:17 crc kubenswrapper[4827]: I0131 04:36:17.958869 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-volume-volume1-0" Jan 31 04:36:17 crc kubenswrapper[4827]: I0131 04:36:17.964180 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceph-conf-files" Jan 31 04:36:17 crc kubenswrapper[4827]: I0131 04:36:17.965397 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-volume-volume1-config-data" Jan 31 04:36:17 crc kubenswrapper[4827]: I0131 04:36:17.968563 4827 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-backup-0"] Jan 31 04:36:17 crc kubenswrapper[4827]: I0131 04:36:17.970381 4827 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-backup-0" Jan 31 04:36:17 crc kubenswrapper[4827]: I0131 04:36:17.975354 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-backup-config-data" Jan 31 04:36:17 crc kubenswrapper[4827]: I0131 04:36:17.980946 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-volume-volume1-0"] Jan 31 04:36:17 crc kubenswrapper[4827]: I0131 04:36:17.987940 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-backup-0"] Jan 31 04:36:18 crc kubenswrapper[4827]: I0131 04:36:18.014152 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/7c359669-c94b-42d4-9b63-de6d4812e598-lib-modules\") pod \"cinder-volume-volume1-0\" (UID: \"7c359669-c94b-42d4-9b63-de6d4812e598\") " pod="openstack/cinder-volume-volume1-0" Jan 31 04:36:18 crc kubenswrapper[4827]: I0131 04:36:18.014205 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/7c359669-c94b-42d4-9b63-de6d4812e598-etc-nvme\") pod \"cinder-volume-volume1-0\" (UID: \"7c359669-c94b-42d4-9b63-de6d4812e598\") " pod="openstack/cinder-volume-volume1-0" Jan 31 04:36:18 crc kubenswrapper[4827]: I0131 04:36:18.014235 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/ceec568e-c3e2-4b44-b2f9-b90d9334667f-var-lib-cinder\") pod \"cinder-backup-0\" (UID: \"ceec568e-c3e2-4b44-b2f9-b90d9334667f\") " pod="openstack/cinder-backup-0" Jan 31 04:36:18 crc kubenswrapper[4827]: I0131 04:36:18.014254 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7c359669-c94b-42d4-9b63-de6d4812e598-config-data\") pod \"cinder-volume-volume1-0\" 
(UID: \"7c359669-c94b-42d4-9b63-de6d4812e598\") " pod="openstack/cinder-volume-volume1-0" Jan 31 04:36:18 crc kubenswrapper[4827]: I0131 04:36:18.014284 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/ceec568e-c3e2-4b44-b2f9-b90d9334667f-var-locks-cinder\") pod \"cinder-backup-0\" (UID: \"ceec568e-c3e2-4b44-b2f9-b90d9334667f\") " pod="openstack/cinder-backup-0" Jan 31 04:36:18 crc kubenswrapper[4827]: I0131 04:36:18.014307 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/ceec568e-c3e2-4b44-b2f9-b90d9334667f-config-data-custom\") pod \"cinder-backup-0\" (UID: \"ceec568e-c3e2-4b44-b2f9-b90d9334667f\") " pod="openstack/cinder-backup-0" Jan 31 04:36:18 crc kubenswrapper[4827]: I0131 04:36:18.014327 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/7c359669-c94b-42d4-9b63-de6d4812e598-ceph\") pod \"cinder-volume-volume1-0\" (UID: \"7c359669-c94b-42d4-9b63-de6d4812e598\") " pod="openstack/cinder-volume-volume1-0" Jan 31 04:36:18 crc kubenswrapper[4827]: I0131 04:36:18.014349 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/7c359669-c94b-42d4-9b63-de6d4812e598-var-locks-brick\") pod \"cinder-volume-volume1-0\" (UID: \"7c359669-c94b-42d4-9b63-de6d4812e598\") " pod="openstack/cinder-volume-volume1-0" Jan 31 04:36:18 crc kubenswrapper[4827]: I0131 04:36:18.014369 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/7c359669-c94b-42d4-9b63-de6d4812e598-var-locks-cinder\") pod \"cinder-volume-volume1-0\" (UID: 
\"7c359669-c94b-42d4-9b63-de6d4812e598\") " pod="openstack/cinder-volume-volume1-0" Jan 31 04:36:18 crc kubenswrapper[4827]: I0131 04:36:18.014402 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/7c359669-c94b-42d4-9b63-de6d4812e598-dev\") pod \"cinder-volume-volume1-0\" (UID: \"7c359669-c94b-42d4-9b63-de6d4812e598\") " pod="openstack/cinder-volume-volume1-0" Jan 31 04:36:18 crc kubenswrapper[4827]: I0131 04:36:18.014430 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/7c359669-c94b-42d4-9b63-de6d4812e598-run\") pod \"cinder-volume-volume1-0\" (UID: \"7c359669-c94b-42d4-9b63-de6d4812e598\") " pod="openstack/cinder-volume-volume1-0" Jan 31 04:36:18 crc kubenswrapper[4827]: I0131 04:36:18.014449 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/ceec568e-c3e2-4b44-b2f9-b90d9334667f-run\") pod \"cinder-backup-0\" (UID: \"ceec568e-c3e2-4b44-b2f9-b90d9334667f\") " pod="openstack/cinder-backup-0" Jan 31 04:36:18 crc kubenswrapper[4827]: I0131 04:36:18.014479 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/ceec568e-c3e2-4b44-b2f9-b90d9334667f-ceph\") pod \"cinder-backup-0\" (UID: \"ceec568e-c3e2-4b44-b2f9-b90d9334667f\") " pod="openstack/cinder-backup-0" Jan 31 04:36:18 crc kubenswrapper[4827]: I0131 04:36:18.014504 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-khhw9\" (UniqueName: \"kubernetes.io/projected/7c359669-c94b-42d4-9b63-de6d4812e598-kube-api-access-khhw9\") pod \"cinder-volume-volume1-0\" (UID: \"7c359669-c94b-42d4-9b63-de6d4812e598\") " pod="openstack/cinder-volume-volume1-0" Jan 31 04:36:18 crc 
kubenswrapper[4827]: I0131 04:36:18.014526 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/ceec568e-c3e2-4b44-b2f9-b90d9334667f-sys\") pod \"cinder-backup-0\" (UID: \"ceec568e-c3e2-4b44-b2f9-b90d9334667f\") " pod="openstack/cinder-backup-0" Jan 31 04:36:18 crc kubenswrapper[4827]: I0131 04:36:18.014547 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/ceec568e-c3e2-4b44-b2f9-b90d9334667f-etc-nvme\") pod \"cinder-backup-0\" (UID: \"ceec568e-c3e2-4b44-b2f9-b90d9334667f\") " pod="openstack/cinder-backup-0" Jan 31 04:36:18 crc kubenswrapper[4827]: I0131 04:36:18.014565 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/ceec568e-c3e2-4b44-b2f9-b90d9334667f-etc-iscsi\") pod \"cinder-backup-0\" (UID: \"ceec568e-c3e2-4b44-b2f9-b90d9334667f\") " pod="openstack/cinder-backup-0" Jan 31 04:36:18 crc kubenswrapper[4827]: I0131 04:36:18.014596 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/ceec568e-c3e2-4b44-b2f9-b90d9334667f-dev\") pod \"cinder-backup-0\" (UID: \"ceec568e-c3e2-4b44-b2f9-b90d9334667f\") " pod="openstack/cinder-backup-0" Jan 31 04:36:18 crc kubenswrapper[4827]: I0131 04:36:18.014638 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5rvl6\" (UniqueName: \"kubernetes.io/projected/ceec568e-c3e2-4b44-b2f9-b90d9334667f-kube-api-access-5rvl6\") pod \"cinder-backup-0\" (UID: \"ceec568e-c3e2-4b44-b2f9-b90d9334667f\") " pod="openstack/cinder-backup-0" Jan 31 04:36:18 crc kubenswrapper[4827]: I0131 04:36:18.014661 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ceec568e-c3e2-4b44-b2f9-b90d9334667f-scripts\") pod \"cinder-backup-0\" (UID: \"ceec568e-c3e2-4b44-b2f9-b90d9334667f\") " pod="openstack/cinder-backup-0" Jan 31 04:36:18 crc kubenswrapper[4827]: I0131 04:36:18.014682 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/7c359669-c94b-42d4-9b63-de6d4812e598-etc-iscsi\") pod \"cinder-volume-volume1-0\" (UID: \"7c359669-c94b-42d4-9b63-de6d4812e598\") " pod="openstack/cinder-volume-volume1-0" Jan 31 04:36:18 crc kubenswrapper[4827]: I0131 04:36:18.014707 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/7c359669-c94b-42d4-9b63-de6d4812e598-sys\") pod \"cinder-volume-volume1-0\" (UID: \"7c359669-c94b-42d4-9b63-de6d4812e598\") " pod="openstack/cinder-volume-volume1-0" Jan 31 04:36:18 crc kubenswrapper[4827]: I0131 04:36:18.014734 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/7c359669-c94b-42d4-9b63-de6d4812e598-var-lib-cinder\") pod \"cinder-volume-volume1-0\" (UID: \"7c359669-c94b-42d4-9b63-de6d4812e598\") " pod="openstack/cinder-volume-volume1-0" Jan 31 04:36:18 crc kubenswrapper[4827]: I0131 04:36:18.014756 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ceec568e-c3e2-4b44-b2f9-b90d9334667f-config-data\") pod \"cinder-backup-0\" (UID: \"ceec568e-c3e2-4b44-b2f9-b90d9334667f\") " pod="openstack/cinder-backup-0" Jan 31 04:36:18 crc kubenswrapper[4827]: I0131 04:36:18.014780 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/7c359669-c94b-42d4-9b63-de6d4812e598-scripts\") pod \"cinder-volume-volume1-0\" (UID: \"7c359669-c94b-42d4-9b63-de6d4812e598\") " pod="openstack/cinder-volume-volume1-0" Jan 31 04:36:18 crc kubenswrapper[4827]: I0131 04:36:18.014813 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/7c359669-c94b-42d4-9b63-de6d4812e598-config-data-custom\") pod \"cinder-volume-volume1-0\" (UID: \"7c359669-c94b-42d4-9b63-de6d4812e598\") " pod="openstack/cinder-volume-volume1-0" Jan 31 04:36:18 crc kubenswrapper[4827]: I0131 04:36:18.014840 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7c359669-c94b-42d4-9b63-de6d4812e598-combined-ca-bundle\") pod \"cinder-volume-volume1-0\" (UID: \"7c359669-c94b-42d4-9b63-de6d4812e598\") " pod="openstack/cinder-volume-volume1-0" Jan 31 04:36:18 crc kubenswrapper[4827]: I0131 04:36:18.014862 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/ceec568e-c3e2-4b44-b2f9-b90d9334667f-etc-machine-id\") pod \"cinder-backup-0\" (UID: \"ceec568e-c3e2-4b44-b2f9-b90d9334667f\") " pod="openstack/cinder-backup-0" Jan 31 04:36:18 crc kubenswrapper[4827]: I0131 04:36:18.014904 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ceec568e-c3e2-4b44-b2f9-b90d9334667f-combined-ca-bundle\") pod \"cinder-backup-0\" (UID: \"ceec568e-c3e2-4b44-b2f9-b90d9334667f\") " pod="openstack/cinder-backup-0" Jan 31 04:36:18 crc kubenswrapper[4827]: I0131 04:36:18.014940 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: 
\"kubernetes.io/host-path/7c359669-c94b-42d4-9b63-de6d4812e598-etc-machine-id\") pod \"cinder-volume-volume1-0\" (UID: \"7c359669-c94b-42d4-9b63-de6d4812e598\") " pod="openstack/cinder-volume-volume1-0" Jan 31 04:36:18 crc kubenswrapper[4827]: I0131 04:36:18.014967 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/ceec568e-c3e2-4b44-b2f9-b90d9334667f-lib-modules\") pod \"cinder-backup-0\" (UID: \"ceec568e-c3e2-4b44-b2f9-b90d9334667f\") " pod="openstack/cinder-backup-0" Jan 31 04:36:18 crc kubenswrapper[4827]: I0131 04:36:18.014989 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/ceec568e-c3e2-4b44-b2f9-b90d9334667f-var-locks-brick\") pod \"cinder-backup-0\" (UID: \"ceec568e-c3e2-4b44-b2f9-b90d9334667f\") " pod="openstack/cinder-backup-0" Jan 31 04:36:18 crc kubenswrapper[4827]: I0131 04:36:18.116133 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/7c359669-c94b-42d4-9b63-de6d4812e598-dev\") pod \"cinder-volume-volume1-0\" (UID: \"7c359669-c94b-42d4-9b63-de6d4812e598\") " pod="openstack/cinder-volume-volume1-0" Jan 31 04:36:18 crc kubenswrapper[4827]: I0131 04:36:18.116404 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/ceec568e-c3e2-4b44-b2f9-b90d9334667f-run\") pod \"cinder-backup-0\" (UID: \"ceec568e-c3e2-4b44-b2f9-b90d9334667f\") " pod="openstack/cinder-backup-0" Jan 31 04:36:18 crc kubenswrapper[4827]: I0131 04:36:18.116469 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run\" (UniqueName: \"kubernetes.io/host-path/ceec568e-c3e2-4b44-b2f9-b90d9334667f-run\") pod \"cinder-backup-0\" (UID: \"ceec568e-c3e2-4b44-b2f9-b90d9334667f\") " pod="openstack/cinder-backup-0" Jan 
31 04:36:18 crc kubenswrapper[4827]: I0131 04:36:18.116249 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/7c359669-c94b-42d4-9b63-de6d4812e598-dev\") pod \"cinder-volume-volume1-0\" (UID: \"7c359669-c94b-42d4-9b63-de6d4812e598\") " pod="openstack/cinder-volume-volume1-0" Jan 31 04:36:18 crc kubenswrapper[4827]: I0131 04:36:18.116481 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/7c359669-c94b-42d4-9b63-de6d4812e598-run\") pod \"cinder-volume-volume1-0\" (UID: \"7c359669-c94b-42d4-9b63-de6d4812e598\") " pod="openstack/cinder-volume-volume1-0" Jan 31 04:36:18 crc kubenswrapper[4827]: I0131 04:36:18.116610 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run\" (UniqueName: \"kubernetes.io/host-path/7c359669-c94b-42d4-9b63-de6d4812e598-run\") pod \"cinder-volume-volume1-0\" (UID: \"7c359669-c94b-42d4-9b63-de6d4812e598\") " pod="openstack/cinder-volume-volume1-0" Jan 31 04:36:18 crc kubenswrapper[4827]: I0131 04:36:18.116633 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/ceec568e-c3e2-4b44-b2f9-b90d9334667f-ceph\") pod \"cinder-backup-0\" (UID: \"ceec568e-c3e2-4b44-b2f9-b90d9334667f\") " pod="openstack/cinder-backup-0" Jan 31 04:36:18 crc kubenswrapper[4827]: I0131 04:36:18.116657 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-khhw9\" (UniqueName: \"kubernetes.io/projected/7c359669-c94b-42d4-9b63-de6d4812e598-kube-api-access-khhw9\") pod \"cinder-volume-volume1-0\" (UID: \"7c359669-c94b-42d4-9b63-de6d4812e598\") " pod="openstack/cinder-volume-volume1-0" Jan 31 04:36:18 crc kubenswrapper[4827]: I0131 04:36:18.116676 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sys\" (UniqueName: 
\"kubernetes.io/host-path/ceec568e-c3e2-4b44-b2f9-b90d9334667f-sys\") pod \"cinder-backup-0\" (UID: \"ceec568e-c3e2-4b44-b2f9-b90d9334667f\") " pod="openstack/cinder-backup-0" Jan 31 04:36:18 crc kubenswrapper[4827]: I0131 04:36:18.116693 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/ceec568e-c3e2-4b44-b2f9-b90d9334667f-etc-iscsi\") pod \"cinder-backup-0\" (UID: \"ceec568e-c3e2-4b44-b2f9-b90d9334667f\") " pod="openstack/cinder-backup-0" Jan 31 04:36:18 crc kubenswrapper[4827]: I0131 04:36:18.116709 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/ceec568e-c3e2-4b44-b2f9-b90d9334667f-etc-nvme\") pod \"cinder-backup-0\" (UID: \"ceec568e-c3e2-4b44-b2f9-b90d9334667f\") " pod="openstack/cinder-backup-0" Jan 31 04:36:18 crc kubenswrapper[4827]: I0131 04:36:18.116737 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/ceec568e-c3e2-4b44-b2f9-b90d9334667f-dev\") pod \"cinder-backup-0\" (UID: \"ceec568e-c3e2-4b44-b2f9-b90d9334667f\") " pod="openstack/cinder-backup-0" Jan 31 04:36:18 crc kubenswrapper[4827]: I0131 04:36:18.116771 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5rvl6\" (UniqueName: \"kubernetes.io/projected/ceec568e-c3e2-4b44-b2f9-b90d9334667f-kube-api-access-5rvl6\") pod \"cinder-backup-0\" (UID: \"ceec568e-c3e2-4b44-b2f9-b90d9334667f\") " pod="openstack/cinder-backup-0" Jan 31 04:36:18 crc kubenswrapper[4827]: I0131 04:36:18.116786 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ceec568e-c3e2-4b44-b2f9-b90d9334667f-scripts\") pod \"cinder-backup-0\" (UID: \"ceec568e-c3e2-4b44-b2f9-b90d9334667f\") " pod="openstack/cinder-backup-0" Jan 31 04:36:18 crc kubenswrapper[4827]: I0131 
04:36:18.116802 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/7c359669-c94b-42d4-9b63-de6d4812e598-etc-iscsi\") pod \"cinder-volume-volume1-0\" (UID: \"7c359669-c94b-42d4-9b63-de6d4812e598\") " pod="openstack/cinder-volume-volume1-0" Jan 31 04:36:18 crc kubenswrapper[4827]: I0131 04:36:18.116804 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/ceec568e-c3e2-4b44-b2f9-b90d9334667f-etc-iscsi\") pod \"cinder-backup-0\" (UID: \"ceec568e-c3e2-4b44-b2f9-b90d9334667f\") " pod="openstack/cinder-backup-0" Jan 31 04:36:18 crc kubenswrapper[4827]: I0131 04:36:18.116826 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/7c359669-c94b-42d4-9b63-de6d4812e598-sys\") pod \"cinder-volume-volume1-0\" (UID: \"7c359669-c94b-42d4-9b63-de6d4812e598\") " pod="openstack/cinder-volume-volume1-0" Jan 31 04:36:18 crc kubenswrapper[4827]: I0131 04:36:18.116848 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/7c359669-c94b-42d4-9b63-de6d4812e598-var-lib-cinder\") pod \"cinder-volume-volume1-0\" (UID: \"7c359669-c94b-42d4-9b63-de6d4812e598\") " pod="openstack/cinder-volume-volume1-0" Jan 31 04:36:18 crc kubenswrapper[4827]: I0131 04:36:18.116858 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/7c359669-c94b-42d4-9b63-de6d4812e598-etc-iscsi\") pod \"cinder-volume-volume1-0\" (UID: \"7c359669-c94b-42d4-9b63-de6d4812e598\") " pod="openstack/cinder-volume-volume1-0" Jan 31 04:36:18 crc kubenswrapper[4827]: I0131 04:36:18.116858 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/ceec568e-c3e2-4b44-b2f9-b90d9334667f-sys\") pod 
\"cinder-backup-0\" (UID: \"ceec568e-c3e2-4b44-b2f9-b90d9334667f\") " pod="openstack/cinder-backup-0" Jan 31 04:36:18 crc kubenswrapper[4827]: I0131 04:36:18.116876 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/ceec568e-c3e2-4b44-b2f9-b90d9334667f-etc-nvme\") pod \"cinder-backup-0\" (UID: \"ceec568e-c3e2-4b44-b2f9-b90d9334667f\") " pod="openstack/cinder-backup-0" Jan 31 04:36:18 crc kubenswrapper[4827]: I0131 04:36:18.116942 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/7c359669-c94b-42d4-9b63-de6d4812e598-sys\") pod \"cinder-volume-volume1-0\" (UID: \"7c359669-c94b-42d4-9b63-de6d4812e598\") " pod="openstack/cinder-volume-volume1-0" Jan 31 04:36:18 crc kubenswrapper[4827]: I0131 04:36:18.116989 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/ceec568e-c3e2-4b44-b2f9-b90d9334667f-dev\") pod \"cinder-backup-0\" (UID: \"ceec568e-c3e2-4b44-b2f9-b90d9334667f\") " pod="openstack/cinder-backup-0" Jan 31 04:36:18 crc kubenswrapper[4827]: I0131 04:36:18.116866 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ceec568e-c3e2-4b44-b2f9-b90d9334667f-config-data\") pod \"cinder-backup-0\" (UID: \"ceec568e-c3e2-4b44-b2f9-b90d9334667f\") " pod="openstack/cinder-backup-0" Jan 31 04:36:18 crc kubenswrapper[4827]: I0131 04:36:18.117060 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7c359669-c94b-42d4-9b63-de6d4812e598-scripts\") pod \"cinder-volume-volume1-0\" (UID: \"7c359669-c94b-42d4-9b63-de6d4812e598\") " pod="openstack/cinder-volume-volume1-0" Jan 31 04:36:18 crc kubenswrapper[4827]: I0131 04:36:18.117116 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"config-data-custom\" (UniqueName: \"kubernetes.io/secret/7c359669-c94b-42d4-9b63-de6d4812e598-config-data-custom\") pod \"cinder-volume-volume1-0\" (UID: \"7c359669-c94b-42d4-9b63-de6d4812e598\") " pod="openstack/cinder-volume-volume1-0" Jan 31 04:36:18 crc kubenswrapper[4827]: I0131 04:36:18.117133 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/7c359669-c94b-42d4-9b63-de6d4812e598-var-lib-cinder\") pod \"cinder-volume-volume1-0\" (UID: \"7c359669-c94b-42d4-9b63-de6d4812e598\") " pod="openstack/cinder-volume-volume1-0" Jan 31 04:36:18 crc kubenswrapper[4827]: I0131 04:36:18.117159 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7c359669-c94b-42d4-9b63-de6d4812e598-combined-ca-bundle\") pod \"cinder-volume-volume1-0\" (UID: \"7c359669-c94b-42d4-9b63-de6d4812e598\") " pod="openstack/cinder-volume-volume1-0" Jan 31 04:36:18 crc kubenswrapper[4827]: I0131 04:36:18.117205 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/ceec568e-c3e2-4b44-b2f9-b90d9334667f-etc-machine-id\") pod \"cinder-backup-0\" (UID: \"ceec568e-c3e2-4b44-b2f9-b90d9334667f\") " pod="openstack/cinder-backup-0" Jan 31 04:36:18 crc kubenswrapper[4827]: I0131 04:36:18.117238 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ceec568e-c3e2-4b44-b2f9-b90d9334667f-combined-ca-bundle\") pod \"cinder-backup-0\" (UID: \"ceec568e-c3e2-4b44-b2f9-b90d9334667f\") " pod="openstack/cinder-backup-0" Jan 31 04:36:18 crc kubenswrapper[4827]: I0131 04:36:18.117284 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/7c359669-c94b-42d4-9b63-de6d4812e598-etc-machine-id\") pod 
\"cinder-volume-volume1-0\" (UID: \"7c359669-c94b-42d4-9b63-de6d4812e598\") " pod="openstack/cinder-volume-volume1-0" Jan 31 04:36:18 crc kubenswrapper[4827]: I0131 04:36:18.117321 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/ceec568e-c3e2-4b44-b2f9-b90d9334667f-lib-modules\") pod \"cinder-backup-0\" (UID: \"ceec568e-c3e2-4b44-b2f9-b90d9334667f\") " pod="openstack/cinder-backup-0" Jan 31 04:36:18 crc kubenswrapper[4827]: I0131 04:36:18.117328 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/ceec568e-c3e2-4b44-b2f9-b90d9334667f-etc-machine-id\") pod \"cinder-backup-0\" (UID: \"ceec568e-c3e2-4b44-b2f9-b90d9334667f\") " pod="openstack/cinder-backup-0" Jan 31 04:36:18 crc kubenswrapper[4827]: I0131 04:36:18.117345 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/ceec568e-c3e2-4b44-b2f9-b90d9334667f-var-locks-brick\") pod \"cinder-backup-0\" (UID: \"ceec568e-c3e2-4b44-b2f9-b90d9334667f\") " pod="openstack/cinder-backup-0" Jan 31 04:36:18 crc kubenswrapper[4827]: I0131 04:36:18.117374 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/7c359669-c94b-42d4-9b63-de6d4812e598-etc-machine-id\") pod \"cinder-volume-volume1-0\" (UID: \"7c359669-c94b-42d4-9b63-de6d4812e598\") " pod="openstack/cinder-volume-volume1-0" Jan 31 04:36:18 crc kubenswrapper[4827]: I0131 04:36:18.117387 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/7c359669-c94b-42d4-9b63-de6d4812e598-lib-modules\") pod \"cinder-volume-volume1-0\" (UID: \"7c359669-c94b-42d4-9b63-de6d4812e598\") " pod="openstack/cinder-volume-volume1-0" Jan 31 04:36:18 crc kubenswrapper[4827]: I0131 
04:36:18.117410 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/ceec568e-c3e2-4b44-b2f9-b90d9334667f-lib-modules\") pod \"cinder-backup-0\" (UID: \"ceec568e-c3e2-4b44-b2f9-b90d9334667f\") " pod="openstack/cinder-backup-0" Jan 31 04:36:18 crc kubenswrapper[4827]: I0131 04:36:18.117428 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/7c359669-c94b-42d4-9b63-de6d4812e598-etc-nvme\") pod \"cinder-volume-volume1-0\" (UID: \"7c359669-c94b-42d4-9b63-de6d4812e598\") " pod="openstack/cinder-volume-volume1-0" Jan 31 04:36:18 crc kubenswrapper[4827]: I0131 04:36:18.117449 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/ceec568e-c3e2-4b44-b2f9-b90d9334667f-var-lib-cinder\") pod \"cinder-backup-0\" (UID: \"ceec568e-c3e2-4b44-b2f9-b90d9334667f\") " pod="openstack/cinder-backup-0" Jan 31 04:36:18 crc kubenswrapper[4827]: I0131 04:36:18.117466 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7c359669-c94b-42d4-9b63-de6d4812e598-config-data\") pod \"cinder-volume-volume1-0\" (UID: \"7c359669-c94b-42d4-9b63-de6d4812e598\") " pod="openstack/cinder-volume-volume1-0" Jan 31 04:36:18 crc kubenswrapper[4827]: I0131 04:36:18.117495 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/ceec568e-c3e2-4b44-b2f9-b90d9334667f-var-locks-cinder\") pod \"cinder-backup-0\" (UID: \"ceec568e-c3e2-4b44-b2f9-b90d9334667f\") " pod="openstack/cinder-backup-0" Jan 31 04:36:18 crc kubenswrapper[4827]: I0131 04:36:18.117518 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: 
\"kubernetes.io/secret/ceec568e-c3e2-4b44-b2f9-b90d9334667f-config-data-custom\") pod \"cinder-backup-0\" (UID: \"ceec568e-c3e2-4b44-b2f9-b90d9334667f\") " pod="openstack/cinder-backup-0" Jan 31 04:36:18 crc kubenswrapper[4827]: I0131 04:36:18.117534 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/7c359669-c94b-42d4-9b63-de6d4812e598-ceph\") pod \"cinder-volume-volume1-0\" (UID: \"7c359669-c94b-42d4-9b63-de6d4812e598\") " pod="openstack/cinder-volume-volume1-0" Jan 31 04:36:18 crc kubenswrapper[4827]: I0131 04:36:18.117551 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/7c359669-c94b-42d4-9b63-de6d4812e598-var-locks-brick\") pod \"cinder-volume-volume1-0\" (UID: \"7c359669-c94b-42d4-9b63-de6d4812e598\") " pod="openstack/cinder-volume-volume1-0" Jan 31 04:36:18 crc kubenswrapper[4827]: I0131 04:36:18.117568 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/7c359669-c94b-42d4-9b63-de6d4812e598-var-locks-cinder\") pod \"cinder-volume-volume1-0\" (UID: \"7c359669-c94b-42d4-9b63-de6d4812e598\") " pod="openstack/cinder-volume-volume1-0" Jan 31 04:36:18 crc kubenswrapper[4827]: I0131 04:36:18.117611 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/ceec568e-c3e2-4b44-b2f9-b90d9334667f-var-locks-brick\") pod \"cinder-backup-0\" (UID: \"ceec568e-c3e2-4b44-b2f9-b90d9334667f\") " pod="openstack/cinder-backup-0" Jan 31 04:36:18 crc kubenswrapper[4827]: I0131 04:36:18.117700 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/ceec568e-c3e2-4b44-b2f9-b90d9334667f-var-lib-cinder\") pod \"cinder-backup-0\" (UID: \"ceec568e-c3e2-4b44-b2f9-b90d9334667f\") " 
pod="openstack/cinder-backup-0" Jan 31 04:36:18 crc kubenswrapper[4827]: I0131 04:36:18.117705 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/7c359669-c94b-42d4-9b63-de6d4812e598-var-locks-brick\") pod \"cinder-volume-volume1-0\" (UID: \"7c359669-c94b-42d4-9b63-de6d4812e598\") " pod="openstack/cinder-volume-volume1-0" Jan 31 04:36:18 crc kubenswrapper[4827]: I0131 04:36:18.117783 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/7c359669-c94b-42d4-9b63-de6d4812e598-var-locks-cinder\") pod \"cinder-volume-volume1-0\" (UID: \"7c359669-c94b-42d4-9b63-de6d4812e598\") " pod="openstack/cinder-volume-volume1-0" Jan 31 04:36:18 crc kubenswrapper[4827]: I0131 04:36:18.119376 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/7c359669-c94b-42d4-9b63-de6d4812e598-etc-nvme\") pod \"cinder-volume-volume1-0\" (UID: \"7c359669-c94b-42d4-9b63-de6d4812e598\") " pod="openstack/cinder-volume-volume1-0" Jan 31 04:36:18 crc kubenswrapper[4827]: I0131 04:36:18.119698 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/ceec568e-c3e2-4b44-b2f9-b90d9334667f-var-locks-cinder\") pod \"cinder-backup-0\" (UID: \"ceec568e-c3e2-4b44-b2f9-b90d9334667f\") " pod="openstack/cinder-backup-0" Jan 31 04:36:18 crc kubenswrapper[4827]: I0131 04:36:18.119854 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/7c359669-c94b-42d4-9b63-de6d4812e598-lib-modules\") pod \"cinder-volume-volume1-0\" (UID: \"7c359669-c94b-42d4-9b63-de6d4812e598\") " pod="openstack/cinder-volume-volume1-0" Jan 31 04:36:18 crc kubenswrapper[4827]: I0131 04:36:18.123381 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"scripts\" (UniqueName: \"kubernetes.io/secret/ceec568e-c3e2-4b44-b2f9-b90d9334667f-scripts\") pod \"cinder-backup-0\" (UID: \"ceec568e-c3e2-4b44-b2f9-b90d9334667f\") " pod="openstack/cinder-backup-0" Jan 31 04:36:18 crc kubenswrapper[4827]: I0131 04:36:18.124072 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/7c359669-c94b-42d4-9b63-de6d4812e598-config-data-custom\") pod \"cinder-volume-volume1-0\" (UID: \"7c359669-c94b-42d4-9b63-de6d4812e598\") " pod="openstack/cinder-volume-volume1-0" Jan 31 04:36:18 crc kubenswrapper[4827]: I0131 04:36:18.125095 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7c359669-c94b-42d4-9b63-de6d4812e598-combined-ca-bundle\") pod \"cinder-volume-volume1-0\" (UID: \"7c359669-c94b-42d4-9b63-de6d4812e598\") " pod="openstack/cinder-volume-volume1-0" Jan 31 04:36:18 crc kubenswrapper[4827]: I0131 04:36:18.125784 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ceec568e-c3e2-4b44-b2f9-b90d9334667f-config-data\") pod \"cinder-backup-0\" (UID: \"ceec568e-c3e2-4b44-b2f9-b90d9334667f\") " pod="openstack/cinder-backup-0" Jan 31 04:36:18 crc kubenswrapper[4827]: I0131 04:36:18.128610 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7c359669-c94b-42d4-9b63-de6d4812e598-config-data\") pod \"cinder-volume-volume1-0\" (UID: \"7c359669-c94b-42d4-9b63-de6d4812e598\") " pod="openstack/cinder-volume-volume1-0" Jan 31 04:36:18 crc kubenswrapper[4827]: I0131 04:36:18.131920 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/ceec568e-c3e2-4b44-b2f9-b90d9334667f-ceph\") pod \"cinder-backup-0\" (UID: \"ceec568e-c3e2-4b44-b2f9-b90d9334667f\") " pod="openstack/cinder-backup-0" 
Jan 31 04:36:18 crc kubenswrapper[4827]: I0131 04:36:18.134333 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ceec568e-c3e2-4b44-b2f9-b90d9334667f-combined-ca-bundle\") pod \"cinder-backup-0\" (UID: \"ceec568e-c3e2-4b44-b2f9-b90d9334667f\") " pod="openstack/cinder-backup-0" Jan 31 04:36:18 crc kubenswrapper[4827]: I0131 04:36:18.139094 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7c359669-c94b-42d4-9b63-de6d4812e598-scripts\") pod \"cinder-volume-volume1-0\" (UID: \"7c359669-c94b-42d4-9b63-de6d4812e598\") " pod="openstack/cinder-volume-volume1-0" Jan 31 04:36:18 crc kubenswrapper[4827]: I0131 04:36:18.139110 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/7c359669-c94b-42d4-9b63-de6d4812e598-ceph\") pod \"cinder-volume-volume1-0\" (UID: \"7c359669-c94b-42d4-9b63-de6d4812e598\") " pod="openstack/cinder-volume-volume1-0" Jan 31 04:36:18 crc kubenswrapper[4827]: I0131 04:36:18.141666 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-khhw9\" (UniqueName: \"kubernetes.io/projected/7c359669-c94b-42d4-9b63-de6d4812e598-kube-api-access-khhw9\") pod \"cinder-volume-volume1-0\" (UID: \"7c359669-c94b-42d4-9b63-de6d4812e598\") " pod="openstack/cinder-volume-volume1-0" Jan 31 04:36:18 crc kubenswrapper[4827]: I0131 04:36:18.150820 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/ceec568e-c3e2-4b44-b2f9-b90d9334667f-config-data-custom\") pod \"cinder-backup-0\" (UID: \"ceec568e-c3e2-4b44-b2f9-b90d9334667f\") " pod="openstack/cinder-backup-0" Jan 31 04:36:18 crc kubenswrapper[4827]: I0131 04:36:18.152420 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5rvl6\" (UniqueName: 
\"kubernetes.io/projected/ceec568e-c3e2-4b44-b2f9-b90d9334667f-kube-api-access-5rvl6\") pod \"cinder-backup-0\" (UID: \"ceec568e-c3e2-4b44-b2f9-b90d9334667f\") " pod="openstack/cinder-backup-0" Jan 31 04:36:18 crc kubenswrapper[4827]: I0131 04:36:18.282430 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-volume-volume1-0" Jan 31 04:36:18 crc kubenswrapper[4827]: I0131 04:36:18.299478 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-backup-0" Jan 31 04:36:18 crc kubenswrapper[4827]: I0131 04:36:18.409641 4827 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/manila-db-create-p5kdd"] Jan 31 04:36:18 crc kubenswrapper[4827]: I0131 04:36:18.410637 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/manila-db-create-p5kdd" Jan 31 04:36:18 crc kubenswrapper[4827]: I0131 04:36:18.420701 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/manila-db-create-p5kdd"] Jan 31 04:36:18 crc kubenswrapper[4827]: I0131 04:36:18.458143 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/735db063-6703-4f20-9d2f-aa79c7c56855-operator-scripts\") pod \"manila-db-create-p5kdd\" (UID: \"735db063-6703-4f20-9d2f-aa79c7c56855\") " pod="openstack/manila-db-create-p5kdd" Jan 31 04:36:18 crc kubenswrapper[4827]: I0131 04:36:18.458199 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-485x5\" (UniqueName: \"kubernetes.io/projected/735db063-6703-4f20-9d2f-aa79c7c56855-kube-api-access-485x5\") pod \"manila-db-create-p5kdd\" (UID: \"735db063-6703-4f20-9d2f-aa79c7c56855\") " pod="openstack/manila-db-create-p5kdd" Jan 31 04:36:18 crc kubenswrapper[4827]: I0131 04:36:18.546945 4827 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openstack/manila-b6b4-account-create-update-gpt9p"] Jan 31 04:36:18 crc kubenswrapper[4827]: I0131 04:36:18.548329 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/manila-b6b4-account-create-update-gpt9p" Jan 31 04:36:18 crc kubenswrapper[4827]: I0131 04:36:18.557313 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"manila-db-secret" Jan 31 04:36:18 crc kubenswrapper[4827]: I0131 04:36:18.564492 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/735db063-6703-4f20-9d2f-aa79c7c56855-operator-scripts\") pod \"manila-db-create-p5kdd\" (UID: \"735db063-6703-4f20-9d2f-aa79c7c56855\") " pod="openstack/manila-db-create-p5kdd" Jan 31 04:36:18 crc kubenswrapper[4827]: I0131 04:36:18.564547 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-485x5\" (UniqueName: \"kubernetes.io/projected/735db063-6703-4f20-9d2f-aa79c7c56855-kube-api-access-485x5\") pod \"manila-db-create-p5kdd\" (UID: \"735db063-6703-4f20-9d2f-aa79c7c56855\") " pod="openstack/manila-db-create-p5kdd" Jan 31 04:36:18 crc kubenswrapper[4827]: I0131 04:36:18.565564 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/735db063-6703-4f20-9d2f-aa79c7c56855-operator-scripts\") pod \"manila-db-create-p5kdd\" (UID: \"735db063-6703-4f20-9d2f-aa79c7c56855\") " pod="openstack/manila-db-create-p5kdd" Jan 31 04:36:18 crc kubenswrapper[4827]: I0131 04:36:18.577965 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/manila-b6b4-account-create-update-gpt9p"] Jan 31 04:36:18 crc kubenswrapper[4827]: I0131 04:36:18.600494 4827 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-5649566987-dn2gq"] Jan 31 04:36:18 crc kubenswrapper[4827]: I0131 04:36:18.603933 4827 util.go:30] "No sandbox for pod can be 
found. Need to start a new one" pod="openstack/horizon-5649566987-dn2gq" Jan 31 04:36:18 crc kubenswrapper[4827]: I0131 04:36:18.609188 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"horizon" Jan 31 04:36:18 crc kubenswrapper[4827]: I0131 04:36:18.609383 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"horizon-horizon-dockercfg-nrc9g" Jan 31 04:36:18 crc kubenswrapper[4827]: I0131 04:36:18.609442 4827 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"horizon-scripts" Jan 31 04:36:18 crc kubenswrapper[4827]: I0131 04:36:18.609609 4827 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"horizon-config-data" Jan 31 04:36:18 crc kubenswrapper[4827]: I0131 04:36:18.635294 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-5649566987-dn2gq"] Jan 31 04:36:18 crc kubenswrapper[4827]: I0131 04:36:18.646381 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-485x5\" (UniqueName: \"kubernetes.io/projected/735db063-6703-4f20-9d2f-aa79c7c56855-kube-api-access-485x5\") pod \"manila-db-create-p5kdd\" (UID: \"735db063-6703-4f20-9d2f-aa79c7c56855\") " pod="openstack/manila-db-create-p5kdd" Jan 31 04:36:18 crc kubenswrapper[4827]: I0131 04:36:18.666488 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e44c9e88-212e-4178-a1fb-ce9b1896d73f-logs\") pod \"horizon-5649566987-dn2gq\" (UID: \"e44c9e88-212e-4178-a1fb-ce9b1896d73f\") " pod="openstack/horizon-5649566987-dn2gq" Jan 31 04:36:18 crc kubenswrapper[4827]: I0131 04:36:18.666571 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jh7zs\" (UniqueName: \"kubernetes.io/projected/e44c9e88-212e-4178-a1fb-ce9b1896d73f-kube-api-access-jh7zs\") pod \"horizon-5649566987-dn2gq\" (UID: 
\"e44c9e88-212e-4178-a1fb-ce9b1896d73f\") " pod="openstack/horizon-5649566987-dn2gq" Jan 31 04:36:18 crc kubenswrapper[4827]: I0131 04:36:18.666594 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9lhcc\" (UniqueName: \"kubernetes.io/projected/b79385e0-f8b6-49d3-a1de-8a61ee7e52b4-kube-api-access-9lhcc\") pod \"manila-b6b4-account-create-update-gpt9p\" (UID: \"b79385e0-f8b6-49d3-a1de-8a61ee7e52b4\") " pod="openstack/manila-b6b4-account-create-update-gpt9p" Jan 31 04:36:18 crc kubenswrapper[4827]: I0131 04:36:18.666621 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/e44c9e88-212e-4178-a1fb-ce9b1896d73f-config-data\") pod \"horizon-5649566987-dn2gq\" (UID: \"e44c9e88-212e-4178-a1fb-ce9b1896d73f\") " pod="openstack/horizon-5649566987-dn2gq" Jan 31 04:36:18 crc kubenswrapper[4827]: I0131 04:36:18.666662 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b79385e0-f8b6-49d3-a1de-8a61ee7e52b4-operator-scripts\") pod \"manila-b6b4-account-create-update-gpt9p\" (UID: \"b79385e0-f8b6-49d3-a1de-8a61ee7e52b4\") " pod="openstack/manila-b6b4-account-create-update-gpt9p" Jan 31 04:36:18 crc kubenswrapper[4827]: I0131 04:36:18.666697 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/e44c9e88-212e-4178-a1fb-ce9b1896d73f-scripts\") pod \"horizon-5649566987-dn2gq\" (UID: \"e44c9e88-212e-4178-a1fb-ce9b1896d73f\") " pod="openstack/horizon-5649566987-dn2gq" Jan 31 04:36:18 crc kubenswrapper[4827]: I0131 04:36:18.666716 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: 
\"kubernetes.io/secret/e44c9e88-212e-4178-a1fb-ce9b1896d73f-horizon-secret-key\") pod \"horizon-5649566987-dn2gq\" (UID: \"e44c9e88-212e-4178-a1fb-ce9b1896d73f\") " pod="openstack/horizon-5649566987-dn2gq" Jan 31 04:36:18 crc kubenswrapper[4827]: I0131 04:36:18.740860 4827 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Jan 31 04:36:18 crc kubenswrapper[4827]: I0131 04:36:18.742427 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 31 04:36:18 crc kubenswrapper[4827]: I0131 04:36:18.752450 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-scripts" Jan 31 04:36:18 crc kubenswrapper[4827]: I0131 04:36:18.752745 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-m4mld" Jan 31 04:36:18 crc kubenswrapper[4827]: I0131 04:36:18.752921 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Jan 31 04:36:18 crc kubenswrapper[4827]: I0131 04:36:18.753052 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-public-svc" Jan 31 04:36:18 crc kubenswrapper[4827]: I0131 04:36:18.772273 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jh7zs\" (UniqueName: \"kubernetes.io/projected/e44c9e88-212e-4178-a1fb-ce9b1896d73f-kube-api-access-jh7zs\") pod \"horizon-5649566987-dn2gq\" (UID: \"e44c9e88-212e-4178-a1fb-ce9b1896d73f\") " pod="openstack/horizon-5649566987-dn2gq" Jan 31 04:36:18 crc kubenswrapper[4827]: I0131 04:36:18.772319 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9lhcc\" (UniqueName: \"kubernetes.io/projected/b79385e0-f8b6-49d3-a1de-8a61ee7e52b4-kube-api-access-9lhcc\") pod \"manila-b6b4-account-create-update-gpt9p\" (UID: 
\"b79385e0-f8b6-49d3-a1de-8a61ee7e52b4\") " pod="openstack/manila-b6b4-account-create-update-gpt9p" Jan 31 04:36:18 crc kubenswrapper[4827]: I0131 04:36:18.772352 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/e44c9e88-212e-4178-a1fb-ce9b1896d73f-config-data\") pod \"horizon-5649566987-dn2gq\" (UID: \"e44c9e88-212e-4178-a1fb-ce9b1896d73f\") " pod="openstack/horizon-5649566987-dn2gq" Jan 31 04:36:18 crc kubenswrapper[4827]: I0131 04:36:18.772392 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b79385e0-f8b6-49d3-a1de-8a61ee7e52b4-operator-scripts\") pod \"manila-b6b4-account-create-update-gpt9p\" (UID: \"b79385e0-f8b6-49d3-a1de-8a61ee7e52b4\") " pod="openstack/manila-b6b4-account-create-update-gpt9p" Jan 31 04:36:18 crc kubenswrapper[4827]: I0131 04:36:18.772430 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/e44c9e88-212e-4178-a1fb-ce9b1896d73f-scripts\") pod \"horizon-5649566987-dn2gq\" (UID: \"e44c9e88-212e-4178-a1fb-ce9b1896d73f\") " pod="openstack/horizon-5649566987-dn2gq" Jan 31 04:36:18 crc kubenswrapper[4827]: I0131 04:36:18.772449 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/e44c9e88-212e-4178-a1fb-ce9b1896d73f-horizon-secret-key\") pod \"horizon-5649566987-dn2gq\" (UID: \"e44c9e88-212e-4178-a1fb-ce9b1896d73f\") " pod="openstack/horizon-5649566987-dn2gq" Jan 31 04:36:18 crc kubenswrapper[4827]: I0131 04:36:18.772497 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e44c9e88-212e-4178-a1fb-ce9b1896d73f-logs\") pod \"horizon-5649566987-dn2gq\" (UID: \"e44c9e88-212e-4178-a1fb-ce9b1896d73f\") " 
pod="openstack/horizon-5649566987-dn2gq" Jan 31 04:36:18 crc kubenswrapper[4827]: I0131 04:36:18.772282 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 31 04:36:18 crc kubenswrapper[4827]: I0131 04:36:18.773595 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b79385e0-f8b6-49d3-a1de-8a61ee7e52b4-operator-scripts\") pod \"manila-b6b4-account-create-update-gpt9p\" (UID: \"b79385e0-f8b6-49d3-a1de-8a61ee7e52b4\") " pod="openstack/manila-b6b4-account-create-update-gpt9p" Jan 31 04:36:18 crc kubenswrapper[4827]: I0131 04:36:18.773616 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/e44c9e88-212e-4178-a1fb-ce9b1896d73f-scripts\") pod \"horizon-5649566987-dn2gq\" (UID: \"e44c9e88-212e-4178-a1fb-ce9b1896d73f\") " pod="openstack/horizon-5649566987-dn2gq" Jan 31 04:36:18 crc kubenswrapper[4827]: I0131 04:36:18.773840 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/e44c9e88-212e-4178-a1fb-ce9b1896d73f-config-data\") pod \"horizon-5649566987-dn2gq\" (UID: \"e44c9e88-212e-4178-a1fb-ce9b1896d73f\") " pod="openstack/horizon-5649566987-dn2gq" Jan 31 04:36:18 crc kubenswrapper[4827]: I0131 04:36:18.785368 4827 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/manila-db-create-p5kdd" Jan 31 04:36:18 crc kubenswrapper[4827]: I0131 04:36:18.786584 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e44c9e88-212e-4178-a1fb-ce9b1896d73f-logs\") pod \"horizon-5649566987-dn2gq\" (UID: \"e44c9e88-212e-4178-a1fb-ce9b1896d73f\") " pod="openstack/horizon-5649566987-dn2gq" Jan 31 04:36:18 crc kubenswrapper[4827]: I0131 04:36:18.792999 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/e44c9e88-212e-4178-a1fb-ce9b1896d73f-horizon-secret-key\") pod \"horizon-5649566987-dn2gq\" (UID: \"e44c9e88-212e-4178-a1fb-ce9b1896d73f\") " pod="openstack/horizon-5649566987-dn2gq" Jan 31 04:36:18 crc kubenswrapper[4827]: I0131 04:36:18.820501 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jh7zs\" (UniqueName: \"kubernetes.io/projected/e44c9e88-212e-4178-a1fb-ce9b1896d73f-kube-api-access-jh7zs\") pod \"horizon-5649566987-dn2gq\" (UID: \"e44c9e88-212e-4178-a1fb-ce9b1896d73f\") " pod="openstack/horizon-5649566987-dn2gq" Jan 31 04:36:18 crc kubenswrapper[4827]: I0131 04:36:18.830695 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9lhcc\" (UniqueName: \"kubernetes.io/projected/b79385e0-f8b6-49d3-a1de-8a61ee7e52b4-kube-api-access-9lhcc\") pod \"manila-b6b4-account-create-update-gpt9p\" (UID: \"b79385e0-f8b6-49d3-a1de-8a61ee7e52b4\") " pod="openstack/manila-b6b4-account-create-update-gpt9p" Jan 31 04:36:18 crc kubenswrapper[4827]: I0131 04:36:18.837416 4827 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-5db647897-mx8gj"] Jan 31 04:36:18 crc kubenswrapper[4827]: I0131 04:36:18.866977 4827 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-5db647897-mx8gj" Jan 31 04:36:18 crc kubenswrapper[4827]: I0131 04:36:18.912458 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2f208a22-4c88-450c-b37c-882006a3cf68-scripts\") pod \"glance-default-external-api-0\" (UID: \"2f208a22-4c88-450c-b37c-882006a3cf68\") " pod="openstack/glance-default-external-api-0" Jan 31 04:36:18 crc kubenswrapper[4827]: I0131 04:36:18.912779 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"glance-default-external-api-0\" (UID: \"2f208a22-4c88-450c-b37c-882006a3cf68\") " pod="openstack/glance-default-external-api-0" Jan 31 04:36:18 crc kubenswrapper[4827]: I0131 04:36:18.912934 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/2f208a22-4c88-450c-b37c-882006a3cf68-ceph\") pod \"glance-default-external-api-0\" (UID: \"2f208a22-4c88-450c-b37c-882006a3cf68\") " pod="openstack/glance-default-external-api-0" Jan 31 04:36:18 crc kubenswrapper[4827]: I0131 04:36:18.912955 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2f208a22-4c88-450c-b37c-882006a3cf68-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"2f208a22-4c88-450c-b37c-882006a3cf68\") " pod="openstack/glance-default-external-api-0" Jan 31 04:36:18 crc kubenswrapper[4827]: I0131 04:36:18.912990 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k4d52\" (UniqueName: \"kubernetes.io/projected/2f208a22-4c88-450c-b37c-882006a3cf68-kube-api-access-k4d52\") pod \"glance-default-external-api-0\" (UID: 
\"2f208a22-4c88-450c-b37c-882006a3cf68\") " pod="openstack/glance-default-external-api-0" Jan 31 04:36:18 crc kubenswrapper[4827]: I0131 04:36:18.913041 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/2f208a22-4c88-450c-b37c-882006a3cf68-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"2f208a22-4c88-450c-b37c-882006a3cf68\") " pod="openstack/glance-default-external-api-0" Jan 31 04:36:18 crc kubenswrapper[4827]: I0131 04:36:18.913063 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2f208a22-4c88-450c-b37c-882006a3cf68-logs\") pod \"glance-default-external-api-0\" (UID: \"2f208a22-4c88-450c-b37c-882006a3cf68\") " pod="openstack/glance-default-external-api-0" Jan 31 04:36:18 crc kubenswrapper[4827]: I0131 04:36:18.913093 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2f208a22-4c88-450c-b37c-882006a3cf68-config-data\") pod \"glance-default-external-api-0\" (UID: \"2f208a22-4c88-450c-b37c-882006a3cf68\") " pod="openstack/glance-default-external-api-0" Jan 31 04:36:18 crc kubenswrapper[4827]: I0131 04:36:18.913177 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/2f208a22-4c88-450c-b37c-882006a3cf68-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"2f208a22-4c88-450c-b37c-882006a3cf68\") " pod="openstack/glance-default-external-api-0" Jan 31 04:36:18 crc kubenswrapper[4827]: I0131 04:36:18.926294 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-5db647897-mx8gj"] Jan 31 04:36:18 crc kubenswrapper[4827]: I0131 04:36:18.956026 4827 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openstack/glance-default-internal-api-0"] Jan 31 04:36:18 crc kubenswrapper[4827]: I0131 04:36:18.957745 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 31 04:36:18 crc kubenswrapper[4827]: I0131 04:36:18.963959 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 31 04:36:18 crc kubenswrapper[4827]: I0131 04:36:18.991465 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Jan 31 04:36:18 crc kubenswrapper[4827]: I0131 04:36:18.991684 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-internal-svc" Jan 31 04:36:19 crc kubenswrapper[4827]: I0131 04:36:19.005316 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/manila-b6b4-account-create-update-gpt9p" Jan 31 04:36:19 crc kubenswrapper[4827]: I0131 04:36:19.038714 4827 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-5649566987-dn2gq" Jan 31 04:36:19 crc kubenswrapper[4827]: I0131 04:36:19.041133 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/2f208a22-4c88-450c-b37c-882006a3cf68-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"2f208a22-4c88-450c-b37c-882006a3cf68\") " pod="openstack/glance-default-external-api-0" Jan 31 04:36:19 crc kubenswrapper[4827]: I0131 04:36:19.041175 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2f208a22-4c88-450c-b37c-882006a3cf68-logs\") pod \"glance-default-external-api-0\" (UID: \"2f208a22-4c88-450c-b37c-882006a3cf68\") " pod="openstack/glance-default-external-api-0" Jan 31 04:36:19 crc kubenswrapper[4827]: I0131 04:36:19.041217 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/73954232-88c2-4d80-aa50-5352f16b43a2-ceph\") pod \"glance-default-internal-api-0\" (UID: \"73954232-88c2-4d80-aa50-5352f16b43a2\") " pod="openstack/glance-default-internal-api-0" Jan 31 04:36:19 crc kubenswrapper[4827]: I0131 04:36:19.041254 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2f208a22-4c88-450c-b37c-882006a3cf68-config-data\") pod \"glance-default-external-api-0\" (UID: \"2f208a22-4c88-450c-b37c-882006a3cf68\") " pod="openstack/glance-default-external-api-0" Jan 31 04:36:19 crc kubenswrapper[4827]: I0131 04:36:19.041280 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/73954232-88c2-4d80-aa50-5352f16b43a2-config-data\") pod \"glance-default-internal-api-0\" (UID: \"73954232-88c2-4d80-aa50-5352f16b43a2\") " 
pod="openstack/glance-default-internal-api-0" Jan 31 04:36:19 crc kubenswrapper[4827]: I0131 04:36:19.041312 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/0a8e3a93-e4a4-41c1-b558-88d28e96ef52-config-data\") pod \"horizon-5db647897-mx8gj\" (UID: \"0a8e3a93-e4a4-41c1-b558-88d28e96ef52\") " pod="openstack/horizon-5db647897-mx8gj" Jan 31 04:36:19 crc kubenswrapper[4827]: I0131 04:36:19.041342 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0a8e3a93-e4a4-41c1-b558-88d28e96ef52-logs\") pod \"horizon-5db647897-mx8gj\" (UID: \"0a8e3a93-e4a4-41c1-b558-88d28e96ef52\") " pod="openstack/horizon-5db647897-mx8gj" Jan 31 04:36:19 crc kubenswrapper[4827]: I0131 04:36:19.041375 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/0a8e3a93-e4a4-41c1-b558-88d28e96ef52-horizon-secret-key\") pod \"horizon-5db647897-mx8gj\" (UID: \"0a8e3a93-e4a4-41c1-b558-88d28e96ef52\") " pod="openstack/horizon-5db647897-mx8gj" Jan 31 04:36:19 crc kubenswrapper[4827]: I0131 04:36:19.041414 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/73954232-88c2-4d80-aa50-5352f16b43a2-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"73954232-88c2-4d80-aa50-5352f16b43a2\") " pod="openstack/glance-default-internal-api-0" Jan 31 04:36:19 crc kubenswrapper[4827]: I0131 04:36:19.043663 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/2f208a22-4c88-450c-b37c-882006a3cf68-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"2f208a22-4c88-450c-b37c-882006a3cf68\") " 
pod="openstack/glance-default-external-api-0" Jan 31 04:36:19 crc kubenswrapper[4827]: I0131 04:36:19.043712 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/73954232-88c2-4d80-aa50-5352f16b43a2-logs\") pod \"glance-default-internal-api-0\" (UID: \"73954232-88c2-4d80-aa50-5352f16b43a2\") " pod="openstack/glance-default-internal-api-0" Jan 31 04:36:19 crc kubenswrapper[4827]: I0131 04:36:19.043752 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/73954232-88c2-4d80-aa50-5352f16b43a2-scripts\") pod \"glance-default-internal-api-0\" (UID: \"73954232-88c2-4d80-aa50-5352f16b43a2\") " pod="openstack/glance-default-internal-api-0" Jan 31 04:36:19 crc kubenswrapper[4827]: I0131 04:36:19.043790 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/73954232-88c2-4d80-aa50-5352f16b43a2-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"73954232-88c2-4d80-aa50-5352f16b43a2\") " pod="openstack/glance-default-internal-api-0" Jan 31 04:36:19 crc kubenswrapper[4827]: I0131 04:36:19.043829 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"glance-default-internal-api-0\" (UID: \"73954232-88c2-4d80-aa50-5352f16b43a2\") " pod="openstack/glance-default-internal-api-0" Jan 31 04:36:19 crc kubenswrapper[4827]: I0131 04:36:19.047052 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2f208a22-4c88-450c-b37c-882006a3cf68-scripts\") pod \"glance-default-external-api-0\" (UID: \"2f208a22-4c88-450c-b37c-882006a3cf68\") " 
pod="openstack/glance-default-external-api-0" Jan 31 04:36:19 crc kubenswrapper[4827]: I0131 04:36:19.047130 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/0a8e3a93-e4a4-41c1-b558-88d28e96ef52-scripts\") pod \"horizon-5db647897-mx8gj\" (UID: \"0a8e3a93-e4a4-41c1-b558-88d28e96ef52\") " pod="openstack/horizon-5db647897-mx8gj" Jan 31 04:36:19 crc kubenswrapper[4827]: I0131 04:36:19.047206 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"glance-default-external-api-0\" (UID: \"2f208a22-4c88-450c-b37c-882006a3cf68\") " pod="openstack/glance-default-external-api-0" Jan 31 04:36:19 crc kubenswrapper[4827]: I0131 04:36:19.047266 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/73954232-88c2-4d80-aa50-5352f16b43a2-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"73954232-88c2-4d80-aa50-5352f16b43a2\") " pod="openstack/glance-default-internal-api-0" Jan 31 04:36:19 crc kubenswrapper[4827]: I0131 04:36:19.047399 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vtxf5\" (UniqueName: \"kubernetes.io/projected/0a8e3a93-e4a4-41c1-b558-88d28e96ef52-kube-api-access-vtxf5\") pod \"horizon-5db647897-mx8gj\" (UID: \"0a8e3a93-e4a4-41c1-b558-88d28e96ef52\") " pod="openstack/horizon-5db647897-mx8gj" Jan 31 04:36:19 crc kubenswrapper[4827]: I0131 04:36:19.047494 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/2f208a22-4c88-450c-b37c-882006a3cf68-ceph\") pod \"glance-default-external-api-0\" (UID: \"2f208a22-4c88-450c-b37c-882006a3cf68\") " pod="openstack/glance-default-external-api-0" Jan 31 04:36:19 crc 
kubenswrapper[4827]: I0131 04:36:19.047546 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2f208a22-4c88-450c-b37c-882006a3cf68-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"2f208a22-4c88-450c-b37c-882006a3cf68\") " pod="openstack/glance-default-external-api-0" Jan 31 04:36:19 crc kubenswrapper[4827]: I0131 04:36:19.047611 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k4d52\" (UniqueName: \"kubernetes.io/projected/2f208a22-4c88-450c-b37c-882006a3cf68-kube-api-access-k4d52\") pod \"glance-default-external-api-0\" (UID: \"2f208a22-4c88-450c-b37c-882006a3cf68\") " pod="openstack/glance-default-external-api-0" Jan 31 04:36:19 crc kubenswrapper[4827]: I0131 04:36:19.047653 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fb5rb\" (UniqueName: \"kubernetes.io/projected/73954232-88c2-4d80-aa50-5352f16b43a2-kube-api-access-fb5rb\") pod \"glance-default-internal-api-0\" (UID: \"73954232-88c2-4d80-aa50-5352f16b43a2\") " pod="openstack/glance-default-internal-api-0" Jan 31 04:36:19 crc kubenswrapper[4827]: I0131 04:36:19.050409 4827 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"glance-default-external-api-0\" (UID: \"2f208a22-4c88-450c-b37c-882006a3cf68\") device mount path \"/mnt/openstack/pv11\"" pod="openstack/glance-default-external-api-0" Jan 31 04:36:19 crc kubenswrapper[4827]: I0131 04:36:19.050436 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2f208a22-4c88-450c-b37c-882006a3cf68-logs\") pod \"glance-default-external-api-0\" (UID: \"2f208a22-4c88-450c-b37c-882006a3cf68\") " pod="openstack/glance-default-external-api-0" Jan 31 04:36:19 crc 
kubenswrapper[4827]: I0131 04:36:19.051582 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/2f208a22-4c88-450c-b37c-882006a3cf68-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"2f208a22-4c88-450c-b37c-882006a3cf68\") " pod="openstack/glance-default-external-api-0" Jan 31 04:36:19 crc kubenswrapper[4827]: I0131 04:36:19.060942 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/2f208a22-4c88-450c-b37c-882006a3cf68-ceph\") pod \"glance-default-external-api-0\" (UID: \"2f208a22-4c88-450c-b37c-882006a3cf68\") " pod="openstack/glance-default-external-api-0" Jan 31 04:36:19 crc kubenswrapper[4827]: I0131 04:36:19.064447 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/2f208a22-4c88-450c-b37c-882006a3cf68-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"2f208a22-4c88-450c-b37c-882006a3cf68\") " pod="openstack/glance-default-external-api-0" Jan 31 04:36:19 crc kubenswrapper[4827]: I0131 04:36:19.072337 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2f208a22-4c88-450c-b37c-882006a3cf68-scripts\") pod \"glance-default-external-api-0\" (UID: \"2f208a22-4c88-450c-b37c-882006a3cf68\") " pod="openstack/glance-default-external-api-0" Jan 31 04:36:19 crc kubenswrapper[4827]: I0131 04:36:19.094178 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2f208a22-4c88-450c-b37c-882006a3cf68-config-data\") pod \"glance-default-external-api-0\" (UID: \"2f208a22-4c88-450c-b37c-882006a3cf68\") " pod="openstack/glance-default-external-api-0" Jan 31 04:36:19 crc kubenswrapper[4827]: I0131 04:36:19.107059 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-k4d52\" (UniqueName: \"kubernetes.io/projected/2f208a22-4c88-450c-b37c-882006a3cf68-kube-api-access-k4d52\") pod \"glance-default-external-api-0\" (UID: \"2f208a22-4c88-450c-b37c-882006a3cf68\") " pod="openstack/glance-default-external-api-0" Jan 31 04:36:19 crc kubenswrapper[4827]: I0131 04:36:19.110244 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2f208a22-4c88-450c-b37c-882006a3cf68-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"2f208a22-4c88-450c-b37c-882006a3cf68\") " pod="openstack/glance-default-external-api-0" Jan 31 04:36:19 crc kubenswrapper[4827]: I0131 04:36:19.118447 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-backup-0"] Jan 31 04:36:19 crc kubenswrapper[4827]: I0131 04:36:19.138627 4827 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 31 04:36:19 crc kubenswrapper[4827]: I0131 04:36:19.149552 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/73954232-88c2-4d80-aa50-5352f16b43a2-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"73954232-88c2-4d80-aa50-5352f16b43a2\") " pod="openstack/glance-default-internal-api-0" Jan 31 04:36:19 crc kubenswrapper[4827]: I0131 04:36:19.149862 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vtxf5\" (UniqueName: \"kubernetes.io/projected/0a8e3a93-e4a4-41c1-b558-88d28e96ef52-kube-api-access-vtxf5\") pod \"horizon-5db647897-mx8gj\" (UID: \"0a8e3a93-e4a4-41c1-b558-88d28e96ef52\") " pod="openstack/horizon-5db647897-mx8gj" Jan 31 04:36:19 crc kubenswrapper[4827]: I0131 04:36:19.150649 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/73954232-88c2-4d80-aa50-5352f16b43a2-httpd-run\") pod 
\"glance-default-internal-api-0\" (UID: \"73954232-88c2-4d80-aa50-5352f16b43a2\") " pod="openstack/glance-default-internal-api-0" Jan 31 04:36:19 crc kubenswrapper[4827]: I0131 04:36:19.152390 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fb5rb\" (UniqueName: \"kubernetes.io/projected/73954232-88c2-4d80-aa50-5352f16b43a2-kube-api-access-fb5rb\") pod \"glance-default-internal-api-0\" (UID: \"73954232-88c2-4d80-aa50-5352f16b43a2\") " pod="openstack/glance-default-internal-api-0" Jan 31 04:36:19 crc kubenswrapper[4827]: I0131 04:36:19.152728 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/73954232-88c2-4d80-aa50-5352f16b43a2-ceph\") pod \"glance-default-internal-api-0\" (UID: \"73954232-88c2-4d80-aa50-5352f16b43a2\") " pod="openstack/glance-default-internal-api-0" Jan 31 04:36:19 crc kubenswrapper[4827]: I0131 04:36:19.154072 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/73954232-88c2-4d80-aa50-5352f16b43a2-config-data\") pod \"glance-default-internal-api-0\" (UID: \"73954232-88c2-4d80-aa50-5352f16b43a2\") " pod="openstack/glance-default-internal-api-0" Jan 31 04:36:19 crc kubenswrapper[4827]: I0131 04:36:19.154481 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/0a8e3a93-e4a4-41c1-b558-88d28e96ef52-config-data\") pod \"horizon-5db647897-mx8gj\" (UID: \"0a8e3a93-e4a4-41c1-b558-88d28e96ef52\") " pod="openstack/horizon-5db647897-mx8gj" Jan 31 04:36:19 crc kubenswrapper[4827]: I0131 04:36:19.173565 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0a8e3a93-e4a4-41c1-b558-88d28e96ef52-logs\") pod \"horizon-5db647897-mx8gj\" (UID: \"0a8e3a93-e4a4-41c1-b558-88d28e96ef52\") " 
pod="openstack/horizon-5db647897-mx8gj" Jan 31 04:36:19 crc kubenswrapper[4827]: I0131 04:36:19.174114 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/0a8e3a93-e4a4-41c1-b558-88d28e96ef52-horizon-secret-key\") pod \"horizon-5db647897-mx8gj\" (UID: \"0a8e3a93-e4a4-41c1-b558-88d28e96ef52\") " pod="openstack/horizon-5db647897-mx8gj" Jan 31 04:36:19 crc kubenswrapper[4827]: I0131 04:36:19.173205 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/0a8e3a93-e4a4-41c1-b558-88d28e96ef52-config-data\") pod \"horizon-5db647897-mx8gj\" (UID: \"0a8e3a93-e4a4-41c1-b558-88d28e96ef52\") " pod="openstack/horizon-5db647897-mx8gj" Jan 31 04:36:19 crc kubenswrapper[4827]: I0131 04:36:19.174031 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0a8e3a93-e4a4-41c1-b558-88d28e96ef52-logs\") pod \"horizon-5db647897-mx8gj\" (UID: \"0a8e3a93-e4a4-41c1-b558-88d28e96ef52\") " pod="openstack/horizon-5db647897-mx8gj" Jan 31 04:36:19 crc kubenswrapper[4827]: I0131 04:36:19.174779 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/73954232-88c2-4d80-aa50-5352f16b43a2-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"73954232-88c2-4d80-aa50-5352f16b43a2\") " pod="openstack/glance-default-internal-api-0" Jan 31 04:36:19 crc kubenswrapper[4827]: I0131 04:36:19.175025 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/73954232-88c2-4d80-aa50-5352f16b43a2-logs\") pod \"glance-default-internal-api-0\" (UID: \"73954232-88c2-4d80-aa50-5352f16b43a2\") " pod="openstack/glance-default-internal-api-0" Jan 31 04:36:19 crc kubenswrapper[4827]: I0131 04:36:19.175458 4827 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/73954232-88c2-4d80-aa50-5352f16b43a2-scripts\") pod \"glance-default-internal-api-0\" (UID: \"73954232-88c2-4d80-aa50-5352f16b43a2\") " pod="openstack/glance-default-internal-api-0" Jan 31 04:36:19 crc kubenswrapper[4827]: I0131 04:36:19.175726 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/73954232-88c2-4d80-aa50-5352f16b43a2-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"73954232-88c2-4d80-aa50-5352f16b43a2\") " pod="openstack/glance-default-internal-api-0" Jan 31 04:36:19 crc kubenswrapper[4827]: I0131 04:36:19.177949 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/73954232-88c2-4d80-aa50-5352f16b43a2-ceph\") pod \"glance-default-internal-api-0\" (UID: \"73954232-88c2-4d80-aa50-5352f16b43a2\") " pod="openstack/glance-default-internal-api-0" Jan 31 04:36:19 crc kubenswrapper[4827]: I0131 04:36:19.178076 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"glance-default-internal-api-0\" (UID: \"73954232-88c2-4d80-aa50-5352f16b43a2\") " pod="openstack/glance-default-internal-api-0" Jan 31 04:36:19 crc kubenswrapper[4827]: I0131 04:36:19.182987 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/0a8e3a93-e4a4-41c1-b558-88d28e96ef52-scripts\") pod \"horizon-5db647897-mx8gj\" (UID: \"0a8e3a93-e4a4-41c1-b558-88d28e96ef52\") " pod="openstack/horizon-5db647897-mx8gj" Jan 31 04:36:19 crc kubenswrapper[4827]: I0131 04:36:19.178245 4827 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage01-crc\" (UniqueName: 
\"kubernetes.io/local-volume/local-storage01-crc\") pod \"glance-default-internal-api-0\" (UID: \"73954232-88c2-4d80-aa50-5352f16b43a2\") device mount path \"/mnt/openstack/pv01\"" pod="openstack/glance-default-internal-api-0" Jan 31 04:36:19 crc kubenswrapper[4827]: I0131 04:36:19.185918 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/73954232-88c2-4d80-aa50-5352f16b43a2-config-data\") pod \"glance-default-internal-api-0\" (UID: \"73954232-88c2-4d80-aa50-5352f16b43a2\") " pod="openstack/glance-default-internal-api-0" Jan 31 04:36:19 crc kubenswrapper[4827]: I0131 04:36:19.186505 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/0a8e3a93-e4a4-41c1-b558-88d28e96ef52-horizon-secret-key\") pod \"horizon-5db647897-mx8gj\" (UID: \"0a8e3a93-e4a4-41c1-b558-88d28e96ef52\") " pod="openstack/horizon-5db647897-mx8gj" Jan 31 04:36:19 crc kubenswrapper[4827]: I0131 04:36:19.175421 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/73954232-88c2-4d80-aa50-5352f16b43a2-logs\") pod \"glance-default-internal-api-0\" (UID: \"73954232-88c2-4d80-aa50-5352f16b43a2\") " pod="openstack/glance-default-internal-api-0" Jan 31 04:36:19 crc kubenswrapper[4827]: I0131 04:36:19.187304 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/0a8e3a93-e4a4-41c1-b558-88d28e96ef52-scripts\") pod \"horizon-5db647897-mx8gj\" (UID: \"0a8e3a93-e4a4-41c1-b558-88d28e96ef52\") " pod="openstack/horizon-5db647897-mx8gj" Jan 31 04:36:19 crc kubenswrapper[4827]: I0131 04:36:19.178337 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fb5rb\" (UniqueName: \"kubernetes.io/projected/73954232-88c2-4d80-aa50-5352f16b43a2-kube-api-access-fb5rb\") pod \"glance-default-internal-api-0\" 
(UID: \"73954232-88c2-4d80-aa50-5352f16b43a2\") " pod="openstack/glance-default-internal-api-0" Jan 31 04:36:19 crc kubenswrapper[4827]: I0131 04:36:19.187313 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vtxf5\" (UniqueName: \"kubernetes.io/projected/0a8e3a93-e4a4-41c1-b558-88d28e96ef52-kube-api-access-vtxf5\") pod \"horizon-5db647897-mx8gj\" (UID: \"0a8e3a93-e4a4-41c1-b558-88d28e96ef52\") " pod="openstack/horizon-5db647897-mx8gj" Jan 31 04:36:19 crc kubenswrapper[4827]: I0131 04:36:19.187868 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/73954232-88c2-4d80-aa50-5352f16b43a2-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"73954232-88c2-4d80-aa50-5352f16b43a2\") " pod="openstack/glance-default-internal-api-0" Jan 31 04:36:19 crc kubenswrapper[4827]: I0131 04:36:19.188553 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/73954232-88c2-4d80-aa50-5352f16b43a2-scripts\") pod \"glance-default-internal-api-0\" (UID: \"73954232-88c2-4d80-aa50-5352f16b43a2\") " pod="openstack/glance-default-internal-api-0" Jan 31 04:36:19 crc kubenswrapper[4827]: I0131 04:36:19.189558 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"glance-default-external-api-0\" (UID: \"2f208a22-4c88-450c-b37c-882006a3cf68\") " pod="openstack/glance-default-external-api-0" Jan 31 04:36:19 crc kubenswrapper[4827]: I0131 04:36:19.203042 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/73954232-88c2-4d80-aa50-5352f16b43a2-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"73954232-88c2-4d80-aa50-5352f16b43a2\") " pod="openstack/glance-default-internal-api-0" Jan 31 04:36:19 
crc kubenswrapper[4827]: I0131 04:36:19.229998 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"glance-default-internal-api-0\" (UID: \"73954232-88c2-4d80-aa50-5352f16b43a2\") " pod="openstack/glance-default-internal-api-0" Jan 31 04:36:19 crc kubenswrapper[4827]: I0131 04:36:19.242179 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-5db647897-mx8gj" Jan 31 04:36:19 crc kubenswrapper[4827]: I0131 04:36:19.316340 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 31 04:36:19 crc kubenswrapper[4827]: I0131 04:36:19.374306 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 31 04:36:19 crc kubenswrapper[4827]: I0131 04:36:19.438875 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-volume-volume1-0"] Jan 31 04:36:19 crc kubenswrapper[4827]: I0131 04:36:19.549696 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/manila-db-create-p5kdd"] Jan 31 04:36:19 crc kubenswrapper[4827]: I0131 04:36:19.754699 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/manila-b6b4-account-create-update-gpt9p"] Jan 31 04:36:19 crc kubenswrapper[4827]: I0131 04:36:19.767940 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-5649566987-dn2gq"] Jan 31 04:36:19 crc kubenswrapper[4827]: I0131 04:36:19.938842 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-5db647897-mx8gj"] Jan 31 04:36:20 crc kubenswrapper[4827]: I0131 04:36:20.037552 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 31 04:36:20 crc kubenswrapper[4827]: W0131 04:36:20.068231 4827 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod73954232_88c2_4d80_aa50_5352f16b43a2.slice/crio-618722e304fe5b35ac446d2b97a202dcec9abee17069b60106184a2caa670cb6 WatchSource:0}: Error finding container 618722e304fe5b35ac446d2b97a202dcec9abee17069b60106184a2caa670cb6: Status 404 returned error can't find the container with id 618722e304fe5b35ac446d2b97a202dcec9abee17069b60106184a2caa670cb6 Jan 31 04:36:20 crc kubenswrapper[4827]: I0131 04:36:20.137869 4827 generic.go:334] "Generic (PLEG): container finished" podID="735db063-6703-4f20-9d2f-aa79c7c56855" containerID="31bc0bed8694062a025bdfb49f11e9601127f9c2de18117382dd47fd200bbbc7" exitCode=0 Jan 31 04:36:20 crc kubenswrapper[4827]: I0131 04:36:20.138197 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-db-create-p5kdd" event={"ID":"735db063-6703-4f20-9d2f-aa79c7c56855","Type":"ContainerDied","Data":"31bc0bed8694062a025bdfb49f11e9601127f9c2de18117382dd47fd200bbbc7"} Jan 31 04:36:20 crc kubenswrapper[4827]: I0131 04:36:20.138282 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-db-create-p5kdd" event={"ID":"735db063-6703-4f20-9d2f-aa79c7c56855","Type":"ContainerStarted","Data":"4d4f077fee6b99c885eab19abf29ddfc35dae8d6d6e40e069d0a9861cc0d31df"} Jan 31 04:36:20 crc kubenswrapper[4827]: I0131 04:36:20.143979 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-5649566987-dn2gq" event={"ID":"e44c9e88-212e-4178-a1fb-ce9b1896d73f","Type":"ContainerStarted","Data":"6fed205da24e0880d4c321f18a2729e648c53e838feec3b14d7b511574dc5174"} Jan 31 04:36:20 crc kubenswrapper[4827]: I0131 04:36:20.155041 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-volume-volume1-0" event={"ID":"7c359669-c94b-42d4-9b63-de6d4812e598","Type":"ContainerStarted","Data":"1ddfaebcb6a0966edd3009905ee91f449648cf744fc6faf2b40420e37315cda8"} Jan 31 04:36:20 crc kubenswrapper[4827]: I0131 04:36:20.170701 4827 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openstack/manila-b6b4-account-create-update-gpt9p" event={"ID":"b79385e0-f8b6-49d3-a1de-8a61ee7e52b4","Type":"ContainerStarted","Data":"55d975bd045c9caf46b7d6ebcaa1da543793bdb9bc740f0f513b091329f0907a"} Jan 31 04:36:20 crc kubenswrapper[4827]: I0131 04:36:20.175425 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"73954232-88c2-4d80-aa50-5352f16b43a2","Type":"ContainerStarted","Data":"618722e304fe5b35ac446d2b97a202dcec9abee17069b60106184a2caa670cb6"} Jan 31 04:36:20 crc kubenswrapper[4827]: I0131 04:36:20.183715 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-backup-0" event={"ID":"ceec568e-c3e2-4b44-b2f9-b90d9334667f","Type":"ContainerStarted","Data":"df6bf1f504a01afb4e6a911963f8618472491434186c810ebb157674cfff6d67"} Jan 31 04:36:20 crc kubenswrapper[4827]: I0131 04:36:20.189531 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-5db647897-mx8gj" event={"ID":"0a8e3a93-e4a4-41c1-b558-88d28e96ef52","Type":"ContainerStarted","Data":"d92e1b7f7071a81484e3172d2eb5e1022c4780e79485813ee779722112d4feb2"} Jan 31 04:36:20 crc kubenswrapper[4827]: I0131 04:36:20.249529 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 31 04:36:21 crc kubenswrapper[4827]: I0131 04:36:21.211308 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"2f208a22-4c88-450c-b37c-882006a3cf68","Type":"ContainerStarted","Data":"f54c722c7f395031130c4775b7afc26dca657e75219f94c27f9a7dfff6e149ff"} Jan 31 04:36:21 crc kubenswrapper[4827]: I0131 04:36:21.214138 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"2f208a22-4c88-450c-b37c-882006a3cf68","Type":"ContainerStarted","Data":"b94a3d914fd2a6ac1f6c1d535629732ab83b58c927dcf0446647c30be49d8f50"} Jan 31 04:36:21 crc kubenswrapper[4827]: 
I0131 04:36:21.214294 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-volume-volume1-0" event={"ID":"7c359669-c94b-42d4-9b63-de6d4812e598","Type":"ContainerStarted","Data":"c5476135b9529cebd995e8b54ed93d418956f80b894f19927a7f17a0a7c0dea4"} Jan 31 04:36:21 crc kubenswrapper[4827]: I0131 04:36:21.237112 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-volume-volume1-0" event={"ID":"7c359669-c94b-42d4-9b63-de6d4812e598","Type":"ContainerStarted","Data":"693b56aeef80de3ff4ec420dc6cadda9a84e43b15d9ef8ac10561fa6d9565ca1"} Jan 31 04:36:21 crc kubenswrapper[4827]: I0131 04:36:21.246823 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"73954232-88c2-4d80-aa50-5352f16b43a2","Type":"ContainerStarted","Data":"fd1f610d95f925a3fd7d9e31be14b8e1ac8bf0642067d01b9d113830c8a8b11e"} Jan 31 04:36:21 crc kubenswrapper[4827]: I0131 04:36:21.251909 4827 generic.go:334] "Generic (PLEG): container finished" podID="b79385e0-f8b6-49d3-a1de-8a61ee7e52b4" containerID="5441d1f16461b7fafdd58520e7b18918bccd646597dbae9fdcd8b480836d5971" exitCode=0 Jan 31 04:36:21 crc kubenswrapper[4827]: I0131 04:36:21.252040 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-b6b4-account-create-update-gpt9p" event={"ID":"b79385e0-f8b6-49d3-a1de-8a61ee7e52b4","Type":"ContainerDied","Data":"5441d1f16461b7fafdd58520e7b18918bccd646597dbae9fdcd8b480836d5971"} Jan 31 04:36:21 crc kubenswrapper[4827]: I0131 04:36:21.272512 4827 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-volume-volume1-0" podStartSLOduration=3.32816186 podStartE2EDuration="4.272497019s" podCreationTimestamp="2026-01-31 04:36:17 +0000 UTC" firstStartedPulling="2026-01-31 04:36:19.5166984 +0000 UTC m=+2972.203778849" lastFinishedPulling="2026-01-31 04:36:20.461033559 +0000 UTC m=+2973.148114008" observedRunningTime="2026-01-31 04:36:21.270360643 +0000 UTC 
m=+2973.957441102" watchObservedRunningTime="2026-01-31 04:36:21.272497019 +0000 UTC m=+2973.959577468" Jan 31 04:36:21 crc kubenswrapper[4827]: I0131 04:36:21.296867 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-backup-0" event={"ID":"ceec568e-c3e2-4b44-b2f9-b90d9334667f","Type":"ContainerStarted","Data":"dff1b4230e2dae5aa544e8a6744ab25da855c8afd5ef9e7762b70afe7fec0608"} Jan 31 04:36:21 crc kubenswrapper[4827]: I0131 04:36:21.296918 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-backup-0" event={"ID":"ceec568e-c3e2-4b44-b2f9-b90d9334667f","Type":"ContainerStarted","Data":"2acf6ea2dd01ff76b240e5a87c858b7af9aa5dcfc7fbbe231cf9298724145912"} Jan 31 04:36:21 crc kubenswrapper[4827]: I0131 04:36:21.593890 4827 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-backup-0" podStartSLOduration=3.765113511 podStartE2EDuration="4.59386025s" podCreationTimestamp="2026-01-31 04:36:17 +0000 UTC" firstStartedPulling="2026-01-31 04:36:19.138450527 +0000 UTC m=+2971.825530976" lastFinishedPulling="2026-01-31 04:36:19.967197266 +0000 UTC m=+2972.654277715" observedRunningTime="2026-01-31 04:36:21.36559206 +0000 UTC m=+2974.052672529" watchObservedRunningTime="2026-01-31 04:36:21.59386025 +0000 UTC m=+2974.280940699" Jan 31 04:36:21 crc kubenswrapper[4827]: I0131 04:36:21.599029 4827 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-5649566987-dn2gq"] Jan 31 04:36:21 crc kubenswrapper[4827]: I0131 04:36:21.688185 4827 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-cdd9cb94b-xgkxs"] Jan 31 04:36:21 crc kubenswrapper[4827]: I0131 04:36:21.690656 4827 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-cdd9cb94b-xgkxs" Jan 31 04:36:21 crc kubenswrapper[4827]: I0131 04:36:21.694762 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-horizon-svc" Jan 31 04:36:21 crc kubenswrapper[4827]: I0131 04:36:21.811219 4827 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 31 04:36:21 crc kubenswrapper[4827]: I0131 04:36:21.812286 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/e277e6d3-f889-425a-abd6-3344f860bfd9-horizon-tls-certs\") pod \"horizon-cdd9cb94b-xgkxs\" (UID: \"e277e6d3-f889-425a-abd6-3344f860bfd9\") " pod="openstack/horizon-cdd9cb94b-xgkxs" Jan 31 04:36:21 crc kubenswrapper[4827]: I0131 04:36:21.812320 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-845z9\" (UniqueName: \"kubernetes.io/projected/e277e6d3-f889-425a-abd6-3344f860bfd9-kube-api-access-845z9\") pod \"horizon-cdd9cb94b-xgkxs\" (UID: \"e277e6d3-f889-425a-abd6-3344f860bfd9\") " pod="openstack/horizon-cdd9cb94b-xgkxs" Jan 31 04:36:21 crc kubenswrapper[4827]: I0131 04:36:21.812367 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/e277e6d3-f889-425a-abd6-3344f860bfd9-horizon-secret-key\") pod \"horizon-cdd9cb94b-xgkxs\" (UID: \"e277e6d3-f889-425a-abd6-3344f860bfd9\") " pod="openstack/horizon-cdd9cb94b-xgkxs" Jan 31 04:36:21 crc kubenswrapper[4827]: I0131 04:36:21.812391 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/e277e6d3-f889-425a-abd6-3344f860bfd9-config-data\") pod \"horizon-cdd9cb94b-xgkxs\" (UID: \"e277e6d3-f889-425a-abd6-3344f860bfd9\") " pod="openstack/horizon-cdd9cb94b-xgkxs" 
Jan 31 04:36:21 crc kubenswrapper[4827]: I0131 04:36:21.812418 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/e277e6d3-f889-425a-abd6-3344f860bfd9-scripts\") pod \"horizon-cdd9cb94b-xgkxs\" (UID: \"e277e6d3-f889-425a-abd6-3344f860bfd9\") " pod="openstack/horizon-cdd9cb94b-xgkxs" Jan 31 04:36:21 crc kubenswrapper[4827]: I0131 04:36:21.812442 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e277e6d3-f889-425a-abd6-3344f860bfd9-logs\") pod \"horizon-cdd9cb94b-xgkxs\" (UID: \"e277e6d3-f889-425a-abd6-3344f860bfd9\") " pod="openstack/horizon-cdd9cb94b-xgkxs" Jan 31 04:36:21 crc kubenswrapper[4827]: I0131 04:36:21.812463 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e277e6d3-f889-425a-abd6-3344f860bfd9-combined-ca-bundle\") pod \"horizon-cdd9cb94b-xgkxs\" (UID: \"e277e6d3-f889-425a-abd6-3344f860bfd9\") " pod="openstack/horizon-cdd9cb94b-xgkxs" Jan 31 04:36:21 crc kubenswrapper[4827]: I0131 04:36:21.846920 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-cdd9cb94b-xgkxs"] Jan 31 04:36:21 crc kubenswrapper[4827]: I0131 04:36:21.914648 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/e277e6d3-f889-425a-abd6-3344f860bfd9-horizon-tls-certs\") pod \"horizon-cdd9cb94b-xgkxs\" (UID: \"e277e6d3-f889-425a-abd6-3344f860bfd9\") " pod="openstack/horizon-cdd9cb94b-xgkxs" Jan 31 04:36:21 crc kubenswrapper[4827]: I0131 04:36:21.914699 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-845z9\" (UniqueName: \"kubernetes.io/projected/e277e6d3-f889-425a-abd6-3344f860bfd9-kube-api-access-845z9\") pod 
\"horizon-cdd9cb94b-xgkxs\" (UID: \"e277e6d3-f889-425a-abd6-3344f860bfd9\") " pod="openstack/horizon-cdd9cb94b-xgkxs" Jan 31 04:36:21 crc kubenswrapper[4827]: I0131 04:36:21.914749 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/e277e6d3-f889-425a-abd6-3344f860bfd9-horizon-secret-key\") pod \"horizon-cdd9cb94b-xgkxs\" (UID: \"e277e6d3-f889-425a-abd6-3344f860bfd9\") " pod="openstack/horizon-cdd9cb94b-xgkxs" Jan 31 04:36:21 crc kubenswrapper[4827]: I0131 04:36:21.914775 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/e277e6d3-f889-425a-abd6-3344f860bfd9-config-data\") pod \"horizon-cdd9cb94b-xgkxs\" (UID: \"e277e6d3-f889-425a-abd6-3344f860bfd9\") " pod="openstack/horizon-cdd9cb94b-xgkxs" Jan 31 04:36:21 crc kubenswrapper[4827]: I0131 04:36:21.914802 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/e277e6d3-f889-425a-abd6-3344f860bfd9-scripts\") pod \"horizon-cdd9cb94b-xgkxs\" (UID: \"e277e6d3-f889-425a-abd6-3344f860bfd9\") " pod="openstack/horizon-cdd9cb94b-xgkxs" Jan 31 04:36:21 crc kubenswrapper[4827]: I0131 04:36:21.914834 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e277e6d3-f889-425a-abd6-3344f860bfd9-logs\") pod \"horizon-cdd9cb94b-xgkxs\" (UID: \"e277e6d3-f889-425a-abd6-3344f860bfd9\") " pod="openstack/horizon-cdd9cb94b-xgkxs" Jan 31 04:36:21 crc kubenswrapper[4827]: I0131 04:36:21.914863 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e277e6d3-f889-425a-abd6-3344f860bfd9-combined-ca-bundle\") pod \"horizon-cdd9cb94b-xgkxs\" (UID: \"e277e6d3-f889-425a-abd6-3344f860bfd9\") " pod="openstack/horizon-cdd9cb94b-xgkxs" Jan 31 04:36:21 
crc kubenswrapper[4827]: I0131 04:36:21.915925 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e277e6d3-f889-425a-abd6-3344f860bfd9-logs\") pod \"horizon-cdd9cb94b-xgkxs\" (UID: \"e277e6d3-f889-425a-abd6-3344f860bfd9\") " pod="openstack/horizon-cdd9cb94b-xgkxs" Jan 31 04:36:21 crc kubenswrapper[4827]: I0131 04:36:21.916095 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/e277e6d3-f889-425a-abd6-3344f860bfd9-scripts\") pod \"horizon-cdd9cb94b-xgkxs\" (UID: \"e277e6d3-f889-425a-abd6-3344f860bfd9\") " pod="openstack/horizon-cdd9cb94b-xgkxs" Jan 31 04:36:21 crc kubenswrapper[4827]: I0131 04:36:21.917022 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/e277e6d3-f889-425a-abd6-3344f860bfd9-config-data\") pod \"horizon-cdd9cb94b-xgkxs\" (UID: \"e277e6d3-f889-425a-abd6-3344f860bfd9\") " pod="openstack/horizon-cdd9cb94b-xgkxs" Jan 31 04:36:21 crc kubenswrapper[4827]: I0131 04:36:21.921403 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/e277e6d3-f889-425a-abd6-3344f860bfd9-horizon-tls-certs\") pod \"horizon-cdd9cb94b-xgkxs\" (UID: \"e277e6d3-f889-425a-abd6-3344f860bfd9\") " pod="openstack/horizon-cdd9cb94b-xgkxs" Jan 31 04:36:21 crc kubenswrapper[4827]: I0131 04:36:21.924151 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e277e6d3-f889-425a-abd6-3344f860bfd9-combined-ca-bundle\") pod \"horizon-cdd9cb94b-xgkxs\" (UID: \"e277e6d3-f889-425a-abd6-3344f860bfd9\") " pod="openstack/horizon-cdd9cb94b-xgkxs" Jan 31 04:36:21 crc kubenswrapper[4827]: I0131 04:36:21.931838 4827 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-5db647897-mx8gj"] Jan 31 04:36:21 crc 
kubenswrapper[4827]: I0131 04:36:21.934307 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/e277e6d3-f889-425a-abd6-3344f860bfd9-horizon-secret-key\") pod \"horizon-cdd9cb94b-xgkxs\" (UID: \"e277e6d3-f889-425a-abd6-3344f860bfd9\") " pod="openstack/horizon-cdd9cb94b-xgkxs" Jan 31 04:36:21 crc kubenswrapper[4827]: I0131 04:36:21.937336 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-845z9\" (UniqueName: \"kubernetes.io/projected/e277e6d3-f889-425a-abd6-3344f860bfd9-kube-api-access-845z9\") pod \"horizon-cdd9cb94b-xgkxs\" (UID: \"e277e6d3-f889-425a-abd6-3344f860bfd9\") " pod="openstack/horizon-cdd9cb94b-xgkxs" Jan 31 04:36:21 crc kubenswrapper[4827]: I0131 04:36:21.937378 4827 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 31 04:36:21 crc kubenswrapper[4827]: I0131 04:36:21.949640 4827 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-5dd8c8746d-r25sr"] Jan 31 04:36:21 crc kubenswrapper[4827]: I0131 04:36:21.951063 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-5dd8c8746d-r25sr" Jan 31 04:36:21 crc kubenswrapper[4827]: I0131 04:36:21.962787 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-5dd8c8746d-r25sr"] Jan 31 04:36:22 crc kubenswrapper[4827]: I0131 04:36:22.036446 4827 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/manila-db-create-p5kdd" Jan 31 04:36:22 crc kubenswrapper[4827]: I0131 04:36:22.119584 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/735db063-6703-4f20-9d2f-aa79c7c56855-operator-scripts\") pod \"735db063-6703-4f20-9d2f-aa79c7c56855\" (UID: \"735db063-6703-4f20-9d2f-aa79c7c56855\") " Jan 31 04:36:22 crc kubenswrapper[4827]: I0131 04:36:22.119971 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-485x5\" (UniqueName: \"kubernetes.io/projected/735db063-6703-4f20-9d2f-aa79c7c56855-kube-api-access-485x5\") pod \"735db063-6703-4f20-9d2f-aa79c7c56855\" (UID: \"735db063-6703-4f20-9d2f-aa79c7c56855\") " Jan 31 04:36:22 crc kubenswrapper[4827]: I0131 04:36:22.120254 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2hvfc\" (UniqueName: \"kubernetes.io/projected/19b64bcf-afad-4b01-8d57-1c1b56bb170f-kube-api-access-2hvfc\") pod \"horizon-5dd8c8746d-r25sr\" (UID: \"19b64bcf-afad-4b01-8d57-1c1b56bb170f\") " pod="openstack/horizon-5dd8c8746d-r25sr" Jan 31 04:36:22 crc kubenswrapper[4827]: I0131 04:36:22.120264 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/735db063-6703-4f20-9d2f-aa79c7c56855-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "735db063-6703-4f20-9d2f-aa79c7c56855" (UID: "735db063-6703-4f20-9d2f-aa79c7c56855"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 04:36:22 crc kubenswrapper[4827]: I0131 04:36:22.120314 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/19b64bcf-afad-4b01-8d57-1c1b56bb170f-horizon-secret-key\") pod \"horizon-5dd8c8746d-r25sr\" (UID: \"19b64bcf-afad-4b01-8d57-1c1b56bb170f\") " pod="openstack/horizon-5dd8c8746d-r25sr" Jan 31 04:36:22 crc kubenswrapper[4827]: I0131 04:36:22.120341 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/19b64bcf-afad-4b01-8d57-1c1b56bb170f-combined-ca-bundle\") pod \"horizon-5dd8c8746d-r25sr\" (UID: \"19b64bcf-afad-4b01-8d57-1c1b56bb170f\") " pod="openstack/horizon-5dd8c8746d-r25sr" Jan 31 04:36:22 crc kubenswrapper[4827]: I0131 04:36:22.120389 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/19b64bcf-afad-4b01-8d57-1c1b56bb170f-horizon-tls-certs\") pod \"horizon-5dd8c8746d-r25sr\" (UID: \"19b64bcf-afad-4b01-8d57-1c1b56bb170f\") " pod="openstack/horizon-5dd8c8746d-r25sr" Jan 31 04:36:22 crc kubenswrapper[4827]: I0131 04:36:22.120428 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/19b64bcf-afad-4b01-8d57-1c1b56bb170f-logs\") pod \"horizon-5dd8c8746d-r25sr\" (UID: \"19b64bcf-afad-4b01-8d57-1c1b56bb170f\") " pod="openstack/horizon-5dd8c8746d-r25sr" Jan 31 04:36:22 crc kubenswrapper[4827]: I0131 04:36:22.120450 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/19b64bcf-afad-4b01-8d57-1c1b56bb170f-scripts\") pod \"horizon-5dd8c8746d-r25sr\" (UID: \"19b64bcf-afad-4b01-8d57-1c1b56bb170f\") " 
pod="openstack/horizon-5dd8c8746d-r25sr" Jan 31 04:36:22 crc kubenswrapper[4827]: I0131 04:36:22.120486 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/19b64bcf-afad-4b01-8d57-1c1b56bb170f-config-data\") pod \"horizon-5dd8c8746d-r25sr\" (UID: \"19b64bcf-afad-4b01-8d57-1c1b56bb170f\") " pod="openstack/horizon-5dd8c8746d-r25sr" Jan 31 04:36:22 crc kubenswrapper[4827]: I0131 04:36:22.120560 4827 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/735db063-6703-4f20-9d2f-aa79c7c56855-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 31 04:36:22 crc kubenswrapper[4827]: I0131 04:36:22.121062 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-cdd9cb94b-xgkxs" Jan 31 04:36:22 crc kubenswrapper[4827]: I0131 04:36:22.132245 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/735db063-6703-4f20-9d2f-aa79c7c56855-kube-api-access-485x5" (OuterVolumeSpecName: "kube-api-access-485x5") pod "735db063-6703-4f20-9d2f-aa79c7c56855" (UID: "735db063-6703-4f20-9d2f-aa79c7c56855"). InnerVolumeSpecName "kube-api-access-485x5". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 04:36:22 crc kubenswrapper[4827]: I0131 04:36:22.222912 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2hvfc\" (UniqueName: \"kubernetes.io/projected/19b64bcf-afad-4b01-8d57-1c1b56bb170f-kube-api-access-2hvfc\") pod \"horizon-5dd8c8746d-r25sr\" (UID: \"19b64bcf-afad-4b01-8d57-1c1b56bb170f\") " pod="openstack/horizon-5dd8c8746d-r25sr" Jan 31 04:36:22 crc kubenswrapper[4827]: I0131 04:36:22.222994 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/19b64bcf-afad-4b01-8d57-1c1b56bb170f-horizon-secret-key\") pod \"horizon-5dd8c8746d-r25sr\" (UID: \"19b64bcf-afad-4b01-8d57-1c1b56bb170f\") " pod="openstack/horizon-5dd8c8746d-r25sr" Jan 31 04:36:22 crc kubenswrapper[4827]: I0131 04:36:22.223049 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/19b64bcf-afad-4b01-8d57-1c1b56bb170f-combined-ca-bundle\") pod \"horizon-5dd8c8746d-r25sr\" (UID: \"19b64bcf-afad-4b01-8d57-1c1b56bb170f\") " pod="openstack/horizon-5dd8c8746d-r25sr" Jan 31 04:36:22 crc kubenswrapper[4827]: I0131 04:36:22.223147 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/19b64bcf-afad-4b01-8d57-1c1b56bb170f-horizon-tls-certs\") pod \"horizon-5dd8c8746d-r25sr\" (UID: \"19b64bcf-afad-4b01-8d57-1c1b56bb170f\") " pod="openstack/horizon-5dd8c8746d-r25sr" Jan 31 04:36:22 crc kubenswrapper[4827]: I0131 04:36:22.223219 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/19b64bcf-afad-4b01-8d57-1c1b56bb170f-logs\") pod \"horizon-5dd8c8746d-r25sr\" (UID: \"19b64bcf-afad-4b01-8d57-1c1b56bb170f\") " pod="openstack/horizon-5dd8c8746d-r25sr" Jan 31 04:36:22 crc 
kubenswrapper[4827]: I0131 04:36:22.223261 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/19b64bcf-afad-4b01-8d57-1c1b56bb170f-scripts\") pod \"horizon-5dd8c8746d-r25sr\" (UID: \"19b64bcf-afad-4b01-8d57-1c1b56bb170f\") " pod="openstack/horizon-5dd8c8746d-r25sr" Jan 31 04:36:22 crc kubenswrapper[4827]: I0131 04:36:22.223351 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/19b64bcf-afad-4b01-8d57-1c1b56bb170f-config-data\") pod \"horizon-5dd8c8746d-r25sr\" (UID: \"19b64bcf-afad-4b01-8d57-1c1b56bb170f\") " pod="openstack/horizon-5dd8c8746d-r25sr" Jan 31 04:36:22 crc kubenswrapper[4827]: I0131 04:36:22.224196 4827 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-485x5\" (UniqueName: \"kubernetes.io/projected/735db063-6703-4f20-9d2f-aa79c7c56855-kube-api-access-485x5\") on node \"crc\" DevicePath \"\"" Jan 31 04:36:22 crc kubenswrapper[4827]: I0131 04:36:22.226839 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/19b64bcf-afad-4b01-8d57-1c1b56bb170f-config-data\") pod \"horizon-5dd8c8746d-r25sr\" (UID: \"19b64bcf-afad-4b01-8d57-1c1b56bb170f\") " pod="openstack/horizon-5dd8c8746d-r25sr" Jan 31 04:36:22 crc kubenswrapper[4827]: I0131 04:36:22.234172 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/19b64bcf-afad-4b01-8d57-1c1b56bb170f-logs\") pod \"horizon-5dd8c8746d-r25sr\" (UID: \"19b64bcf-afad-4b01-8d57-1c1b56bb170f\") " pod="openstack/horizon-5dd8c8746d-r25sr" Jan 31 04:36:22 crc kubenswrapper[4827]: I0131 04:36:22.236292 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/19b64bcf-afad-4b01-8d57-1c1b56bb170f-scripts\") pod \"horizon-5dd8c8746d-r25sr\" (UID: 
\"19b64bcf-afad-4b01-8d57-1c1b56bb170f\") " pod="openstack/horizon-5dd8c8746d-r25sr" Jan 31 04:36:22 crc kubenswrapper[4827]: I0131 04:36:22.243936 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/19b64bcf-afad-4b01-8d57-1c1b56bb170f-horizon-tls-certs\") pod \"horizon-5dd8c8746d-r25sr\" (UID: \"19b64bcf-afad-4b01-8d57-1c1b56bb170f\") " pod="openstack/horizon-5dd8c8746d-r25sr" Jan 31 04:36:22 crc kubenswrapper[4827]: I0131 04:36:22.253633 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/19b64bcf-afad-4b01-8d57-1c1b56bb170f-combined-ca-bundle\") pod \"horizon-5dd8c8746d-r25sr\" (UID: \"19b64bcf-afad-4b01-8d57-1c1b56bb170f\") " pod="openstack/horizon-5dd8c8746d-r25sr" Jan 31 04:36:22 crc kubenswrapper[4827]: I0131 04:36:22.268978 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/19b64bcf-afad-4b01-8d57-1c1b56bb170f-horizon-secret-key\") pod \"horizon-5dd8c8746d-r25sr\" (UID: \"19b64bcf-afad-4b01-8d57-1c1b56bb170f\") " pod="openstack/horizon-5dd8c8746d-r25sr" Jan 31 04:36:22 crc kubenswrapper[4827]: I0131 04:36:22.274771 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2hvfc\" (UniqueName: \"kubernetes.io/projected/19b64bcf-afad-4b01-8d57-1c1b56bb170f-kube-api-access-2hvfc\") pod \"horizon-5dd8c8746d-r25sr\" (UID: \"19b64bcf-afad-4b01-8d57-1c1b56bb170f\") " pod="openstack/horizon-5dd8c8746d-r25sr" Jan 31 04:36:22 crc kubenswrapper[4827]: I0131 04:36:22.334298 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"73954232-88c2-4d80-aa50-5352f16b43a2","Type":"ContainerStarted","Data":"81f262b14526ec4b335befdf8a219c83df7a28f9569ea2b91a2f49f227019aef"} Jan 31 04:36:22 crc kubenswrapper[4827]: I0131 04:36:22.334657 4827 
kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="73954232-88c2-4d80-aa50-5352f16b43a2" containerName="glance-log" containerID="cri-o://fd1f610d95f925a3fd7d9e31be14b8e1ac8bf0642067d01b9d113830c8a8b11e" gracePeriod=30 Jan 31 04:36:22 crc kubenswrapper[4827]: I0131 04:36:22.334738 4827 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="73954232-88c2-4d80-aa50-5352f16b43a2" containerName="glance-httpd" containerID="cri-o://81f262b14526ec4b335befdf8a219c83df7a28f9569ea2b91a2f49f227019aef" gracePeriod=30 Jan 31 04:36:22 crc kubenswrapper[4827]: I0131 04:36:22.353226 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-5dd8c8746d-r25sr" Jan 31 04:36:22 crc kubenswrapper[4827]: I0131 04:36:22.370763 4827 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=4.370745251 podStartE2EDuration="4.370745251s" podCreationTimestamp="2026-01-31 04:36:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 04:36:22.36879257 +0000 UTC m=+2975.055873019" watchObservedRunningTime="2026-01-31 04:36:22.370745251 +0000 UTC m=+2975.057825690" Jan 31 04:36:22 crc kubenswrapper[4827]: I0131 04:36:22.388405 4827 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="2f208a22-4c88-450c-b37c-882006a3cf68" containerName="glance-log" containerID="cri-o://f54c722c7f395031130c4775b7afc26dca657e75219f94c27f9a7dfff6e149ff" gracePeriod=30 Jan 31 04:36:22 crc kubenswrapper[4827]: I0131 04:36:22.389146 4827 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="2f208a22-4c88-450c-b37c-882006a3cf68" 
containerName="glance-httpd" containerID="cri-o://6fd118e08382ff8a14d560154bc67f435fb5b34489eb5b9b6bf26f808445308f" gracePeriod=30 Jan 31 04:36:22 crc kubenswrapper[4827]: I0131 04:36:22.389385 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"2f208a22-4c88-450c-b37c-882006a3cf68","Type":"ContainerStarted","Data":"6fd118e08382ff8a14d560154bc67f435fb5b34489eb5b9b6bf26f808445308f"} Jan 31 04:36:22 crc kubenswrapper[4827]: I0131 04:36:22.395091 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-db-create-p5kdd" event={"ID":"735db063-6703-4f20-9d2f-aa79c7c56855","Type":"ContainerDied","Data":"4d4f077fee6b99c885eab19abf29ddfc35dae8d6d6e40e069d0a9861cc0d31df"} Jan 31 04:36:22 crc kubenswrapper[4827]: I0131 04:36:22.395155 4827 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4d4f077fee6b99c885eab19abf29ddfc35dae8d6d6e40e069d0a9861cc0d31df" Jan 31 04:36:22 crc kubenswrapper[4827]: I0131 04:36:22.395223 4827 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/manila-db-create-p5kdd" Jan 31 04:36:22 crc kubenswrapper[4827]: I0131 04:36:22.426914 4827 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=4.42689289 podStartE2EDuration="4.42689289s" podCreationTimestamp="2026-01-31 04:36:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 04:36:22.417494912 +0000 UTC m=+2975.104575361" watchObservedRunningTime="2026-01-31 04:36:22.42689289 +0000 UTC m=+2975.113973339" Jan 31 04:36:22 crc kubenswrapper[4827]: I0131 04:36:22.650568 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-cdd9cb94b-xgkxs"] Jan 31 04:36:22 crc kubenswrapper[4827]: I0131 04:36:22.776508 4827 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/manila-b6b4-account-create-update-gpt9p" Jan 31 04:36:22 crc kubenswrapper[4827]: I0131 04:36:22.939498 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9lhcc\" (UniqueName: \"kubernetes.io/projected/b79385e0-f8b6-49d3-a1de-8a61ee7e52b4-kube-api-access-9lhcc\") pod \"b79385e0-f8b6-49d3-a1de-8a61ee7e52b4\" (UID: \"b79385e0-f8b6-49d3-a1de-8a61ee7e52b4\") " Jan 31 04:36:22 crc kubenswrapper[4827]: I0131 04:36:22.939776 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b79385e0-f8b6-49d3-a1de-8a61ee7e52b4-operator-scripts\") pod \"b79385e0-f8b6-49d3-a1de-8a61ee7e52b4\" (UID: \"b79385e0-f8b6-49d3-a1de-8a61ee7e52b4\") " Jan 31 04:36:22 crc kubenswrapper[4827]: I0131 04:36:22.941487 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b79385e0-f8b6-49d3-a1de-8a61ee7e52b4-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod 
"b79385e0-f8b6-49d3-a1de-8a61ee7e52b4" (UID: "b79385e0-f8b6-49d3-a1de-8a61ee7e52b4"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 04:36:22 crc kubenswrapper[4827]: I0131 04:36:22.950061 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b79385e0-f8b6-49d3-a1de-8a61ee7e52b4-kube-api-access-9lhcc" (OuterVolumeSpecName: "kube-api-access-9lhcc") pod "b79385e0-f8b6-49d3-a1de-8a61ee7e52b4" (UID: "b79385e0-f8b6-49d3-a1de-8a61ee7e52b4"). InnerVolumeSpecName "kube-api-access-9lhcc". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 04:36:23 crc kubenswrapper[4827]: I0131 04:36:23.041736 4827 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9lhcc\" (UniqueName: \"kubernetes.io/projected/b79385e0-f8b6-49d3-a1de-8a61ee7e52b4-kube-api-access-9lhcc\") on node \"crc\" DevicePath \"\"" Jan 31 04:36:23 crc kubenswrapper[4827]: I0131 04:36:23.042053 4827 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b79385e0-f8b6-49d3-a1de-8a61ee7e52b4-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 31 04:36:23 crc kubenswrapper[4827]: I0131 04:36:23.055844 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-5dd8c8746d-r25sr"] Jan 31 04:36:23 crc kubenswrapper[4827]: I0131 04:36:23.090396 4827 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 31 04:36:23 crc kubenswrapper[4827]: I0131 04:36:23.245097 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/73954232-88c2-4d80-aa50-5352f16b43a2-internal-tls-certs\") pod \"73954232-88c2-4d80-aa50-5352f16b43a2\" (UID: \"73954232-88c2-4d80-aa50-5352f16b43a2\") " Jan 31 04:36:23 crc kubenswrapper[4827]: I0131 04:36:23.245163 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/73954232-88c2-4d80-aa50-5352f16b43a2-config-data\") pod \"73954232-88c2-4d80-aa50-5352f16b43a2\" (UID: \"73954232-88c2-4d80-aa50-5352f16b43a2\") " Jan 31 04:36:23 crc kubenswrapper[4827]: I0131 04:36:23.245194 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/73954232-88c2-4d80-aa50-5352f16b43a2-ceph\") pod \"73954232-88c2-4d80-aa50-5352f16b43a2\" (UID: \"73954232-88c2-4d80-aa50-5352f16b43a2\") " Jan 31 04:36:23 crc kubenswrapper[4827]: I0131 04:36:23.245226 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/73954232-88c2-4d80-aa50-5352f16b43a2-scripts\") pod \"73954232-88c2-4d80-aa50-5352f16b43a2\" (UID: \"73954232-88c2-4d80-aa50-5352f16b43a2\") " Jan 31 04:36:23 crc kubenswrapper[4827]: I0131 04:36:23.245249 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/73954232-88c2-4d80-aa50-5352f16b43a2-combined-ca-bundle\") pod \"73954232-88c2-4d80-aa50-5352f16b43a2\" (UID: \"73954232-88c2-4d80-aa50-5352f16b43a2\") " Jan 31 04:36:23 crc kubenswrapper[4827]: I0131 04:36:23.245304 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: 
\"kubernetes.io/empty-dir/73954232-88c2-4d80-aa50-5352f16b43a2-logs\") pod \"73954232-88c2-4d80-aa50-5352f16b43a2\" (UID: \"73954232-88c2-4d80-aa50-5352f16b43a2\") " Jan 31 04:36:23 crc kubenswrapper[4827]: I0131 04:36:23.245326 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/73954232-88c2-4d80-aa50-5352f16b43a2-httpd-run\") pod \"73954232-88c2-4d80-aa50-5352f16b43a2\" (UID: \"73954232-88c2-4d80-aa50-5352f16b43a2\") " Jan 31 04:36:23 crc kubenswrapper[4827]: I0131 04:36:23.245401 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fb5rb\" (UniqueName: \"kubernetes.io/projected/73954232-88c2-4d80-aa50-5352f16b43a2-kube-api-access-fb5rb\") pod \"73954232-88c2-4d80-aa50-5352f16b43a2\" (UID: \"73954232-88c2-4d80-aa50-5352f16b43a2\") " Jan 31 04:36:23 crc kubenswrapper[4827]: I0131 04:36:23.245416 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"73954232-88c2-4d80-aa50-5352f16b43a2\" (UID: \"73954232-88c2-4d80-aa50-5352f16b43a2\") " Jan 31 04:36:23 crc kubenswrapper[4827]: I0131 04:36:23.246479 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/73954232-88c2-4d80-aa50-5352f16b43a2-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "73954232-88c2-4d80-aa50-5352f16b43a2" (UID: "73954232-88c2-4d80-aa50-5352f16b43a2"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 04:36:23 crc kubenswrapper[4827]: I0131 04:36:23.249435 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/73954232-88c2-4d80-aa50-5352f16b43a2-logs" (OuterVolumeSpecName: "logs") pod "73954232-88c2-4d80-aa50-5352f16b43a2" (UID: "73954232-88c2-4d80-aa50-5352f16b43a2"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 04:36:23 crc kubenswrapper[4827]: I0131 04:36:23.252080 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/73954232-88c2-4d80-aa50-5352f16b43a2-scripts" (OuterVolumeSpecName: "scripts") pod "73954232-88c2-4d80-aa50-5352f16b43a2" (UID: "73954232-88c2-4d80-aa50-5352f16b43a2"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 04:36:23 crc kubenswrapper[4827]: I0131 04:36:23.258102 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/73954232-88c2-4d80-aa50-5352f16b43a2-kube-api-access-fb5rb" (OuterVolumeSpecName: "kube-api-access-fb5rb") pod "73954232-88c2-4d80-aa50-5352f16b43a2" (UID: "73954232-88c2-4d80-aa50-5352f16b43a2"). InnerVolumeSpecName "kube-api-access-fb5rb". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 04:36:23 crc kubenswrapper[4827]: I0131 04:36:23.260825 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/73954232-88c2-4d80-aa50-5352f16b43a2-ceph" (OuterVolumeSpecName: "ceph") pod "73954232-88c2-4d80-aa50-5352f16b43a2" (UID: "73954232-88c2-4d80-aa50-5352f16b43a2"). InnerVolumeSpecName "ceph". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 04:36:23 crc kubenswrapper[4827]: I0131 04:36:23.266404 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage01-crc" (OuterVolumeSpecName: "glance") pod "73954232-88c2-4d80-aa50-5352f16b43a2" (UID: "73954232-88c2-4d80-aa50-5352f16b43a2"). InnerVolumeSpecName "local-storage01-crc". 
PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 31 04:36:23 crc kubenswrapper[4827]: I0131 04:36:23.283441 4827 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-volume-volume1-0" Jan 31 04:36:23 crc kubenswrapper[4827]: I0131 04:36:23.289905 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/73954232-88c2-4d80-aa50-5352f16b43a2-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "73954232-88c2-4d80-aa50-5352f16b43a2" (UID: "73954232-88c2-4d80-aa50-5352f16b43a2"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 04:36:23 crc kubenswrapper[4827]: I0131 04:36:23.300046 4827 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-backup-0" Jan 31 04:36:23 crc kubenswrapper[4827]: I0131 04:36:23.312813 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/73954232-88c2-4d80-aa50-5352f16b43a2-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "73954232-88c2-4d80-aa50-5352f16b43a2" (UID: "73954232-88c2-4d80-aa50-5352f16b43a2"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 04:36:23 crc kubenswrapper[4827]: I0131 04:36:23.335453 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/73954232-88c2-4d80-aa50-5352f16b43a2-config-data" (OuterVolumeSpecName: "config-data") pod "73954232-88c2-4d80-aa50-5352f16b43a2" (UID: "73954232-88c2-4d80-aa50-5352f16b43a2"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 04:36:23 crc kubenswrapper[4827]: I0131 04:36:23.348605 4827 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fb5rb\" (UniqueName: \"kubernetes.io/projected/73954232-88c2-4d80-aa50-5352f16b43a2-kube-api-access-fb5rb\") on node \"crc\" DevicePath \"\"" Jan 31 04:36:23 crc kubenswrapper[4827]: I0131 04:36:23.348662 4827 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") on node \"crc\" " Jan 31 04:36:23 crc kubenswrapper[4827]: I0131 04:36:23.348673 4827 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/73954232-88c2-4d80-aa50-5352f16b43a2-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 31 04:36:23 crc kubenswrapper[4827]: I0131 04:36:23.348683 4827 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/73954232-88c2-4d80-aa50-5352f16b43a2-config-data\") on node \"crc\" DevicePath \"\"" Jan 31 04:36:23 crc kubenswrapper[4827]: I0131 04:36:23.348693 4827 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/73954232-88c2-4d80-aa50-5352f16b43a2-ceph\") on node \"crc\" DevicePath \"\"" Jan 31 04:36:23 crc kubenswrapper[4827]: I0131 04:36:23.348702 4827 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/73954232-88c2-4d80-aa50-5352f16b43a2-scripts\") on node \"crc\" DevicePath \"\"" Jan 31 04:36:23 crc kubenswrapper[4827]: I0131 04:36:23.348710 4827 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/73954232-88c2-4d80-aa50-5352f16b43a2-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 31 04:36:23 crc kubenswrapper[4827]: I0131 04:36:23.348721 4827 reconciler_common.go:293] "Volume 
detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/73954232-88c2-4d80-aa50-5352f16b43a2-logs\") on node \"crc\" DevicePath \"\"" Jan 31 04:36:23 crc kubenswrapper[4827]: I0131 04:36:23.348729 4827 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/73954232-88c2-4d80-aa50-5352f16b43a2-httpd-run\") on node \"crc\" DevicePath \"\"" Jan 31 04:36:23 crc kubenswrapper[4827]: I0131 04:36:23.367971 4827 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage01-crc" (UniqueName: "kubernetes.io/local-volume/local-storage01-crc") on node "crc" Jan 31 04:36:23 crc kubenswrapper[4827]: I0131 04:36:23.404541 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-cdd9cb94b-xgkxs" event={"ID":"e277e6d3-f889-425a-abd6-3344f860bfd9","Type":"ContainerStarted","Data":"1a7c8377c57db3e9e78ec3820e800e7c81894e798171b227f8dbfce901970d74"} Jan 31 04:36:23 crc kubenswrapper[4827]: I0131 04:36:23.405959 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-5dd8c8746d-r25sr" event={"ID":"19b64bcf-afad-4b01-8d57-1c1b56bb170f","Type":"ContainerStarted","Data":"b7269df72405105555ac4a652511673c3c279b9a3d42b9aee4dbdfb9b9cb06c3"} Jan 31 04:36:23 crc kubenswrapper[4827]: I0131 04:36:23.408350 4827 generic.go:334] "Generic (PLEG): container finished" podID="2f208a22-4c88-450c-b37c-882006a3cf68" containerID="6fd118e08382ff8a14d560154bc67f435fb5b34489eb5b9b6bf26f808445308f" exitCode=143 Jan 31 04:36:23 crc kubenswrapper[4827]: I0131 04:36:23.408384 4827 generic.go:334] "Generic (PLEG): container finished" podID="2f208a22-4c88-450c-b37c-882006a3cf68" containerID="f54c722c7f395031130c4775b7afc26dca657e75219f94c27f9a7dfff6e149ff" exitCode=143 Jan 31 04:36:23 crc kubenswrapper[4827]: I0131 04:36:23.408423 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" 
event={"ID":"2f208a22-4c88-450c-b37c-882006a3cf68","Type":"ContainerDied","Data":"6fd118e08382ff8a14d560154bc67f435fb5b34489eb5b9b6bf26f808445308f"} Jan 31 04:36:23 crc kubenswrapper[4827]: I0131 04:36:23.408447 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"2f208a22-4c88-450c-b37c-882006a3cf68","Type":"ContainerDied","Data":"f54c722c7f395031130c4775b7afc26dca657e75219f94c27f9a7dfff6e149ff"} Jan 31 04:36:23 crc kubenswrapper[4827]: I0131 04:36:23.412958 4827 generic.go:334] "Generic (PLEG): container finished" podID="73954232-88c2-4d80-aa50-5352f16b43a2" containerID="81f262b14526ec4b335befdf8a219c83df7a28f9569ea2b91a2f49f227019aef" exitCode=143 Jan 31 04:36:23 crc kubenswrapper[4827]: I0131 04:36:23.412977 4827 generic.go:334] "Generic (PLEG): container finished" podID="73954232-88c2-4d80-aa50-5352f16b43a2" containerID="fd1f610d95f925a3fd7d9e31be14b8e1ac8bf0642067d01b9d113830c8a8b11e" exitCode=143 Jan 31 04:36:23 crc kubenswrapper[4827]: I0131 04:36:23.413039 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"73954232-88c2-4d80-aa50-5352f16b43a2","Type":"ContainerDied","Data":"81f262b14526ec4b335befdf8a219c83df7a28f9569ea2b91a2f49f227019aef"} Jan 31 04:36:23 crc kubenswrapper[4827]: I0131 04:36:23.413095 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"73954232-88c2-4d80-aa50-5352f16b43a2","Type":"ContainerDied","Data":"fd1f610d95f925a3fd7d9e31be14b8e1ac8bf0642067d01b9d113830c8a8b11e"} Jan 31 04:36:23 crc kubenswrapper[4827]: I0131 04:36:23.413040 4827 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 31 04:36:23 crc kubenswrapper[4827]: I0131 04:36:23.413130 4827 scope.go:117] "RemoveContainer" containerID="81f262b14526ec4b335befdf8a219c83df7a28f9569ea2b91a2f49f227019aef" Jan 31 04:36:23 crc kubenswrapper[4827]: I0131 04:36:23.413120 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"73954232-88c2-4d80-aa50-5352f16b43a2","Type":"ContainerDied","Data":"618722e304fe5b35ac446d2b97a202dcec9abee17069b60106184a2caa670cb6"} Jan 31 04:36:23 crc kubenswrapper[4827]: I0131 04:36:23.417020 4827 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/manila-b6b4-account-create-update-gpt9p" Jan 31 04:36:23 crc kubenswrapper[4827]: I0131 04:36:23.417095 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-b6b4-account-create-update-gpt9p" event={"ID":"b79385e0-f8b6-49d3-a1de-8a61ee7e52b4","Type":"ContainerDied","Data":"55d975bd045c9caf46b7d6ebcaa1da543793bdb9bc740f0f513b091329f0907a"} Jan 31 04:36:23 crc kubenswrapper[4827]: I0131 04:36:23.417122 4827 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="55d975bd045c9caf46b7d6ebcaa1da543793bdb9bc740f0f513b091329f0907a" Jan 31 04:36:23 crc kubenswrapper[4827]: I0131 04:36:23.449991 4827 reconciler_common.go:293] "Volume detached for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") on node \"crc\" DevicePath \"\"" Jan 31 04:36:23 crc kubenswrapper[4827]: I0131 04:36:23.460232 4827 scope.go:117] "RemoveContainer" containerID="fd1f610d95f925a3fd7d9e31be14b8e1ac8bf0642067d01b9d113830c8a8b11e" Jan 31 04:36:23 crc kubenswrapper[4827]: I0131 04:36:23.468080 4827 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 31 04:36:23 crc kubenswrapper[4827]: I0131 04:36:23.480629 4827 kubelet.go:2431] "SyncLoop REMOVE" 
source="api" pods=["openstack/glance-default-internal-api-0"] Jan 31 04:36:23 crc kubenswrapper[4827]: I0131 04:36:23.493479 4827 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 31 04:36:23 crc kubenswrapper[4827]: E0131 04:36:23.494124 4827 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="735db063-6703-4f20-9d2f-aa79c7c56855" containerName="mariadb-database-create" Jan 31 04:36:23 crc kubenswrapper[4827]: I0131 04:36:23.494143 4827 state_mem.go:107] "Deleted CPUSet assignment" podUID="735db063-6703-4f20-9d2f-aa79c7c56855" containerName="mariadb-database-create" Jan 31 04:36:23 crc kubenswrapper[4827]: E0131 04:36:23.494160 4827 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="73954232-88c2-4d80-aa50-5352f16b43a2" containerName="glance-log" Jan 31 04:36:23 crc kubenswrapper[4827]: I0131 04:36:23.494166 4827 state_mem.go:107] "Deleted CPUSet assignment" podUID="73954232-88c2-4d80-aa50-5352f16b43a2" containerName="glance-log" Jan 31 04:36:23 crc kubenswrapper[4827]: E0131 04:36:23.494196 4827 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b79385e0-f8b6-49d3-a1de-8a61ee7e52b4" containerName="mariadb-account-create-update" Jan 31 04:36:23 crc kubenswrapper[4827]: I0131 04:36:23.494203 4827 state_mem.go:107] "Deleted CPUSet assignment" podUID="b79385e0-f8b6-49d3-a1de-8a61ee7e52b4" containerName="mariadb-account-create-update" Jan 31 04:36:23 crc kubenswrapper[4827]: E0131 04:36:23.494217 4827 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="73954232-88c2-4d80-aa50-5352f16b43a2" containerName="glance-httpd" Jan 31 04:36:23 crc kubenswrapper[4827]: I0131 04:36:23.494222 4827 state_mem.go:107] "Deleted CPUSet assignment" podUID="73954232-88c2-4d80-aa50-5352f16b43a2" containerName="glance-httpd" Jan 31 04:36:23 crc kubenswrapper[4827]: I0131 04:36:23.494384 4827 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="b79385e0-f8b6-49d3-a1de-8a61ee7e52b4" containerName="mariadb-account-create-update" Jan 31 04:36:23 crc kubenswrapper[4827]: I0131 04:36:23.494401 4827 memory_manager.go:354] "RemoveStaleState removing state" podUID="73954232-88c2-4d80-aa50-5352f16b43a2" containerName="glance-log" Jan 31 04:36:23 crc kubenswrapper[4827]: I0131 04:36:23.494418 4827 memory_manager.go:354] "RemoveStaleState removing state" podUID="73954232-88c2-4d80-aa50-5352f16b43a2" containerName="glance-httpd" Jan 31 04:36:23 crc kubenswrapper[4827]: I0131 04:36:23.494429 4827 memory_manager.go:354] "RemoveStaleState removing state" podUID="735db063-6703-4f20-9d2f-aa79c7c56855" containerName="mariadb-database-create" Jan 31 04:36:23 crc kubenswrapper[4827]: I0131 04:36:23.495330 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 31 04:36:23 crc kubenswrapper[4827]: I0131 04:36:23.496862 4827 scope.go:117] "RemoveContainer" containerID="81f262b14526ec4b335befdf8a219c83df7a28f9569ea2b91a2f49f227019aef" Jan 31 04:36:23 crc kubenswrapper[4827]: E0131 04:36:23.497419 4827 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"81f262b14526ec4b335befdf8a219c83df7a28f9569ea2b91a2f49f227019aef\": container with ID starting with 81f262b14526ec4b335befdf8a219c83df7a28f9569ea2b91a2f49f227019aef not found: ID does not exist" containerID="81f262b14526ec4b335befdf8a219c83df7a28f9569ea2b91a2f49f227019aef" Jan 31 04:36:23 crc kubenswrapper[4827]: I0131 04:36:23.497452 4827 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"81f262b14526ec4b335befdf8a219c83df7a28f9569ea2b91a2f49f227019aef"} err="failed to get container status \"81f262b14526ec4b335befdf8a219c83df7a28f9569ea2b91a2f49f227019aef\": rpc error: code = NotFound desc = could not find container \"81f262b14526ec4b335befdf8a219c83df7a28f9569ea2b91a2f49f227019aef\": 
container with ID starting with 81f262b14526ec4b335befdf8a219c83df7a28f9569ea2b91a2f49f227019aef not found: ID does not exist" Jan 31 04:36:23 crc kubenswrapper[4827]: I0131 04:36:23.497475 4827 scope.go:117] "RemoveContainer" containerID="fd1f610d95f925a3fd7d9e31be14b8e1ac8bf0642067d01b9d113830c8a8b11e" Jan 31 04:36:23 crc kubenswrapper[4827]: I0131 04:36:23.497652 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-internal-svc" Jan 31 04:36:23 crc kubenswrapper[4827]: I0131 04:36:23.497965 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Jan 31 04:36:23 crc kubenswrapper[4827]: E0131 04:36:23.504178 4827 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fd1f610d95f925a3fd7d9e31be14b8e1ac8bf0642067d01b9d113830c8a8b11e\": container with ID starting with fd1f610d95f925a3fd7d9e31be14b8e1ac8bf0642067d01b9d113830c8a8b11e not found: ID does not exist" containerID="fd1f610d95f925a3fd7d9e31be14b8e1ac8bf0642067d01b9d113830c8a8b11e" Jan 31 04:36:23 crc kubenswrapper[4827]: I0131 04:36:23.504211 4827 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fd1f610d95f925a3fd7d9e31be14b8e1ac8bf0642067d01b9d113830c8a8b11e"} err="failed to get container status \"fd1f610d95f925a3fd7d9e31be14b8e1ac8bf0642067d01b9d113830c8a8b11e\": rpc error: code = NotFound desc = could not find container \"fd1f610d95f925a3fd7d9e31be14b8e1ac8bf0642067d01b9d113830c8a8b11e\": container with ID starting with fd1f610d95f925a3fd7d9e31be14b8e1ac8bf0642067d01b9d113830c8a8b11e not found: ID does not exist" Jan 31 04:36:23 crc kubenswrapper[4827]: I0131 04:36:23.504241 4827 scope.go:117] "RemoveContainer" containerID="81f262b14526ec4b335befdf8a219c83df7a28f9569ea2b91a2f49f227019aef" Jan 31 04:36:23 crc kubenswrapper[4827]: I0131 04:36:23.505569 4827 pod_container_deletor.go:53] 
"DeleteContainer returned error" containerID={"Type":"cri-o","ID":"81f262b14526ec4b335befdf8a219c83df7a28f9569ea2b91a2f49f227019aef"} err="failed to get container status \"81f262b14526ec4b335befdf8a219c83df7a28f9569ea2b91a2f49f227019aef\": rpc error: code = NotFound desc = could not find container \"81f262b14526ec4b335befdf8a219c83df7a28f9569ea2b91a2f49f227019aef\": container with ID starting with 81f262b14526ec4b335befdf8a219c83df7a28f9569ea2b91a2f49f227019aef not found: ID does not exist" Jan 31 04:36:23 crc kubenswrapper[4827]: I0131 04:36:23.505621 4827 scope.go:117] "RemoveContainer" containerID="fd1f610d95f925a3fd7d9e31be14b8e1ac8bf0642067d01b9d113830c8a8b11e" Jan 31 04:36:23 crc kubenswrapper[4827]: I0131 04:36:23.508220 4827 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fd1f610d95f925a3fd7d9e31be14b8e1ac8bf0642067d01b9d113830c8a8b11e"} err="failed to get container status \"fd1f610d95f925a3fd7d9e31be14b8e1ac8bf0642067d01b9d113830c8a8b11e\": rpc error: code = NotFound desc = could not find container \"fd1f610d95f925a3fd7d9e31be14b8e1ac8bf0642067d01b9d113830c8a8b11e\": container with ID starting with fd1f610d95f925a3fd7d9e31be14b8e1ac8bf0642067d01b9d113830c8a8b11e not found: ID does not exist" Jan 31 04:36:23 crc kubenswrapper[4827]: I0131 04:36:23.511491 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 31 04:36:23 crc kubenswrapper[4827]: I0131 04:36:23.656697 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/07de9274-858d-4f45-bb4e-f064e62260c8-ceph\") pod \"glance-default-internal-api-0\" (UID: \"07de9274-858d-4f45-bb4e-f064e62260c8\") " pod="openstack/glance-default-internal-api-0" Jan 31 04:36:23 crc kubenswrapper[4827]: I0131 04:36:23.656785 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" 
(UniqueName: \"kubernetes.io/empty-dir/07de9274-858d-4f45-bb4e-f064e62260c8-logs\") pod \"glance-default-internal-api-0\" (UID: \"07de9274-858d-4f45-bb4e-f064e62260c8\") " pod="openstack/glance-default-internal-api-0" Jan 31 04:36:23 crc kubenswrapper[4827]: I0131 04:36:23.656863 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"glance-default-internal-api-0\" (UID: \"07de9274-858d-4f45-bb4e-f064e62260c8\") " pod="openstack/glance-default-internal-api-0" Jan 31 04:36:23 crc kubenswrapper[4827]: I0131 04:36:23.657150 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/07de9274-858d-4f45-bb4e-f064e62260c8-config-data\") pod \"glance-default-internal-api-0\" (UID: \"07de9274-858d-4f45-bb4e-f064e62260c8\") " pod="openstack/glance-default-internal-api-0" Jan 31 04:36:23 crc kubenswrapper[4827]: I0131 04:36:23.657209 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/07de9274-858d-4f45-bb4e-f064e62260c8-scripts\") pod \"glance-default-internal-api-0\" (UID: \"07de9274-858d-4f45-bb4e-f064e62260c8\") " pod="openstack/glance-default-internal-api-0" Jan 31 04:36:23 crc kubenswrapper[4827]: I0131 04:36:23.657273 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/07de9274-858d-4f45-bb4e-f064e62260c8-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"07de9274-858d-4f45-bb4e-f064e62260c8\") " pod="openstack/glance-default-internal-api-0" Jan 31 04:36:23 crc kubenswrapper[4827]: I0131 04:36:23.657491 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" 
(UniqueName: \"kubernetes.io/empty-dir/07de9274-858d-4f45-bb4e-f064e62260c8-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"07de9274-858d-4f45-bb4e-f064e62260c8\") " pod="openstack/glance-default-internal-api-0" Jan 31 04:36:23 crc kubenswrapper[4827]: I0131 04:36:23.657585 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/07de9274-858d-4f45-bb4e-f064e62260c8-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"07de9274-858d-4f45-bb4e-f064e62260c8\") " pod="openstack/glance-default-internal-api-0" Jan 31 04:36:23 crc kubenswrapper[4827]: I0131 04:36:23.657675 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wknwh\" (UniqueName: \"kubernetes.io/projected/07de9274-858d-4f45-bb4e-f064e62260c8-kube-api-access-wknwh\") pod \"glance-default-internal-api-0\" (UID: \"07de9274-858d-4f45-bb4e-f064e62260c8\") " pod="openstack/glance-default-internal-api-0" Jan 31 04:36:23 crc kubenswrapper[4827]: I0131 04:36:23.759442 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/07de9274-858d-4f45-bb4e-f064e62260c8-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"07de9274-858d-4f45-bb4e-f064e62260c8\") " pod="openstack/glance-default-internal-api-0" Jan 31 04:36:23 crc kubenswrapper[4827]: I0131 04:36:23.759494 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wknwh\" (UniqueName: \"kubernetes.io/projected/07de9274-858d-4f45-bb4e-f064e62260c8-kube-api-access-wknwh\") pod \"glance-default-internal-api-0\" (UID: \"07de9274-858d-4f45-bb4e-f064e62260c8\") " pod="openstack/glance-default-internal-api-0" Jan 31 04:36:23 crc kubenswrapper[4827]: I0131 04:36:23.759542 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"ceph\" (UniqueName: \"kubernetes.io/projected/07de9274-858d-4f45-bb4e-f064e62260c8-ceph\") pod \"glance-default-internal-api-0\" (UID: \"07de9274-858d-4f45-bb4e-f064e62260c8\") " pod="openstack/glance-default-internal-api-0" Jan 31 04:36:23 crc kubenswrapper[4827]: I0131 04:36:23.759562 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/07de9274-858d-4f45-bb4e-f064e62260c8-logs\") pod \"glance-default-internal-api-0\" (UID: \"07de9274-858d-4f45-bb4e-f064e62260c8\") " pod="openstack/glance-default-internal-api-0" Jan 31 04:36:23 crc kubenswrapper[4827]: I0131 04:36:23.759595 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"glance-default-internal-api-0\" (UID: \"07de9274-858d-4f45-bb4e-f064e62260c8\") " pod="openstack/glance-default-internal-api-0" Jan 31 04:36:23 crc kubenswrapper[4827]: I0131 04:36:23.759689 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/07de9274-858d-4f45-bb4e-f064e62260c8-config-data\") pod \"glance-default-internal-api-0\" (UID: \"07de9274-858d-4f45-bb4e-f064e62260c8\") " pod="openstack/glance-default-internal-api-0" Jan 31 04:36:23 crc kubenswrapper[4827]: I0131 04:36:23.759704 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/07de9274-858d-4f45-bb4e-f064e62260c8-scripts\") pod \"glance-default-internal-api-0\" (UID: \"07de9274-858d-4f45-bb4e-f064e62260c8\") " pod="openstack/glance-default-internal-api-0" Jan 31 04:36:23 crc kubenswrapper[4827]: I0131 04:36:23.759731 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/07de9274-858d-4f45-bb4e-f064e62260c8-combined-ca-bundle\") pod 
\"glance-default-internal-api-0\" (UID: \"07de9274-858d-4f45-bb4e-f064e62260c8\") " pod="openstack/glance-default-internal-api-0" Jan 31 04:36:23 crc kubenswrapper[4827]: I0131 04:36:23.759767 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/07de9274-858d-4f45-bb4e-f064e62260c8-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"07de9274-858d-4f45-bb4e-f064e62260c8\") " pod="openstack/glance-default-internal-api-0" Jan 31 04:36:23 crc kubenswrapper[4827]: I0131 04:36:23.760303 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/07de9274-858d-4f45-bb4e-f064e62260c8-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"07de9274-858d-4f45-bb4e-f064e62260c8\") " pod="openstack/glance-default-internal-api-0" Jan 31 04:36:23 crc kubenswrapper[4827]: I0131 04:36:23.760439 4827 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"glance-default-internal-api-0\" (UID: \"07de9274-858d-4f45-bb4e-f064e62260c8\") device mount path \"/mnt/openstack/pv01\"" pod="openstack/glance-default-internal-api-0" Jan 31 04:36:23 crc kubenswrapper[4827]: I0131 04:36:23.764420 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/07de9274-858d-4f45-bb4e-f064e62260c8-logs\") pod \"glance-default-internal-api-0\" (UID: \"07de9274-858d-4f45-bb4e-f064e62260c8\") " pod="openstack/glance-default-internal-api-0" Jan 31 04:36:23 crc kubenswrapper[4827]: I0131 04:36:23.765381 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/07de9274-858d-4f45-bb4e-f064e62260c8-config-data\") pod \"glance-default-internal-api-0\" (UID: \"07de9274-858d-4f45-bb4e-f064e62260c8\") " 
pod="openstack/glance-default-internal-api-0" Jan 31 04:36:23 crc kubenswrapper[4827]: I0131 04:36:23.768147 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/07de9274-858d-4f45-bb4e-f064e62260c8-scripts\") pod \"glance-default-internal-api-0\" (UID: \"07de9274-858d-4f45-bb4e-f064e62260c8\") " pod="openstack/glance-default-internal-api-0" Jan 31 04:36:23 crc kubenswrapper[4827]: I0131 04:36:23.770211 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/07de9274-858d-4f45-bb4e-f064e62260c8-ceph\") pod \"glance-default-internal-api-0\" (UID: \"07de9274-858d-4f45-bb4e-f064e62260c8\") " pod="openstack/glance-default-internal-api-0" Jan 31 04:36:23 crc kubenswrapper[4827]: I0131 04:36:23.770763 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/07de9274-858d-4f45-bb4e-f064e62260c8-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"07de9274-858d-4f45-bb4e-f064e62260c8\") " pod="openstack/glance-default-internal-api-0" Jan 31 04:36:23 crc kubenswrapper[4827]: I0131 04:36:23.781376 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/07de9274-858d-4f45-bb4e-f064e62260c8-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"07de9274-858d-4f45-bb4e-f064e62260c8\") " pod="openstack/glance-default-internal-api-0" Jan 31 04:36:23 crc kubenswrapper[4827]: I0131 04:36:23.787316 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wknwh\" (UniqueName: \"kubernetes.io/projected/07de9274-858d-4f45-bb4e-f064e62260c8-kube-api-access-wknwh\") pod \"glance-default-internal-api-0\" (UID: \"07de9274-858d-4f45-bb4e-f064e62260c8\") " pod="openstack/glance-default-internal-api-0" Jan 31 04:36:23 crc kubenswrapper[4827]: I0131 
04:36:23.817967 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"glance-default-internal-api-0\" (UID: \"07de9274-858d-4f45-bb4e-f064e62260c8\") " pod="openstack/glance-default-internal-api-0" Jan 31 04:36:23 crc kubenswrapper[4827]: I0131 04:36:23.871528 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 31 04:36:23 crc kubenswrapper[4827]: I0131 04:36:23.984138 4827 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 31 04:36:24 crc kubenswrapper[4827]: I0131 04:36:24.063973 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/2f208a22-4c88-450c-b37c-882006a3cf68-ceph\") pod \"2f208a22-4c88-450c-b37c-882006a3cf68\" (UID: \"2f208a22-4c88-450c-b37c-882006a3cf68\") " Jan 31 04:36:24 crc kubenswrapper[4827]: I0131 04:36:24.064055 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2f208a22-4c88-450c-b37c-882006a3cf68-combined-ca-bundle\") pod \"2f208a22-4c88-450c-b37c-882006a3cf68\" (UID: \"2f208a22-4c88-450c-b37c-882006a3cf68\") " Jan 31 04:36:24 crc kubenswrapper[4827]: I0131 04:36:24.064085 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2f208a22-4c88-450c-b37c-882006a3cf68-logs\") pod \"2f208a22-4c88-450c-b37c-882006a3cf68\" (UID: \"2f208a22-4c88-450c-b37c-882006a3cf68\") " Jan 31 04:36:24 crc kubenswrapper[4827]: I0131 04:36:24.064139 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-k4d52\" (UniqueName: \"kubernetes.io/projected/2f208a22-4c88-450c-b37c-882006a3cf68-kube-api-access-k4d52\") pod 
\"2f208a22-4c88-450c-b37c-882006a3cf68\" (UID: \"2f208a22-4c88-450c-b37c-882006a3cf68\") " Jan 31 04:36:24 crc kubenswrapper[4827]: I0131 04:36:24.064173 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"2f208a22-4c88-450c-b37c-882006a3cf68\" (UID: \"2f208a22-4c88-450c-b37c-882006a3cf68\") " Jan 31 04:36:24 crc kubenswrapper[4827]: I0131 04:36:24.064237 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/2f208a22-4c88-450c-b37c-882006a3cf68-httpd-run\") pod \"2f208a22-4c88-450c-b37c-882006a3cf68\" (UID: \"2f208a22-4c88-450c-b37c-882006a3cf68\") " Jan 31 04:36:24 crc kubenswrapper[4827]: I0131 04:36:24.064268 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2f208a22-4c88-450c-b37c-882006a3cf68-config-data\") pod \"2f208a22-4c88-450c-b37c-882006a3cf68\" (UID: \"2f208a22-4c88-450c-b37c-882006a3cf68\") " Jan 31 04:36:24 crc kubenswrapper[4827]: I0131 04:36:24.064308 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2f208a22-4c88-450c-b37c-882006a3cf68-scripts\") pod \"2f208a22-4c88-450c-b37c-882006a3cf68\" (UID: \"2f208a22-4c88-450c-b37c-882006a3cf68\") " Jan 31 04:36:24 crc kubenswrapper[4827]: I0131 04:36:24.064378 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/2f208a22-4c88-450c-b37c-882006a3cf68-public-tls-certs\") pod \"2f208a22-4c88-450c-b37c-882006a3cf68\" (UID: \"2f208a22-4c88-450c-b37c-882006a3cf68\") " Jan 31 04:36:24 crc kubenswrapper[4827]: I0131 04:36:24.067456 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/empty-dir/2f208a22-4c88-450c-b37c-882006a3cf68-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "2f208a22-4c88-450c-b37c-882006a3cf68" (UID: "2f208a22-4c88-450c-b37c-882006a3cf68"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 04:36:24 crc kubenswrapper[4827]: I0131 04:36:24.067564 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2f208a22-4c88-450c-b37c-882006a3cf68-logs" (OuterVolumeSpecName: "logs") pod "2f208a22-4c88-450c-b37c-882006a3cf68" (UID: "2f208a22-4c88-450c-b37c-882006a3cf68"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 04:36:24 crc kubenswrapper[4827]: I0131 04:36:24.074506 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2f208a22-4c88-450c-b37c-882006a3cf68-scripts" (OuterVolumeSpecName: "scripts") pod "2f208a22-4c88-450c-b37c-882006a3cf68" (UID: "2f208a22-4c88-450c-b37c-882006a3cf68"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 04:36:24 crc kubenswrapper[4827]: I0131 04:36:24.074616 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2f208a22-4c88-450c-b37c-882006a3cf68-ceph" (OuterVolumeSpecName: "ceph") pod "2f208a22-4c88-450c-b37c-882006a3cf68" (UID: "2f208a22-4c88-450c-b37c-882006a3cf68"). InnerVolumeSpecName "ceph". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 04:36:24 crc kubenswrapper[4827]: I0131 04:36:24.075554 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2f208a22-4c88-450c-b37c-882006a3cf68-kube-api-access-k4d52" (OuterVolumeSpecName: "kube-api-access-k4d52") pod "2f208a22-4c88-450c-b37c-882006a3cf68" (UID: "2f208a22-4c88-450c-b37c-882006a3cf68"). InnerVolumeSpecName "kube-api-access-k4d52". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 04:36:24 crc kubenswrapper[4827]: I0131 04:36:24.082786 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage11-crc" (OuterVolumeSpecName: "glance") pod "2f208a22-4c88-450c-b37c-882006a3cf68" (UID: "2f208a22-4c88-450c-b37c-882006a3cf68"). InnerVolumeSpecName "local-storage11-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 31 04:36:24 crc kubenswrapper[4827]: I0131 04:36:24.173088 4827 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/2f208a22-4c88-450c-b37c-882006a3cf68-ceph\") on node \"crc\" DevicePath \"\"" Jan 31 04:36:24 crc kubenswrapper[4827]: I0131 04:36:24.173122 4827 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2f208a22-4c88-450c-b37c-882006a3cf68-logs\") on node \"crc\" DevicePath \"\"" Jan 31 04:36:24 crc kubenswrapper[4827]: I0131 04:36:24.173130 4827 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-k4d52\" (UniqueName: \"kubernetes.io/projected/2f208a22-4c88-450c-b37c-882006a3cf68-kube-api-access-k4d52\") on node \"crc\" DevicePath \"\"" Jan 31 04:36:24 crc kubenswrapper[4827]: I0131 04:36:24.173151 4827 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") on node \"crc\" " Jan 31 04:36:24 crc kubenswrapper[4827]: I0131 04:36:24.173160 4827 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/2f208a22-4c88-450c-b37c-882006a3cf68-httpd-run\") on node \"crc\" DevicePath \"\"" Jan 31 04:36:24 crc kubenswrapper[4827]: I0131 04:36:24.173168 4827 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2f208a22-4c88-450c-b37c-882006a3cf68-scripts\") on node \"crc\" 
DevicePath \"\"" Jan 31 04:36:24 crc kubenswrapper[4827]: I0131 04:36:24.227600 4827 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="73954232-88c2-4d80-aa50-5352f16b43a2" path="/var/lib/kubelet/pods/73954232-88c2-4d80-aa50-5352f16b43a2/volumes" Jan 31 04:36:24 crc kubenswrapper[4827]: I0131 04:36:24.242010 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2f208a22-4c88-450c-b37c-882006a3cf68-config-data" (OuterVolumeSpecName: "config-data") pod "2f208a22-4c88-450c-b37c-882006a3cf68" (UID: "2f208a22-4c88-450c-b37c-882006a3cf68"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 04:36:24 crc kubenswrapper[4827]: I0131 04:36:24.275069 4827 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2f208a22-4c88-450c-b37c-882006a3cf68-config-data\") on node \"crc\" DevicePath \"\"" Jan 31 04:36:24 crc kubenswrapper[4827]: I0131 04:36:24.289782 4827 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage11-crc" (UniqueName: "kubernetes.io/local-volume/local-storage11-crc") on node "crc" Jan 31 04:36:24 crc kubenswrapper[4827]: I0131 04:36:24.319994 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2f208a22-4c88-450c-b37c-882006a3cf68-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "2f208a22-4c88-450c-b37c-882006a3cf68" (UID: "2f208a22-4c88-450c-b37c-882006a3cf68"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 04:36:24 crc kubenswrapper[4827]: I0131 04:36:24.351022 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2f208a22-4c88-450c-b37c-882006a3cf68-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "2f208a22-4c88-450c-b37c-882006a3cf68" (UID: "2f208a22-4c88-450c-b37c-882006a3cf68"). 
InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 04:36:24 crc kubenswrapper[4827]: I0131 04:36:24.376654 4827 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/2f208a22-4c88-450c-b37c-882006a3cf68-public-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 31 04:36:24 crc kubenswrapper[4827]: I0131 04:36:24.376690 4827 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2f208a22-4c88-450c-b37c-882006a3cf68-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 31 04:36:24 crc kubenswrapper[4827]: I0131 04:36:24.376701 4827 reconciler_common.go:293] "Volume detached for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") on node \"crc\" DevicePath \"\"" Jan 31 04:36:24 crc kubenswrapper[4827]: I0131 04:36:24.435938 4827 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 31 04:36:24 crc kubenswrapper[4827]: I0131 04:36:24.436010 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"2f208a22-4c88-450c-b37c-882006a3cf68","Type":"ContainerDied","Data":"b94a3d914fd2a6ac1f6c1d535629732ab83b58c927dcf0446647c30be49d8f50"} Jan 31 04:36:24 crc kubenswrapper[4827]: I0131 04:36:24.436061 4827 scope.go:117] "RemoveContainer" containerID="6fd118e08382ff8a14d560154bc67f435fb5b34489eb5b9b6bf26f808445308f" Jan 31 04:36:24 crc kubenswrapper[4827]: I0131 04:36:24.485457 4827 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 31 04:36:24 crc kubenswrapper[4827]: I0131 04:36:24.496188 4827 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 31 04:36:24 crc kubenswrapper[4827]: I0131 04:36:24.509851 4827 kubelet.go:2421] "SyncLoop ADD" 
source="api" pods=["openstack/glance-default-external-api-0"] Jan 31 04:36:24 crc kubenswrapper[4827]: E0131 04:36:24.510269 4827 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2f208a22-4c88-450c-b37c-882006a3cf68" containerName="glance-httpd" Jan 31 04:36:24 crc kubenswrapper[4827]: I0131 04:36:24.510281 4827 state_mem.go:107] "Deleted CPUSet assignment" podUID="2f208a22-4c88-450c-b37c-882006a3cf68" containerName="glance-httpd" Jan 31 04:36:24 crc kubenswrapper[4827]: E0131 04:36:24.510306 4827 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2f208a22-4c88-450c-b37c-882006a3cf68" containerName="glance-log" Jan 31 04:36:24 crc kubenswrapper[4827]: I0131 04:36:24.510311 4827 state_mem.go:107] "Deleted CPUSet assignment" podUID="2f208a22-4c88-450c-b37c-882006a3cf68" containerName="glance-log" Jan 31 04:36:24 crc kubenswrapper[4827]: I0131 04:36:24.510468 4827 memory_manager.go:354] "RemoveStaleState removing state" podUID="2f208a22-4c88-450c-b37c-882006a3cf68" containerName="glance-httpd" Jan 31 04:36:24 crc kubenswrapper[4827]: I0131 04:36:24.510485 4827 memory_manager.go:354] "RemoveStaleState removing state" podUID="2f208a22-4c88-450c-b37c-882006a3cf68" containerName="glance-log" Jan 31 04:36:24 crc kubenswrapper[4827]: I0131 04:36:24.511534 4827 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 31 04:36:24 crc kubenswrapper[4827]: I0131 04:36:24.517461 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 31 04:36:24 crc kubenswrapper[4827]: I0131 04:36:24.519759 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-public-svc" Jan 31 04:36:24 crc kubenswrapper[4827]: I0131 04:36:24.520034 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Jan 31 04:36:24 crc kubenswrapper[4827]: I0131 04:36:24.567559 4827 scope.go:117] "RemoveContainer" containerID="f54c722c7f395031130c4775b7afc26dca657e75219f94c27f9a7dfff6e149ff" Jan 31 04:36:24 crc kubenswrapper[4827]: I0131 04:36:24.580027 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/c51b9a5d-c011-422c-8b29-39a9d4355659-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"c51b9a5d-c011-422c-8b29-39a9d4355659\") " pod="openstack/glance-default-external-api-0" Jan 31 04:36:24 crc kubenswrapper[4827]: I0131 04:36:24.580092 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c51b9a5d-c011-422c-8b29-39a9d4355659-scripts\") pod \"glance-default-external-api-0\" (UID: \"c51b9a5d-c011-422c-8b29-39a9d4355659\") " pod="openstack/glance-default-external-api-0" Jan 31 04:36:24 crc kubenswrapper[4827]: I0131 04:36:24.580117 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/c51b9a5d-c011-422c-8b29-39a9d4355659-ceph\") pod \"glance-default-external-api-0\" (UID: \"c51b9a5d-c011-422c-8b29-39a9d4355659\") " pod="openstack/glance-default-external-api-0" Jan 31 04:36:24 crc 
kubenswrapper[4827]: I0131 04:36:24.580324 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c51b9a5d-c011-422c-8b29-39a9d4355659-logs\") pod \"glance-default-external-api-0\" (UID: \"c51b9a5d-c011-422c-8b29-39a9d4355659\") " pod="openstack/glance-default-external-api-0" Jan 31 04:36:24 crc kubenswrapper[4827]: I0131 04:36:24.580484 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zspx5\" (UniqueName: \"kubernetes.io/projected/c51b9a5d-c011-422c-8b29-39a9d4355659-kube-api-access-zspx5\") pod \"glance-default-external-api-0\" (UID: \"c51b9a5d-c011-422c-8b29-39a9d4355659\") " pod="openstack/glance-default-external-api-0" Jan 31 04:36:24 crc kubenswrapper[4827]: I0131 04:36:24.580551 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"glance-default-external-api-0\" (UID: \"c51b9a5d-c011-422c-8b29-39a9d4355659\") " pod="openstack/glance-default-external-api-0" Jan 31 04:36:24 crc kubenswrapper[4827]: I0131 04:36:24.580600 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c51b9a5d-c011-422c-8b29-39a9d4355659-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"c51b9a5d-c011-422c-8b29-39a9d4355659\") " pod="openstack/glance-default-external-api-0" Jan 31 04:36:24 crc kubenswrapper[4827]: I0131 04:36:24.580633 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c51b9a5d-c011-422c-8b29-39a9d4355659-config-data\") pod \"glance-default-external-api-0\" (UID: \"c51b9a5d-c011-422c-8b29-39a9d4355659\") " pod="openstack/glance-default-external-api-0" Jan 
31 04:36:24 crc kubenswrapper[4827]: I0131 04:36:24.580672 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/c51b9a5d-c011-422c-8b29-39a9d4355659-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"c51b9a5d-c011-422c-8b29-39a9d4355659\") " pod="openstack/glance-default-external-api-0" Jan 31 04:36:24 crc kubenswrapper[4827]: I0131 04:36:24.629398 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 31 04:36:24 crc kubenswrapper[4827]: W0131 04:36:24.659695 4827 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod07de9274_858d_4f45_bb4e_f064e62260c8.slice/crio-a9da9191677176ea09592d3df72d9cc55b2e56a1b691710bcd89516790ff9518 WatchSource:0}: Error finding container a9da9191677176ea09592d3df72d9cc55b2e56a1b691710bcd89516790ff9518: Status 404 returned error can't find the container with id a9da9191677176ea09592d3df72d9cc55b2e56a1b691710bcd89516790ff9518 Jan 31 04:36:24 crc kubenswrapper[4827]: I0131 04:36:24.683148 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c51b9a5d-c011-422c-8b29-39a9d4355659-scripts\") pod \"glance-default-external-api-0\" (UID: \"c51b9a5d-c011-422c-8b29-39a9d4355659\") " pod="openstack/glance-default-external-api-0" Jan 31 04:36:24 crc kubenswrapper[4827]: I0131 04:36:24.683197 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/c51b9a5d-c011-422c-8b29-39a9d4355659-ceph\") pod \"glance-default-external-api-0\" (UID: \"c51b9a5d-c011-422c-8b29-39a9d4355659\") " pod="openstack/glance-default-external-api-0" Jan 31 04:36:24 crc kubenswrapper[4827]: I0131 04:36:24.683251 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" 
(UniqueName: \"kubernetes.io/empty-dir/c51b9a5d-c011-422c-8b29-39a9d4355659-logs\") pod \"glance-default-external-api-0\" (UID: \"c51b9a5d-c011-422c-8b29-39a9d4355659\") " pod="openstack/glance-default-external-api-0" Jan 31 04:36:24 crc kubenswrapper[4827]: I0131 04:36:24.683488 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zspx5\" (UniqueName: \"kubernetes.io/projected/c51b9a5d-c011-422c-8b29-39a9d4355659-kube-api-access-zspx5\") pod \"glance-default-external-api-0\" (UID: \"c51b9a5d-c011-422c-8b29-39a9d4355659\") " pod="openstack/glance-default-external-api-0" Jan 31 04:36:24 crc kubenswrapper[4827]: I0131 04:36:24.683519 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"glance-default-external-api-0\" (UID: \"c51b9a5d-c011-422c-8b29-39a9d4355659\") " pod="openstack/glance-default-external-api-0" Jan 31 04:36:24 crc kubenswrapper[4827]: I0131 04:36:24.683545 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c51b9a5d-c011-422c-8b29-39a9d4355659-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"c51b9a5d-c011-422c-8b29-39a9d4355659\") " pod="openstack/glance-default-external-api-0" Jan 31 04:36:24 crc kubenswrapper[4827]: I0131 04:36:24.683564 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c51b9a5d-c011-422c-8b29-39a9d4355659-config-data\") pod \"glance-default-external-api-0\" (UID: \"c51b9a5d-c011-422c-8b29-39a9d4355659\") " pod="openstack/glance-default-external-api-0" Jan 31 04:36:24 crc kubenswrapper[4827]: I0131 04:36:24.683591 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: 
\"kubernetes.io/empty-dir/c51b9a5d-c011-422c-8b29-39a9d4355659-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"c51b9a5d-c011-422c-8b29-39a9d4355659\") " pod="openstack/glance-default-external-api-0" Jan 31 04:36:24 crc kubenswrapper[4827]: I0131 04:36:24.683644 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/c51b9a5d-c011-422c-8b29-39a9d4355659-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"c51b9a5d-c011-422c-8b29-39a9d4355659\") " pod="openstack/glance-default-external-api-0" Jan 31 04:36:24 crc kubenswrapper[4827]: I0131 04:36:24.684934 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c51b9a5d-c011-422c-8b29-39a9d4355659-logs\") pod \"glance-default-external-api-0\" (UID: \"c51b9a5d-c011-422c-8b29-39a9d4355659\") " pod="openstack/glance-default-external-api-0" Jan 31 04:36:24 crc kubenswrapper[4827]: I0131 04:36:24.685005 4827 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"glance-default-external-api-0\" (UID: \"c51b9a5d-c011-422c-8b29-39a9d4355659\") device mount path \"/mnt/openstack/pv11\"" pod="openstack/glance-default-external-api-0" Jan 31 04:36:24 crc kubenswrapper[4827]: I0131 04:36:24.685624 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/c51b9a5d-c011-422c-8b29-39a9d4355659-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"c51b9a5d-c011-422c-8b29-39a9d4355659\") " pod="openstack/glance-default-external-api-0" Jan 31 04:36:24 crc kubenswrapper[4827]: I0131 04:36:24.693512 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/c51b9a5d-c011-422c-8b29-39a9d4355659-ceph\") pod 
\"glance-default-external-api-0\" (UID: \"c51b9a5d-c011-422c-8b29-39a9d4355659\") " pod="openstack/glance-default-external-api-0" Jan 31 04:36:24 crc kubenswrapper[4827]: I0131 04:36:24.694163 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c51b9a5d-c011-422c-8b29-39a9d4355659-scripts\") pod \"glance-default-external-api-0\" (UID: \"c51b9a5d-c011-422c-8b29-39a9d4355659\") " pod="openstack/glance-default-external-api-0" Jan 31 04:36:24 crc kubenswrapper[4827]: I0131 04:36:24.695013 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c51b9a5d-c011-422c-8b29-39a9d4355659-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"c51b9a5d-c011-422c-8b29-39a9d4355659\") " pod="openstack/glance-default-external-api-0" Jan 31 04:36:24 crc kubenswrapper[4827]: I0131 04:36:24.695670 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/c51b9a5d-c011-422c-8b29-39a9d4355659-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"c51b9a5d-c011-422c-8b29-39a9d4355659\") " pod="openstack/glance-default-external-api-0" Jan 31 04:36:24 crc kubenswrapper[4827]: I0131 04:36:24.699305 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c51b9a5d-c011-422c-8b29-39a9d4355659-config-data\") pod \"glance-default-external-api-0\" (UID: \"c51b9a5d-c011-422c-8b29-39a9d4355659\") " pod="openstack/glance-default-external-api-0" Jan 31 04:36:24 crc kubenswrapper[4827]: I0131 04:36:24.709990 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zspx5\" (UniqueName: \"kubernetes.io/projected/c51b9a5d-c011-422c-8b29-39a9d4355659-kube-api-access-zspx5\") pod \"glance-default-external-api-0\" (UID: \"c51b9a5d-c011-422c-8b29-39a9d4355659\") " 
pod="openstack/glance-default-external-api-0" Jan 31 04:36:24 crc kubenswrapper[4827]: I0131 04:36:24.718909 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"glance-default-external-api-0\" (UID: \"c51b9a5d-c011-422c-8b29-39a9d4355659\") " pod="openstack/glance-default-external-api-0" Jan 31 04:36:24 crc kubenswrapper[4827]: I0131 04:36:24.837685 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 31 04:36:25 crc kubenswrapper[4827]: I0131 04:36:25.470625 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"07de9274-858d-4f45-bb4e-f064e62260c8","Type":"ContainerStarted","Data":"92d9ee36cd7313a0467f990f9b1868f7a940a5ee2ad4b5175c9f3c5aae11446a"} Jan 31 04:36:25 crc kubenswrapper[4827]: I0131 04:36:25.471170 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"07de9274-858d-4f45-bb4e-f064e62260c8","Type":"ContainerStarted","Data":"a9da9191677176ea09592d3df72d9cc55b2e56a1b691710bcd89516790ff9518"} Jan 31 04:36:25 crc kubenswrapper[4827]: I0131 04:36:25.472167 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 31 04:36:26 crc kubenswrapper[4827]: I0131 04:36:26.126070 4827 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2f208a22-4c88-450c-b37c-882006a3cf68" path="/var/lib/kubelet/pods/2f208a22-4c88-450c-b37c-882006a3cf68/volumes" Jan 31 04:36:26 crc kubenswrapper[4827]: I0131 04:36:26.504982 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"07de9274-858d-4f45-bb4e-f064e62260c8","Type":"ContainerStarted","Data":"13952c135bf490018ca069bb20c8c766e1c5281571b1680a0a922a323a5f7f5e"} Jan 31 04:36:26 crc kubenswrapper[4827]: I0131 
04:36:26.513442 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"c51b9a5d-c011-422c-8b29-39a9d4355659","Type":"ContainerStarted","Data":"480f797bfff0dea97f6b205668a0ee6d73aa688a9b4842eb61c20ea7804e4a9e"} Jan 31 04:36:26 crc kubenswrapper[4827]: I0131 04:36:26.513487 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"c51b9a5d-c011-422c-8b29-39a9d4355659","Type":"ContainerStarted","Data":"b50af5f37e293535ca39da5ed00e40cfd24e790c54392f7affa38ca57c8c3536"} Jan 31 04:36:26 crc kubenswrapper[4827]: I0131 04:36:26.532109 4827 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=3.532088235 podStartE2EDuration="3.532088235s" podCreationTimestamp="2026-01-31 04:36:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 04:36:26.524388399 +0000 UTC m=+2979.211468838" watchObservedRunningTime="2026-01-31 04:36:26.532088235 +0000 UTC m=+2979.219168684" Jan 31 04:36:27 crc kubenswrapper[4827]: I0131 04:36:27.109660 4827 scope.go:117] "RemoveContainer" containerID="b7f195383b718c3f3a1651e752e04a9c322b656ce5c7aba0c9ad40630165c57d" Jan 31 04:36:27 crc kubenswrapper[4827]: E0131 04:36:27.110113 4827 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jxh94_openshift-machine-config-operator(e63dbb73-e1a2-4796-83c5-2a88e55566b5)\"" pod="openshift-machine-config-operator/machine-config-daemon-jxh94" podUID="e63dbb73-e1a2-4796-83c5-2a88e55566b5" Jan 31 04:36:27 crc kubenswrapper[4827]: I0131 04:36:27.528166 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" 
event={"ID":"c51b9a5d-c011-422c-8b29-39a9d4355659","Type":"ContainerStarted","Data":"d85981429d11f7612df3ad845715f07333e0fbd445ad5daffde1af9adfeb5311"} Jan 31 04:36:27 crc kubenswrapper[4827]: I0131 04:36:27.561763 4827 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=3.561735145 podStartE2EDuration="3.561735145s" podCreationTimestamp="2026-01-31 04:36:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 04:36:27.551601976 +0000 UTC m=+2980.238682425" watchObservedRunningTime="2026-01-31 04:36:27.561735145 +0000 UTC m=+2980.248815594" Jan 31 04:36:28 crc kubenswrapper[4827]: I0131 04:36:28.491975 4827 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-volume-volume1-0" Jan 31 04:36:28 crc kubenswrapper[4827]: I0131 04:36:28.541592 4827 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-backup-0" Jan 31 04:36:28 crc kubenswrapper[4827]: I0131 04:36:28.925212 4827 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/manila-db-sync-9c5ww"] Jan 31 04:36:28 crc kubenswrapper[4827]: I0131 04:36:28.927378 4827 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/manila-db-sync-9c5ww" Jan 31 04:36:28 crc kubenswrapper[4827]: I0131 04:36:28.929681 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"manila-manila-dockercfg-26zr9" Jan 31 04:36:28 crc kubenswrapper[4827]: I0131 04:36:28.929951 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"manila-config-data" Jan 31 04:36:28 crc kubenswrapper[4827]: I0131 04:36:28.944920 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/manila-db-sync-9c5ww"] Jan 31 04:36:29 crc kubenswrapper[4827]: I0131 04:36:29.069469 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z9s5z\" (UniqueName: \"kubernetes.io/projected/0c4fd939-69ab-4942-b829-5b4abab385db-kube-api-access-z9s5z\") pod \"manila-db-sync-9c5ww\" (UID: \"0c4fd939-69ab-4942-b829-5b4abab385db\") " pod="openstack/manila-db-sync-9c5ww" Jan 31 04:36:29 crc kubenswrapper[4827]: I0131 04:36:29.069532 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"job-config-data\" (UniqueName: \"kubernetes.io/secret/0c4fd939-69ab-4942-b829-5b4abab385db-job-config-data\") pod \"manila-db-sync-9c5ww\" (UID: \"0c4fd939-69ab-4942-b829-5b4abab385db\") " pod="openstack/manila-db-sync-9c5ww" Jan 31 04:36:29 crc kubenswrapper[4827]: I0131 04:36:29.069583 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0c4fd939-69ab-4942-b829-5b4abab385db-combined-ca-bundle\") pod \"manila-db-sync-9c5ww\" (UID: \"0c4fd939-69ab-4942-b829-5b4abab385db\") " pod="openstack/manila-db-sync-9c5ww" Jan 31 04:36:29 crc kubenswrapper[4827]: I0131 04:36:29.069610 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/0c4fd939-69ab-4942-b829-5b4abab385db-config-data\") pod \"manila-db-sync-9c5ww\" (UID: \"0c4fd939-69ab-4942-b829-5b4abab385db\") " pod="openstack/manila-db-sync-9c5ww" Jan 31 04:36:29 crc kubenswrapper[4827]: I0131 04:36:29.171660 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z9s5z\" (UniqueName: \"kubernetes.io/projected/0c4fd939-69ab-4942-b829-5b4abab385db-kube-api-access-z9s5z\") pod \"manila-db-sync-9c5ww\" (UID: \"0c4fd939-69ab-4942-b829-5b4abab385db\") " pod="openstack/manila-db-sync-9c5ww" Jan 31 04:36:29 crc kubenswrapper[4827]: I0131 04:36:29.171740 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"job-config-data\" (UniqueName: \"kubernetes.io/secret/0c4fd939-69ab-4942-b829-5b4abab385db-job-config-data\") pod \"manila-db-sync-9c5ww\" (UID: \"0c4fd939-69ab-4942-b829-5b4abab385db\") " pod="openstack/manila-db-sync-9c5ww" Jan 31 04:36:29 crc kubenswrapper[4827]: I0131 04:36:29.171795 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0c4fd939-69ab-4942-b829-5b4abab385db-combined-ca-bundle\") pod \"manila-db-sync-9c5ww\" (UID: \"0c4fd939-69ab-4942-b829-5b4abab385db\") " pod="openstack/manila-db-sync-9c5ww" Jan 31 04:36:29 crc kubenswrapper[4827]: I0131 04:36:29.171821 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0c4fd939-69ab-4942-b829-5b4abab385db-config-data\") pod \"manila-db-sync-9c5ww\" (UID: \"0c4fd939-69ab-4942-b829-5b4abab385db\") " pod="openstack/manila-db-sync-9c5ww" Jan 31 04:36:29 crc kubenswrapper[4827]: I0131 04:36:29.187163 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"job-config-data\" (UniqueName: \"kubernetes.io/secret/0c4fd939-69ab-4942-b829-5b4abab385db-job-config-data\") pod \"manila-db-sync-9c5ww\" (UID: 
\"0c4fd939-69ab-4942-b829-5b4abab385db\") " pod="openstack/manila-db-sync-9c5ww" Jan 31 04:36:29 crc kubenswrapper[4827]: I0131 04:36:29.187436 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0c4fd939-69ab-4942-b829-5b4abab385db-config-data\") pod \"manila-db-sync-9c5ww\" (UID: \"0c4fd939-69ab-4942-b829-5b4abab385db\") " pod="openstack/manila-db-sync-9c5ww" Jan 31 04:36:29 crc kubenswrapper[4827]: I0131 04:36:29.193509 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z9s5z\" (UniqueName: \"kubernetes.io/projected/0c4fd939-69ab-4942-b829-5b4abab385db-kube-api-access-z9s5z\") pod \"manila-db-sync-9c5ww\" (UID: \"0c4fd939-69ab-4942-b829-5b4abab385db\") " pod="openstack/manila-db-sync-9c5ww" Jan 31 04:36:29 crc kubenswrapper[4827]: I0131 04:36:29.195223 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0c4fd939-69ab-4942-b829-5b4abab385db-combined-ca-bundle\") pod \"manila-db-sync-9c5ww\" (UID: \"0c4fd939-69ab-4942-b829-5b4abab385db\") " pod="openstack/manila-db-sync-9c5ww" Jan 31 04:36:29 crc kubenswrapper[4827]: I0131 04:36:29.257374 4827 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/manila-db-sync-9c5ww" Jan 31 04:36:32 crc kubenswrapper[4827]: I0131 04:36:32.792284 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/manila-db-sync-9c5ww"] Jan 31 04:36:32 crc kubenswrapper[4827]: W0131 04:36:32.825436 4827 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0c4fd939_69ab_4942_b829_5b4abab385db.slice/crio-d7efc60207f53519949db008a12a20f33315cac81db70994ce2d8b68ca27c8be WatchSource:0}: Error finding container d7efc60207f53519949db008a12a20f33315cac81db70994ce2d8b68ca27c8be: Status 404 returned error can't find the container with id d7efc60207f53519949db008a12a20f33315cac81db70994ce2d8b68ca27c8be Jan 31 04:36:33 crc kubenswrapper[4827]: I0131 04:36:33.586104 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-db-sync-9c5ww" event={"ID":"0c4fd939-69ab-4942-b829-5b4abab385db","Type":"ContainerStarted","Data":"d7efc60207f53519949db008a12a20f33315cac81db70994ce2d8b68ca27c8be"} Jan 31 04:36:33 crc kubenswrapper[4827]: I0131 04:36:33.591601 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-5649566987-dn2gq" event={"ID":"e44c9e88-212e-4178-a1fb-ce9b1896d73f","Type":"ContainerStarted","Data":"dc79d9a2d05676b35d0fdd9dd778cc645898bc7bbf98621544d53342e794612d"} Jan 31 04:36:33 crc kubenswrapper[4827]: I0131 04:36:33.591641 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-5649566987-dn2gq" event={"ID":"e44c9e88-212e-4178-a1fb-ce9b1896d73f","Type":"ContainerStarted","Data":"1cf3d27ab9bc575d5024d55a381eed6f0137bb93a7a6cb8ba372d4d9877e7a42"} Jan 31 04:36:33 crc kubenswrapper[4827]: I0131 04:36:33.591752 4827 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-5649566987-dn2gq" podUID="e44c9e88-212e-4178-a1fb-ce9b1896d73f" containerName="horizon-log" 
containerID="cri-o://1cf3d27ab9bc575d5024d55a381eed6f0137bb93a7a6cb8ba372d4d9877e7a42" gracePeriod=30 Jan 31 04:36:33 crc kubenswrapper[4827]: I0131 04:36:33.592694 4827 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-5649566987-dn2gq" podUID="e44c9e88-212e-4178-a1fb-ce9b1896d73f" containerName="horizon" containerID="cri-o://dc79d9a2d05676b35d0fdd9dd778cc645898bc7bbf98621544d53342e794612d" gracePeriod=30 Jan 31 04:36:33 crc kubenswrapper[4827]: I0131 04:36:33.597264 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-cdd9cb94b-xgkxs" event={"ID":"e277e6d3-f889-425a-abd6-3344f860bfd9","Type":"ContainerStarted","Data":"5bc769f4481cef184f8dca7f6178fc8af5b40a6a985196e85ae05d7b98c94b1b"} Jan 31 04:36:33 crc kubenswrapper[4827]: I0131 04:36:33.597291 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-cdd9cb94b-xgkxs" event={"ID":"e277e6d3-f889-425a-abd6-3344f860bfd9","Type":"ContainerStarted","Data":"fa2ea3446ece2961ee86c778ecdec6b7d57be1cf1c402d004e9a43d4bbd78377"} Jan 31 04:36:33 crc kubenswrapper[4827]: I0131 04:36:33.601594 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-5dd8c8746d-r25sr" event={"ID":"19b64bcf-afad-4b01-8d57-1c1b56bb170f","Type":"ContainerStarted","Data":"5abd8359a16259c69647cd6d577d8832c08ea5d049956ef717e6804a142f3c46"} Jan 31 04:36:33 crc kubenswrapper[4827]: I0131 04:36:33.601666 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-5dd8c8746d-r25sr" event={"ID":"19b64bcf-afad-4b01-8d57-1c1b56bb170f","Type":"ContainerStarted","Data":"f5e5c223af1a11e3e1a9641d5b374470e33e775de4efea150f65a26055e39e20"} Jan 31 04:36:33 crc kubenswrapper[4827]: I0131 04:36:33.606548 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-5db647897-mx8gj" 
event={"ID":"0a8e3a93-e4a4-41c1-b558-88d28e96ef52","Type":"ContainerStarted","Data":"054cb93dcee9e3aa3135bd5b78636260d1cc82953b362a08e9980573a335bbfe"} Jan 31 04:36:33 crc kubenswrapper[4827]: I0131 04:36:33.606603 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-5db647897-mx8gj" event={"ID":"0a8e3a93-e4a4-41c1-b558-88d28e96ef52","Type":"ContainerStarted","Data":"982885989dfa2eabcef1f83bdcefbb60ad897f3b61c171664f5faf3d56eb18c4"} Jan 31 04:36:33 crc kubenswrapper[4827]: I0131 04:36:33.606715 4827 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-5db647897-mx8gj" podUID="0a8e3a93-e4a4-41c1-b558-88d28e96ef52" containerName="horizon-log" containerID="cri-o://982885989dfa2eabcef1f83bdcefbb60ad897f3b61c171664f5faf3d56eb18c4" gracePeriod=30 Jan 31 04:36:33 crc kubenswrapper[4827]: I0131 04:36:33.607021 4827 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-5db647897-mx8gj" podUID="0a8e3a93-e4a4-41c1-b558-88d28e96ef52" containerName="horizon" containerID="cri-o://054cb93dcee9e3aa3135bd5b78636260d1cc82953b362a08e9980573a335bbfe" gracePeriod=30 Jan 31 04:36:33 crc kubenswrapper[4827]: I0131 04:36:33.617287 4827 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/horizon-5649566987-dn2gq" podStartSLOduration=3.10012339 podStartE2EDuration="15.617272176s" podCreationTimestamp="2026-01-31 04:36:18 +0000 UTC" firstStartedPulling="2026-01-31 04:36:19.774097493 +0000 UTC m=+2972.461177942" lastFinishedPulling="2026-01-31 04:36:32.291246269 +0000 UTC m=+2984.978326728" observedRunningTime="2026-01-31 04:36:33.615261794 +0000 UTC m=+2986.302342253" watchObservedRunningTime="2026-01-31 04:36:33.617272176 +0000 UTC m=+2986.304352625" Jan 31 04:36:33 crc kubenswrapper[4827]: I0131 04:36:33.644763 4827 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/horizon-cdd9cb94b-xgkxs" 
podStartSLOduration=2.9675845819999997 podStartE2EDuration="12.644741508s" podCreationTimestamp="2026-01-31 04:36:21 +0000 UTC" firstStartedPulling="2026-01-31 04:36:22.682760735 +0000 UTC m=+2975.369841184" lastFinishedPulling="2026-01-31 04:36:32.359917651 +0000 UTC m=+2985.046998110" observedRunningTime="2026-01-31 04:36:33.638383083 +0000 UTC m=+2986.325463552" watchObservedRunningTime="2026-01-31 04:36:33.644741508 +0000 UTC m=+2986.331821957" Jan 31 04:36:33 crc kubenswrapper[4827]: I0131 04:36:33.658159 4827 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/horizon-5db647897-mx8gj" podStartSLOduration=3.303616512 podStartE2EDuration="15.658132758s" podCreationTimestamp="2026-01-31 04:36:18 +0000 UTC" firstStartedPulling="2026-01-31 04:36:19.962333777 +0000 UTC m=+2972.649414226" lastFinishedPulling="2026-01-31 04:36:32.316850003 +0000 UTC m=+2985.003930472" observedRunningTime="2026-01-31 04:36:33.657700804 +0000 UTC m=+2986.344781263" watchObservedRunningTime="2026-01-31 04:36:33.658132758 +0000 UTC m=+2986.345213247" Jan 31 04:36:33 crc kubenswrapper[4827]: I0131 04:36:33.690395 4827 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/horizon-5dd8c8746d-r25sr" podStartSLOduration=3.397278919 podStartE2EDuration="12.690375514s" podCreationTimestamp="2026-01-31 04:36:21 +0000 UTC" firstStartedPulling="2026-01-31 04:36:23.087234182 +0000 UTC m=+2975.774314631" lastFinishedPulling="2026-01-31 04:36:32.380330757 +0000 UTC m=+2985.067411226" observedRunningTime="2026-01-31 04:36:33.676931513 +0000 UTC m=+2986.364011972" watchObservedRunningTime="2026-01-31 04:36:33.690375514 +0000 UTC m=+2986.377455973" Jan 31 04:36:33 crc kubenswrapper[4827]: I0131 04:36:33.872118 4827 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Jan 31 04:36:33 crc kubenswrapper[4827]: I0131 04:36:33.872158 4827 kubelet.go:2542] "SyncLoop (probe)" 
probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Jan 31 04:36:33 crc kubenswrapper[4827]: I0131 04:36:33.909558 4827 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Jan 31 04:36:33 crc kubenswrapper[4827]: I0131 04:36:33.923498 4827 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Jan 31 04:36:34 crc kubenswrapper[4827]: I0131 04:36:34.615838 4827 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Jan 31 04:36:34 crc kubenswrapper[4827]: I0131 04:36:34.616223 4827 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Jan 31 04:36:34 crc kubenswrapper[4827]: I0131 04:36:34.838686 4827 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Jan 31 04:36:34 crc kubenswrapper[4827]: I0131 04:36:34.838744 4827 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Jan 31 04:36:34 crc kubenswrapper[4827]: I0131 04:36:34.880083 4827 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Jan 31 04:36:34 crc kubenswrapper[4827]: I0131 04:36:34.884820 4827 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Jan 31 04:36:35 crc kubenswrapper[4827]: I0131 04:36:35.625361 4827 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Jan 31 04:36:35 crc kubenswrapper[4827]: I0131 04:36:35.625390 4827 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Jan 31 04:36:37 crc kubenswrapper[4827]: I0131 04:36:37.642200 4827 prober_manager.go:312] "Failed to 
trigger a manual run" probe="Readiness" Jan 31 04:36:37 crc kubenswrapper[4827]: I0131 04:36:37.642652 4827 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 31 04:36:39 crc kubenswrapper[4827]: I0131 04:36:39.039642 4827 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-5649566987-dn2gq" Jan 31 04:36:39 crc kubenswrapper[4827]: I0131 04:36:39.242477 4827 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-5db647897-mx8gj" Jan 31 04:36:39 crc kubenswrapper[4827]: I0131 04:36:39.555387 4827 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Jan 31 04:36:39 crc kubenswrapper[4827]: I0131 04:36:39.555763 4827 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 31 04:36:39 crc kubenswrapper[4827]: I0131 04:36:39.558957 4827 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Jan 31 04:36:39 crc kubenswrapper[4827]: I0131 04:36:39.559044 4827 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 31 04:36:39 crc kubenswrapper[4827]: I0131 04:36:39.564944 4827 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Jan 31 04:36:39 crc kubenswrapper[4827]: I0131 04:36:39.567414 4827 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Jan 31 04:36:39 crc kubenswrapper[4827]: I0131 04:36:39.689506 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-db-sync-9c5ww" event={"ID":"0c4fd939-69ab-4942-b829-5b4abab385db","Type":"ContainerStarted","Data":"c6d4b03242ed3d43c917ca3f38999e5747b76f4cb1881d2cdc3b3da8068cff44"} Jan 31 04:36:39 crc kubenswrapper[4827]: I0131 04:36:39.732509 4827 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openstack/manila-db-sync-9c5ww" podStartSLOduration=5.48938899 podStartE2EDuration="11.732489824s" podCreationTimestamp="2026-01-31 04:36:28 +0000 UTC" firstStartedPulling="2026-01-31 04:36:32.828007176 +0000 UTC m=+2985.515087625" lastFinishedPulling="2026-01-31 04:36:39.07110801 +0000 UTC m=+2991.758188459" observedRunningTime="2026-01-31 04:36:39.717164675 +0000 UTC m=+2992.404245124" watchObservedRunningTime="2026-01-31 04:36:39.732489824 +0000 UTC m=+2992.419570263" Jan 31 04:36:41 crc kubenswrapper[4827]: I0131 04:36:41.110519 4827 scope.go:117] "RemoveContainer" containerID="b7f195383b718c3f3a1651e752e04a9c322b656ce5c7aba0c9ad40630165c57d" Jan 31 04:36:41 crc kubenswrapper[4827]: E0131 04:36:41.111059 4827 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jxh94_openshift-machine-config-operator(e63dbb73-e1a2-4796-83c5-2a88e55566b5)\"" pod="openshift-machine-config-operator/machine-config-daemon-jxh94" podUID="e63dbb73-e1a2-4796-83c5-2a88e55566b5" Jan 31 04:36:42 crc kubenswrapper[4827]: I0131 04:36:42.122811 4827 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-cdd9cb94b-xgkxs" Jan 31 04:36:42 crc kubenswrapper[4827]: I0131 04:36:42.123300 4827 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-cdd9cb94b-xgkxs" Jan 31 04:36:42 crc kubenswrapper[4827]: I0131 04:36:42.124413 4827 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-cdd9cb94b-xgkxs" podUID="e277e6d3-f889-425a-abd6-3344f860bfd9" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.247:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.247:8443: connect: connection refused" Jan 31 04:36:42 crc kubenswrapper[4827]: I0131 04:36:42.354608 4827 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="" pod="openstack/horizon-5dd8c8746d-r25sr" Jan 31 04:36:42 crc kubenswrapper[4827]: I0131 04:36:42.354985 4827 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-5dd8c8746d-r25sr" Jan 31 04:36:42 crc kubenswrapper[4827]: I0131 04:36:42.356137 4827 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-5dd8c8746d-r25sr" podUID="19b64bcf-afad-4b01-8d57-1c1b56bb170f" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.248:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.248:8443: connect: connection refused" Jan 31 04:36:50 crc kubenswrapper[4827]: I0131 04:36:50.794066 4827 generic.go:334] "Generic (PLEG): container finished" podID="0c4fd939-69ab-4942-b829-5b4abab385db" containerID="c6d4b03242ed3d43c917ca3f38999e5747b76f4cb1881d2cdc3b3da8068cff44" exitCode=0 Jan 31 04:36:50 crc kubenswrapper[4827]: I0131 04:36:50.794499 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-db-sync-9c5ww" event={"ID":"0c4fd939-69ab-4942-b829-5b4abab385db","Type":"ContainerDied","Data":"c6d4b03242ed3d43c917ca3f38999e5747b76f4cb1881d2cdc3b3da8068cff44"} Jan 31 04:36:52 crc kubenswrapper[4827]: I0131 04:36:52.294014 4827 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/manila-db-sync-9c5ww" Jan 31 04:36:52 crc kubenswrapper[4827]: I0131 04:36:52.351636 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0c4fd939-69ab-4942-b829-5b4abab385db-combined-ca-bundle\") pod \"0c4fd939-69ab-4942-b829-5b4abab385db\" (UID: \"0c4fd939-69ab-4942-b829-5b4abab385db\") " Jan 31 04:36:52 crc kubenswrapper[4827]: I0131 04:36:52.351716 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"job-config-data\" (UniqueName: \"kubernetes.io/secret/0c4fd939-69ab-4942-b829-5b4abab385db-job-config-data\") pod \"0c4fd939-69ab-4942-b829-5b4abab385db\" (UID: \"0c4fd939-69ab-4942-b829-5b4abab385db\") " Jan 31 04:36:52 crc kubenswrapper[4827]: I0131 04:36:52.351788 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0c4fd939-69ab-4942-b829-5b4abab385db-config-data\") pod \"0c4fd939-69ab-4942-b829-5b4abab385db\" (UID: \"0c4fd939-69ab-4942-b829-5b4abab385db\") " Jan 31 04:36:52 crc kubenswrapper[4827]: I0131 04:36:52.352152 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-z9s5z\" (UniqueName: \"kubernetes.io/projected/0c4fd939-69ab-4942-b829-5b4abab385db-kube-api-access-z9s5z\") pod \"0c4fd939-69ab-4942-b829-5b4abab385db\" (UID: \"0c4fd939-69ab-4942-b829-5b4abab385db\") " Jan 31 04:36:52 crc kubenswrapper[4827]: I0131 04:36:52.356591 4827 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-5dd8c8746d-r25sr" podUID="19b64bcf-afad-4b01-8d57-1c1b56bb170f" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.248:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.248:8443: connect: connection refused" Jan 31 04:36:52 crc kubenswrapper[4827]: I0131 04:36:52.363169 4827 operation_generator.go:803] UnmountVolume.TearDown 
succeeded for volume "kubernetes.io/secret/0c4fd939-69ab-4942-b829-5b4abab385db-config-data" (OuterVolumeSpecName: "config-data") pod "0c4fd939-69ab-4942-b829-5b4abab385db" (UID: "0c4fd939-69ab-4942-b829-5b4abab385db"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 04:36:52 crc kubenswrapper[4827]: I0131 04:36:52.364293 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0c4fd939-69ab-4942-b829-5b4abab385db-kube-api-access-z9s5z" (OuterVolumeSpecName: "kube-api-access-z9s5z") pod "0c4fd939-69ab-4942-b829-5b4abab385db" (UID: "0c4fd939-69ab-4942-b829-5b4abab385db"). InnerVolumeSpecName "kube-api-access-z9s5z". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 04:36:52 crc kubenswrapper[4827]: I0131 04:36:52.370203 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0c4fd939-69ab-4942-b829-5b4abab385db-job-config-data" (OuterVolumeSpecName: "job-config-data") pod "0c4fd939-69ab-4942-b829-5b4abab385db" (UID: "0c4fd939-69ab-4942-b829-5b4abab385db"). InnerVolumeSpecName "job-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 04:36:52 crc kubenswrapper[4827]: I0131 04:36:52.400850 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0c4fd939-69ab-4942-b829-5b4abab385db-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "0c4fd939-69ab-4942-b829-5b4abab385db" (UID: "0c4fd939-69ab-4942-b829-5b4abab385db"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 04:36:52 crc kubenswrapper[4827]: I0131 04:36:52.454199 4827 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-z9s5z\" (UniqueName: \"kubernetes.io/projected/0c4fd939-69ab-4942-b829-5b4abab385db-kube-api-access-z9s5z\") on node \"crc\" DevicePath \"\"" Jan 31 04:36:52 crc kubenswrapper[4827]: I0131 04:36:52.454230 4827 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0c4fd939-69ab-4942-b829-5b4abab385db-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 31 04:36:52 crc kubenswrapper[4827]: I0131 04:36:52.454239 4827 reconciler_common.go:293] "Volume detached for volume \"job-config-data\" (UniqueName: \"kubernetes.io/secret/0c4fd939-69ab-4942-b829-5b4abab385db-job-config-data\") on node \"crc\" DevicePath \"\"" Jan 31 04:36:52 crc kubenswrapper[4827]: I0131 04:36:52.454249 4827 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0c4fd939-69ab-4942-b829-5b4abab385db-config-data\") on node \"crc\" DevicePath \"\"" Jan 31 04:36:52 crc kubenswrapper[4827]: I0131 04:36:52.813575 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-db-sync-9c5ww" event={"ID":"0c4fd939-69ab-4942-b829-5b4abab385db","Type":"ContainerDied","Data":"d7efc60207f53519949db008a12a20f33315cac81db70994ce2d8b68ca27c8be"} Jan 31 04:36:52 crc kubenswrapper[4827]: I0131 04:36:52.813611 4827 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d7efc60207f53519949db008a12a20f33315cac81db70994ce2d8b68ca27c8be" Jan 31 04:36:52 crc kubenswrapper[4827]: I0131 04:36:52.813668 4827 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/manila-db-sync-9c5ww" Jan 31 04:36:53 crc kubenswrapper[4827]: I0131 04:36:53.088551 4827 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/manila-scheduler-0"] Jan 31 04:36:53 crc kubenswrapper[4827]: E0131 04:36:53.089052 4827 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0c4fd939-69ab-4942-b829-5b4abab385db" containerName="manila-db-sync" Jan 31 04:36:53 crc kubenswrapper[4827]: I0131 04:36:53.089073 4827 state_mem.go:107] "Deleted CPUSet assignment" podUID="0c4fd939-69ab-4942-b829-5b4abab385db" containerName="manila-db-sync" Jan 31 04:36:53 crc kubenswrapper[4827]: I0131 04:36:53.089342 4827 memory_manager.go:354] "RemoveStaleState removing state" podUID="0c4fd939-69ab-4942-b829-5b4abab385db" containerName="manila-db-sync" Jan 31 04:36:53 crc kubenswrapper[4827]: I0131 04:36:53.090486 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/manila-scheduler-0" Jan 31 04:36:53 crc kubenswrapper[4827]: I0131 04:36:53.093434 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"manila-scripts" Jan 31 04:36:53 crc kubenswrapper[4827]: I0131 04:36:53.093585 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"manila-config-data" Jan 31 04:36:53 crc kubenswrapper[4827]: I0131 04:36:53.093625 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"manila-scheduler-config-data" Jan 31 04:36:53 crc kubenswrapper[4827]: I0131 04:36:53.093546 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"manila-manila-dockercfg-26zr9" Jan 31 04:36:53 crc kubenswrapper[4827]: I0131 04:36:53.106779 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/manila-scheduler-0"] Jan 31 04:36:53 crc kubenswrapper[4827]: I0131 04:36:53.166905 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" 
(UniqueName: \"kubernetes.io/secret/fcc4f548-655b-4c88-a55d-f69acb1d30f1-config-data-custom\") pod \"manila-scheduler-0\" (UID: \"fcc4f548-655b-4c88-a55d-f69acb1d30f1\") " pod="openstack/manila-scheduler-0" Jan 31 04:36:53 crc kubenswrapper[4827]: I0131 04:36:53.167205 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/fcc4f548-655b-4c88-a55d-f69acb1d30f1-etc-machine-id\") pod \"manila-scheduler-0\" (UID: \"fcc4f548-655b-4c88-a55d-f69acb1d30f1\") " pod="openstack/manila-scheduler-0" Jan 31 04:36:53 crc kubenswrapper[4827]: I0131 04:36:53.167341 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fcc4f548-655b-4c88-a55d-f69acb1d30f1-combined-ca-bundle\") pod \"manila-scheduler-0\" (UID: \"fcc4f548-655b-4c88-a55d-f69acb1d30f1\") " pod="openstack/manila-scheduler-0" Jan 31 04:36:53 crc kubenswrapper[4827]: I0131 04:36:53.167433 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fcc4f548-655b-4c88-a55d-f69acb1d30f1-config-data\") pod \"manila-scheduler-0\" (UID: \"fcc4f548-655b-4c88-a55d-f69acb1d30f1\") " pod="openstack/manila-scheduler-0" Jan 31 04:36:53 crc kubenswrapper[4827]: I0131 04:36:53.167527 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fcc4f548-655b-4c88-a55d-f69acb1d30f1-scripts\") pod \"manila-scheduler-0\" (UID: \"fcc4f548-655b-4c88-a55d-f69acb1d30f1\") " pod="openstack/manila-scheduler-0" Jan 31 04:36:53 crc kubenswrapper[4827]: I0131 04:36:53.167611 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-59gxd\" (UniqueName: 
\"kubernetes.io/projected/fcc4f548-655b-4c88-a55d-f69acb1d30f1-kube-api-access-59gxd\") pod \"manila-scheduler-0\" (UID: \"fcc4f548-655b-4c88-a55d-f69acb1d30f1\") " pod="openstack/manila-scheduler-0" Jan 31 04:36:53 crc kubenswrapper[4827]: I0131 04:36:53.217016 4827 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/manila-share-share1-0"] Jan 31 04:36:53 crc kubenswrapper[4827]: I0131 04:36:53.228723 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/manila-share-share1-0" Jan 31 04:36:53 crc kubenswrapper[4827]: I0131 04:36:53.233336 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"manila-share-share1-config-data" Jan 31 04:36:53 crc kubenswrapper[4827]: I0131 04:36:53.256552 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/manila-share-share1-0"] Jan 31 04:36:53 crc kubenswrapper[4827]: I0131 04:36:53.269217 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/324ece2d-b7d4-426f-8bb1-5832b52ee7bd-config-data-custom\") pod \"manila-share-share1-0\" (UID: \"324ece2d-b7d4-426f-8bb1-5832b52ee7bd\") " pod="openstack/manila-share-share1-0" Jan 31 04:36:53 crc kubenswrapper[4827]: I0131 04:36:53.269523 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/fcc4f548-655b-4c88-a55d-f69acb1d30f1-config-data-custom\") pod \"manila-scheduler-0\" (UID: \"fcc4f548-655b-4c88-a55d-f69acb1d30f1\") " pod="openstack/manila-scheduler-0" Jan 31 04:36:53 crc kubenswrapper[4827]: I0131 04:36:53.269627 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/324ece2d-b7d4-426f-8bb1-5832b52ee7bd-etc-machine-id\") pod \"manila-share-share1-0\" (UID: \"324ece2d-b7d4-426f-8bb1-5832b52ee7bd\") " 
pod="openstack/manila-share-share1-0" Jan 31 04:36:53 crc kubenswrapper[4827]: I0131 04:36:53.269724 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rxcfv\" (UniqueName: \"kubernetes.io/projected/324ece2d-b7d4-426f-8bb1-5832b52ee7bd-kube-api-access-rxcfv\") pod \"manila-share-share1-0\" (UID: \"324ece2d-b7d4-426f-8bb1-5832b52ee7bd\") " pod="openstack/manila-share-share1-0" Jan 31 04:36:53 crc kubenswrapper[4827]: I0131 04:36:53.269825 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/324ece2d-b7d4-426f-8bb1-5832b52ee7bd-combined-ca-bundle\") pod \"manila-share-share1-0\" (UID: \"324ece2d-b7d4-426f-8bb1-5832b52ee7bd\") " pod="openstack/manila-share-share1-0" Jan 31 04:36:53 crc kubenswrapper[4827]: I0131 04:36:53.269936 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/fcc4f548-655b-4c88-a55d-f69acb1d30f1-etc-machine-id\") pod \"manila-scheduler-0\" (UID: \"fcc4f548-655b-4c88-a55d-f69acb1d30f1\") " pod="openstack/manila-scheduler-0" Jan 31 04:36:53 crc kubenswrapper[4827]: I0131 04:36:53.270039 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/324ece2d-b7d4-426f-8bb1-5832b52ee7bd-ceph\") pod \"manila-share-share1-0\" (UID: \"324ece2d-b7d4-426f-8bb1-5832b52ee7bd\") " pod="openstack/manila-share-share1-0" Jan 31 04:36:53 crc kubenswrapper[4827]: I0131 04:36:53.270147 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fcc4f548-655b-4c88-a55d-f69acb1d30f1-combined-ca-bundle\") pod \"manila-scheduler-0\" (UID: \"fcc4f548-655b-4c88-a55d-f69acb1d30f1\") " pod="openstack/manila-scheduler-0" Jan 31 04:36:53 crc 
kubenswrapper[4827]: I0131 04:36:53.270247 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fcc4f548-655b-4c88-a55d-f69acb1d30f1-config-data\") pod \"manila-scheduler-0\" (UID: \"fcc4f548-655b-4c88-a55d-f69acb1d30f1\") " pod="openstack/manila-scheduler-0" Jan 31 04:36:53 crc kubenswrapper[4827]: I0131 04:36:53.270403 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/324ece2d-b7d4-426f-8bb1-5832b52ee7bd-config-data\") pod \"manila-share-share1-0\" (UID: \"324ece2d-b7d4-426f-8bb1-5832b52ee7bd\") " pod="openstack/manila-share-share1-0" Jan 31 04:36:53 crc kubenswrapper[4827]: I0131 04:36:53.270487 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fcc4f548-655b-4c88-a55d-f69acb1d30f1-scripts\") pod \"manila-scheduler-0\" (UID: \"fcc4f548-655b-4c88-a55d-f69acb1d30f1\") " pod="openstack/manila-scheduler-0" Jan 31 04:36:53 crc kubenswrapper[4827]: I0131 04:36:53.270565 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-59gxd\" (UniqueName: \"kubernetes.io/projected/fcc4f548-655b-4c88-a55d-f69acb1d30f1-kube-api-access-59gxd\") pod \"manila-scheduler-0\" (UID: \"fcc4f548-655b-4c88-a55d-f69acb1d30f1\") " pod="openstack/manila-scheduler-0" Jan 31 04:36:53 crc kubenswrapper[4827]: I0131 04:36:53.270667 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/324ece2d-b7d4-426f-8bb1-5832b52ee7bd-scripts\") pod \"manila-share-share1-0\" (UID: \"324ece2d-b7d4-426f-8bb1-5832b52ee7bd\") " pod="openstack/manila-share-share1-0" Jan 31 04:36:53 crc kubenswrapper[4827]: I0131 04:36:53.270751 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"var-lib-manila\" (UniqueName: \"kubernetes.io/host-path/324ece2d-b7d4-426f-8bb1-5832b52ee7bd-var-lib-manila\") pod \"manila-share-share1-0\" (UID: \"324ece2d-b7d4-426f-8bb1-5832b52ee7bd\") " pod="openstack/manila-share-share1-0" Jan 31 04:36:53 crc kubenswrapper[4827]: I0131 04:36:53.270048 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/fcc4f548-655b-4c88-a55d-f69acb1d30f1-etc-machine-id\") pod \"manila-scheduler-0\" (UID: \"fcc4f548-655b-4c88-a55d-f69acb1d30f1\") " pod="openstack/manila-scheduler-0" Jan 31 04:36:53 crc kubenswrapper[4827]: I0131 04:36:53.276340 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fcc4f548-655b-4c88-a55d-f69acb1d30f1-combined-ca-bundle\") pod \"manila-scheduler-0\" (UID: \"fcc4f548-655b-4c88-a55d-f69acb1d30f1\") " pod="openstack/manila-scheduler-0" Jan 31 04:36:53 crc kubenswrapper[4827]: I0131 04:36:53.277749 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/fcc4f548-655b-4c88-a55d-f69acb1d30f1-config-data-custom\") pod \"manila-scheduler-0\" (UID: \"fcc4f548-655b-4c88-a55d-f69acb1d30f1\") " pod="openstack/manila-scheduler-0" Jan 31 04:36:53 crc kubenswrapper[4827]: I0131 04:36:53.278773 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fcc4f548-655b-4c88-a55d-f69acb1d30f1-config-data\") pod \"manila-scheduler-0\" (UID: \"fcc4f548-655b-4c88-a55d-f69acb1d30f1\") " pod="openstack/manila-scheduler-0" Jan 31 04:36:53 crc kubenswrapper[4827]: I0131 04:36:53.314262 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fcc4f548-655b-4c88-a55d-f69acb1d30f1-scripts\") pod \"manila-scheduler-0\" (UID: \"fcc4f548-655b-4c88-a55d-f69acb1d30f1\") " 
pod="openstack/manila-scheduler-0" Jan 31 04:36:53 crc kubenswrapper[4827]: I0131 04:36:53.315423 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-59gxd\" (UniqueName: \"kubernetes.io/projected/fcc4f548-655b-4c88-a55d-f69acb1d30f1-kube-api-access-59gxd\") pod \"manila-scheduler-0\" (UID: \"fcc4f548-655b-4c88-a55d-f69acb1d30f1\") " pod="openstack/manila-scheduler-0" Jan 31 04:36:53 crc kubenswrapper[4827]: I0131 04:36:53.385549 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/324ece2d-b7d4-426f-8bb1-5832b52ee7bd-ceph\") pod \"manila-share-share1-0\" (UID: \"324ece2d-b7d4-426f-8bb1-5832b52ee7bd\") " pod="openstack/manila-share-share1-0" Jan 31 04:36:53 crc kubenswrapper[4827]: I0131 04:36:53.385651 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/324ece2d-b7d4-426f-8bb1-5832b52ee7bd-config-data\") pod \"manila-share-share1-0\" (UID: \"324ece2d-b7d4-426f-8bb1-5832b52ee7bd\") " pod="openstack/manila-share-share1-0" Jan 31 04:36:53 crc kubenswrapper[4827]: I0131 04:36:53.385685 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/324ece2d-b7d4-426f-8bb1-5832b52ee7bd-scripts\") pod \"manila-share-share1-0\" (UID: \"324ece2d-b7d4-426f-8bb1-5832b52ee7bd\") " pod="openstack/manila-share-share1-0" Jan 31 04:36:53 crc kubenswrapper[4827]: I0131 04:36:53.385705 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-manila\" (UniqueName: \"kubernetes.io/host-path/324ece2d-b7d4-426f-8bb1-5832b52ee7bd-var-lib-manila\") pod \"manila-share-share1-0\" (UID: \"324ece2d-b7d4-426f-8bb1-5832b52ee7bd\") " pod="openstack/manila-share-share1-0" Jan 31 04:36:53 crc kubenswrapper[4827]: I0131 04:36:53.385747 4827 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/324ece2d-b7d4-426f-8bb1-5832b52ee7bd-config-data-custom\") pod \"manila-share-share1-0\" (UID: \"324ece2d-b7d4-426f-8bb1-5832b52ee7bd\") " pod="openstack/manila-share-share1-0" Jan 31 04:36:53 crc kubenswrapper[4827]: I0131 04:36:53.385781 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/324ece2d-b7d4-426f-8bb1-5832b52ee7bd-etc-machine-id\") pod \"manila-share-share1-0\" (UID: \"324ece2d-b7d4-426f-8bb1-5832b52ee7bd\") " pod="openstack/manila-share-share1-0" Jan 31 04:36:53 crc kubenswrapper[4827]: I0131 04:36:53.385806 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rxcfv\" (UniqueName: \"kubernetes.io/projected/324ece2d-b7d4-426f-8bb1-5832b52ee7bd-kube-api-access-rxcfv\") pod \"manila-share-share1-0\" (UID: \"324ece2d-b7d4-426f-8bb1-5832b52ee7bd\") " pod="openstack/manila-share-share1-0" Jan 31 04:36:53 crc kubenswrapper[4827]: I0131 04:36:53.385827 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/324ece2d-b7d4-426f-8bb1-5832b52ee7bd-combined-ca-bundle\") pod \"manila-share-share1-0\" (UID: \"324ece2d-b7d4-426f-8bb1-5832b52ee7bd\") " pod="openstack/manila-share-share1-0" Jan 31 04:36:53 crc kubenswrapper[4827]: I0131 04:36:53.388725 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-manila\" (UniqueName: \"kubernetes.io/host-path/324ece2d-b7d4-426f-8bb1-5832b52ee7bd-var-lib-manila\") pod \"manila-share-share1-0\" (UID: \"324ece2d-b7d4-426f-8bb1-5832b52ee7bd\") " pod="openstack/manila-share-share1-0" Jan 31 04:36:53 crc kubenswrapper[4827]: I0131 04:36:53.392958 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: 
\"kubernetes.io/host-path/324ece2d-b7d4-426f-8bb1-5832b52ee7bd-etc-machine-id\") pod \"manila-share-share1-0\" (UID: \"324ece2d-b7d4-426f-8bb1-5832b52ee7bd\") " pod="openstack/manila-share-share1-0" Jan 31 04:36:53 crc kubenswrapper[4827]: I0131 04:36:53.394671 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/324ece2d-b7d4-426f-8bb1-5832b52ee7bd-config-data\") pod \"manila-share-share1-0\" (UID: \"324ece2d-b7d4-426f-8bb1-5832b52ee7bd\") " pod="openstack/manila-share-share1-0" Jan 31 04:36:53 crc kubenswrapper[4827]: I0131 04:36:53.402062 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/324ece2d-b7d4-426f-8bb1-5832b52ee7bd-combined-ca-bundle\") pod \"manila-share-share1-0\" (UID: \"324ece2d-b7d4-426f-8bb1-5832b52ee7bd\") " pod="openstack/manila-share-share1-0" Jan 31 04:36:53 crc kubenswrapper[4827]: I0131 04:36:53.418383 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/324ece2d-b7d4-426f-8bb1-5832b52ee7bd-scripts\") pod \"manila-share-share1-0\" (UID: \"324ece2d-b7d4-426f-8bb1-5832b52ee7bd\") " pod="openstack/manila-share-share1-0" Jan 31 04:36:53 crc kubenswrapper[4827]: I0131 04:36:53.418958 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/324ece2d-b7d4-426f-8bb1-5832b52ee7bd-ceph\") pod \"manila-share-share1-0\" (UID: \"324ece2d-b7d4-426f-8bb1-5832b52ee7bd\") " pod="openstack/manila-share-share1-0" Jan 31 04:36:53 crc kubenswrapper[4827]: I0131 04:36:53.419310 4827 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/manila-scheduler-0" Jan 31 04:36:53 crc kubenswrapper[4827]: I0131 04:36:53.424548 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/324ece2d-b7d4-426f-8bb1-5832b52ee7bd-config-data-custom\") pod \"manila-share-share1-0\" (UID: \"324ece2d-b7d4-426f-8bb1-5832b52ee7bd\") " pod="openstack/manila-share-share1-0" Jan 31 04:36:53 crc kubenswrapper[4827]: I0131 04:36:53.429384 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rxcfv\" (UniqueName: \"kubernetes.io/projected/324ece2d-b7d4-426f-8bb1-5832b52ee7bd-kube-api-access-rxcfv\") pod \"manila-share-share1-0\" (UID: \"324ece2d-b7d4-426f-8bb1-5832b52ee7bd\") " pod="openstack/manila-share-share1-0" Jan 31 04:36:53 crc kubenswrapper[4827]: I0131 04:36:53.504930 4827 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-76b5fdb995-b4n6f"] Jan 31 04:36:53 crc kubenswrapper[4827]: I0131 04:36:53.524513 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-76b5fdb995-b4n6f" Jan 31 04:36:53 crc kubenswrapper[4827]: I0131 04:36:53.547027 4827 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/manila-share-share1-0" Jan 31 04:36:53 crc kubenswrapper[4827]: I0131 04:36:53.552374 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-76b5fdb995-b4n6f"] Jan 31 04:36:53 crc kubenswrapper[4827]: I0131 04:36:53.589415 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/8921f642-11e6-4efc-9441-9f3ee68ed074-ovsdbserver-nb\") pod \"dnsmasq-dns-76b5fdb995-b4n6f\" (UID: \"8921f642-11e6-4efc-9441-9f3ee68ed074\") " pod="openstack/dnsmasq-dns-76b5fdb995-b4n6f" Jan 31 04:36:53 crc kubenswrapper[4827]: I0131 04:36:53.589463 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mdw5h\" (UniqueName: \"kubernetes.io/projected/8921f642-11e6-4efc-9441-9f3ee68ed074-kube-api-access-mdw5h\") pod \"dnsmasq-dns-76b5fdb995-b4n6f\" (UID: \"8921f642-11e6-4efc-9441-9f3ee68ed074\") " pod="openstack/dnsmasq-dns-76b5fdb995-b4n6f" Jan 31 04:36:53 crc kubenswrapper[4827]: I0131 04:36:53.589491 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/8921f642-11e6-4efc-9441-9f3ee68ed074-ovsdbserver-sb\") pod \"dnsmasq-dns-76b5fdb995-b4n6f\" (UID: \"8921f642-11e6-4efc-9441-9f3ee68ed074\") " pod="openstack/dnsmasq-dns-76b5fdb995-b4n6f" Jan 31 04:36:53 crc kubenswrapper[4827]: I0131 04:36:53.589580 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/8921f642-11e6-4efc-9441-9f3ee68ed074-openstack-edpm-ipam\") pod \"dnsmasq-dns-76b5fdb995-b4n6f\" (UID: \"8921f642-11e6-4efc-9441-9f3ee68ed074\") " pod="openstack/dnsmasq-dns-76b5fdb995-b4n6f" Jan 31 04:36:53 crc kubenswrapper[4827]: I0131 04:36:53.589622 4827 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8921f642-11e6-4efc-9441-9f3ee68ed074-config\") pod \"dnsmasq-dns-76b5fdb995-b4n6f\" (UID: \"8921f642-11e6-4efc-9441-9f3ee68ed074\") " pod="openstack/dnsmasq-dns-76b5fdb995-b4n6f" Jan 31 04:36:53 crc kubenswrapper[4827]: I0131 04:36:53.589644 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8921f642-11e6-4efc-9441-9f3ee68ed074-dns-svc\") pod \"dnsmasq-dns-76b5fdb995-b4n6f\" (UID: \"8921f642-11e6-4efc-9441-9f3ee68ed074\") " pod="openstack/dnsmasq-dns-76b5fdb995-b4n6f" Jan 31 04:36:53 crc kubenswrapper[4827]: I0131 04:36:53.621999 4827 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/manila-api-0"] Jan 31 04:36:53 crc kubenswrapper[4827]: I0131 04:36:53.623423 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/manila-api-0" Jan 31 04:36:53 crc kubenswrapper[4827]: I0131 04:36:53.638283 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"manila-api-config-data" Jan 31 04:36:53 crc kubenswrapper[4827]: I0131 04:36:53.648613 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/manila-api-0"] Jan 31 04:36:53 crc kubenswrapper[4827]: I0131 04:36:53.692417 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/385fce4a-8067-476b-9dda-72f222cfa974-combined-ca-bundle\") pod \"manila-api-0\" (UID: \"385fce4a-8067-476b-9dda-72f222cfa974\") " pod="openstack/manila-api-0" Jan 31 04:36:53 crc kubenswrapper[4827]: I0131 04:36:53.692749 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/385fce4a-8067-476b-9dda-72f222cfa974-config-data-custom\") pod 
\"manila-api-0\" (UID: \"385fce4a-8067-476b-9dda-72f222cfa974\") " pod="openstack/manila-api-0" Jan 31 04:36:53 crc kubenswrapper[4827]: I0131 04:36:53.692770 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/385fce4a-8067-476b-9dda-72f222cfa974-etc-machine-id\") pod \"manila-api-0\" (UID: \"385fce4a-8067-476b-9dda-72f222cfa974\") " pod="openstack/manila-api-0" Jan 31 04:36:53 crc kubenswrapper[4827]: I0131 04:36:53.692796 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/8921f642-11e6-4efc-9441-9f3ee68ed074-ovsdbserver-nb\") pod \"dnsmasq-dns-76b5fdb995-b4n6f\" (UID: \"8921f642-11e6-4efc-9441-9f3ee68ed074\") " pod="openstack/dnsmasq-dns-76b5fdb995-b4n6f" Jan 31 04:36:53 crc kubenswrapper[4827]: I0131 04:36:53.692818 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mdw5h\" (UniqueName: \"kubernetes.io/projected/8921f642-11e6-4efc-9441-9f3ee68ed074-kube-api-access-mdw5h\") pod \"dnsmasq-dns-76b5fdb995-b4n6f\" (UID: \"8921f642-11e6-4efc-9441-9f3ee68ed074\") " pod="openstack/dnsmasq-dns-76b5fdb995-b4n6f" Jan 31 04:36:53 crc kubenswrapper[4827]: I0131 04:36:53.692839 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/8921f642-11e6-4efc-9441-9f3ee68ed074-ovsdbserver-sb\") pod \"dnsmasq-dns-76b5fdb995-b4n6f\" (UID: \"8921f642-11e6-4efc-9441-9f3ee68ed074\") " pod="openstack/dnsmasq-dns-76b5fdb995-b4n6f" Jan 31 04:36:53 crc kubenswrapper[4827]: I0131 04:36:53.692875 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/385fce4a-8067-476b-9dda-72f222cfa974-scripts\") pod \"manila-api-0\" (UID: \"385fce4a-8067-476b-9dda-72f222cfa974\") " 
pod="openstack/manila-api-0" Jan 31 04:36:53 crc kubenswrapper[4827]: I0131 04:36:53.692936 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/385fce4a-8067-476b-9dda-72f222cfa974-config-data\") pod \"manila-api-0\" (UID: \"385fce4a-8067-476b-9dda-72f222cfa974\") " pod="openstack/manila-api-0" Jan 31 04:36:53 crc kubenswrapper[4827]: I0131 04:36:53.692951 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/385fce4a-8067-476b-9dda-72f222cfa974-logs\") pod \"manila-api-0\" (UID: \"385fce4a-8067-476b-9dda-72f222cfa974\") " pod="openstack/manila-api-0" Jan 31 04:36:53 crc kubenswrapper[4827]: I0131 04:36:53.692973 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/8921f642-11e6-4efc-9441-9f3ee68ed074-openstack-edpm-ipam\") pod \"dnsmasq-dns-76b5fdb995-b4n6f\" (UID: \"8921f642-11e6-4efc-9441-9f3ee68ed074\") " pod="openstack/dnsmasq-dns-76b5fdb995-b4n6f" Jan 31 04:36:53 crc kubenswrapper[4827]: I0131 04:36:53.693004 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8921f642-11e6-4efc-9441-9f3ee68ed074-config\") pod \"dnsmasq-dns-76b5fdb995-b4n6f\" (UID: \"8921f642-11e6-4efc-9441-9f3ee68ed074\") " pod="openstack/dnsmasq-dns-76b5fdb995-b4n6f" Jan 31 04:36:53 crc kubenswrapper[4827]: I0131 04:36:53.693020 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8921f642-11e6-4efc-9441-9f3ee68ed074-dns-svc\") pod \"dnsmasq-dns-76b5fdb995-b4n6f\" (UID: \"8921f642-11e6-4efc-9441-9f3ee68ed074\") " pod="openstack/dnsmasq-dns-76b5fdb995-b4n6f" Jan 31 04:36:53 crc kubenswrapper[4827]: I0131 04:36:53.694599 4827 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/8921f642-11e6-4efc-9441-9f3ee68ed074-ovsdbserver-sb\") pod \"dnsmasq-dns-76b5fdb995-b4n6f\" (UID: \"8921f642-11e6-4efc-9441-9f3ee68ed074\") " pod="openstack/dnsmasq-dns-76b5fdb995-b4n6f" Jan 31 04:36:53 crc kubenswrapper[4827]: I0131 04:36:53.694932 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-trr6b\" (UniqueName: \"kubernetes.io/projected/385fce4a-8067-476b-9dda-72f222cfa974-kube-api-access-trr6b\") pod \"manila-api-0\" (UID: \"385fce4a-8067-476b-9dda-72f222cfa974\") " pod="openstack/manila-api-0" Jan 31 04:36:53 crc kubenswrapper[4827]: I0131 04:36:53.695352 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/8921f642-11e6-4efc-9441-9f3ee68ed074-ovsdbserver-nb\") pod \"dnsmasq-dns-76b5fdb995-b4n6f\" (UID: \"8921f642-11e6-4efc-9441-9f3ee68ed074\") " pod="openstack/dnsmasq-dns-76b5fdb995-b4n6f" Jan 31 04:36:53 crc kubenswrapper[4827]: I0131 04:36:53.695816 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8921f642-11e6-4efc-9441-9f3ee68ed074-config\") pod \"dnsmasq-dns-76b5fdb995-b4n6f\" (UID: \"8921f642-11e6-4efc-9441-9f3ee68ed074\") " pod="openstack/dnsmasq-dns-76b5fdb995-b4n6f" Jan 31 04:36:53 crc kubenswrapper[4827]: I0131 04:36:53.696490 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8921f642-11e6-4efc-9441-9f3ee68ed074-dns-svc\") pod \"dnsmasq-dns-76b5fdb995-b4n6f\" (UID: \"8921f642-11e6-4efc-9441-9f3ee68ed074\") " pod="openstack/dnsmasq-dns-76b5fdb995-b4n6f" Jan 31 04:36:53 crc kubenswrapper[4827]: I0131 04:36:53.698246 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam\" (UniqueName: 
\"kubernetes.io/configmap/8921f642-11e6-4efc-9441-9f3ee68ed074-openstack-edpm-ipam\") pod \"dnsmasq-dns-76b5fdb995-b4n6f\" (UID: \"8921f642-11e6-4efc-9441-9f3ee68ed074\") " pod="openstack/dnsmasq-dns-76b5fdb995-b4n6f" Jan 31 04:36:53 crc kubenswrapper[4827]: I0131 04:36:53.737975 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mdw5h\" (UniqueName: \"kubernetes.io/projected/8921f642-11e6-4efc-9441-9f3ee68ed074-kube-api-access-mdw5h\") pod \"dnsmasq-dns-76b5fdb995-b4n6f\" (UID: \"8921f642-11e6-4efc-9441-9f3ee68ed074\") " pod="openstack/dnsmasq-dns-76b5fdb995-b4n6f" Jan 31 04:36:53 crc kubenswrapper[4827]: I0131 04:36:53.797793 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/385fce4a-8067-476b-9dda-72f222cfa974-combined-ca-bundle\") pod \"manila-api-0\" (UID: \"385fce4a-8067-476b-9dda-72f222cfa974\") " pod="openstack/manila-api-0" Jan 31 04:36:53 crc kubenswrapper[4827]: I0131 04:36:53.797840 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/385fce4a-8067-476b-9dda-72f222cfa974-config-data-custom\") pod \"manila-api-0\" (UID: \"385fce4a-8067-476b-9dda-72f222cfa974\") " pod="openstack/manila-api-0" Jan 31 04:36:53 crc kubenswrapper[4827]: I0131 04:36:53.797863 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/385fce4a-8067-476b-9dda-72f222cfa974-etc-machine-id\") pod \"manila-api-0\" (UID: \"385fce4a-8067-476b-9dda-72f222cfa974\") " pod="openstack/manila-api-0" Jan 31 04:36:53 crc kubenswrapper[4827]: I0131 04:36:53.797929 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/385fce4a-8067-476b-9dda-72f222cfa974-scripts\") pod \"manila-api-0\" (UID: 
\"385fce4a-8067-476b-9dda-72f222cfa974\") " pod="openstack/manila-api-0" Jan 31 04:36:53 crc kubenswrapper[4827]: I0131 04:36:53.797982 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/385fce4a-8067-476b-9dda-72f222cfa974-config-data\") pod \"manila-api-0\" (UID: \"385fce4a-8067-476b-9dda-72f222cfa974\") " pod="openstack/manila-api-0" Jan 31 04:36:53 crc kubenswrapper[4827]: I0131 04:36:53.797996 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/385fce4a-8067-476b-9dda-72f222cfa974-logs\") pod \"manila-api-0\" (UID: \"385fce4a-8067-476b-9dda-72f222cfa974\") " pod="openstack/manila-api-0" Jan 31 04:36:53 crc kubenswrapper[4827]: I0131 04:36:53.798042 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-trr6b\" (UniqueName: \"kubernetes.io/projected/385fce4a-8067-476b-9dda-72f222cfa974-kube-api-access-trr6b\") pod \"manila-api-0\" (UID: \"385fce4a-8067-476b-9dda-72f222cfa974\") " pod="openstack/manila-api-0" Jan 31 04:36:53 crc kubenswrapper[4827]: I0131 04:36:53.802247 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/385fce4a-8067-476b-9dda-72f222cfa974-etc-machine-id\") pod \"manila-api-0\" (UID: \"385fce4a-8067-476b-9dda-72f222cfa974\") " pod="openstack/manila-api-0" Jan 31 04:36:53 crc kubenswrapper[4827]: I0131 04:36:53.802870 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/385fce4a-8067-476b-9dda-72f222cfa974-scripts\") pod \"manila-api-0\" (UID: \"385fce4a-8067-476b-9dda-72f222cfa974\") " pod="openstack/manila-api-0" Jan 31 04:36:53 crc kubenswrapper[4827]: I0131 04:36:53.802972 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: 
\"kubernetes.io/empty-dir/385fce4a-8067-476b-9dda-72f222cfa974-logs\") pod \"manila-api-0\" (UID: \"385fce4a-8067-476b-9dda-72f222cfa974\") " pod="openstack/manila-api-0" Jan 31 04:36:53 crc kubenswrapper[4827]: I0131 04:36:53.803951 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/385fce4a-8067-476b-9dda-72f222cfa974-combined-ca-bundle\") pod \"manila-api-0\" (UID: \"385fce4a-8067-476b-9dda-72f222cfa974\") " pod="openstack/manila-api-0" Jan 31 04:36:53 crc kubenswrapper[4827]: I0131 04:36:53.814925 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/385fce4a-8067-476b-9dda-72f222cfa974-config-data-custom\") pod \"manila-api-0\" (UID: \"385fce4a-8067-476b-9dda-72f222cfa974\") " pod="openstack/manila-api-0" Jan 31 04:36:53 crc kubenswrapper[4827]: I0131 04:36:53.815557 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/385fce4a-8067-476b-9dda-72f222cfa974-config-data\") pod \"manila-api-0\" (UID: \"385fce4a-8067-476b-9dda-72f222cfa974\") " pod="openstack/manila-api-0" Jan 31 04:36:53 crc kubenswrapper[4827]: I0131 04:36:53.818311 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-trr6b\" (UniqueName: \"kubernetes.io/projected/385fce4a-8067-476b-9dda-72f222cfa974-kube-api-access-trr6b\") pod \"manila-api-0\" (UID: \"385fce4a-8067-476b-9dda-72f222cfa974\") " pod="openstack/manila-api-0" Jan 31 04:36:53 crc kubenswrapper[4827]: I0131 04:36:53.898509 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-76b5fdb995-b4n6f" Jan 31 04:36:53 crc kubenswrapper[4827]: I0131 04:36:53.954362 4827 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/manila-api-0" Jan 31 04:36:54 crc kubenswrapper[4827]: I0131 04:36:54.069348 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/manila-scheduler-0"] Jan 31 04:36:54 crc kubenswrapper[4827]: I0131 04:36:54.111119 4827 scope.go:117] "RemoveContainer" containerID="b7f195383b718c3f3a1651e752e04a9c322b656ce5c7aba0c9ad40630165c57d" Jan 31 04:36:54 crc kubenswrapper[4827]: E0131 04:36:54.111337 4827 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jxh94_openshift-machine-config-operator(e63dbb73-e1a2-4796-83c5-2a88e55566b5)\"" pod="openshift-machine-config-operator/machine-config-daemon-jxh94" podUID="e63dbb73-e1a2-4796-83c5-2a88e55566b5" Jan 31 04:36:54 crc kubenswrapper[4827]: I0131 04:36:54.200488 4827 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/horizon-cdd9cb94b-xgkxs" Jan 31 04:36:54 crc kubenswrapper[4827]: I0131 04:36:54.315113 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/manila-share-share1-0"] Jan 31 04:36:54 crc kubenswrapper[4827]: I0131 04:36:54.456747 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-76b5fdb995-b4n6f"] Jan 31 04:36:54 crc kubenswrapper[4827]: W0131 04:36:54.469269 4827 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8921f642_11e6_4efc_9441_9f3ee68ed074.slice/crio-8695cfb7529a10c86c12b5ae8232c0a00e2837005ab794205ef2a424744eb954 WatchSource:0}: Error finding container 8695cfb7529a10c86c12b5ae8232c0a00e2837005ab794205ef2a424744eb954: Status 404 returned error can't find the container with id 8695cfb7529a10c86c12b5ae8232c0a00e2837005ab794205ef2a424744eb954 Jan 31 04:36:54 crc kubenswrapper[4827]: I0131 04:36:54.586196 4827 
kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/manila-api-0"] Jan 31 04:36:54 crc kubenswrapper[4827]: I0131 04:36:54.835837 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-scheduler-0" event={"ID":"fcc4f548-655b-4c88-a55d-f69acb1d30f1","Type":"ContainerStarted","Data":"487ebd2f399821beb1773f0d40eee32eac5e6d8bca5550b7588f5418314084f4"} Jan 31 04:36:54 crc kubenswrapper[4827]: I0131 04:36:54.837460 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-api-0" event={"ID":"385fce4a-8067-476b-9dda-72f222cfa974","Type":"ContainerStarted","Data":"69dd047e5b1628251b89d0b4287424bf7be5058aac397a0e449cf1c5e8a7b584"} Jan 31 04:36:54 crc kubenswrapper[4827]: I0131 04:36:54.840993 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-share-share1-0" event={"ID":"324ece2d-b7d4-426f-8bb1-5832b52ee7bd","Type":"ContainerStarted","Data":"e96465d154738df1780baeeffa377006ce16daf5dfd0288e6ed57288cdc492ef"} Jan 31 04:36:54 crc kubenswrapper[4827]: I0131 04:36:54.843584 4827 generic.go:334] "Generic (PLEG): container finished" podID="8921f642-11e6-4efc-9441-9f3ee68ed074" containerID="40b9a95d7965f3d0aed428a875d462eb892545db6a10f4ecfe6a93039fd36a4a" exitCode=0 Jan 31 04:36:54 crc kubenswrapper[4827]: I0131 04:36:54.843621 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-76b5fdb995-b4n6f" event={"ID":"8921f642-11e6-4efc-9441-9f3ee68ed074","Type":"ContainerDied","Data":"40b9a95d7965f3d0aed428a875d462eb892545db6a10f4ecfe6a93039fd36a4a"} Jan 31 04:36:54 crc kubenswrapper[4827]: I0131 04:36:54.843640 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-76b5fdb995-b4n6f" event={"ID":"8921f642-11e6-4efc-9441-9f3ee68ed074","Type":"ContainerStarted","Data":"8695cfb7529a10c86c12b5ae8232c0a00e2837005ab794205ef2a424744eb954"} Jan 31 04:36:55 crc kubenswrapper[4827]: I0131 04:36:55.856996 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/dnsmasq-dns-76b5fdb995-b4n6f" event={"ID":"8921f642-11e6-4efc-9441-9f3ee68ed074","Type":"ContainerStarted","Data":"91dcb95f80292672cf6983a8802b1905f41494287b46ff70234b982347e8a121"} Jan 31 04:36:55 crc kubenswrapper[4827]: I0131 04:36:55.858034 4827 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-76b5fdb995-b4n6f" Jan 31 04:36:55 crc kubenswrapper[4827]: I0131 04:36:55.862126 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-scheduler-0" event={"ID":"fcc4f548-655b-4c88-a55d-f69acb1d30f1","Type":"ContainerStarted","Data":"7799f664949899427d8197934905104485d0cb6b67e38e624c0440d4ddab13e7"} Jan 31 04:36:55 crc kubenswrapper[4827]: I0131 04:36:55.863849 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-api-0" event={"ID":"385fce4a-8067-476b-9dda-72f222cfa974","Type":"ContainerStarted","Data":"60dd04f1c3575ce230d49e0a78ec2fd2edd6351ef1b5811360ea3fe06552a7aa"} Jan 31 04:36:55 crc kubenswrapper[4827]: I0131 04:36:55.863873 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-api-0" event={"ID":"385fce4a-8067-476b-9dda-72f222cfa974","Type":"ContainerStarted","Data":"00d73d009b582a21d30f1d5e54f33aa93f41e852da293fa772e9df7ec00b8ca1"} Jan 31 04:36:55 crc kubenswrapper[4827]: I0131 04:36:55.864502 4827 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/manila-api-0" Jan 31 04:36:55 crc kubenswrapper[4827]: I0131 04:36:55.880732 4827 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-76b5fdb995-b4n6f" podStartSLOduration=2.880715505 podStartE2EDuration="2.880715505s" podCreationTimestamp="2026-01-31 04:36:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 04:36:55.872931907 +0000 UTC m=+3008.560012356" watchObservedRunningTime="2026-01-31 04:36:55.880715505 +0000 UTC 
m=+3008.567795954" Jan 31 04:36:56 crc kubenswrapper[4827]: I0131 04:36:56.258069 4827 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/manila-api-0" podStartSLOduration=3.25804685 podStartE2EDuration="3.25804685s" podCreationTimestamp="2026-01-31 04:36:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 04:36:55.894095004 +0000 UTC m=+3008.581175453" watchObservedRunningTime="2026-01-31 04:36:56.25804685 +0000 UTC m=+3008.945127309" Jan 31 04:36:56 crc kubenswrapper[4827]: I0131 04:36:56.268027 4827 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/manila-api-0"] Jan 31 04:36:56 crc kubenswrapper[4827]: I0131 04:36:56.483322 4827 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/horizon-cdd9cb94b-xgkxs" Jan 31 04:36:56 crc kubenswrapper[4827]: I0131 04:36:56.902235 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-scheduler-0" event={"ID":"fcc4f548-655b-4c88-a55d-f69acb1d30f1","Type":"ContainerStarted","Data":"4ae5d4a39c303857274d0fffb65acd4f318f888fb7339395a29e04858052909e"} Jan 31 04:36:56 crc kubenswrapper[4827]: I0131 04:36:56.936155 4827 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/manila-scheduler-0" podStartSLOduration=3.248280011 podStartE2EDuration="3.936139625s" podCreationTimestamp="2026-01-31 04:36:53 +0000 UTC" firstStartedPulling="2026-01-31 04:36:54.077962449 +0000 UTC m=+3006.765042898" lastFinishedPulling="2026-01-31 04:36:54.765822043 +0000 UTC m=+3007.452902512" observedRunningTime="2026-01-31 04:36:56.934097543 +0000 UTC m=+3009.621178012" watchObservedRunningTime="2026-01-31 04:36:56.936139625 +0000 UTC m=+3009.623220074" Jan 31 04:36:57 crc kubenswrapper[4827]: I0131 04:36:57.908588 4827 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/manila-api-0" 
podUID="385fce4a-8067-476b-9dda-72f222cfa974" containerName="manila-api-log" containerID="cri-o://00d73d009b582a21d30f1d5e54f33aa93f41e852da293fa772e9df7ec00b8ca1" gracePeriod=30 Jan 31 04:36:57 crc kubenswrapper[4827]: I0131 04:36:57.908672 4827 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/manila-api-0" podUID="385fce4a-8067-476b-9dda-72f222cfa974" containerName="manila-api" containerID="cri-o://60dd04f1c3575ce230d49e0a78ec2fd2edd6351ef1b5811360ea3fe06552a7aa" gracePeriod=30 Jan 31 04:36:58 crc kubenswrapper[4827]: I0131 04:36:58.653587 4827 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/manila-api-0" Jan 31 04:36:58 crc kubenswrapper[4827]: I0131 04:36:58.656704 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/385fce4a-8067-476b-9dda-72f222cfa974-config-data-custom\") pod \"385fce4a-8067-476b-9dda-72f222cfa974\" (UID: \"385fce4a-8067-476b-9dda-72f222cfa974\") " Jan 31 04:36:58 crc kubenswrapper[4827]: I0131 04:36:58.656736 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/385fce4a-8067-476b-9dda-72f222cfa974-logs\") pod \"385fce4a-8067-476b-9dda-72f222cfa974\" (UID: \"385fce4a-8067-476b-9dda-72f222cfa974\") " Jan 31 04:36:58 crc kubenswrapper[4827]: I0131 04:36:58.656782 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/385fce4a-8067-476b-9dda-72f222cfa974-scripts\") pod \"385fce4a-8067-476b-9dda-72f222cfa974\" (UID: \"385fce4a-8067-476b-9dda-72f222cfa974\") " Jan 31 04:36:58 crc kubenswrapper[4827]: I0131 04:36:58.656833 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/385fce4a-8067-476b-9dda-72f222cfa974-etc-machine-id\") pod 
\"385fce4a-8067-476b-9dda-72f222cfa974\" (UID: \"385fce4a-8067-476b-9dda-72f222cfa974\") " Jan 31 04:36:58 crc kubenswrapper[4827]: I0131 04:36:58.656963 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-trr6b\" (UniqueName: \"kubernetes.io/projected/385fce4a-8067-476b-9dda-72f222cfa974-kube-api-access-trr6b\") pod \"385fce4a-8067-476b-9dda-72f222cfa974\" (UID: \"385fce4a-8067-476b-9dda-72f222cfa974\") " Jan 31 04:36:58 crc kubenswrapper[4827]: I0131 04:36:58.657143 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/385fce4a-8067-476b-9dda-72f222cfa974-combined-ca-bundle\") pod \"385fce4a-8067-476b-9dda-72f222cfa974\" (UID: \"385fce4a-8067-476b-9dda-72f222cfa974\") " Jan 31 04:36:58 crc kubenswrapper[4827]: I0131 04:36:58.657186 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/385fce4a-8067-476b-9dda-72f222cfa974-config-data\") pod \"385fce4a-8067-476b-9dda-72f222cfa974\" (UID: \"385fce4a-8067-476b-9dda-72f222cfa974\") " Jan 31 04:36:58 crc kubenswrapper[4827]: I0131 04:36:58.659996 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/385fce4a-8067-476b-9dda-72f222cfa974-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "385fce4a-8067-476b-9dda-72f222cfa974" (UID: "385fce4a-8067-476b-9dda-72f222cfa974"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 31 04:36:58 crc kubenswrapper[4827]: I0131 04:36:58.660467 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/385fce4a-8067-476b-9dda-72f222cfa974-logs" (OuterVolumeSpecName: "logs") pod "385fce4a-8067-476b-9dda-72f222cfa974" (UID: "385fce4a-8067-476b-9dda-72f222cfa974"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 04:36:58 crc kubenswrapper[4827]: I0131 04:36:58.663409 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/385fce4a-8067-476b-9dda-72f222cfa974-kube-api-access-trr6b" (OuterVolumeSpecName: "kube-api-access-trr6b") pod "385fce4a-8067-476b-9dda-72f222cfa974" (UID: "385fce4a-8067-476b-9dda-72f222cfa974"). InnerVolumeSpecName "kube-api-access-trr6b". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 04:36:58 crc kubenswrapper[4827]: I0131 04:36:58.665070 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/385fce4a-8067-476b-9dda-72f222cfa974-scripts" (OuterVolumeSpecName: "scripts") pod "385fce4a-8067-476b-9dda-72f222cfa974" (UID: "385fce4a-8067-476b-9dda-72f222cfa974"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 04:36:58 crc kubenswrapper[4827]: I0131 04:36:58.672985 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/385fce4a-8067-476b-9dda-72f222cfa974-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "385fce4a-8067-476b-9dda-72f222cfa974" (UID: "385fce4a-8067-476b-9dda-72f222cfa974"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 04:36:58 crc kubenswrapper[4827]: I0131 04:36:58.717303 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/385fce4a-8067-476b-9dda-72f222cfa974-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "385fce4a-8067-476b-9dda-72f222cfa974" (UID: "385fce4a-8067-476b-9dda-72f222cfa974"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 04:36:58 crc kubenswrapper[4827]: I0131 04:36:58.749606 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/385fce4a-8067-476b-9dda-72f222cfa974-config-data" (OuterVolumeSpecName: "config-data") pod "385fce4a-8067-476b-9dda-72f222cfa974" (UID: "385fce4a-8067-476b-9dda-72f222cfa974"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 04:36:58 crc kubenswrapper[4827]: I0131 04:36:58.759146 4827 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/385fce4a-8067-476b-9dda-72f222cfa974-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 31 04:36:58 crc kubenswrapper[4827]: I0131 04:36:58.759176 4827 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/385fce4a-8067-476b-9dda-72f222cfa974-config-data\") on node \"crc\" DevicePath \"\"" Jan 31 04:36:58 crc kubenswrapper[4827]: I0131 04:36:58.759186 4827 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/385fce4a-8067-476b-9dda-72f222cfa974-config-data-custom\") on node \"crc\" DevicePath \"\"" Jan 31 04:36:58 crc kubenswrapper[4827]: I0131 04:36:58.759195 4827 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/385fce4a-8067-476b-9dda-72f222cfa974-logs\") on node \"crc\" DevicePath \"\"" Jan 31 04:36:58 crc kubenswrapper[4827]: I0131 04:36:58.759203 4827 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/385fce4a-8067-476b-9dda-72f222cfa974-scripts\") on node \"crc\" DevicePath \"\"" Jan 31 04:36:58 crc kubenswrapper[4827]: I0131 04:36:58.759212 4827 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: 
\"kubernetes.io/host-path/385fce4a-8067-476b-9dda-72f222cfa974-etc-machine-id\") on node \"crc\" DevicePath \"\"" Jan 31 04:36:58 crc kubenswrapper[4827]: I0131 04:36:58.759221 4827 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-trr6b\" (UniqueName: \"kubernetes.io/projected/385fce4a-8067-476b-9dda-72f222cfa974-kube-api-access-trr6b\") on node \"crc\" DevicePath \"\"" Jan 31 04:36:58 crc kubenswrapper[4827]: I0131 04:36:58.924819 4827 generic.go:334] "Generic (PLEG): container finished" podID="385fce4a-8067-476b-9dda-72f222cfa974" containerID="60dd04f1c3575ce230d49e0a78ec2fd2edd6351ef1b5811360ea3fe06552a7aa" exitCode=0 Jan 31 04:36:58 crc kubenswrapper[4827]: I0131 04:36:58.924871 4827 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/manila-api-0" Jan 31 04:36:58 crc kubenswrapper[4827]: I0131 04:36:58.924837 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-api-0" event={"ID":"385fce4a-8067-476b-9dda-72f222cfa974","Type":"ContainerDied","Data":"60dd04f1c3575ce230d49e0a78ec2fd2edd6351ef1b5811360ea3fe06552a7aa"} Jan 31 04:36:58 crc kubenswrapper[4827]: I0131 04:36:58.924942 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-api-0" event={"ID":"385fce4a-8067-476b-9dda-72f222cfa974","Type":"ContainerDied","Data":"00d73d009b582a21d30f1d5e54f33aa93f41e852da293fa772e9df7ec00b8ca1"} Jan 31 04:36:58 crc kubenswrapper[4827]: I0131 04:36:58.924980 4827 scope.go:117] "RemoveContainer" containerID="60dd04f1c3575ce230d49e0a78ec2fd2edd6351ef1b5811360ea3fe06552a7aa" Jan 31 04:36:58 crc kubenswrapper[4827]: I0131 04:36:58.924870 4827 generic.go:334] "Generic (PLEG): container finished" podID="385fce4a-8067-476b-9dda-72f222cfa974" containerID="00d73d009b582a21d30f1d5e54f33aa93f41e852da293fa772e9df7ec00b8ca1" exitCode=143 Jan 31 04:36:58 crc kubenswrapper[4827]: I0131 04:36:58.925107 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/manila-api-0" event={"ID":"385fce4a-8067-476b-9dda-72f222cfa974","Type":"ContainerDied","Data":"69dd047e5b1628251b89d0b4287424bf7be5058aac397a0e449cf1c5e8a7b584"} Jan 31 04:36:58 crc kubenswrapper[4827]: I0131 04:36:58.958436 4827 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/manila-api-0"] Jan 31 04:36:58 crc kubenswrapper[4827]: I0131 04:36:58.959204 4827 scope.go:117] "RemoveContainer" containerID="00d73d009b582a21d30f1d5e54f33aa93f41e852da293fa772e9df7ec00b8ca1" Jan 31 04:36:58 crc kubenswrapper[4827]: I0131 04:36:58.970181 4827 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/manila-api-0"] Jan 31 04:36:58 crc kubenswrapper[4827]: I0131 04:36:58.981592 4827 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/manila-api-0"] Jan 31 04:36:58 crc kubenswrapper[4827]: E0131 04:36:58.982017 4827 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="385fce4a-8067-476b-9dda-72f222cfa974" containerName="manila-api-log" Jan 31 04:36:58 crc kubenswrapper[4827]: I0131 04:36:58.982033 4827 state_mem.go:107] "Deleted CPUSet assignment" podUID="385fce4a-8067-476b-9dda-72f222cfa974" containerName="manila-api-log" Jan 31 04:36:58 crc kubenswrapper[4827]: E0131 04:36:58.982064 4827 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="385fce4a-8067-476b-9dda-72f222cfa974" containerName="manila-api" Jan 31 04:36:58 crc kubenswrapper[4827]: I0131 04:36:58.982070 4827 state_mem.go:107] "Deleted CPUSet assignment" podUID="385fce4a-8067-476b-9dda-72f222cfa974" containerName="manila-api" Jan 31 04:36:58 crc kubenswrapper[4827]: I0131 04:36:58.982234 4827 memory_manager.go:354] "RemoveStaleState removing state" podUID="385fce4a-8067-476b-9dda-72f222cfa974" containerName="manila-api" Jan 31 04:36:58 crc kubenswrapper[4827]: I0131 04:36:58.982254 4827 memory_manager.go:354] "RemoveStaleState removing state" podUID="385fce4a-8067-476b-9dda-72f222cfa974" containerName="manila-api-log" Jan 31 04:36:58 crc 
kubenswrapper[4827]: I0131 04:36:58.983298 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/manila-api-0" Jan 31 04:36:58 crc kubenswrapper[4827]: I0131 04:36:58.985563 4827 scope.go:117] "RemoveContainer" containerID="60dd04f1c3575ce230d49e0a78ec2fd2edd6351ef1b5811360ea3fe06552a7aa" Jan 31 04:36:58 crc kubenswrapper[4827]: I0131 04:36:58.986236 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-manila-public-svc" Jan 31 04:36:58 crc kubenswrapper[4827]: E0131 04:36:58.986407 4827 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"60dd04f1c3575ce230d49e0a78ec2fd2edd6351ef1b5811360ea3fe06552a7aa\": container with ID starting with 60dd04f1c3575ce230d49e0a78ec2fd2edd6351ef1b5811360ea3fe06552a7aa not found: ID does not exist" containerID="60dd04f1c3575ce230d49e0a78ec2fd2edd6351ef1b5811360ea3fe06552a7aa" Jan 31 04:36:58 crc kubenswrapper[4827]: I0131 04:36:58.986436 4827 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"60dd04f1c3575ce230d49e0a78ec2fd2edd6351ef1b5811360ea3fe06552a7aa"} err="failed to get container status \"60dd04f1c3575ce230d49e0a78ec2fd2edd6351ef1b5811360ea3fe06552a7aa\": rpc error: code = NotFound desc = could not find container \"60dd04f1c3575ce230d49e0a78ec2fd2edd6351ef1b5811360ea3fe06552a7aa\": container with ID starting with 60dd04f1c3575ce230d49e0a78ec2fd2edd6351ef1b5811360ea3fe06552a7aa not found: ID does not exist" Jan 31 04:36:58 crc kubenswrapper[4827]: I0131 04:36:58.986458 4827 scope.go:117] "RemoveContainer" containerID="00d73d009b582a21d30f1d5e54f33aa93f41e852da293fa772e9df7ec00b8ca1" Jan 31 04:36:58 crc kubenswrapper[4827]: E0131 04:36:58.987002 4827 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"00d73d009b582a21d30f1d5e54f33aa93f41e852da293fa772e9df7ec00b8ca1\": container 
with ID starting with 00d73d009b582a21d30f1d5e54f33aa93f41e852da293fa772e9df7ec00b8ca1 not found: ID does not exist" containerID="00d73d009b582a21d30f1d5e54f33aa93f41e852da293fa772e9df7ec00b8ca1" Jan 31 04:36:58 crc kubenswrapper[4827]: I0131 04:36:58.987043 4827 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"00d73d009b582a21d30f1d5e54f33aa93f41e852da293fa772e9df7ec00b8ca1"} err="failed to get container status \"00d73d009b582a21d30f1d5e54f33aa93f41e852da293fa772e9df7ec00b8ca1\": rpc error: code = NotFound desc = could not find container \"00d73d009b582a21d30f1d5e54f33aa93f41e852da293fa772e9df7ec00b8ca1\": container with ID starting with 00d73d009b582a21d30f1d5e54f33aa93f41e852da293fa772e9df7ec00b8ca1 not found: ID does not exist" Jan 31 04:36:58 crc kubenswrapper[4827]: I0131 04:36:58.987071 4827 scope.go:117] "RemoveContainer" containerID="60dd04f1c3575ce230d49e0a78ec2fd2edd6351ef1b5811360ea3fe06552a7aa" Jan 31 04:36:58 crc kubenswrapper[4827]: I0131 04:36:58.987415 4827 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"60dd04f1c3575ce230d49e0a78ec2fd2edd6351ef1b5811360ea3fe06552a7aa"} err="failed to get container status \"60dd04f1c3575ce230d49e0a78ec2fd2edd6351ef1b5811360ea3fe06552a7aa\": rpc error: code = NotFound desc = could not find container \"60dd04f1c3575ce230d49e0a78ec2fd2edd6351ef1b5811360ea3fe06552a7aa\": container with ID starting with 60dd04f1c3575ce230d49e0a78ec2fd2edd6351ef1b5811360ea3fe06552a7aa not found: ID does not exist" Jan 31 04:36:58 crc kubenswrapper[4827]: I0131 04:36:58.987439 4827 scope.go:117] "RemoveContainer" containerID="00d73d009b582a21d30f1d5e54f33aa93f41e852da293fa772e9df7ec00b8ca1" Jan 31 04:36:58 crc kubenswrapper[4827]: I0131 04:36:58.987606 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-manila-internal-svc" Jan 31 04:36:58 crc kubenswrapper[4827]: I0131 04:36:58.987688 4827 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"00d73d009b582a21d30f1d5e54f33aa93f41e852da293fa772e9df7ec00b8ca1"} err="failed to get container status \"00d73d009b582a21d30f1d5e54f33aa93f41e852da293fa772e9df7ec00b8ca1\": rpc error: code = NotFound desc = could not find container \"00d73d009b582a21d30f1d5e54f33aa93f41e852da293fa772e9df7ec00b8ca1\": container with ID starting with 00d73d009b582a21d30f1d5e54f33aa93f41e852da293fa772e9df7ec00b8ca1 not found: ID does not exist" Jan 31 04:36:58 crc kubenswrapper[4827]: I0131 04:36:58.991374 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"manila-api-config-data" Jan 31 04:36:58 crc kubenswrapper[4827]: I0131 04:36:58.994011 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/manila-api-0"] Jan 31 04:36:59 crc kubenswrapper[4827]: I0131 04:36:59.064032 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1a40e96a-328e-449e-b6a8-39c6f6ed0aa2-config-data\") pod \"manila-api-0\" (UID: \"1a40e96a-328e-449e-b6a8-39c6f6ed0aa2\") " pod="openstack/manila-api-0" Jan 31 04:36:59 crc kubenswrapper[4827]: I0131 04:36:59.064101 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1a40e96a-328e-449e-b6a8-39c6f6ed0aa2-logs\") pod \"manila-api-0\" (UID: \"1a40e96a-328e-449e-b6a8-39c6f6ed0aa2\") " pod="openstack/manila-api-0" Jan 31 04:36:59 crc kubenswrapper[4827]: I0131 04:36:59.064474 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/1a40e96a-328e-449e-b6a8-39c6f6ed0aa2-etc-machine-id\") pod \"manila-api-0\" (UID: \"1a40e96a-328e-449e-b6a8-39c6f6ed0aa2\") " pod="openstack/manila-api-0" Jan 31 04:36:59 crc kubenswrapper[4827]: I0131 
04:36:59.064587 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/1a40e96a-328e-449e-b6a8-39c6f6ed0aa2-config-data-custom\") pod \"manila-api-0\" (UID: \"1a40e96a-328e-449e-b6a8-39c6f6ed0aa2\") " pod="openstack/manila-api-0" Jan 31 04:36:59 crc kubenswrapper[4827]: I0131 04:36:59.064646 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1a40e96a-328e-449e-b6a8-39c6f6ed0aa2-scripts\") pod \"manila-api-0\" (UID: \"1a40e96a-328e-449e-b6a8-39c6f6ed0aa2\") " pod="openstack/manila-api-0" Jan 31 04:36:59 crc kubenswrapper[4827]: I0131 04:36:59.064778 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/1a40e96a-328e-449e-b6a8-39c6f6ed0aa2-internal-tls-certs\") pod \"manila-api-0\" (UID: \"1a40e96a-328e-449e-b6a8-39c6f6ed0aa2\") " pod="openstack/manila-api-0" Jan 31 04:36:59 crc kubenswrapper[4827]: I0131 04:36:59.064916 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ztjpf\" (UniqueName: \"kubernetes.io/projected/1a40e96a-328e-449e-b6a8-39c6f6ed0aa2-kube-api-access-ztjpf\") pod \"manila-api-0\" (UID: \"1a40e96a-328e-449e-b6a8-39c6f6ed0aa2\") " pod="openstack/manila-api-0" Jan 31 04:36:59 crc kubenswrapper[4827]: I0131 04:36:59.064951 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/1a40e96a-328e-449e-b6a8-39c6f6ed0aa2-public-tls-certs\") pod \"manila-api-0\" (UID: \"1a40e96a-328e-449e-b6a8-39c6f6ed0aa2\") " pod="openstack/manila-api-0" Jan 31 04:36:59 crc kubenswrapper[4827]: I0131 04:36:59.065087 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1a40e96a-328e-449e-b6a8-39c6f6ed0aa2-combined-ca-bundle\") pod \"manila-api-0\" (UID: \"1a40e96a-328e-449e-b6a8-39c6f6ed0aa2\") " pod="openstack/manila-api-0" Jan 31 04:36:59 crc kubenswrapper[4827]: I0131 04:36:59.166230 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/1a40e96a-328e-449e-b6a8-39c6f6ed0aa2-public-tls-certs\") pod \"manila-api-0\" (UID: \"1a40e96a-328e-449e-b6a8-39c6f6ed0aa2\") " pod="openstack/manila-api-0" Jan 31 04:36:59 crc kubenswrapper[4827]: I0131 04:36:59.166269 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1a40e96a-328e-449e-b6a8-39c6f6ed0aa2-combined-ca-bundle\") pod \"manila-api-0\" (UID: \"1a40e96a-328e-449e-b6a8-39c6f6ed0aa2\") " pod="openstack/manila-api-0" Jan 31 04:36:59 crc kubenswrapper[4827]: I0131 04:36:59.166317 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1a40e96a-328e-449e-b6a8-39c6f6ed0aa2-config-data\") pod \"manila-api-0\" (UID: \"1a40e96a-328e-449e-b6a8-39c6f6ed0aa2\") " pod="openstack/manila-api-0" Jan 31 04:36:59 crc kubenswrapper[4827]: I0131 04:36:59.166361 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1a40e96a-328e-449e-b6a8-39c6f6ed0aa2-logs\") pod \"manila-api-0\" (UID: \"1a40e96a-328e-449e-b6a8-39c6f6ed0aa2\") " pod="openstack/manila-api-0" Jan 31 04:36:59 crc kubenswrapper[4827]: I0131 04:36:59.166413 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/1a40e96a-328e-449e-b6a8-39c6f6ed0aa2-etc-machine-id\") pod \"manila-api-0\" (UID: \"1a40e96a-328e-449e-b6a8-39c6f6ed0aa2\") " pod="openstack/manila-api-0" Jan 31 
04:36:59 crc kubenswrapper[4827]: I0131 04:36:59.166449 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/1a40e96a-328e-449e-b6a8-39c6f6ed0aa2-config-data-custom\") pod \"manila-api-0\" (UID: \"1a40e96a-328e-449e-b6a8-39c6f6ed0aa2\") " pod="openstack/manila-api-0" Jan 31 04:36:59 crc kubenswrapper[4827]: I0131 04:36:59.166491 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1a40e96a-328e-449e-b6a8-39c6f6ed0aa2-scripts\") pod \"manila-api-0\" (UID: \"1a40e96a-328e-449e-b6a8-39c6f6ed0aa2\") " pod="openstack/manila-api-0" Jan 31 04:36:59 crc kubenswrapper[4827]: I0131 04:36:59.166512 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/1a40e96a-328e-449e-b6a8-39c6f6ed0aa2-internal-tls-certs\") pod \"manila-api-0\" (UID: \"1a40e96a-328e-449e-b6a8-39c6f6ed0aa2\") " pod="openstack/manila-api-0" Jan 31 04:36:59 crc kubenswrapper[4827]: I0131 04:36:59.166560 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ztjpf\" (UniqueName: \"kubernetes.io/projected/1a40e96a-328e-449e-b6a8-39c6f6ed0aa2-kube-api-access-ztjpf\") pod \"manila-api-0\" (UID: \"1a40e96a-328e-449e-b6a8-39c6f6ed0aa2\") " pod="openstack/manila-api-0" Jan 31 04:36:59 crc kubenswrapper[4827]: I0131 04:36:59.167010 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/1a40e96a-328e-449e-b6a8-39c6f6ed0aa2-etc-machine-id\") pod \"manila-api-0\" (UID: \"1a40e96a-328e-449e-b6a8-39c6f6ed0aa2\") " pod="openstack/manila-api-0" Jan 31 04:36:59 crc kubenswrapper[4827]: I0131 04:36:59.167458 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: 
\"kubernetes.io/empty-dir/1a40e96a-328e-449e-b6a8-39c6f6ed0aa2-logs\") pod \"manila-api-0\" (UID: \"1a40e96a-328e-449e-b6a8-39c6f6ed0aa2\") " pod="openstack/manila-api-0" Jan 31 04:36:59 crc kubenswrapper[4827]: I0131 04:36:59.170158 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/1a40e96a-328e-449e-b6a8-39c6f6ed0aa2-public-tls-certs\") pod \"manila-api-0\" (UID: \"1a40e96a-328e-449e-b6a8-39c6f6ed0aa2\") " pod="openstack/manila-api-0" Jan 31 04:36:59 crc kubenswrapper[4827]: I0131 04:36:59.170609 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1a40e96a-328e-449e-b6a8-39c6f6ed0aa2-combined-ca-bundle\") pod \"manila-api-0\" (UID: \"1a40e96a-328e-449e-b6a8-39c6f6ed0aa2\") " pod="openstack/manila-api-0" Jan 31 04:36:59 crc kubenswrapper[4827]: I0131 04:36:59.170874 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1a40e96a-328e-449e-b6a8-39c6f6ed0aa2-scripts\") pod \"manila-api-0\" (UID: \"1a40e96a-328e-449e-b6a8-39c6f6ed0aa2\") " pod="openstack/manila-api-0" Jan 31 04:36:59 crc kubenswrapper[4827]: I0131 04:36:59.171374 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/1a40e96a-328e-449e-b6a8-39c6f6ed0aa2-internal-tls-certs\") pod \"manila-api-0\" (UID: \"1a40e96a-328e-449e-b6a8-39c6f6ed0aa2\") " pod="openstack/manila-api-0" Jan 31 04:36:59 crc kubenswrapper[4827]: I0131 04:36:59.171939 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/1a40e96a-328e-449e-b6a8-39c6f6ed0aa2-config-data-custom\") pod \"manila-api-0\" (UID: \"1a40e96a-328e-449e-b6a8-39c6f6ed0aa2\") " pod="openstack/manila-api-0" Jan 31 04:36:59 crc kubenswrapper[4827]: I0131 04:36:59.181916 4827 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1a40e96a-328e-449e-b6a8-39c6f6ed0aa2-config-data\") pod \"manila-api-0\" (UID: \"1a40e96a-328e-449e-b6a8-39c6f6ed0aa2\") " pod="openstack/manila-api-0" Jan 31 04:36:59 crc kubenswrapper[4827]: I0131 04:36:59.187605 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ztjpf\" (UniqueName: \"kubernetes.io/projected/1a40e96a-328e-449e-b6a8-39c6f6ed0aa2-kube-api-access-ztjpf\") pod \"manila-api-0\" (UID: \"1a40e96a-328e-449e-b6a8-39c6f6ed0aa2\") " pod="openstack/manila-api-0" Jan 31 04:36:59 crc kubenswrapper[4827]: I0131 04:36:59.308907 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/manila-api-0" Jan 31 04:36:59 crc kubenswrapper[4827]: I0131 04:36:59.311175 4827 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 31 04:36:59 crc kubenswrapper[4827]: I0131 04:36:59.311575 4827 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="a0cb03cb-1552-4f0e-b99c-2d5188aaf7a6" containerName="ceilometer-central-agent" containerID="cri-o://926577518b189fddd66c487d1e2fa4cdc3275b9114ebb9ac938c0d080136e956" gracePeriod=30 Jan 31 04:36:59 crc kubenswrapper[4827]: I0131 04:36:59.311779 4827 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="a0cb03cb-1552-4f0e-b99c-2d5188aaf7a6" containerName="proxy-httpd" containerID="cri-o://5c74f3103ea2f6837aa119c413ff6228a7c4193960aa96ff7c2893aea1baf863" gracePeriod=30 Jan 31 04:36:59 crc kubenswrapper[4827]: I0131 04:36:59.312079 4827 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="a0cb03cb-1552-4f0e-b99c-2d5188aaf7a6" containerName="ceilometer-notification-agent" containerID="cri-o://ddb5b391690a456c290c273d92aa052f0fb1d9214d761b0c7cc33464d32dac58" 
gracePeriod=30 Jan 31 04:36:59 crc kubenswrapper[4827]: I0131 04:36:59.312157 4827 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="a0cb03cb-1552-4f0e-b99c-2d5188aaf7a6" containerName="sg-core" containerID="cri-o://3b97f953a7dea7bb930a2a1ee9c156ce33276deda51c9ac7dfd6f9693f42f118" gracePeriod=30 Jan 31 04:36:59 crc kubenswrapper[4827]: I0131 04:36:59.941915 4827 generic.go:334] "Generic (PLEG): container finished" podID="a0cb03cb-1552-4f0e-b99c-2d5188aaf7a6" containerID="5c74f3103ea2f6837aa119c413ff6228a7c4193960aa96ff7c2893aea1baf863" exitCode=0 Jan 31 04:36:59 crc kubenswrapper[4827]: I0131 04:36:59.942243 4827 generic.go:334] "Generic (PLEG): container finished" podID="a0cb03cb-1552-4f0e-b99c-2d5188aaf7a6" containerID="3b97f953a7dea7bb930a2a1ee9c156ce33276deda51c9ac7dfd6f9693f42f118" exitCode=2 Jan 31 04:36:59 crc kubenswrapper[4827]: I0131 04:36:59.942255 4827 generic.go:334] "Generic (PLEG): container finished" podID="a0cb03cb-1552-4f0e-b99c-2d5188aaf7a6" containerID="926577518b189fddd66c487d1e2fa4cdc3275b9114ebb9ac938c0d080136e956" exitCode=0 Jan 31 04:36:59 crc kubenswrapper[4827]: I0131 04:36:59.941968 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a0cb03cb-1552-4f0e-b99c-2d5188aaf7a6","Type":"ContainerDied","Data":"5c74f3103ea2f6837aa119c413ff6228a7c4193960aa96ff7c2893aea1baf863"} Jan 31 04:36:59 crc kubenswrapper[4827]: I0131 04:36:59.942316 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a0cb03cb-1552-4f0e-b99c-2d5188aaf7a6","Type":"ContainerDied","Data":"3b97f953a7dea7bb930a2a1ee9c156ce33276deda51c9ac7dfd6f9693f42f118"} Jan 31 04:36:59 crc kubenswrapper[4827]: I0131 04:36:59.942333 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"a0cb03cb-1552-4f0e-b99c-2d5188aaf7a6","Type":"ContainerDied","Data":"926577518b189fddd66c487d1e2fa4cdc3275b9114ebb9ac938c0d080136e956"} Jan 31 04:37:00 crc kubenswrapper[4827]: I0131 04:37:00.121646 4827 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="385fce4a-8067-476b-9dda-72f222cfa974" path="/var/lib/kubelet/pods/385fce4a-8067-476b-9dda-72f222cfa974/volumes" Jan 31 04:37:00 crc kubenswrapper[4827]: I0131 04:37:00.956068 4827 generic.go:334] "Generic (PLEG): container finished" podID="a0cb03cb-1552-4f0e-b99c-2d5188aaf7a6" containerID="ddb5b391690a456c290c273d92aa052f0fb1d9214d761b0c7cc33464d32dac58" exitCode=0 Jan 31 04:37:00 crc kubenswrapper[4827]: I0131 04:37:00.956119 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a0cb03cb-1552-4f0e-b99c-2d5188aaf7a6","Type":"ContainerDied","Data":"ddb5b391690a456c290c273d92aa052f0fb1d9214d761b0c7cc33464d32dac58"} Jan 31 04:37:01 crc kubenswrapper[4827]: I0131 04:37:01.993273 4827 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 31 04:37:02 crc kubenswrapper[4827]: I0131 04:37:02.048522 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a0cb03cb-1552-4f0e-b99c-2d5188aaf7a6-log-httpd\") pod \"a0cb03cb-1552-4f0e-b99c-2d5188aaf7a6\" (UID: \"a0cb03cb-1552-4f0e-b99c-2d5188aaf7a6\") " Jan 31 04:37:02 crc kubenswrapper[4827]: I0131 04:37:02.048588 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a0cb03cb-1552-4f0e-b99c-2d5188aaf7a6-scripts\") pod \"a0cb03cb-1552-4f0e-b99c-2d5188aaf7a6\" (UID: \"a0cb03cb-1552-4f0e-b99c-2d5188aaf7a6\") " Jan 31 04:37:02 crc kubenswrapper[4827]: I0131 04:37:02.048851 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a0cb03cb-1552-4f0e-b99c-2d5188aaf7a6-config-data\") pod \"a0cb03cb-1552-4f0e-b99c-2d5188aaf7a6\" (UID: \"a0cb03cb-1552-4f0e-b99c-2d5188aaf7a6\") " Jan 31 04:37:02 crc kubenswrapper[4827]: I0131 04:37:02.049146 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a0cb03cb-1552-4f0e-b99c-2d5188aaf7a6-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "a0cb03cb-1552-4f0e-b99c-2d5188aaf7a6" (UID: "a0cb03cb-1552-4f0e-b99c-2d5188aaf7a6"). InnerVolumeSpecName "log-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 04:37:02 crc kubenswrapper[4827]: I0131 04:37:02.049458 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a0cb03cb-1552-4f0e-b99c-2d5188aaf7a6-run-httpd\") pod \"a0cb03cb-1552-4f0e-b99c-2d5188aaf7a6\" (UID: \"a0cb03cb-1552-4f0e-b99c-2d5188aaf7a6\") " Jan 31 04:37:02 crc kubenswrapper[4827]: I0131 04:37:02.049508 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/a0cb03cb-1552-4f0e-b99c-2d5188aaf7a6-ceilometer-tls-certs\") pod \"a0cb03cb-1552-4f0e-b99c-2d5188aaf7a6\" (UID: \"a0cb03cb-1552-4f0e-b99c-2d5188aaf7a6\") " Jan 31 04:37:02 crc kubenswrapper[4827]: I0131 04:37:02.049529 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a0cb03cb-1552-4f0e-b99c-2d5188aaf7a6-combined-ca-bundle\") pod \"a0cb03cb-1552-4f0e-b99c-2d5188aaf7a6\" (UID: \"a0cb03cb-1552-4f0e-b99c-2d5188aaf7a6\") " Jan 31 04:37:02 crc kubenswrapper[4827]: I0131 04:37:02.049550 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/a0cb03cb-1552-4f0e-b99c-2d5188aaf7a6-sg-core-conf-yaml\") pod \"a0cb03cb-1552-4f0e-b99c-2d5188aaf7a6\" (UID: \"a0cb03cb-1552-4f0e-b99c-2d5188aaf7a6\") " Jan 31 04:37:02 crc kubenswrapper[4827]: I0131 04:37:02.049570 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sfdvn\" (UniqueName: \"kubernetes.io/projected/a0cb03cb-1552-4f0e-b99c-2d5188aaf7a6-kube-api-access-sfdvn\") pod \"a0cb03cb-1552-4f0e-b99c-2d5188aaf7a6\" (UID: \"a0cb03cb-1552-4f0e-b99c-2d5188aaf7a6\") " Jan 31 04:37:02 crc kubenswrapper[4827]: I0131 04:37:02.050171 4827 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: 
\"kubernetes.io/empty-dir/a0cb03cb-1552-4f0e-b99c-2d5188aaf7a6-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 31 04:37:02 crc kubenswrapper[4827]: I0131 04:37:02.050256 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a0cb03cb-1552-4f0e-b99c-2d5188aaf7a6-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "a0cb03cb-1552-4f0e-b99c-2d5188aaf7a6" (UID: "a0cb03cb-1552-4f0e-b99c-2d5188aaf7a6"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 04:37:02 crc kubenswrapper[4827]: I0131 04:37:02.058148 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a0cb03cb-1552-4f0e-b99c-2d5188aaf7a6-scripts" (OuterVolumeSpecName: "scripts") pod "a0cb03cb-1552-4f0e-b99c-2d5188aaf7a6" (UID: "a0cb03cb-1552-4f0e-b99c-2d5188aaf7a6"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 04:37:02 crc kubenswrapper[4827]: I0131 04:37:02.059295 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a0cb03cb-1552-4f0e-b99c-2d5188aaf7a6-kube-api-access-sfdvn" (OuterVolumeSpecName: "kube-api-access-sfdvn") pod "a0cb03cb-1552-4f0e-b99c-2d5188aaf7a6" (UID: "a0cb03cb-1552-4f0e-b99c-2d5188aaf7a6"). InnerVolumeSpecName "kube-api-access-sfdvn". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 04:37:02 crc kubenswrapper[4827]: I0131 04:37:02.089342 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a0cb03cb-1552-4f0e-b99c-2d5188aaf7a6-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "a0cb03cb-1552-4f0e-b99c-2d5188aaf7a6" (UID: "a0cb03cb-1552-4f0e-b99c-2d5188aaf7a6"). InnerVolumeSpecName "sg-core-conf-yaml". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 04:37:02 crc kubenswrapper[4827]: I0131 04:37:02.136066 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a0cb03cb-1552-4f0e-b99c-2d5188aaf7a6-ceilometer-tls-certs" (OuterVolumeSpecName: "ceilometer-tls-certs") pod "a0cb03cb-1552-4f0e-b99c-2d5188aaf7a6" (UID: "a0cb03cb-1552-4f0e-b99c-2d5188aaf7a6"). InnerVolumeSpecName "ceilometer-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 04:37:02 crc kubenswrapper[4827]: I0131 04:37:02.152190 4827 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a0cb03cb-1552-4f0e-b99c-2d5188aaf7a6-run-httpd\") on node \"crc\" DevicePath \"\"" Jan 31 04:37:02 crc kubenswrapper[4827]: I0131 04:37:02.152232 4827 reconciler_common.go:293] "Volume detached for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/a0cb03cb-1552-4f0e-b99c-2d5188aaf7a6-ceilometer-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 31 04:37:02 crc kubenswrapper[4827]: I0131 04:37:02.152247 4827 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/a0cb03cb-1552-4f0e-b99c-2d5188aaf7a6-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Jan 31 04:37:02 crc kubenswrapper[4827]: I0131 04:37:02.152259 4827 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sfdvn\" (UniqueName: \"kubernetes.io/projected/a0cb03cb-1552-4f0e-b99c-2d5188aaf7a6-kube-api-access-sfdvn\") on node \"crc\" DevicePath \"\"" Jan 31 04:37:02 crc kubenswrapper[4827]: I0131 04:37:02.152271 4827 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a0cb03cb-1552-4f0e-b99c-2d5188aaf7a6-scripts\") on node \"crc\" DevicePath \"\"" Jan 31 04:37:02 crc kubenswrapper[4827]: I0131 04:37:02.177156 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/secret/a0cb03cb-1552-4f0e-b99c-2d5188aaf7a6-config-data" (OuterVolumeSpecName: "config-data") pod "a0cb03cb-1552-4f0e-b99c-2d5188aaf7a6" (UID: "a0cb03cb-1552-4f0e-b99c-2d5188aaf7a6"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 04:37:02 crc kubenswrapper[4827]: I0131 04:37:02.187909 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a0cb03cb-1552-4f0e-b99c-2d5188aaf7a6-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "a0cb03cb-1552-4f0e-b99c-2d5188aaf7a6" (UID: "a0cb03cb-1552-4f0e-b99c-2d5188aaf7a6"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 04:37:02 crc kubenswrapper[4827]: I0131 04:37:02.257130 4827 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a0cb03cb-1552-4f0e-b99c-2d5188aaf7a6-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 31 04:37:02 crc kubenswrapper[4827]: I0131 04:37:02.257166 4827 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a0cb03cb-1552-4f0e-b99c-2d5188aaf7a6-config-data\") on node \"crc\" DevicePath \"\"" Jan 31 04:37:02 crc kubenswrapper[4827]: I0131 04:37:02.310911 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/manila-api-0"] Jan 31 04:37:02 crc kubenswrapper[4827]: I0131 04:37:02.978319 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-api-0" event={"ID":"1a40e96a-328e-449e-b6a8-39c6f6ed0aa2","Type":"ContainerStarted","Data":"a8a5bf2e312915693f3f5745b4057a6ac88f906beb0d3646ebf80798082a28ff"} Jan 31 04:37:02 crc kubenswrapper[4827]: I0131 04:37:02.978585 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-api-0" 
event={"ID":"1a40e96a-328e-449e-b6a8-39c6f6ed0aa2","Type":"ContainerStarted","Data":"68c3150fdc9cf472878af9fe3389c839e121253293b813891b0416d368538354"} Jan 31 04:37:02 crc kubenswrapper[4827]: I0131 04:37:02.981622 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-share-share1-0" event={"ID":"324ece2d-b7d4-426f-8bb1-5832b52ee7bd","Type":"ContainerStarted","Data":"b1c3a5c465866424448fd459324ebec24df1ebb3dbaa340ed69cde541d0da42b"} Jan 31 04:37:02 crc kubenswrapper[4827]: I0131 04:37:02.981648 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-share-share1-0" event={"ID":"324ece2d-b7d4-426f-8bb1-5832b52ee7bd","Type":"ContainerStarted","Data":"40b8bb3ac6448837828f1fbc5eea8e53edda3a6607c541d5779eabf8b4e11e36"} Jan 31 04:37:02 crc kubenswrapper[4827]: I0131 04:37:02.985857 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a0cb03cb-1552-4f0e-b99c-2d5188aaf7a6","Type":"ContainerDied","Data":"d5da420e92bacb178364b7e23ee7fb1b8a059ee68061729a7330940b940ebd60"} Jan 31 04:37:02 crc kubenswrapper[4827]: I0131 04:37:02.985909 4827 scope.go:117] "RemoveContainer" containerID="5c74f3103ea2f6837aa119c413ff6228a7c4193960aa96ff7c2893aea1baf863" Jan 31 04:37:02 crc kubenswrapper[4827]: I0131 04:37:02.986011 4827 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 31 04:37:03 crc kubenswrapper[4827]: I0131 04:37:03.004454 4827 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/manila-share-share1-0" podStartSLOduration=2.583687839 podStartE2EDuration="10.004437325s" podCreationTimestamp="2026-01-31 04:36:53 +0000 UTC" firstStartedPulling="2026-01-31 04:36:54.322200399 +0000 UTC m=+3007.009280858" lastFinishedPulling="2026-01-31 04:37:01.742949895 +0000 UTC m=+3014.430030344" observedRunningTime="2026-01-31 04:37:02.996651357 +0000 UTC m=+3015.683731806" watchObservedRunningTime="2026-01-31 04:37:03.004437325 +0000 UTC m=+3015.691517774" Jan 31 04:37:03 crc kubenswrapper[4827]: I0131 04:37:03.029184 4827 scope.go:117] "RemoveContainer" containerID="3b97f953a7dea7bb930a2a1ee9c156ce33276deda51c9ac7dfd6f9693f42f118" Jan 31 04:37:03 crc kubenswrapper[4827]: I0131 04:37:03.030702 4827 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 31 04:37:03 crc kubenswrapper[4827]: I0131 04:37:03.049537 4827 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Jan 31 04:37:03 crc kubenswrapper[4827]: I0131 04:37:03.066807 4827 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Jan 31 04:37:03 crc kubenswrapper[4827]: E0131 04:37:03.067240 4827 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a0cb03cb-1552-4f0e-b99c-2d5188aaf7a6" containerName="ceilometer-central-agent" Jan 31 04:37:03 crc kubenswrapper[4827]: I0131 04:37:03.067254 4827 state_mem.go:107] "Deleted CPUSet assignment" podUID="a0cb03cb-1552-4f0e-b99c-2d5188aaf7a6" containerName="ceilometer-central-agent" Jan 31 04:37:03 crc kubenswrapper[4827]: E0131 04:37:03.067275 4827 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a0cb03cb-1552-4f0e-b99c-2d5188aaf7a6" containerName="ceilometer-notification-agent" Jan 31 04:37:03 crc kubenswrapper[4827]: I0131 04:37:03.067282 4827 
state_mem.go:107] "Deleted CPUSet assignment" podUID="a0cb03cb-1552-4f0e-b99c-2d5188aaf7a6" containerName="ceilometer-notification-agent" Jan 31 04:37:03 crc kubenswrapper[4827]: E0131 04:37:03.067293 4827 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a0cb03cb-1552-4f0e-b99c-2d5188aaf7a6" containerName="proxy-httpd" Jan 31 04:37:03 crc kubenswrapper[4827]: I0131 04:37:03.067299 4827 state_mem.go:107] "Deleted CPUSet assignment" podUID="a0cb03cb-1552-4f0e-b99c-2d5188aaf7a6" containerName="proxy-httpd" Jan 31 04:37:03 crc kubenswrapper[4827]: E0131 04:37:03.067311 4827 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a0cb03cb-1552-4f0e-b99c-2d5188aaf7a6" containerName="sg-core" Jan 31 04:37:03 crc kubenswrapper[4827]: I0131 04:37:03.067318 4827 state_mem.go:107] "Deleted CPUSet assignment" podUID="a0cb03cb-1552-4f0e-b99c-2d5188aaf7a6" containerName="sg-core" Jan 31 04:37:03 crc kubenswrapper[4827]: I0131 04:37:03.067477 4827 memory_manager.go:354] "RemoveStaleState removing state" podUID="a0cb03cb-1552-4f0e-b99c-2d5188aaf7a6" containerName="sg-core" Jan 31 04:37:03 crc kubenswrapper[4827]: I0131 04:37:03.067497 4827 memory_manager.go:354] "RemoveStaleState removing state" podUID="a0cb03cb-1552-4f0e-b99c-2d5188aaf7a6" containerName="ceilometer-notification-agent" Jan 31 04:37:03 crc kubenswrapper[4827]: I0131 04:37:03.067510 4827 memory_manager.go:354] "RemoveStaleState removing state" podUID="a0cb03cb-1552-4f0e-b99c-2d5188aaf7a6" containerName="proxy-httpd" Jan 31 04:37:03 crc kubenswrapper[4827]: I0131 04:37:03.067521 4827 memory_manager.go:354] "RemoveStaleState removing state" podUID="a0cb03cb-1552-4f0e-b99c-2d5188aaf7a6" containerName="ceilometer-central-agent" Jan 31 04:37:03 crc kubenswrapper[4827]: I0131 04:37:03.070460 4827 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 31 04:37:03 crc kubenswrapper[4827]: I0131 04:37:03.073458 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Jan 31 04:37:03 crc kubenswrapper[4827]: I0131 04:37:03.073602 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Jan 31 04:37:03 crc kubenswrapper[4827]: I0131 04:37:03.074206 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc" Jan 31 04:37:03 crc kubenswrapper[4827]: I0131 04:37:03.098042 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 31 04:37:03 crc kubenswrapper[4827]: I0131 04:37:03.101362 4827 scope.go:117] "RemoveContainer" containerID="ddb5b391690a456c290c273d92aa052f0fb1d9214d761b0c7cc33464d32dac58" Jan 31 04:37:03 crc kubenswrapper[4827]: I0131 04:37:03.139060 4827 scope.go:117] "RemoveContainer" containerID="926577518b189fddd66c487d1e2fa4cdc3275b9114ebb9ac938c0d080136e956" Jan 31 04:37:03 crc kubenswrapper[4827]: I0131 04:37:03.177179 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d7f725b5-6af1-406a-8666-bdc720981006-config-data\") pod \"ceilometer-0\" (UID: \"d7f725b5-6af1-406a-8666-bdc720981006\") " pod="openstack/ceilometer-0" Jan 31 04:37:03 crc kubenswrapper[4827]: I0131 04:37:03.177333 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d7f725b5-6af1-406a-8666-bdc720981006-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"d7f725b5-6af1-406a-8666-bdc720981006\") " pod="openstack/ceilometer-0" Jan 31 04:37:03 crc kubenswrapper[4827]: I0131 04:37:03.177560 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" 
(UniqueName: \"kubernetes.io/secret/d7f725b5-6af1-406a-8666-bdc720981006-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"d7f725b5-6af1-406a-8666-bdc720981006\") " pod="openstack/ceilometer-0" Jan 31 04:37:03 crc kubenswrapper[4827]: I0131 04:37:03.177809 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d7f725b5-6af1-406a-8666-bdc720981006-log-httpd\") pod \"ceilometer-0\" (UID: \"d7f725b5-6af1-406a-8666-bdc720981006\") " pod="openstack/ceilometer-0" Jan 31 04:37:03 crc kubenswrapper[4827]: I0131 04:37:03.178327 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d7f725b5-6af1-406a-8666-bdc720981006-scripts\") pod \"ceilometer-0\" (UID: \"d7f725b5-6af1-406a-8666-bdc720981006\") " pod="openstack/ceilometer-0" Jan 31 04:37:03 crc kubenswrapper[4827]: I0131 04:37:03.178556 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d7f725b5-6af1-406a-8666-bdc720981006-run-httpd\") pod \"ceilometer-0\" (UID: \"d7f725b5-6af1-406a-8666-bdc720981006\") " pod="openstack/ceilometer-0" Jan 31 04:37:03 crc kubenswrapper[4827]: I0131 04:37:03.178771 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7cvfd\" (UniqueName: \"kubernetes.io/projected/d7f725b5-6af1-406a-8666-bdc720981006-kube-api-access-7cvfd\") pod \"ceilometer-0\" (UID: \"d7f725b5-6af1-406a-8666-bdc720981006\") " pod="openstack/ceilometer-0" Jan 31 04:37:03 crc kubenswrapper[4827]: I0131 04:37:03.178833 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/d7f725b5-6af1-406a-8666-bdc720981006-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: 
\"d7f725b5-6af1-406a-8666-bdc720981006\") " pod="openstack/ceilometer-0" Jan 31 04:37:03 crc kubenswrapper[4827]: I0131 04:37:03.280390 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d7f725b5-6af1-406a-8666-bdc720981006-config-data\") pod \"ceilometer-0\" (UID: \"d7f725b5-6af1-406a-8666-bdc720981006\") " pod="openstack/ceilometer-0" Jan 31 04:37:03 crc kubenswrapper[4827]: I0131 04:37:03.280430 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d7f725b5-6af1-406a-8666-bdc720981006-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"d7f725b5-6af1-406a-8666-bdc720981006\") " pod="openstack/ceilometer-0" Jan 31 04:37:03 crc kubenswrapper[4827]: I0131 04:37:03.280460 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/d7f725b5-6af1-406a-8666-bdc720981006-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"d7f725b5-6af1-406a-8666-bdc720981006\") " pod="openstack/ceilometer-0" Jan 31 04:37:03 crc kubenswrapper[4827]: I0131 04:37:03.280481 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d7f725b5-6af1-406a-8666-bdc720981006-log-httpd\") pod \"ceilometer-0\" (UID: \"d7f725b5-6af1-406a-8666-bdc720981006\") " pod="openstack/ceilometer-0" Jan 31 04:37:03 crc kubenswrapper[4827]: I0131 04:37:03.280500 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d7f725b5-6af1-406a-8666-bdc720981006-scripts\") pod \"ceilometer-0\" (UID: \"d7f725b5-6af1-406a-8666-bdc720981006\") " pod="openstack/ceilometer-0" Jan 31 04:37:03 crc kubenswrapper[4827]: I0131 04:37:03.280573 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" 
(UniqueName: \"kubernetes.io/empty-dir/d7f725b5-6af1-406a-8666-bdc720981006-run-httpd\") pod \"ceilometer-0\" (UID: \"d7f725b5-6af1-406a-8666-bdc720981006\") " pod="openstack/ceilometer-0" Jan 31 04:37:03 crc kubenswrapper[4827]: I0131 04:37:03.280639 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7cvfd\" (UniqueName: \"kubernetes.io/projected/d7f725b5-6af1-406a-8666-bdc720981006-kube-api-access-7cvfd\") pod \"ceilometer-0\" (UID: \"d7f725b5-6af1-406a-8666-bdc720981006\") " pod="openstack/ceilometer-0" Jan 31 04:37:03 crc kubenswrapper[4827]: I0131 04:37:03.280666 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/d7f725b5-6af1-406a-8666-bdc720981006-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"d7f725b5-6af1-406a-8666-bdc720981006\") " pod="openstack/ceilometer-0" Jan 31 04:37:03 crc kubenswrapper[4827]: I0131 04:37:03.281452 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d7f725b5-6af1-406a-8666-bdc720981006-log-httpd\") pod \"ceilometer-0\" (UID: \"d7f725b5-6af1-406a-8666-bdc720981006\") " pod="openstack/ceilometer-0" Jan 31 04:37:03 crc kubenswrapper[4827]: I0131 04:37:03.281538 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d7f725b5-6af1-406a-8666-bdc720981006-run-httpd\") pod \"ceilometer-0\" (UID: \"d7f725b5-6af1-406a-8666-bdc720981006\") " pod="openstack/ceilometer-0" Jan 31 04:37:03 crc kubenswrapper[4827]: I0131 04:37:03.286094 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d7f725b5-6af1-406a-8666-bdc720981006-config-data\") pod \"ceilometer-0\" (UID: \"d7f725b5-6af1-406a-8666-bdc720981006\") " pod="openstack/ceilometer-0" Jan 31 04:37:03 crc kubenswrapper[4827]: I0131 
04:37:03.286098 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d7f725b5-6af1-406a-8666-bdc720981006-scripts\") pod \"ceilometer-0\" (UID: \"d7f725b5-6af1-406a-8666-bdc720981006\") " pod="openstack/ceilometer-0" Jan 31 04:37:03 crc kubenswrapper[4827]: I0131 04:37:03.287750 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/d7f725b5-6af1-406a-8666-bdc720981006-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"d7f725b5-6af1-406a-8666-bdc720981006\") " pod="openstack/ceilometer-0" Jan 31 04:37:03 crc kubenswrapper[4827]: I0131 04:37:03.288007 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d7f725b5-6af1-406a-8666-bdc720981006-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"d7f725b5-6af1-406a-8666-bdc720981006\") " pod="openstack/ceilometer-0" Jan 31 04:37:03 crc kubenswrapper[4827]: I0131 04:37:03.294416 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/d7f725b5-6af1-406a-8666-bdc720981006-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"d7f725b5-6af1-406a-8666-bdc720981006\") " pod="openstack/ceilometer-0" Jan 31 04:37:03 crc kubenswrapper[4827]: I0131 04:37:03.301385 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7cvfd\" (UniqueName: \"kubernetes.io/projected/d7f725b5-6af1-406a-8666-bdc720981006-kube-api-access-7cvfd\") pod \"ceilometer-0\" (UID: \"d7f725b5-6af1-406a-8666-bdc720981006\") " pod="openstack/ceilometer-0" Jan 31 04:37:03 crc kubenswrapper[4827]: I0131 04:37:03.397839 4827 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 31 04:37:03 crc kubenswrapper[4827]: I0131 04:37:03.419972 4827 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/manila-scheduler-0" Jan 31 04:37:03 crc kubenswrapper[4827]: I0131 04:37:03.549208 4827 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/manila-share-share1-0" Jan 31 04:37:03 crc kubenswrapper[4827]: I0131 04:37:03.769001 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 31 04:37:03 crc kubenswrapper[4827]: I0131 04:37:03.901288 4827 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-76b5fdb995-b4n6f" Jan 31 04:37:03 crc kubenswrapper[4827]: I0131 04:37:03.973623 4827 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-864d5fc68c-pvgtd"] Jan 31 04:37:03 crc kubenswrapper[4827]: I0131 04:37:03.973844 4827 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-864d5fc68c-pvgtd" podUID="70e687cb-d396-46d3-890a-cd3cbe51186f" containerName="dnsmasq-dns" containerID="cri-o://61482ac2645ba084bdb4df98d0152eea8035ed84ca37867a075defa3823e4b46" gracePeriod=10 Jan 31 04:37:04 crc kubenswrapper[4827]: I0131 04:37:04.015253 4827 generic.go:334] "Generic (PLEG): container finished" podID="e44c9e88-212e-4178-a1fb-ce9b1896d73f" containerID="dc79d9a2d05676b35d0fdd9dd778cc645898bc7bbf98621544d53342e794612d" exitCode=137 Jan 31 04:37:04 crc kubenswrapper[4827]: I0131 04:37:04.015289 4827 generic.go:334] "Generic (PLEG): container finished" podID="e44c9e88-212e-4178-a1fb-ce9b1896d73f" containerID="1cf3d27ab9bc575d5024d55a381eed6f0137bb93a7a6cb8ba372d4d9877e7a42" exitCode=137 Jan 31 04:37:04 crc kubenswrapper[4827]: I0131 04:37:04.015381 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-5649566987-dn2gq" 
event={"ID":"e44c9e88-212e-4178-a1fb-ce9b1896d73f","Type":"ContainerDied","Data":"dc79d9a2d05676b35d0fdd9dd778cc645898bc7bbf98621544d53342e794612d"} Jan 31 04:37:04 crc kubenswrapper[4827]: I0131 04:37:04.015412 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-5649566987-dn2gq" event={"ID":"e44c9e88-212e-4178-a1fb-ce9b1896d73f","Type":"ContainerDied","Data":"1cf3d27ab9bc575d5024d55a381eed6f0137bb93a7a6cb8ba372d4d9877e7a42"} Jan 31 04:37:04 crc kubenswrapper[4827]: I0131 04:37:04.033671 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-api-0" event={"ID":"1a40e96a-328e-449e-b6a8-39c6f6ed0aa2","Type":"ContainerStarted","Data":"091cb17603230229190e9662a27b71197f288f29398ea02e0d7b93807d0d5abf"} Jan 31 04:37:04 crc kubenswrapper[4827]: I0131 04:37:04.034978 4827 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/manila-api-0" Jan 31 04:37:04 crc kubenswrapper[4827]: I0131 04:37:04.040251 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d7f725b5-6af1-406a-8666-bdc720981006","Type":"ContainerStarted","Data":"1716fa464b1431bd0f56bf5d69f65de2f2909a500067b367ed2221b93e20a0f1"} Jan 31 04:37:04 crc kubenswrapper[4827]: I0131 04:37:04.051719 4827 generic.go:334] "Generic (PLEG): container finished" podID="0a8e3a93-e4a4-41c1-b558-88d28e96ef52" containerID="054cb93dcee9e3aa3135bd5b78636260d1cc82953b362a08e9980573a335bbfe" exitCode=137 Jan 31 04:37:04 crc kubenswrapper[4827]: I0131 04:37:04.051749 4827 generic.go:334] "Generic (PLEG): container finished" podID="0a8e3a93-e4a4-41c1-b558-88d28e96ef52" containerID="982885989dfa2eabcef1f83bdcefbb60ad897f3b61c171664f5faf3d56eb18c4" exitCode=137 Jan 31 04:37:04 crc kubenswrapper[4827]: I0131 04:37:04.054610 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-5db647897-mx8gj" 
event={"ID":"0a8e3a93-e4a4-41c1-b558-88d28e96ef52","Type":"ContainerDied","Data":"054cb93dcee9e3aa3135bd5b78636260d1cc82953b362a08e9980573a335bbfe"} Jan 31 04:37:04 crc kubenswrapper[4827]: I0131 04:37:04.054645 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-5db647897-mx8gj" event={"ID":"0a8e3a93-e4a4-41c1-b558-88d28e96ef52","Type":"ContainerDied","Data":"982885989dfa2eabcef1f83bdcefbb60ad897f3b61c171664f5faf3d56eb18c4"} Jan 31 04:37:04 crc kubenswrapper[4827]: I0131 04:37:04.063502 4827 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/manila-api-0" podStartSLOduration=6.063481347 podStartE2EDuration="6.063481347s" podCreationTimestamp="2026-01-31 04:36:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 04:37:04.057320408 +0000 UTC m=+3016.744400877" watchObservedRunningTime="2026-01-31 04:37:04.063481347 +0000 UTC m=+3016.750561796" Jan 31 04:37:04 crc kubenswrapper[4827]: I0131 04:37:04.124952 4827 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a0cb03cb-1552-4f0e-b99c-2d5188aaf7a6" path="/var/lib/kubelet/pods/a0cb03cb-1552-4f0e-b99c-2d5188aaf7a6/volumes" Jan 31 04:37:04 crc kubenswrapper[4827]: I0131 04:37:04.132792 4827 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-5649566987-dn2gq" Jan 31 04:37:04 crc kubenswrapper[4827]: I0131 04:37:04.200533 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jh7zs\" (UniqueName: \"kubernetes.io/projected/e44c9e88-212e-4178-a1fb-ce9b1896d73f-kube-api-access-jh7zs\") pod \"e44c9e88-212e-4178-a1fb-ce9b1896d73f\" (UID: \"e44c9e88-212e-4178-a1fb-ce9b1896d73f\") " Jan 31 04:37:04 crc kubenswrapper[4827]: I0131 04:37:04.200652 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/e44c9e88-212e-4178-a1fb-ce9b1896d73f-scripts\") pod \"e44c9e88-212e-4178-a1fb-ce9b1896d73f\" (UID: \"e44c9e88-212e-4178-a1fb-ce9b1896d73f\") " Jan 31 04:37:04 crc kubenswrapper[4827]: I0131 04:37:04.201518 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/e44c9e88-212e-4178-a1fb-ce9b1896d73f-horizon-secret-key\") pod \"e44c9e88-212e-4178-a1fb-ce9b1896d73f\" (UID: \"e44c9e88-212e-4178-a1fb-ce9b1896d73f\") " Jan 31 04:37:04 crc kubenswrapper[4827]: I0131 04:37:04.201565 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e44c9e88-212e-4178-a1fb-ce9b1896d73f-logs\") pod \"e44c9e88-212e-4178-a1fb-ce9b1896d73f\" (UID: \"e44c9e88-212e-4178-a1fb-ce9b1896d73f\") " Jan 31 04:37:04 crc kubenswrapper[4827]: I0131 04:37:04.201764 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/e44c9e88-212e-4178-a1fb-ce9b1896d73f-config-data\") pod \"e44c9e88-212e-4178-a1fb-ce9b1896d73f\" (UID: \"e44c9e88-212e-4178-a1fb-ce9b1896d73f\") " Jan 31 04:37:04 crc kubenswrapper[4827]: I0131 04:37:04.202223 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/empty-dir/e44c9e88-212e-4178-a1fb-ce9b1896d73f-logs" (OuterVolumeSpecName: "logs") pod "e44c9e88-212e-4178-a1fb-ce9b1896d73f" (UID: "e44c9e88-212e-4178-a1fb-ce9b1896d73f"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 04:37:04 crc kubenswrapper[4827]: I0131 04:37:04.202472 4827 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e44c9e88-212e-4178-a1fb-ce9b1896d73f-logs\") on node \"crc\" DevicePath \"\"" Jan 31 04:37:04 crc kubenswrapper[4827]: I0131 04:37:04.205859 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e44c9e88-212e-4178-a1fb-ce9b1896d73f-horizon-secret-key" (OuterVolumeSpecName: "horizon-secret-key") pod "e44c9e88-212e-4178-a1fb-ce9b1896d73f" (UID: "e44c9e88-212e-4178-a1fb-ce9b1896d73f"). InnerVolumeSpecName "horizon-secret-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 04:37:04 crc kubenswrapper[4827]: I0131 04:37:04.225106 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e44c9e88-212e-4178-a1fb-ce9b1896d73f-kube-api-access-jh7zs" (OuterVolumeSpecName: "kube-api-access-jh7zs") pod "e44c9e88-212e-4178-a1fb-ce9b1896d73f" (UID: "e44c9e88-212e-4178-a1fb-ce9b1896d73f"). InnerVolumeSpecName "kube-api-access-jh7zs". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 04:37:04 crc kubenswrapper[4827]: I0131 04:37:04.233064 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e44c9e88-212e-4178-a1fb-ce9b1896d73f-config-data" (OuterVolumeSpecName: "config-data") pod "e44c9e88-212e-4178-a1fb-ce9b1896d73f" (UID: "e44c9e88-212e-4178-a1fb-ce9b1896d73f"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 04:37:04 crc kubenswrapper[4827]: I0131 04:37:04.243360 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e44c9e88-212e-4178-a1fb-ce9b1896d73f-scripts" (OuterVolumeSpecName: "scripts") pod "e44c9e88-212e-4178-a1fb-ce9b1896d73f" (UID: "e44c9e88-212e-4178-a1fb-ce9b1896d73f"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 04:37:04 crc kubenswrapper[4827]: I0131 04:37:04.303429 4827 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/e44c9e88-212e-4178-a1fb-ce9b1896d73f-scripts\") on node \"crc\" DevicePath \"\"" Jan 31 04:37:04 crc kubenswrapper[4827]: I0131 04:37:04.303675 4827 reconciler_common.go:293] "Volume detached for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/e44c9e88-212e-4178-a1fb-ce9b1896d73f-horizon-secret-key\") on node \"crc\" DevicePath \"\"" Jan 31 04:37:04 crc kubenswrapper[4827]: I0131 04:37:04.303685 4827 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/e44c9e88-212e-4178-a1fb-ce9b1896d73f-config-data\") on node \"crc\" DevicePath \"\"" Jan 31 04:37:04 crc kubenswrapper[4827]: I0131 04:37:04.303695 4827 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jh7zs\" (UniqueName: \"kubernetes.io/projected/e44c9e88-212e-4178-a1fb-ce9b1896d73f-kube-api-access-jh7zs\") on node \"crc\" DevicePath \"\"" Jan 31 04:37:04 crc kubenswrapper[4827]: I0131 04:37:04.757302 4827 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/horizon-5dd8c8746d-r25sr" Jan 31 04:37:05 crc kubenswrapper[4827]: I0131 04:37:05.063779 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"d7f725b5-6af1-406a-8666-bdc720981006","Type":"ContainerStarted","Data":"27dc8c9f8821a643091ac59a85e9ff5489a69aadd17ea1b10d9678eac4780ed6"} Jan 31 04:37:05 crc kubenswrapper[4827]: I0131 04:37:05.070618 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-5db647897-mx8gj" event={"ID":"0a8e3a93-e4a4-41c1-b558-88d28e96ef52","Type":"ContainerDied","Data":"d92e1b7f7071a81484e3172d2eb5e1022c4780e79485813ee779722112d4feb2"} Jan 31 04:37:05 crc kubenswrapper[4827]: I0131 04:37:05.070655 4827 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d92e1b7f7071a81484e3172d2eb5e1022c4780e79485813ee779722112d4feb2" Jan 31 04:37:05 crc kubenswrapper[4827]: I0131 04:37:05.072451 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-5649566987-dn2gq" event={"ID":"e44c9e88-212e-4178-a1fb-ce9b1896d73f","Type":"ContainerDied","Data":"6fed205da24e0880d4c321f18a2729e648c53e838feec3b14d7b511574dc5174"} Jan 31 04:37:05 crc kubenswrapper[4827]: I0131 04:37:05.072521 4827 scope.go:117] "RemoveContainer" containerID="dc79d9a2d05676b35d0fdd9dd778cc645898bc7bbf98621544d53342e794612d" Jan 31 04:37:05 crc kubenswrapper[4827]: I0131 04:37:05.073760 4827 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-5649566987-dn2gq" Jan 31 04:37:05 crc kubenswrapper[4827]: I0131 04:37:05.077634 4827 generic.go:334] "Generic (PLEG): container finished" podID="70e687cb-d396-46d3-890a-cd3cbe51186f" containerID="61482ac2645ba084bdb4df98d0152eea8035ed84ca37867a075defa3823e4b46" exitCode=0 Jan 31 04:37:05 crc kubenswrapper[4827]: I0131 04:37:05.078553 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-864d5fc68c-pvgtd" event={"ID":"70e687cb-d396-46d3-890a-cd3cbe51186f","Type":"ContainerDied","Data":"61482ac2645ba084bdb4df98d0152eea8035ed84ca37867a075defa3823e4b46"} Jan 31 04:37:05 crc kubenswrapper[4827]: I0131 04:37:05.078582 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-864d5fc68c-pvgtd" event={"ID":"70e687cb-d396-46d3-890a-cd3cbe51186f","Type":"ContainerDied","Data":"43f357d4074c7a9f64a21d17e0aa29a63651631ddf5b0605d65f1351e0fbc26b"} Jan 31 04:37:05 crc kubenswrapper[4827]: I0131 04:37:05.078593 4827 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="43f357d4074c7a9f64a21d17e0aa29a63651631ddf5b0605d65f1351e0fbc26b" Jan 31 04:37:05 crc kubenswrapper[4827]: I0131 04:37:05.110765 4827 scope.go:117] "RemoveContainer" containerID="b7f195383b718c3f3a1651e752e04a9c322b656ce5c7aba0c9ad40630165c57d" Jan 31 04:37:05 crc kubenswrapper[4827]: E0131 04:37:05.111071 4827 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jxh94_openshift-machine-config-operator(e63dbb73-e1a2-4796-83c5-2a88e55566b5)\"" pod="openshift-machine-config-operator/machine-config-daemon-jxh94" podUID="e63dbb73-e1a2-4796-83c5-2a88e55566b5" Jan 31 04:37:05 crc kubenswrapper[4827]: I0131 04:37:05.119188 4827 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-5db647897-mx8gj" Jan 31 04:37:05 crc kubenswrapper[4827]: I0131 04:37:05.126819 4827 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-864d5fc68c-pvgtd" Jan 31 04:37:05 crc kubenswrapper[4827]: I0131 04:37:05.153264 4827 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-5649566987-dn2gq"] Jan 31 04:37:05 crc kubenswrapper[4827]: I0131 04:37:05.168241 4827 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/horizon-5649566987-dn2gq"] Jan 31 04:37:05 crc kubenswrapper[4827]: I0131 04:37:05.256043 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/70e687cb-d396-46d3-890a-cd3cbe51186f-ovsdbserver-sb\") pod \"70e687cb-d396-46d3-890a-cd3cbe51186f\" (UID: \"70e687cb-d396-46d3-890a-cd3cbe51186f\") " Jan 31 04:37:05 crc kubenswrapper[4827]: I0131 04:37:05.256375 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/70e687cb-d396-46d3-890a-cd3cbe51186f-config\") pod \"70e687cb-d396-46d3-890a-cd3cbe51186f\" (UID: \"70e687cb-d396-46d3-890a-cd3cbe51186f\") " Jan 31 04:37:05 crc kubenswrapper[4827]: I0131 04:37:05.256443 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/70e687cb-d396-46d3-890a-cd3cbe51186f-dns-svc\") pod \"70e687cb-d396-46d3-890a-cd3cbe51186f\" (UID: \"70e687cb-d396-46d3-890a-cd3cbe51186f\") " Jan 31 04:37:05 crc kubenswrapper[4827]: I0131 04:37:05.256482 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/0a8e3a93-e4a4-41c1-b558-88d28e96ef52-scripts\") pod \"0a8e3a93-e4a4-41c1-b558-88d28e96ef52\" (UID: \"0a8e3a93-e4a4-41c1-b558-88d28e96ef52\") " Jan 31 04:37:05 crc kubenswrapper[4827]: I0131 
04:37:05.256805 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/0a8e3a93-e4a4-41c1-b558-88d28e96ef52-horizon-secret-key\") pod \"0a8e3a93-e4a4-41c1-b558-88d28e96ef52\" (UID: \"0a8e3a93-e4a4-41c1-b558-88d28e96ef52\") " Jan 31 04:37:05 crc kubenswrapper[4827]: I0131 04:37:05.256870 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/0a8e3a93-e4a4-41c1-b558-88d28e96ef52-config-data\") pod \"0a8e3a93-e4a4-41c1-b558-88d28e96ef52\" (UID: \"0a8e3a93-e4a4-41c1-b558-88d28e96ef52\") " Jan 31 04:37:05 crc kubenswrapper[4827]: I0131 04:37:05.256939 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/70e687cb-d396-46d3-890a-cd3cbe51186f-openstack-edpm-ipam\") pod \"70e687cb-d396-46d3-890a-cd3cbe51186f\" (UID: \"70e687cb-d396-46d3-890a-cd3cbe51186f\") " Jan 31 04:37:05 crc kubenswrapper[4827]: I0131 04:37:05.256961 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vtxf5\" (UniqueName: \"kubernetes.io/projected/0a8e3a93-e4a4-41c1-b558-88d28e96ef52-kube-api-access-vtxf5\") pod \"0a8e3a93-e4a4-41c1-b558-88d28e96ef52\" (UID: \"0a8e3a93-e4a4-41c1-b558-88d28e96ef52\") " Jan 31 04:37:05 crc kubenswrapper[4827]: I0131 04:37:05.256980 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0a8e3a93-e4a4-41c1-b558-88d28e96ef52-logs\") pod \"0a8e3a93-e4a4-41c1-b558-88d28e96ef52\" (UID: \"0a8e3a93-e4a4-41c1-b558-88d28e96ef52\") " Jan 31 04:37:05 crc kubenswrapper[4827]: I0131 04:37:05.257001 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bxk7q\" (UniqueName: \"kubernetes.io/projected/70e687cb-d396-46d3-890a-cd3cbe51186f-kube-api-access-bxk7q\") 
pod \"70e687cb-d396-46d3-890a-cd3cbe51186f\" (UID: \"70e687cb-d396-46d3-890a-cd3cbe51186f\") " Jan 31 04:37:05 crc kubenswrapper[4827]: I0131 04:37:05.257074 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/70e687cb-d396-46d3-890a-cd3cbe51186f-ovsdbserver-nb\") pod \"70e687cb-d396-46d3-890a-cd3cbe51186f\" (UID: \"70e687cb-d396-46d3-890a-cd3cbe51186f\") " Jan 31 04:37:05 crc kubenswrapper[4827]: I0131 04:37:05.263364 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0a8e3a93-e4a4-41c1-b558-88d28e96ef52-logs" (OuterVolumeSpecName: "logs") pod "0a8e3a93-e4a4-41c1-b558-88d28e96ef52" (UID: "0a8e3a93-e4a4-41c1-b558-88d28e96ef52"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 04:37:05 crc kubenswrapper[4827]: I0131 04:37:05.271030 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0a8e3a93-e4a4-41c1-b558-88d28e96ef52-horizon-secret-key" (OuterVolumeSpecName: "horizon-secret-key") pod "0a8e3a93-e4a4-41c1-b558-88d28e96ef52" (UID: "0a8e3a93-e4a4-41c1-b558-88d28e96ef52"). InnerVolumeSpecName "horizon-secret-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 04:37:05 crc kubenswrapper[4827]: I0131 04:37:05.271180 4827 scope.go:117] "RemoveContainer" containerID="1cf3d27ab9bc575d5024d55a381eed6f0137bb93a7a6cb8ba372d4d9877e7a42" Jan 31 04:37:05 crc kubenswrapper[4827]: I0131 04:37:05.279104 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/70e687cb-d396-46d3-890a-cd3cbe51186f-kube-api-access-bxk7q" (OuterVolumeSpecName: "kube-api-access-bxk7q") pod "70e687cb-d396-46d3-890a-cd3cbe51186f" (UID: "70e687cb-d396-46d3-890a-cd3cbe51186f"). InnerVolumeSpecName "kube-api-access-bxk7q". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 04:37:05 crc kubenswrapper[4827]: I0131 04:37:05.285997 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0a8e3a93-e4a4-41c1-b558-88d28e96ef52-kube-api-access-vtxf5" (OuterVolumeSpecName: "kube-api-access-vtxf5") pod "0a8e3a93-e4a4-41c1-b558-88d28e96ef52" (UID: "0a8e3a93-e4a4-41c1-b558-88d28e96ef52"). InnerVolumeSpecName "kube-api-access-vtxf5". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 04:37:05 crc kubenswrapper[4827]: I0131 04:37:05.303649 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0a8e3a93-e4a4-41c1-b558-88d28e96ef52-config-data" (OuterVolumeSpecName: "config-data") pod "0a8e3a93-e4a4-41c1-b558-88d28e96ef52" (UID: "0a8e3a93-e4a4-41c1-b558-88d28e96ef52"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 04:37:05 crc kubenswrapper[4827]: I0131 04:37:05.305315 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0a8e3a93-e4a4-41c1-b558-88d28e96ef52-scripts" (OuterVolumeSpecName: "scripts") pod "0a8e3a93-e4a4-41c1-b558-88d28e96ef52" (UID: "0a8e3a93-e4a4-41c1-b558-88d28e96ef52"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 04:37:05 crc kubenswrapper[4827]: I0131 04:37:05.324599 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/70e687cb-d396-46d3-890a-cd3cbe51186f-config" (OuterVolumeSpecName: "config") pod "70e687cb-d396-46d3-890a-cd3cbe51186f" (UID: "70e687cb-d396-46d3-890a-cd3cbe51186f"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 04:37:05 crc kubenswrapper[4827]: I0131 04:37:05.325636 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/70e687cb-d396-46d3-890a-cd3cbe51186f-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "70e687cb-d396-46d3-890a-cd3cbe51186f" (UID: "70e687cb-d396-46d3-890a-cd3cbe51186f"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 04:37:05 crc kubenswrapper[4827]: I0131 04:37:05.326848 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/70e687cb-d396-46d3-890a-cd3cbe51186f-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "70e687cb-d396-46d3-890a-cd3cbe51186f" (UID: "70e687cb-d396-46d3-890a-cd3cbe51186f"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 04:37:05 crc kubenswrapper[4827]: I0131 04:37:05.333533 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/70e687cb-d396-46d3-890a-cd3cbe51186f-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "70e687cb-d396-46d3-890a-cd3cbe51186f" (UID: "70e687cb-d396-46d3-890a-cd3cbe51186f"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 04:37:05 crc kubenswrapper[4827]: I0131 04:37:05.337739 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/70e687cb-d396-46d3-890a-cd3cbe51186f-openstack-edpm-ipam" (OuterVolumeSpecName: "openstack-edpm-ipam") pod "70e687cb-d396-46d3-890a-cd3cbe51186f" (UID: "70e687cb-d396-46d3-890a-cd3cbe51186f"). InnerVolumeSpecName "openstack-edpm-ipam". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 04:37:05 crc kubenswrapper[4827]: I0131 04:37:05.358732 4827 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/70e687cb-d396-46d3-890a-cd3cbe51186f-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 31 04:37:05 crc kubenswrapper[4827]: I0131 04:37:05.358758 4827 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/70e687cb-d396-46d3-890a-cd3cbe51186f-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 31 04:37:05 crc kubenswrapper[4827]: I0131 04:37:05.358794 4827 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/70e687cb-d396-46d3-890a-cd3cbe51186f-config\") on node \"crc\" DevicePath \"\"" Jan 31 04:37:05 crc kubenswrapper[4827]: I0131 04:37:05.358806 4827 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/70e687cb-d396-46d3-890a-cd3cbe51186f-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 31 04:37:05 crc kubenswrapper[4827]: I0131 04:37:05.358815 4827 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/0a8e3a93-e4a4-41c1-b558-88d28e96ef52-scripts\") on node \"crc\" DevicePath \"\"" Jan 31 04:37:05 crc kubenswrapper[4827]: I0131 04:37:05.358823 4827 reconciler_common.go:293] "Volume detached for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/0a8e3a93-e4a4-41c1-b558-88d28e96ef52-horizon-secret-key\") on node \"crc\" DevicePath \"\"" Jan 31 04:37:05 crc kubenswrapper[4827]: I0131 04:37:05.358833 4827 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/0a8e3a93-e4a4-41c1-b558-88d28e96ef52-config-data\") on node \"crc\" DevicePath \"\"" Jan 31 04:37:05 crc kubenswrapper[4827]: I0131 04:37:05.358841 4827 reconciler_common.go:293] "Volume detached 
for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/70e687cb-d396-46d3-890a-cd3cbe51186f-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 31 04:37:05 crc kubenswrapper[4827]: I0131 04:37:05.358851 4827 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vtxf5\" (UniqueName: \"kubernetes.io/projected/0a8e3a93-e4a4-41c1-b558-88d28e96ef52-kube-api-access-vtxf5\") on node \"crc\" DevicePath \"\"" Jan 31 04:37:05 crc kubenswrapper[4827]: I0131 04:37:05.358861 4827 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0a8e3a93-e4a4-41c1-b558-88d28e96ef52-logs\") on node \"crc\" DevicePath \"\"" Jan 31 04:37:05 crc kubenswrapper[4827]: I0131 04:37:05.358869 4827 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bxk7q\" (UniqueName: \"kubernetes.io/projected/70e687cb-d396-46d3-890a-cd3cbe51186f-kube-api-access-bxk7q\") on node \"crc\" DevicePath \"\"" Jan 31 04:37:06 crc kubenswrapper[4827]: I0131 04:37:06.087678 4827 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-864d5fc68c-pvgtd" Jan 31 04:37:06 crc kubenswrapper[4827]: I0131 04:37:06.087714 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d7f725b5-6af1-406a-8666-bdc720981006","Type":"ContainerStarted","Data":"152846634bc6b97beb4860a571669f482f009ee638e76a9aaff3198ffa7f149b"} Jan 31 04:37:06 crc kubenswrapper[4827]: I0131 04:37:06.087902 4827 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-5db647897-mx8gj" Jan 31 04:37:06 crc kubenswrapper[4827]: I0131 04:37:06.121851 4827 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e44c9e88-212e-4178-a1fb-ce9b1896d73f" path="/var/lib/kubelet/pods/e44c9e88-212e-4178-a1fb-ce9b1896d73f/volumes" Jan 31 04:37:06 crc kubenswrapper[4827]: I0131 04:37:06.146756 4827 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-864d5fc68c-pvgtd"] Jan 31 04:37:06 crc kubenswrapper[4827]: I0131 04:37:06.156153 4827 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-864d5fc68c-pvgtd"] Jan 31 04:37:06 crc kubenswrapper[4827]: I0131 04:37:06.163932 4827 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-5db647897-mx8gj"] Jan 31 04:37:06 crc kubenswrapper[4827]: I0131 04:37:06.172788 4827 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/horizon-5db647897-mx8gj"] Jan 31 04:37:06 crc kubenswrapper[4827]: I0131 04:37:06.720365 4827 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/horizon-5dd8c8746d-r25sr" Jan 31 04:37:06 crc kubenswrapper[4827]: I0131 04:37:06.789546 4827 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-cdd9cb94b-xgkxs"] Jan 31 04:37:06 crc kubenswrapper[4827]: I0131 04:37:06.789771 4827 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-cdd9cb94b-xgkxs" podUID="e277e6d3-f889-425a-abd6-3344f860bfd9" containerName="horizon-log" containerID="cri-o://fa2ea3446ece2961ee86c778ecdec6b7d57be1cf1c402d004e9a43d4bbd78377" gracePeriod=30 Jan 31 04:37:06 crc kubenswrapper[4827]: I0131 04:37:06.790155 4827 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-cdd9cb94b-xgkxs" podUID="e277e6d3-f889-425a-abd6-3344f860bfd9" containerName="horizon" containerID="cri-o://5bc769f4481cef184f8dca7f6178fc8af5b40a6a985196e85ae05d7b98c94b1b" 
gracePeriod=30 Jan 31 04:37:07 crc kubenswrapper[4827]: I0131 04:37:07.101517 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d7f725b5-6af1-406a-8666-bdc720981006","Type":"ContainerStarted","Data":"bc15967c2856acfcaa4651e8def340a34fa9f255ab2fcf5c7886444cd6f55f7e"} Jan 31 04:37:08 crc kubenswrapper[4827]: I0131 04:37:08.060767 4827 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 31 04:37:08 crc kubenswrapper[4827]: I0131 04:37:08.143685 4827 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0a8e3a93-e4a4-41c1-b558-88d28e96ef52" path="/var/lib/kubelet/pods/0a8e3a93-e4a4-41c1-b558-88d28e96ef52/volumes" Jan 31 04:37:08 crc kubenswrapper[4827]: I0131 04:37:08.144353 4827 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="70e687cb-d396-46d3-890a-cd3cbe51186f" path="/var/lib/kubelet/pods/70e687cb-d396-46d3-890a-cd3cbe51186f/volumes" Jan 31 04:37:09 crc kubenswrapper[4827]: I0131 04:37:09.124108 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d7f725b5-6af1-406a-8666-bdc720981006","Type":"ContainerStarted","Data":"43f9dbf506ce0895c32ae489bc2253e84bc9b42d1a9b5b0bf86a3bce147b8c42"} Jan 31 04:37:09 crc kubenswrapper[4827]: I0131 04:37:09.124579 4827 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Jan 31 04:37:09 crc kubenswrapper[4827]: I0131 04:37:09.124477 4827 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="d7f725b5-6af1-406a-8666-bdc720981006" containerName="proxy-httpd" containerID="cri-o://43f9dbf506ce0895c32ae489bc2253e84bc9b42d1a9b5b0bf86a3bce147b8c42" gracePeriod=30 Jan 31 04:37:09 crc kubenswrapper[4827]: I0131 04:37:09.124526 4827 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="d7f725b5-6af1-406a-8666-bdc720981006" 
containerName="ceilometer-notification-agent" containerID="cri-o://152846634bc6b97beb4860a571669f482f009ee638e76a9aaff3198ffa7f149b" gracePeriod=30 Jan 31 04:37:09 crc kubenswrapper[4827]: I0131 04:37:09.124520 4827 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="d7f725b5-6af1-406a-8666-bdc720981006" containerName="sg-core" containerID="cri-o://bc15967c2856acfcaa4651e8def340a34fa9f255ab2fcf5c7886444cd6f55f7e" gracePeriod=30 Jan 31 04:37:09 crc kubenswrapper[4827]: I0131 04:37:09.124252 4827 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="d7f725b5-6af1-406a-8666-bdc720981006" containerName="ceilometer-central-agent" containerID="cri-o://27dc8c9f8821a643091ac59a85e9ff5489a69aadd17ea1b10d9678eac4780ed6" gracePeriod=30 Jan 31 04:37:10 crc kubenswrapper[4827]: I0131 04:37:10.138033 4827 generic.go:334] "Generic (PLEG): container finished" podID="e277e6d3-f889-425a-abd6-3344f860bfd9" containerID="5bc769f4481cef184f8dca7f6178fc8af5b40a6a985196e85ae05d7b98c94b1b" exitCode=0 Jan 31 04:37:10 crc kubenswrapper[4827]: I0131 04:37:10.138108 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-cdd9cb94b-xgkxs" event={"ID":"e277e6d3-f889-425a-abd6-3344f860bfd9","Type":"ContainerDied","Data":"5bc769f4481cef184f8dca7f6178fc8af5b40a6a985196e85ae05d7b98c94b1b"} Jan 31 04:37:10 crc kubenswrapper[4827]: I0131 04:37:10.141375 4827 generic.go:334] "Generic (PLEG): container finished" podID="d7f725b5-6af1-406a-8666-bdc720981006" containerID="bc15967c2856acfcaa4651e8def340a34fa9f255ab2fcf5c7886444cd6f55f7e" exitCode=2 Jan 31 04:37:10 crc kubenswrapper[4827]: I0131 04:37:10.141400 4827 generic.go:334] "Generic (PLEG): container finished" podID="d7f725b5-6af1-406a-8666-bdc720981006" containerID="152846634bc6b97beb4860a571669f482f009ee638e76a9aaff3198ffa7f149b" exitCode=0 Jan 31 04:37:10 crc kubenswrapper[4827]: I0131 04:37:10.141417 4827 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d7f725b5-6af1-406a-8666-bdc720981006","Type":"ContainerDied","Data":"bc15967c2856acfcaa4651e8def340a34fa9f255ab2fcf5c7886444cd6f55f7e"} Jan 31 04:37:10 crc kubenswrapper[4827]: I0131 04:37:10.141452 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d7f725b5-6af1-406a-8666-bdc720981006","Type":"ContainerDied","Data":"152846634bc6b97beb4860a571669f482f009ee638e76a9aaff3198ffa7f149b"} Jan 31 04:37:11 crc kubenswrapper[4827]: I0131 04:37:11.154405 4827 generic.go:334] "Generic (PLEG): container finished" podID="d7f725b5-6af1-406a-8666-bdc720981006" containerID="27dc8c9f8821a643091ac59a85e9ff5489a69aadd17ea1b10d9678eac4780ed6" exitCode=0 Jan 31 04:37:11 crc kubenswrapper[4827]: I0131 04:37:11.154474 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d7f725b5-6af1-406a-8666-bdc720981006","Type":"ContainerDied","Data":"27dc8c9f8821a643091ac59a85e9ff5489a69aadd17ea1b10d9678eac4780ed6"} Jan 31 04:37:12 crc kubenswrapper[4827]: I0131 04:37:12.123278 4827 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/horizon-cdd9cb94b-xgkxs" podUID="e277e6d3-f889-425a-abd6-3344f860bfd9" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.247:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.247:8443: connect: connection refused" Jan 31 04:37:15 crc kubenswrapper[4827]: I0131 04:37:15.010281 4827 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/manila-share-share1-0" Jan 31 04:37:15 crc kubenswrapper[4827]: I0131 04:37:15.040993 4827 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=7.086424388 podStartE2EDuration="12.040976613s" podCreationTimestamp="2026-01-31 04:37:03 +0000 UTC" firstStartedPulling="2026-01-31 04:37:03.805125935 +0000 UTC 
m=+3016.492206384" lastFinishedPulling="2026-01-31 04:37:08.75967814 +0000 UTC m=+3021.446758609" observedRunningTime="2026-01-31 04:37:09.151869659 +0000 UTC m=+3021.838950108" watchObservedRunningTime="2026-01-31 04:37:15.040976613 +0000 UTC m=+3027.728057062" Jan 31 04:37:15 crc kubenswrapper[4827]: I0131 04:37:15.104830 4827 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/manila-scheduler-0" Jan 31 04:37:15 crc kubenswrapper[4827]: I0131 04:37:15.106605 4827 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/manila-share-share1-0"] Jan 31 04:37:15 crc kubenswrapper[4827]: I0131 04:37:15.182267 4827 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/manila-scheduler-0"] Jan 31 04:37:15 crc kubenswrapper[4827]: I0131 04:37:15.194402 4827 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/manila-scheduler-0" podUID="fcc4f548-655b-4c88-a55d-f69acb1d30f1" containerName="manila-scheduler" containerID="cri-o://7799f664949899427d8197934905104485d0cb6b67e38e624c0440d4ddab13e7" gracePeriod=30 Jan 31 04:37:15 crc kubenswrapper[4827]: I0131 04:37:15.194472 4827 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/manila-scheduler-0" podUID="fcc4f548-655b-4c88-a55d-f69acb1d30f1" containerName="probe" containerID="cri-o://4ae5d4a39c303857274d0fffb65acd4f318f888fb7339395a29e04858052909e" gracePeriod=30 Jan 31 04:37:15 crc kubenswrapper[4827]: I0131 04:37:15.194867 4827 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/manila-share-share1-0" podUID="324ece2d-b7d4-426f-8bb1-5832b52ee7bd" containerName="manila-share" containerID="cri-o://40b8bb3ac6448837828f1fbc5eea8e53edda3a6607c541d5779eabf8b4e11e36" gracePeriod=30 Jan 31 04:37:15 crc kubenswrapper[4827]: I0131 04:37:15.194924 4827 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/manila-share-share1-0" 
podUID="324ece2d-b7d4-426f-8bb1-5832b52ee7bd" containerName="probe" containerID="cri-o://b1c3a5c465866424448fd459324ebec24df1ebb3dbaa340ed69cde541d0da42b" gracePeriod=30 Jan 31 04:37:16 crc kubenswrapper[4827]: I0131 04:37:16.208179 4827 generic.go:334] "Generic (PLEG): container finished" podID="324ece2d-b7d4-426f-8bb1-5832b52ee7bd" containerID="b1c3a5c465866424448fd459324ebec24df1ebb3dbaa340ed69cde541d0da42b" exitCode=0 Jan 31 04:37:16 crc kubenswrapper[4827]: I0131 04:37:16.208639 4827 generic.go:334] "Generic (PLEG): container finished" podID="324ece2d-b7d4-426f-8bb1-5832b52ee7bd" containerID="40b8bb3ac6448837828f1fbc5eea8e53edda3a6607c541d5779eabf8b4e11e36" exitCode=1 Jan 31 04:37:16 crc kubenswrapper[4827]: I0131 04:37:16.208298 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-share-share1-0" event={"ID":"324ece2d-b7d4-426f-8bb1-5832b52ee7bd","Type":"ContainerDied","Data":"b1c3a5c465866424448fd459324ebec24df1ebb3dbaa340ed69cde541d0da42b"} Jan 31 04:37:16 crc kubenswrapper[4827]: I0131 04:37:16.208739 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-share-share1-0" event={"ID":"324ece2d-b7d4-426f-8bb1-5832b52ee7bd","Type":"ContainerDied","Data":"40b8bb3ac6448837828f1fbc5eea8e53edda3a6607c541d5779eabf8b4e11e36"} Jan 31 04:37:16 crc kubenswrapper[4827]: I0131 04:37:16.210240 4827 generic.go:334] "Generic (PLEG): container finished" podID="fcc4f548-655b-4c88-a55d-f69acb1d30f1" containerID="4ae5d4a39c303857274d0fffb65acd4f318f888fb7339395a29e04858052909e" exitCode=0 Jan 31 04:37:16 crc kubenswrapper[4827]: I0131 04:37:16.210269 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-scheduler-0" event={"ID":"fcc4f548-655b-4c88-a55d-f69acb1d30f1","Type":"ContainerDied","Data":"4ae5d4a39c303857274d0fffb65acd4f318f888fb7339395a29e04858052909e"} Jan 31 04:37:16 crc kubenswrapper[4827]: I0131 04:37:16.337958 4827 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/manila-share-share1-0" Jan 31 04:37:16 crc kubenswrapper[4827]: I0131 04:37:16.394434 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/324ece2d-b7d4-426f-8bb1-5832b52ee7bd-config-data\") pod \"324ece2d-b7d4-426f-8bb1-5832b52ee7bd\" (UID: \"324ece2d-b7d4-426f-8bb1-5832b52ee7bd\") " Jan 31 04:37:16 crc kubenswrapper[4827]: I0131 04:37:16.394500 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rxcfv\" (UniqueName: \"kubernetes.io/projected/324ece2d-b7d4-426f-8bb1-5832b52ee7bd-kube-api-access-rxcfv\") pod \"324ece2d-b7d4-426f-8bb1-5832b52ee7bd\" (UID: \"324ece2d-b7d4-426f-8bb1-5832b52ee7bd\") " Jan 31 04:37:16 crc kubenswrapper[4827]: I0131 04:37:16.394542 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/324ece2d-b7d4-426f-8bb1-5832b52ee7bd-combined-ca-bundle\") pod \"324ece2d-b7d4-426f-8bb1-5832b52ee7bd\" (UID: \"324ece2d-b7d4-426f-8bb1-5832b52ee7bd\") " Jan 31 04:37:16 crc kubenswrapper[4827]: I0131 04:37:16.394560 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lib-manila\" (UniqueName: \"kubernetes.io/host-path/324ece2d-b7d4-426f-8bb1-5832b52ee7bd-var-lib-manila\") pod \"324ece2d-b7d4-426f-8bb1-5832b52ee7bd\" (UID: \"324ece2d-b7d4-426f-8bb1-5832b52ee7bd\") " Jan 31 04:37:16 crc kubenswrapper[4827]: I0131 04:37:16.394596 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/324ece2d-b7d4-426f-8bb1-5832b52ee7bd-scripts\") pod \"324ece2d-b7d4-426f-8bb1-5832b52ee7bd\" (UID: \"324ece2d-b7d4-426f-8bb1-5832b52ee7bd\") " Jan 31 04:37:16 crc kubenswrapper[4827]: I0131 04:37:16.394644 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" 
(UniqueName: \"kubernetes.io/secret/324ece2d-b7d4-426f-8bb1-5832b52ee7bd-config-data-custom\") pod \"324ece2d-b7d4-426f-8bb1-5832b52ee7bd\" (UID: \"324ece2d-b7d4-426f-8bb1-5832b52ee7bd\") " Jan 31 04:37:16 crc kubenswrapper[4827]: I0131 04:37:16.394662 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/324ece2d-b7d4-426f-8bb1-5832b52ee7bd-etc-machine-id\") pod \"324ece2d-b7d4-426f-8bb1-5832b52ee7bd\" (UID: \"324ece2d-b7d4-426f-8bb1-5832b52ee7bd\") " Jan 31 04:37:16 crc kubenswrapper[4827]: I0131 04:37:16.394711 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/324ece2d-b7d4-426f-8bb1-5832b52ee7bd-ceph\") pod \"324ece2d-b7d4-426f-8bb1-5832b52ee7bd\" (UID: \"324ece2d-b7d4-426f-8bb1-5832b52ee7bd\") " Jan 31 04:37:16 crc kubenswrapper[4827]: I0131 04:37:16.395740 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/324ece2d-b7d4-426f-8bb1-5832b52ee7bd-var-lib-manila" (OuterVolumeSpecName: "var-lib-manila") pod "324ece2d-b7d4-426f-8bb1-5832b52ee7bd" (UID: "324ece2d-b7d4-426f-8bb1-5832b52ee7bd"). InnerVolumeSpecName "var-lib-manila". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 31 04:37:16 crc kubenswrapper[4827]: I0131 04:37:16.399543 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/324ece2d-b7d4-426f-8bb1-5832b52ee7bd-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "324ece2d-b7d4-426f-8bb1-5832b52ee7bd" (UID: "324ece2d-b7d4-426f-8bb1-5832b52ee7bd"). InnerVolumeSpecName "etc-machine-id". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 31 04:37:16 crc kubenswrapper[4827]: I0131 04:37:16.402571 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/324ece2d-b7d4-426f-8bb1-5832b52ee7bd-scripts" (OuterVolumeSpecName: "scripts") pod "324ece2d-b7d4-426f-8bb1-5832b52ee7bd" (UID: "324ece2d-b7d4-426f-8bb1-5832b52ee7bd"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 04:37:16 crc kubenswrapper[4827]: I0131 04:37:16.403790 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/324ece2d-b7d4-426f-8bb1-5832b52ee7bd-kube-api-access-rxcfv" (OuterVolumeSpecName: "kube-api-access-rxcfv") pod "324ece2d-b7d4-426f-8bb1-5832b52ee7bd" (UID: "324ece2d-b7d4-426f-8bb1-5832b52ee7bd"). InnerVolumeSpecName "kube-api-access-rxcfv". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 04:37:16 crc kubenswrapper[4827]: I0131 04:37:16.404020 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/324ece2d-b7d4-426f-8bb1-5832b52ee7bd-ceph" (OuterVolumeSpecName: "ceph") pod "324ece2d-b7d4-426f-8bb1-5832b52ee7bd" (UID: "324ece2d-b7d4-426f-8bb1-5832b52ee7bd"). InnerVolumeSpecName "ceph". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 04:37:16 crc kubenswrapper[4827]: I0131 04:37:16.407822 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/324ece2d-b7d4-426f-8bb1-5832b52ee7bd-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "324ece2d-b7d4-426f-8bb1-5832b52ee7bd" (UID: "324ece2d-b7d4-426f-8bb1-5832b52ee7bd"). InnerVolumeSpecName "config-data-custom". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 04:37:16 crc kubenswrapper[4827]: I0131 04:37:16.464466 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/324ece2d-b7d4-426f-8bb1-5832b52ee7bd-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "324ece2d-b7d4-426f-8bb1-5832b52ee7bd" (UID: "324ece2d-b7d4-426f-8bb1-5832b52ee7bd"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 04:37:16 crc kubenswrapper[4827]: I0131 04:37:16.494499 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/324ece2d-b7d4-426f-8bb1-5832b52ee7bd-config-data" (OuterVolumeSpecName: "config-data") pod "324ece2d-b7d4-426f-8bb1-5832b52ee7bd" (UID: "324ece2d-b7d4-426f-8bb1-5832b52ee7bd"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 04:37:16 crc kubenswrapper[4827]: I0131 04:37:16.497024 4827 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/324ece2d-b7d4-426f-8bb1-5832b52ee7bd-ceph\") on node \"crc\" DevicePath \"\"" Jan 31 04:37:16 crc kubenswrapper[4827]: I0131 04:37:16.497067 4827 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/324ece2d-b7d4-426f-8bb1-5832b52ee7bd-config-data\") on node \"crc\" DevicePath \"\"" Jan 31 04:37:16 crc kubenswrapper[4827]: I0131 04:37:16.497083 4827 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rxcfv\" (UniqueName: \"kubernetes.io/projected/324ece2d-b7d4-426f-8bb1-5832b52ee7bd-kube-api-access-rxcfv\") on node \"crc\" DevicePath \"\"" Jan 31 04:37:16 crc kubenswrapper[4827]: I0131 04:37:16.497099 4827 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/324ece2d-b7d4-426f-8bb1-5832b52ee7bd-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 31 
04:37:16 crc kubenswrapper[4827]: I0131 04:37:16.497111 4827 reconciler_common.go:293] "Volume detached for volume \"var-lib-manila\" (UniqueName: \"kubernetes.io/host-path/324ece2d-b7d4-426f-8bb1-5832b52ee7bd-var-lib-manila\") on node \"crc\" DevicePath \"\"" Jan 31 04:37:16 crc kubenswrapper[4827]: I0131 04:37:16.497122 4827 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/324ece2d-b7d4-426f-8bb1-5832b52ee7bd-scripts\") on node \"crc\" DevicePath \"\"" Jan 31 04:37:16 crc kubenswrapper[4827]: I0131 04:37:16.497133 4827 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/324ece2d-b7d4-426f-8bb1-5832b52ee7bd-config-data-custom\") on node \"crc\" DevicePath \"\"" Jan 31 04:37:16 crc kubenswrapper[4827]: I0131 04:37:16.497144 4827 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/324ece2d-b7d4-426f-8bb1-5832b52ee7bd-etc-machine-id\") on node \"crc\" DevicePath \"\"" Jan 31 04:37:17 crc kubenswrapper[4827]: I0131 04:37:17.222825 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-share-share1-0" event={"ID":"324ece2d-b7d4-426f-8bb1-5832b52ee7bd","Type":"ContainerDied","Data":"e96465d154738df1780baeeffa377006ce16daf5dfd0288e6ed57288cdc492ef"} Jan 31 04:37:17 crc kubenswrapper[4827]: I0131 04:37:17.223309 4827 scope.go:117] "RemoveContainer" containerID="b1c3a5c465866424448fd459324ebec24df1ebb3dbaa340ed69cde541d0da42b" Jan 31 04:37:17 crc kubenswrapper[4827]: I0131 04:37:17.223200 4827 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/manila-share-share1-0" Jan 31 04:37:17 crc kubenswrapper[4827]: I0131 04:37:17.250320 4827 scope.go:117] "RemoveContainer" containerID="40b8bb3ac6448837828f1fbc5eea8e53edda3a6607c541d5779eabf8b4e11e36" Jan 31 04:37:17 crc kubenswrapper[4827]: I0131 04:37:17.277168 4827 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/manila-share-share1-0"] Jan 31 04:37:17 crc kubenswrapper[4827]: I0131 04:37:17.298268 4827 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/manila-share-share1-0"] Jan 31 04:37:17 crc kubenswrapper[4827]: I0131 04:37:17.309167 4827 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/manila-share-share1-0"] Jan 31 04:37:17 crc kubenswrapper[4827]: E0131 04:37:17.309556 4827 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="324ece2d-b7d4-426f-8bb1-5832b52ee7bd" containerName="manila-share" Jan 31 04:37:17 crc kubenswrapper[4827]: I0131 04:37:17.309575 4827 state_mem.go:107] "Deleted CPUSet assignment" podUID="324ece2d-b7d4-426f-8bb1-5832b52ee7bd" containerName="manila-share" Jan 31 04:37:17 crc kubenswrapper[4827]: E0131 04:37:17.309601 4827 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="70e687cb-d396-46d3-890a-cd3cbe51186f" containerName="dnsmasq-dns" Jan 31 04:37:17 crc kubenswrapper[4827]: I0131 04:37:17.309609 4827 state_mem.go:107] "Deleted CPUSet assignment" podUID="70e687cb-d396-46d3-890a-cd3cbe51186f" containerName="dnsmasq-dns" Jan 31 04:37:17 crc kubenswrapper[4827]: E0131 04:37:17.309627 4827 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="70e687cb-d396-46d3-890a-cd3cbe51186f" containerName="init" Jan 31 04:37:17 crc kubenswrapper[4827]: I0131 04:37:17.309635 4827 state_mem.go:107] "Deleted CPUSet assignment" podUID="70e687cb-d396-46d3-890a-cd3cbe51186f" containerName="init" Jan 31 04:37:17 crc kubenswrapper[4827]: E0131 04:37:17.309650 4827 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="324ece2d-b7d4-426f-8bb1-5832b52ee7bd" containerName="probe" Jan 31 04:37:17 crc kubenswrapper[4827]: I0131 04:37:17.309659 4827 state_mem.go:107] "Deleted CPUSet assignment" podUID="324ece2d-b7d4-426f-8bb1-5832b52ee7bd" containerName="probe" Jan 31 04:37:17 crc kubenswrapper[4827]: E0131 04:37:17.309675 4827 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e44c9e88-212e-4178-a1fb-ce9b1896d73f" containerName="horizon" Jan 31 04:37:17 crc kubenswrapper[4827]: I0131 04:37:17.309684 4827 state_mem.go:107] "Deleted CPUSet assignment" podUID="e44c9e88-212e-4178-a1fb-ce9b1896d73f" containerName="horizon" Jan 31 04:37:17 crc kubenswrapper[4827]: E0131 04:37:17.309696 4827 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0a8e3a93-e4a4-41c1-b558-88d28e96ef52" containerName="horizon-log" Jan 31 04:37:17 crc kubenswrapper[4827]: I0131 04:37:17.309704 4827 state_mem.go:107] "Deleted CPUSet assignment" podUID="0a8e3a93-e4a4-41c1-b558-88d28e96ef52" containerName="horizon-log" Jan 31 04:37:17 crc kubenswrapper[4827]: E0131 04:37:17.309717 4827 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0a8e3a93-e4a4-41c1-b558-88d28e96ef52" containerName="horizon" Jan 31 04:37:17 crc kubenswrapper[4827]: I0131 04:37:17.309724 4827 state_mem.go:107] "Deleted CPUSet assignment" podUID="0a8e3a93-e4a4-41c1-b558-88d28e96ef52" containerName="horizon" Jan 31 04:37:17 crc kubenswrapper[4827]: E0131 04:37:17.309736 4827 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e44c9e88-212e-4178-a1fb-ce9b1896d73f" containerName="horizon-log" Jan 31 04:37:17 crc kubenswrapper[4827]: I0131 04:37:17.309744 4827 state_mem.go:107] "Deleted CPUSet assignment" podUID="e44c9e88-212e-4178-a1fb-ce9b1896d73f" containerName="horizon-log" Jan 31 04:37:17 crc kubenswrapper[4827]: I0131 04:37:17.309975 4827 memory_manager.go:354] "RemoveStaleState removing state" podUID="70e687cb-d396-46d3-890a-cd3cbe51186f" containerName="dnsmasq-dns" Jan 31 
04:37:17 crc kubenswrapper[4827]: I0131 04:37:17.309988 4827 memory_manager.go:354] "RemoveStaleState removing state" podUID="0a8e3a93-e4a4-41c1-b558-88d28e96ef52" containerName="horizon-log" Jan 31 04:37:17 crc kubenswrapper[4827]: I0131 04:37:17.310002 4827 memory_manager.go:354] "RemoveStaleState removing state" podUID="324ece2d-b7d4-426f-8bb1-5832b52ee7bd" containerName="manila-share" Jan 31 04:37:17 crc kubenswrapper[4827]: I0131 04:37:17.310016 4827 memory_manager.go:354] "RemoveStaleState removing state" podUID="e44c9e88-212e-4178-a1fb-ce9b1896d73f" containerName="horizon" Jan 31 04:37:17 crc kubenswrapper[4827]: I0131 04:37:17.310031 4827 memory_manager.go:354] "RemoveStaleState removing state" podUID="e44c9e88-212e-4178-a1fb-ce9b1896d73f" containerName="horizon-log" Jan 31 04:37:17 crc kubenswrapper[4827]: I0131 04:37:17.310050 4827 memory_manager.go:354] "RemoveStaleState removing state" podUID="0a8e3a93-e4a4-41c1-b558-88d28e96ef52" containerName="horizon" Jan 31 04:37:17 crc kubenswrapper[4827]: I0131 04:37:17.310068 4827 memory_manager.go:354] "RemoveStaleState removing state" podUID="324ece2d-b7d4-426f-8bb1-5832b52ee7bd" containerName="probe" Jan 31 04:37:17 crc kubenswrapper[4827]: I0131 04:37:17.311234 4827 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/manila-share-share1-0" Jan 31 04:37:17 crc kubenswrapper[4827]: I0131 04:37:17.314970 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"manila-share-share1-config-data" Jan 31 04:37:17 crc kubenswrapper[4827]: I0131 04:37:17.354984 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/manila-share-share1-0"] Jan 31 04:37:17 crc kubenswrapper[4827]: E0131 04:37:17.391594 4827 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod324ece2d_b7d4_426f_8bb1_5832b52ee7bd.slice\": RecentStats: unable to find data in memory cache]" Jan 31 04:37:17 crc kubenswrapper[4827]: I0131 04:37:17.414910 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/79a2680f-4176-4fe5-9952-c6e74f2c57d6-ceph\") pod \"manila-share-share1-0\" (UID: \"79a2680f-4176-4fe5-9952-c6e74f2c57d6\") " pod="openstack/manila-share-share1-0" Jan 31 04:37:17 crc kubenswrapper[4827]: I0131 04:37:17.415022 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8njh5\" (UniqueName: \"kubernetes.io/projected/79a2680f-4176-4fe5-9952-c6e74f2c57d6-kube-api-access-8njh5\") pod \"manila-share-share1-0\" (UID: \"79a2680f-4176-4fe5-9952-c6e74f2c57d6\") " pod="openstack/manila-share-share1-0" Jan 31 04:37:17 crc kubenswrapper[4827]: I0131 04:37:17.415044 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/79a2680f-4176-4fe5-9952-c6e74f2c57d6-scripts\") pod \"manila-share-share1-0\" (UID: \"79a2680f-4176-4fe5-9952-c6e74f2c57d6\") " pod="openstack/manila-share-share1-0" Jan 31 04:37:17 crc kubenswrapper[4827]: I0131 04:37:17.415079 4827 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/79a2680f-4176-4fe5-9952-c6e74f2c57d6-config-data-custom\") pod \"manila-share-share1-0\" (UID: \"79a2680f-4176-4fe5-9952-c6e74f2c57d6\") " pod="openstack/manila-share-share1-0" Jan 31 04:37:17 crc kubenswrapper[4827]: I0131 04:37:17.415107 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/79a2680f-4176-4fe5-9952-c6e74f2c57d6-combined-ca-bundle\") pod \"manila-share-share1-0\" (UID: \"79a2680f-4176-4fe5-9952-c6e74f2c57d6\") " pod="openstack/manila-share-share1-0" Jan 31 04:37:17 crc kubenswrapper[4827]: I0131 04:37:17.415128 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-manila\" (UniqueName: \"kubernetes.io/host-path/79a2680f-4176-4fe5-9952-c6e74f2c57d6-var-lib-manila\") pod \"manila-share-share1-0\" (UID: \"79a2680f-4176-4fe5-9952-c6e74f2c57d6\") " pod="openstack/manila-share-share1-0" Jan 31 04:37:17 crc kubenswrapper[4827]: I0131 04:37:17.415208 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/79a2680f-4176-4fe5-9952-c6e74f2c57d6-etc-machine-id\") pod \"manila-share-share1-0\" (UID: \"79a2680f-4176-4fe5-9952-c6e74f2c57d6\") " pod="openstack/manila-share-share1-0" Jan 31 04:37:17 crc kubenswrapper[4827]: I0131 04:37:17.415243 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/79a2680f-4176-4fe5-9952-c6e74f2c57d6-config-data\") pod \"manila-share-share1-0\" (UID: \"79a2680f-4176-4fe5-9952-c6e74f2c57d6\") " pod="openstack/manila-share-share1-0" Jan 31 04:37:17 crc kubenswrapper[4827]: I0131 04:37:17.516628 4827 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-8njh5\" (UniqueName: \"kubernetes.io/projected/79a2680f-4176-4fe5-9952-c6e74f2c57d6-kube-api-access-8njh5\") pod \"manila-share-share1-0\" (UID: \"79a2680f-4176-4fe5-9952-c6e74f2c57d6\") " pod="openstack/manila-share-share1-0" Jan 31 04:37:17 crc kubenswrapper[4827]: I0131 04:37:17.516686 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/79a2680f-4176-4fe5-9952-c6e74f2c57d6-scripts\") pod \"manila-share-share1-0\" (UID: \"79a2680f-4176-4fe5-9952-c6e74f2c57d6\") " pod="openstack/manila-share-share1-0" Jan 31 04:37:17 crc kubenswrapper[4827]: I0131 04:37:17.516746 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/79a2680f-4176-4fe5-9952-c6e74f2c57d6-config-data-custom\") pod \"manila-share-share1-0\" (UID: \"79a2680f-4176-4fe5-9952-c6e74f2c57d6\") " pod="openstack/manila-share-share1-0" Jan 31 04:37:17 crc kubenswrapper[4827]: I0131 04:37:17.516784 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/79a2680f-4176-4fe5-9952-c6e74f2c57d6-combined-ca-bundle\") pod \"manila-share-share1-0\" (UID: \"79a2680f-4176-4fe5-9952-c6e74f2c57d6\") " pod="openstack/manila-share-share1-0" Jan 31 04:37:17 crc kubenswrapper[4827]: I0131 04:37:17.516807 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-manila\" (UniqueName: \"kubernetes.io/host-path/79a2680f-4176-4fe5-9952-c6e74f2c57d6-var-lib-manila\") pod \"manila-share-share1-0\" (UID: \"79a2680f-4176-4fe5-9952-c6e74f2c57d6\") " pod="openstack/manila-share-share1-0" Jan 31 04:37:17 crc kubenswrapper[4827]: I0131 04:37:17.516900 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: 
\"kubernetes.io/host-path/79a2680f-4176-4fe5-9952-c6e74f2c57d6-etc-machine-id\") pod \"manila-share-share1-0\" (UID: \"79a2680f-4176-4fe5-9952-c6e74f2c57d6\") " pod="openstack/manila-share-share1-0" Jan 31 04:37:17 crc kubenswrapper[4827]: I0131 04:37:17.516933 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/79a2680f-4176-4fe5-9952-c6e74f2c57d6-config-data\") pod \"manila-share-share1-0\" (UID: \"79a2680f-4176-4fe5-9952-c6e74f2c57d6\") " pod="openstack/manila-share-share1-0" Jan 31 04:37:17 crc kubenswrapper[4827]: I0131 04:37:17.516963 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/79a2680f-4176-4fe5-9952-c6e74f2c57d6-ceph\") pod \"manila-share-share1-0\" (UID: \"79a2680f-4176-4fe5-9952-c6e74f2c57d6\") " pod="openstack/manila-share-share1-0" Jan 31 04:37:17 crc kubenswrapper[4827]: I0131 04:37:17.518278 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/79a2680f-4176-4fe5-9952-c6e74f2c57d6-etc-machine-id\") pod \"manila-share-share1-0\" (UID: \"79a2680f-4176-4fe5-9952-c6e74f2c57d6\") " pod="openstack/manila-share-share1-0" Jan 31 04:37:17 crc kubenswrapper[4827]: I0131 04:37:17.518379 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-manila\" (UniqueName: \"kubernetes.io/host-path/79a2680f-4176-4fe5-9952-c6e74f2c57d6-var-lib-manila\") pod \"manila-share-share1-0\" (UID: \"79a2680f-4176-4fe5-9952-c6e74f2c57d6\") " pod="openstack/manila-share-share1-0" Jan 31 04:37:17 crc kubenswrapper[4827]: I0131 04:37:17.523454 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/79a2680f-4176-4fe5-9952-c6e74f2c57d6-combined-ca-bundle\") pod \"manila-share-share1-0\" (UID: \"79a2680f-4176-4fe5-9952-c6e74f2c57d6\") " 
pod="openstack/manila-share-share1-0" Jan 31 04:37:17 crc kubenswrapper[4827]: I0131 04:37:17.524272 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/79a2680f-4176-4fe5-9952-c6e74f2c57d6-config-data-custom\") pod \"manila-share-share1-0\" (UID: \"79a2680f-4176-4fe5-9952-c6e74f2c57d6\") " pod="openstack/manila-share-share1-0" Jan 31 04:37:17 crc kubenswrapper[4827]: I0131 04:37:17.524521 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/79a2680f-4176-4fe5-9952-c6e74f2c57d6-scripts\") pod \"manila-share-share1-0\" (UID: \"79a2680f-4176-4fe5-9952-c6e74f2c57d6\") " pod="openstack/manila-share-share1-0" Jan 31 04:37:17 crc kubenswrapper[4827]: I0131 04:37:17.524597 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/79a2680f-4176-4fe5-9952-c6e74f2c57d6-ceph\") pod \"manila-share-share1-0\" (UID: \"79a2680f-4176-4fe5-9952-c6e74f2c57d6\") " pod="openstack/manila-share-share1-0" Jan 31 04:37:17 crc kubenswrapper[4827]: I0131 04:37:17.527533 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/79a2680f-4176-4fe5-9952-c6e74f2c57d6-config-data\") pod \"manila-share-share1-0\" (UID: \"79a2680f-4176-4fe5-9952-c6e74f2c57d6\") " pod="openstack/manila-share-share1-0" Jan 31 04:37:17 crc kubenswrapper[4827]: I0131 04:37:17.546670 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8njh5\" (UniqueName: \"kubernetes.io/projected/79a2680f-4176-4fe5-9952-c6e74f2c57d6-kube-api-access-8njh5\") pod \"manila-share-share1-0\" (UID: \"79a2680f-4176-4fe5-9952-c6e74f2c57d6\") " pod="openstack/manila-share-share1-0" Jan 31 04:37:17 crc kubenswrapper[4827]: I0131 04:37:17.657271 4827 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/manila-share-share1-0" Jan 31 04:37:18 crc kubenswrapper[4827]: I0131 04:37:18.124159 4827 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="324ece2d-b7d4-426f-8bb1-5832b52ee7bd" path="/var/lib/kubelet/pods/324ece2d-b7d4-426f-8bb1-5832b52ee7bd/volumes" Jan 31 04:37:18 crc kubenswrapper[4827]: I0131 04:37:18.255359 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/manila-share-share1-0"] Jan 31 04:37:18 crc kubenswrapper[4827]: W0131 04:37:18.261641 4827 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod79a2680f_4176_4fe5_9952_c6e74f2c57d6.slice/crio-6d180d696a4fb20ae36207edd2550806f44e11dccdd294432a9d1aacd6ed496b WatchSource:0}: Error finding container 6d180d696a4fb20ae36207edd2550806f44e11dccdd294432a9d1aacd6ed496b: Status 404 returned error can't find the container with id 6d180d696a4fb20ae36207edd2550806f44e11dccdd294432a9d1aacd6ed496b Jan 31 04:37:18 crc kubenswrapper[4827]: I0131 04:37:18.906697 4827 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/manila-scheduler-0" Jan 31 04:37:18 crc kubenswrapper[4827]: I0131 04:37:18.949825 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fcc4f548-655b-4c88-a55d-f69acb1d30f1-config-data\") pod \"fcc4f548-655b-4c88-a55d-f69acb1d30f1\" (UID: \"fcc4f548-655b-4c88-a55d-f69acb1d30f1\") " Jan 31 04:37:18 crc kubenswrapper[4827]: I0131 04:37:18.949873 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fcc4f548-655b-4c88-a55d-f69acb1d30f1-scripts\") pod \"fcc4f548-655b-4c88-a55d-f69acb1d30f1\" (UID: \"fcc4f548-655b-4c88-a55d-f69acb1d30f1\") " Jan 31 04:37:18 crc kubenswrapper[4827]: I0131 04:37:18.949991 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/fcc4f548-655b-4c88-a55d-f69acb1d30f1-etc-machine-id\") pod \"fcc4f548-655b-4c88-a55d-f69acb1d30f1\" (UID: \"fcc4f548-655b-4c88-a55d-f69acb1d30f1\") " Jan 31 04:37:18 crc kubenswrapper[4827]: I0131 04:37:18.950066 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-59gxd\" (UniqueName: \"kubernetes.io/projected/fcc4f548-655b-4c88-a55d-f69acb1d30f1-kube-api-access-59gxd\") pod \"fcc4f548-655b-4c88-a55d-f69acb1d30f1\" (UID: \"fcc4f548-655b-4c88-a55d-f69acb1d30f1\") " Jan 31 04:37:18 crc kubenswrapper[4827]: I0131 04:37:18.950083 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/fcc4f548-655b-4c88-a55d-f69acb1d30f1-config-data-custom\") pod \"fcc4f548-655b-4c88-a55d-f69acb1d30f1\" (UID: \"fcc4f548-655b-4c88-a55d-f69acb1d30f1\") " Jan 31 04:37:18 crc kubenswrapper[4827]: I0131 04:37:18.950108 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" 
(UniqueName: \"kubernetes.io/secret/fcc4f548-655b-4c88-a55d-f69acb1d30f1-combined-ca-bundle\") pod \"fcc4f548-655b-4c88-a55d-f69acb1d30f1\" (UID: \"fcc4f548-655b-4c88-a55d-f69acb1d30f1\") " Jan 31 04:37:18 crc kubenswrapper[4827]: I0131 04:37:18.950901 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/fcc4f548-655b-4c88-a55d-f69acb1d30f1-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "fcc4f548-655b-4c88-a55d-f69acb1d30f1" (UID: "fcc4f548-655b-4c88-a55d-f69acb1d30f1"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 31 04:37:18 crc kubenswrapper[4827]: I0131 04:37:18.977203 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fcc4f548-655b-4c88-a55d-f69acb1d30f1-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "fcc4f548-655b-4c88-a55d-f69acb1d30f1" (UID: "fcc4f548-655b-4c88-a55d-f69acb1d30f1"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 04:37:18 crc kubenswrapper[4827]: I0131 04:37:18.977532 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fcc4f548-655b-4c88-a55d-f69acb1d30f1-scripts" (OuterVolumeSpecName: "scripts") pod "fcc4f548-655b-4c88-a55d-f69acb1d30f1" (UID: "fcc4f548-655b-4c88-a55d-f69acb1d30f1"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 04:37:18 crc kubenswrapper[4827]: I0131 04:37:18.995987 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fcc4f548-655b-4c88-a55d-f69acb1d30f1-kube-api-access-59gxd" (OuterVolumeSpecName: "kube-api-access-59gxd") pod "fcc4f548-655b-4c88-a55d-f69acb1d30f1" (UID: "fcc4f548-655b-4c88-a55d-f69acb1d30f1"). InnerVolumeSpecName "kube-api-access-59gxd". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 04:37:19 crc kubenswrapper[4827]: I0131 04:37:19.052399 4827 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fcc4f548-655b-4c88-a55d-f69acb1d30f1-scripts\") on node \"crc\" DevicePath \"\"" Jan 31 04:37:19 crc kubenswrapper[4827]: I0131 04:37:19.052425 4827 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/fcc4f548-655b-4c88-a55d-f69acb1d30f1-etc-machine-id\") on node \"crc\" DevicePath \"\"" Jan 31 04:37:19 crc kubenswrapper[4827]: I0131 04:37:19.052436 4827 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-59gxd\" (UniqueName: \"kubernetes.io/projected/fcc4f548-655b-4c88-a55d-f69acb1d30f1-kube-api-access-59gxd\") on node \"crc\" DevicePath \"\"" Jan 31 04:37:19 crc kubenswrapper[4827]: I0131 04:37:19.052446 4827 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/fcc4f548-655b-4c88-a55d-f69acb1d30f1-config-data-custom\") on node \"crc\" DevicePath \"\"" Jan 31 04:37:19 crc kubenswrapper[4827]: I0131 04:37:19.057448 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fcc4f548-655b-4c88-a55d-f69acb1d30f1-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "fcc4f548-655b-4c88-a55d-f69acb1d30f1" (UID: "fcc4f548-655b-4c88-a55d-f69acb1d30f1"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 04:37:19 crc kubenswrapper[4827]: I0131 04:37:19.059748 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fcc4f548-655b-4c88-a55d-f69acb1d30f1-config-data" (OuterVolumeSpecName: "config-data") pod "fcc4f548-655b-4c88-a55d-f69acb1d30f1" (UID: "fcc4f548-655b-4c88-a55d-f69acb1d30f1"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 04:37:19 crc kubenswrapper[4827]: I0131 04:37:19.110234 4827 scope.go:117] "RemoveContainer" containerID="b7f195383b718c3f3a1651e752e04a9c322b656ce5c7aba0c9ad40630165c57d" Jan 31 04:37:19 crc kubenswrapper[4827]: E0131 04:37:19.110435 4827 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jxh94_openshift-machine-config-operator(e63dbb73-e1a2-4796-83c5-2a88e55566b5)\"" pod="openshift-machine-config-operator/machine-config-daemon-jxh94" podUID="e63dbb73-e1a2-4796-83c5-2a88e55566b5" Jan 31 04:37:19 crc kubenswrapper[4827]: I0131 04:37:19.154552 4827 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fcc4f548-655b-4c88-a55d-f69acb1d30f1-config-data\") on node \"crc\" DevicePath \"\"" Jan 31 04:37:19 crc kubenswrapper[4827]: I0131 04:37:19.154578 4827 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fcc4f548-655b-4c88-a55d-f69acb1d30f1-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 31 04:37:19 crc kubenswrapper[4827]: I0131 04:37:19.243010 4827 generic.go:334] "Generic (PLEG): container finished" podID="fcc4f548-655b-4c88-a55d-f69acb1d30f1" containerID="7799f664949899427d8197934905104485d0cb6b67e38e624c0440d4ddab13e7" exitCode=0 Jan 31 04:37:19 crc kubenswrapper[4827]: I0131 04:37:19.243078 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-scheduler-0" event={"ID":"fcc4f548-655b-4c88-a55d-f69acb1d30f1","Type":"ContainerDied","Data":"7799f664949899427d8197934905104485d0cb6b67e38e624c0440d4ddab13e7"} Jan 31 04:37:19 crc kubenswrapper[4827]: I0131 04:37:19.243113 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-scheduler-0" 
event={"ID":"fcc4f548-655b-4c88-a55d-f69acb1d30f1","Type":"ContainerDied","Data":"487ebd2f399821beb1773f0d40eee32eac5e6d8bca5550b7588f5418314084f4"} Jan 31 04:37:19 crc kubenswrapper[4827]: I0131 04:37:19.243131 4827 scope.go:117] "RemoveContainer" containerID="4ae5d4a39c303857274d0fffb65acd4f318f888fb7339395a29e04858052909e" Jan 31 04:37:19 crc kubenswrapper[4827]: I0131 04:37:19.243237 4827 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/manila-scheduler-0" Jan 31 04:37:19 crc kubenswrapper[4827]: I0131 04:37:19.250576 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-share-share1-0" event={"ID":"79a2680f-4176-4fe5-9952-c6e74f2c57d6","Type":"ContainerStarted","Data":"868347c5fc252e8d029d2a2b04fb24e3181ea2ba13c1b4217a83cff3bde832bf"} Jan 31 04:37:19 crc kubenswrapper[4827]: I0131 04:37:19.250618 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-share-share1-0" event={"ID":"79a2680f-4176-4fe5-9952-c6e74f2c57d6","Type":"ContainerStarted","Data":"60c580a5fd145832a46b36f47a6320e7fd4f3c7251055bf3d277cd46f4228938"} Jan 31 04:37:19 crc kubenswrapper[4827]: I0131 04:37:19.250628 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-share-share1-0" event={"ID":"79a2680f-4176-4fe5-9952-c6e74f2c57d6","Type":"ContainerStarted","Data":"6d180d696a4fb20ae36207edd2550806f44e11dccdd294432a9d1aacd6ed496b"} Jan 31 04:37:19 crc kubenswrapper[4827]: I0131 04:37:19.279308 4827 scope.go:117] "RemoveContainer" containerID="7799f664949899427d8197934905104485d0cb6b67e38e624c0440d4ddab13e7" Jan 31 04:37:19 crc kubenswrapper[4827]: I0131 04:37:19.299275 4827 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/manila-share-share1-0" podStartSLOduration=2.299247876 podStartE2EDuration="2.299247876s" podCreationTimestamp="2026-01-31 04:37:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 
+0000 UTC" observedRunningTime="2026-01-31 04:37:19.279983116 +0000 UTC m=+3031.967063615" watchObservedRunningTime="2026-01-31 04:37:19.299247876 +0000 UTC m=+3031.986328335" Jan 31 04:37:19 crc kubenswrapper[4827]: I0131 04:37:19.316833 4827 scope.go:117] "RemoveContainer" containerID="4ae5d4a39c303857274d0fffb65acd4f318f888fb7339395a29e04858052909e" Jan 31 04:37:19 crc kubenswrapper[4827]: E0131 04:37:19.318248 4827 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4ae5d4a39c303857274d0fffb65acd4f318f888fb7339395a29e04858052909e\": container with ID starting with 4ae5d4a39c303857274d0fffb65acd4f318f888fb7339395a29e04858052909e not found: ID does not exist" containerID="4ae5d4a39c303857274d0fffb65acd4f318f888fb7339395a29e04858052909e" Jan 31 04:37:19 crc kubenswrapper[4827]: I0131 04:37:19.318303 4827 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4ae5d4a39c303857274d0fffb65acd4f318f888fb7339395a29e04858052909e"} err="failed to get container status \"4ae5d4a39c303857274d0fffb65acd4f318f888fb7339395a29e04858052909e\": rpc error: code = NotFound desc = could not find container \"4ae5d4a39c303857274d0fffb65acd4f318f888fb7339395a29e04858052909e\": container with ID starting with 4ae5d4a39c303857274d0fffb65acd4f318f888fb7339395a29e04858052909e not found: ID does not exist" Jan 31 04:37:19 crc kubenswrapper[4827]: I0131 04:37:19.318322 4827 scope.go:117] "RemoveContainer" containerID="7799f664949899427d8197934905104485d0cb6b67e38e624c0440d4ddab13e7" Jan 31 04:37:19 crc kubenswrapper[4827]: E0131 04:37:19.320167 4827 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7799f664949899427d8197934905104485d0cb6b67e38e624c0440d4ddab13e7\": container with ID starting with 7799f664949899427d8197934905104485d0cb6b67e38e624c0440d4ddab13e7 not found: ID does not exist" 
containerID="7799f664949899427d8197934905104485d0cb6b67e38e624c0440d4ddab13e7" Jan 31 04:37:19 crc kubenswrapper[4827]: I0131 04:37:19.320212 4827 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7799f664949899427d8197934905104485d0cb6b67e38e624c0440d4ddab13e7"} err="failed to get container status \"7799f664949899427d8197934905104485d0cb6b67e38e624c0440d4ddab13e7\": rpc error: code = NotFound desc = could not find container \"7799f664949899427d8197934905104485d0cb6b67e38e624c0440d4ddab13e7\": container with ID starting with 7799f664949899427d8197934905104485d0cb6b67e38e624c0440d4ddab13e7 not found: ID does not exist" Jan 31 04:37:19 crc kubenswrapper[4827]: I0131 04:37:19.331106 4827 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/manila-scheduler-0"] Jan 31 04:37:19 crc kubenswrapper[4827]: I0131 04:37:19.344399 4827 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/manila-scheduler-0"] Jan 31 04:37:19 crc kubenswrapper[4827]: I0131 04:37:19.359428 4827 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/manila-scheduler-0"] Jan 31 04:37:19 crc kubenswrapper[4827]: E0131 04:37:19.359927 4827 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fcc4f548-655b-4c88-a55d-f69acb1d30f1" containerName="manila-scheduler" Jan 31 04:37:19 crc kubenswrapper[4827]: I0131 04:37:19.359944 4827 state_mem.go:107] "Deleted CPUSet assignment" podUID="fcc4f548-655b-4c88-a55d-f69acb1d30f1" containerName="manila-scheduler" Jan 31 04:37:19 crc kubenswrapper[4827]: E0131 04:37:19.359969 4827 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fcc4f548-655b-4c88-a55d-f69acb1d30f1" containerName="probe" Jan 31 04:37:19 crc kubenswrapper[4827]: I0131 04:37:19.359976 4827 state_mem.go:107] "Deleted CPUSet assignment" podUID="fcc4f548-655b-4c88-a55d-f69acb1d30f1" containerName="probe" Jan 31 04:37:19 crc kubenswrapper[4827]: I0131 04:37:19.360218 4827 memory_manager.go:354] 
"RemoveStaleState removing state" podUID="fcc4f548-655b-4c88-a55d-f69acb1d30f1" containerName="probe" Jan 31 04:37:19 crc kubenswrapper[4827]: I0131 04:37:19.360251 4827 memory_manager.go:354] "RemoveStaleState removing state" podUID="fcc4f548-655b-4c88-a55d-f69acb1d30f1" containerName="manila-scheduler" Jan 31 04:37:19 crc kubenswrapper[4827]: I0131 04:37:19.361493 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/manila-scheduler-0" Jan 31 04:37:19 crc kubenswrapper[4827]: I0131 04:37:19.363170 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"manila-scheduler-config-data" Jan 31 04:37:19 crc kubenswrapper[4827]: I0131 04:37:19.377048 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/manila-scheduler-0"] Jan 31 04:37:19 crc kubenswrapper[4827]: I0131 04:37:19.461256 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mv679\" (UniqueName: \"kubernetes.io/projected/5cdfe2e4-b566-4369-9354-42494e23eb46-kube-api-access-mv679\") pod \"manila-scheduler-0\" (UID: \"5cdfe2e4-b566-4369-9354-42494e23eb46\") " pod="openstack/manila-scheduler-0" Jan 31 04:37:19 crc kubenswrapper[4827]: I0131 04:37:19.461316 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5cdfe2e4-b566-4369-9354-42494e23eb46-combined-ca-bundle\") pod \"manila-scheduler-0\" (UID: \"5cdfe2e4-b566-4369-9354-42494e23eb46\") " pod="openstack/manila-scheduler-0" Jan 31 04:37:19 crc kubenswrapper[4827]: I0131 04:37:19.461338 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5cdfe2e4-b566-4369-9354-42494e23eb46-scripts\") pod \"manila-scheduler-0\" (UID: \"5cdfe2e4-b566-4369-9354-42494e23eb46\") " pod="openstack/manila-scheduler-0" Jan 31 04:37:19 crc 
kubenswrapper[4827]: I0131 04:37:19.461479 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5cdfe2e4-b566-4369-9354-42494e23eb46-config-data\") pod \"manila-scheduler-0\" (UID: \"5cdfe2e4-b566-4369-9354-42494e23eb46\") " pod="openstack/manila-scheduler-0" Jan 31 04:37:19 crc kubenswrapper[4827]: I0131 04:37:19.461702 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/5cdfe2e4-b566-4369-9354-42494e23eb46-etc-machine-id\") pod \"manila-scheduler-0\" (UID: \"5cdfe2e4-b566-4369-9354-42494e23eb46\") " pod="openstack/manila-scheduler-0" Jan 31 04:37:19 crc kubenswrapper[4827]: I0131 04:37:19.461932 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/5cdfe2e4-b566-4369-9354-42494e23eb46-config-data-custom\") pod \"manila-scheduler-0\" (UID: \"5cdfe2e4-b566-4369-9354-42494e23eb46\") " pod="openstack/manila-scheduler-0" Jan 31 04:37:19 crc kubenswrapper[4827]: I0131 04:37:19.563477 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mv679\" (UniqueName: \"kubernetes.io/projected/5cdfe2e4-b566-4369-9354-42494e23eb46-kube-api-access-mv679\") pod \"manila-scheduler-0\" (UID: \"5cdfe2e4-b566-4369-9354-42494e23eb46\") " pod="openstack/manila-scheduler-0" Jan 31 04:37:19 crc kubenswrapper[4827]: I0131 04:37:19.563549 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5cdfe2e4-b566-4369-9354-42494e23eb46-combined-ca-bundle\") pod \"manila-scheduler-0\" (UID: \"5cdfe2e4-b566-4369-9354-42494e23eb46\") " pod="openstack/manila-scheduler-0" Jan 31 04:37:19 crc kubenswrapper[4827]: I0131 04:37:19.563570 4827 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5cdfe2e4-b566-4369-9354-42494e23eb46-scripts\") pod \"manila-scheduler-0\" (UID: \"5cdfe2e4-b566-4369-9354-42494e23eb46\") " pod="openstack/manila-scheduler-0" Jan 31 04:37:19 crc kubenswrapper[4827]: I0131 04:37:19.563610 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5cdfe2e4-b566-4369-9354-42494e23eb46-config-data\") pod \"manila-scheduler-0\" (UID: \"5cdfe2e4-b566-4369-9354-42494e23eb46\") " pod="openstack/manila-scheduler-0" Jan 31 04:37:19 crc kubenswrapper[4827]: I0131 04:37:19.563642 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/5cdfe2e4-b566-4369-9354-42494e23eb46-etc-machine-id\") pod \"manila-scheduler-0\" (UID: \"5cdfe2e4-b566-4369-9354-42494e23eb46\") " pod="openstack/manila-scheduler-0" Jan 31 04:37:19 crc kubenswrapper[4827]: I0131 04:37:19.563681 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/5cdfe2e4-b566-4369-9354-42494e23eb46-config-data-custom\") pod \"manila-scheduler-0\" (UID: \"5cdfe2e4-b566-4369-9354-42494e23eb46\") " pod="openstack/manila-scheduler-0" Jan 31 04:37:19 crc kubenswrapper[4827]: I0131 04:37:19.563743 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/5cdfe2e4-b566-4369-9354-42494e23eb46-etc-machine-id\") pod \"manila-scheduler-0\" (UID: \"5cdfe2e4-b566-4369-9354-42494e23eb46\") " pod="openstack/manila-scheduler-0" Jan 31 04:37:19 crc kubenswrapper[4827]: I0131 04:37:19.567798 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5cdfe2e4-b566-4369-9354-42494e23eb46-combined-ca-bundle\") pod 
\"manila-scheduler-0\" (UID: \"5cdfe2e4-b566-4369-9354-42494e23eb46\") " pod="openstack/manila-scheduler-0" Jan 31 04:37:19 crc kubenswrapper[4827]: I0131 04:37:19.567842 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5cdfe2e4-b566-4369-9354-42494e23eb46-config-data\") pod \"manila-scheduler-0\" (UID: \"5cdfe2e4-b566-4369-9354-42494e23eb46\") " pod="openstack/manila-scheduler-0" Jan 31 04:37:19 crc kubenswrapper[4827]: I0131 04:37:19.569352 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5cdfe2e4-b566-4369-9354-42494e23eb46-scripts\") pod \"manila-scheduler-0\" (UID: \"5cdfe2e4-b566-4369-9354-42494e23eb46\") " pod="openstack/manila-scheduler-0" Jan 31 04:37:19 crc kubenswrapper[4827]: I0131 04:37:19.569566 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/5cdfe2e4-b566-4369-9354-42494e23eb46-config-data-custom\") pod \"manila-scheduler-0\" (UID: \"5cdfe2e4-b566-4369-9354-42494e23eb46\") " pod="openstack/manila-scheduler-0" Jan 31 04:37:19 crc kubenswrapper[4827]: I0131 04:37:19.580306 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mv679\" (UniqueName: \"kubernetes.io/projected/5cdfe2e4-b566-4369-9354-42494e23eb46-kube-api-access-mv679\") pod \"manila-scheduler-0\" (UID: \"5cdfe2e4-b566-4369-9354-42494e23eb46\") " pod="openstack/manila-scheduler-0" Jan 31 04:37:19 crc kubenswrapper[4827]: I0131 04:37:19.675502 4827 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/manila-scheduler-0" Jan 31 04:37:20 crc kubenswrapper[4827]: I0131 04:37:20.119355 4827 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fcc4f548-655b-4c88-a55d-f69acb1d30f1" path="/var/lib/kubelet/pods/fcc4f548-655b-4c88-a55d-f69acb1d30f1/volumes" Jan 31 04:37:20 crc kubenswrapper[4827]: I0131 04:37:20.301090 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/manila-scheduler-0"] Jan 31 04:37:20 crc kubenswrapper[4827]: W0131 04:37:20.304767 4827 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5cdfe2e4_b566_4369_9354_42494e23eb46.slice/crio-371e07a2b6058e71cff03348199d150baa101bab725998f91bf2676a084ade98 WatchSource:0}: Error finding container 371e07a2b6058e71cff03348199d150baa101bab725998f91bf2676a084ade98: Status 404 returned error can't find the container with id 371e07a2b6058e71cff03348199d150baa101bab725998f91bf2676a084ade98 Jan 31 04:37:20 crc kubenswrapper[4827]: I0131 04:37:20.786651 4827 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/manila-api-0" Jan 31 04:37:21 crc kubenswrapper[4827]: I0131 04:37:21.270956 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-scheduler-0" event={"ID":"5cdfe2e4-b566-4369-9354-42494e23eb46","Type":"ContainerStarted","Data":"9c9fed00b77e33ea39213c1fdadd7ca6c123b471e3706552a6eef681bb4ba809"} Jan 31 04:37:21 crc kubenswrapper[4827]: I0131 04:37:21.271324 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-scheduler-0" event={"ID":"5cdfe2e4-b566-4369-9354-42494e23eb46","Type":"ContainerStarted","Data":"cd89f6a459fc65ceaab521fb567fbce126fa1d583ccee77d171d5b542b2d09c7"} Jan 31 04:37:21 crc kubenswrapper[4827]: I0131 04:37:21.271343 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-scheduler-0" 
event={"ID":"5cdfe2e4-b566-4369-9354-42494e23eb46","Type":"ContainerStarted","Data":"371e07a2b6058e71cff03348199d150baa101bab725998f91bf2676a084ade98"} Jan 31 04:37:21 crc kubenswrapper[4827]: I0131 04:37:21.296588 4827 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/manila-scheduler-0" podStartSLOduration=2.296570309 podStartE2EDuration="2.296570309s" podCreationTimestamp="2026-01-31 04:37:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 04:37:21.291471713 +0000 UTC m=+3033.978552152" watchObservedRunningTime="2026-01-31 04:37:21.296570309 +0000 UTC m=+3033.983650758" Jan 31 04:37:22 crc kubenswrapper[4827]: I0131 04:37:22.122822 4827 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/horizon-cdd9cb94b-xgkxs" podUID="e277e6d3-f889-425a-abd6-3344f860bfd9" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.247:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.247:8443: connect: connection refused" Jan 31 04:37:27 crc kubenswrapper[4827]: I0131 04:37:27.666349 4827 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/manila-share-share1-0" Jan 31 04:37:29 crc kubenswrapper[4827]: I0131 04:37:29.675562 4827 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/manila-scheduler-0" Jan 31 04:37:32 crc kubenswrapper[4827]: I0131 04:37:32.123652 4827 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/horizon-cdd9cb94b-xgkxs" podUID="e277e6d3-f889-425a-abd6-3344f860bfd9" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.247:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.247:8443: connect: connection refused" Jan 31 04:37:32 crc kubenswrapper[4827]: I0131 04:37:32.132989 4827 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openstack/horizon-cdd9cb94b-xgkxs" Jan 31 04:37:33 crc kubenswrapper[4827]: I0131 04:37:33.111291 4827 scope.go:117] "RemoveContainer" containerID="b7f195383b718c3f3a1651e752e04a9c322b656ce5c7aba0c9ad40630165c57d" Jan 31 04:37:33 crc kubenswrapper[4827]: E0131 04:37:33.111803 4827 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jxh94_openshift-machine-config-operator(e63dbb73-e1a2-4796-83c5-2a88e55566b5)\"" pod="openshift-machine-config-operator/machine-config-daemon-jxh94" podUID="e63dbb73-e1a2-4796-83c5-2a88e55566b5" Jan 31 04:37:33 crc kubenswrapper[4827]: I0131 04:37:33.411732 4827 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ceilometer-0" podUID="d7f725b5-6af1-406a-8666-bdc720981006" containerName="proxy-httpd" probeResult="failure" output="HTTP probe failed with statuscode: 503" Jan 31 04:37:37 crc kubenswrapper[4827]: I0131 04:37:37.254644 4827 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-cdd9cb94b-xgkxs" Jan 31 04:37:37 crc kubenswrapper[4827]: I0131 04:37:37.368596 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/e277e6d3-f889-425a-abd6-3344f860bfd9-horizon-secret-key\") pod \"e277e6d3-f889-425a-abd6-3344f860bfd9\" (UID: \"e277e6d3-f889-425a-abd6-3344f860bfd9\") " Jan 31 04:37:37 crc kubenswrapper[4827]: I0131 04:37:37.368727 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e277e6d3-f889-425a-abd6-3344f860bfd9-logs\") pod \"e277e6d3-f889-425a-abd6-3344f860bfd9\" (UID: \"e277e6d3-f889-425a-abd6-3344f860bfd9\") " Jan 31 04:37:37 crc kubenswrapper[4827]: I0131 04:37:37.368827 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/e277e6d3-f889-425a-abd6-3344f860bfd9-config-data\") pod \"e277e6d3-f889-425a-abd6-3344f860bfd9\" (UID: \"e277e6d3-f889-425a-abd6-3344f860bfd9\") " Jan 31 04:37:37 crc kubenswrapper[4827]: I0131 04:37:37.368934 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-845z9\" (UniqueName: \"kubernetes.io/projected/e277e6d3-f889-425a-abd6-3344f860bfd9-kube-api-access-845z9\") pod \"e277e6d3-f889-425a-abd6-3344f860bfd9\" (UID: \"e277e6d3-f889-425a-abd6-3344f860bfd9\") " Jan 31 04:37:37 crc kubenswrapper[4827]: I0131 04:37:37.368978 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/e277e6d3-f889-425a-abd6-3344f860bfd9-horizon-tls-certs\") pod \"e277e6d3-f889-425a-abd6-3344f860bfd9\" (UID: \"e277e6d3-f889-425a-abd6-3344f860bfd9\") " Jan 31 04:37:37 crc kubenswrapper[4827]: I0131 04:37:37.369027 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" 
(UniqueName: \"kubernetes.io/secret/e277e6d3-f889-425a-abd6-3344f860bfd9-combined-ca-bundle\") pod \"e277e6d3-f889-425a-abd6-3344f860bfd9\" (UID: \"e277e6d3-f889-425a-abd6-3344f860bfd9\") " Jan 31 04:37:37 crc kubenswrapper[4827]: I0131 04:37:37.369114 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/e277e6d3-f889-425a-abd6-3344f860bfd9-scripts\") pod \"e277e6d3-f889-425a-abd6-3344f860bfd9\" (UID: \"e277e6d3-f889-425a-abd6-3344f860bfd9\") " Jan 31 04:37:37 crc kubenswrapper[4827]: I0131 04:37:37.369848 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e277e6d3-f889-425a-abd6-3344f860bfd9-logs" (OuterVolumeSpecName: "logs") pod "e277e6d3-f889-425a-abd6-3344f860bfd9" (UID: "e277e6d3-f889-425a-abd6-3344f860bfd9"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 04:37:37 crc kubenswrapper[4827]: I0131 04:37:37.374628 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e277e6d3-f889-425a-abd6-3344f860bfd9-kube-api-access-845z9" (OuterVolumeSpecName: "kube-api-access-845z9") pod "e277e6d3-f889-425a-abd6-3344f860bfd9" (UID: "e277e6d3-f889-425a-abd6-3344f860bfd9"). InnerVolumeSpecName "kube-api-access-845z9". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 04:37:37 crc kubenswrapper[4827]: I0131 04:37:37.374832 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e277e6d3-f889-425a-abd6-3344f860bfd9-horizon-secret-key" (OuterVolumeSpecName: "horizon-secret-key") pod "e277e6d3-f889-425a-abd6-3344f860bfd9" (UID: "e277e6d3-f889-425a-abd6-3344f860bfd9"). InnerVolumeSpecName "horizon-secret-key". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 04:37:37 crc kubenswrapper[4827]: I0131 04:37:37.397356 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e277e6d3-f889-425a-abd6-3344f860bfd9-scripts" (OuterVolumeSpecName: "scripts") pod "e277e6d3-f889-425a-abd6-3344f860bfd9" (UID: "e277e6d3-f889-425a-abd6-3344f860bfd9"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 04:37:37 crc kubenswrapper[4827]: I0131 04:37:37.400747 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e277e6d3-f889-425a-abd6-3344f860bfd9-config-data" (OuterVolumeSpecName: "config-data") pod "e277e6d3-f889-425a-abd6-3344f860bfd9" (UID: "e277e6d3-f889-425a-abd6-3344f860bfd9"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 04:37:37 crc kubenswrapper[4827]: I0131 04:37:37.401076 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e277e6d3-f889-425a-abd6-3344f860bfd9-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "e277e6d3-f889-425a-abd6-3344f860bfd9" (UID: "e277e6d3-f889-425a-abd6-3344f860bfd9"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 04:37:37 crc kubenswrapper[4827]: I0131 04:37:37.423032 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e277e6d3-f889-425a-abd6-3344f860bfd9-horizon-tls-certs" (OuterVolumeSpecName: "horizon-tls-certs") pod "e277e6d3-f889-425a-abd6-3344f860bfd9" (UID: "e277e6d3-f889-425a-abd6-3344f860bfd9"). InnerVolumeSpecName "horizon-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 04:37:37 crc kubenswrapper[4827]: I0131 04:37:37.471293 4827 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-845z9\" (UniqueName: \"kubernetes.io/projected/e277e6d3-f889-425a-abd6-3344f860bfd9-kube-api-access-845z9\") on node \"crc\" DevicePath \"\"" Jan 31 04:37:37 crc kubenswrapper[4827]: I0131 04:37:37.471325 4827 reconciler_common.go:293] "Volume detached for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/e277e6d3-f889-425a-abd6-3344f860bfd9-horizon-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 31 04:37:37 crc kubenswrapper[4827]: I0131 04:37:37.471337 4827 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e277e6d3-f889-425a-abd6-3344f860bfd9-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 31 04:37:37 crc kubenswrapper[4827]: I0131 04:37:37.471350 4827 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/e277e6d3-f889-425a-abd6-3344f860bfd9-scripts\") on node \"crc\" DevicePath \"\"" Jan 31 04:37:37 crc kubenswrapper[4827]: I0131 04:37:37.471362 4827 reconciler_common.go:293] "Volume detached for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/e277e6d3-f889-425a-abd6-3344f860bfd9-horizon-secret-key\") on node \"crc\" DevicePath \"\"" Jan 31 04:37:37 crc kubenswrapper[4827]: I0131 04:37:37.471374 4827 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e277e6d3-f889-425a-abd6-3344f860bfd9-logs\") on node \"crc\" DevicePath \"\"" Jan 31 04:37:37 crc kubenswrapper[4827]: I0131 04:37:37.471384 4827 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/e277e6d3-f889-425a-abd6-3344f860bfd9-config-data\") on node \"crc\" DevicePath \"\"" Jan 31 04:37:37 crc kubenswrapper[4827]: I0131 04:37:37.504762 4827 generic.go:334] 
"Generic (PLEG): container finished" podID="e277e6d3-f889-425a-abd6-3344f860bfd9" containerID="fa2ea3446ece2961ee86c778ecdec6b7d57be1cf1c402d004e9a43d4bbd78377" exitCode=137 Jan 31 04:37:37 crc kubenswrapper[4827]: I0131 04:37:37.505001 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-cdd9cb94b-xgkxs" event={"ID":"e277e6d3-f889-425a-abd6-3344f860bfd9","Type":"ContainerDied","Data":"fa2ea3446ece2961ee86c778ecdec6b7d57be1cf1c402d004e9a43d4bbd78377"} Jan 31 04:37:37 crc kubenswrapper[4827]: I0131 04:37:37.505100 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-cdd9cb94b-xgkxs" event={"ID":"e277e6d3-f889-425a-abd6-3344f860bfd9","Type":"ContainerDied","Data":"1a7c8377c57db3e9e78ec3820e800e7c81894e798171b227f8dbfce901970d74"} Jan 31 04:37:37 crc kubenswrapper[4827]: I0131 04:37:37.505182 4827 scope.go:117] "RemoveContainer" containerID="5bc769f4481cef184f8dca7f6178fc8af5b40a6a985196e85ae05d7b98c94b1b" Jan 31 04:37:37 crc kubenswrapper[4827]: I0131 04:37:37.505375 4827 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-cdd9cb94b-xgkxs" Jan 31 04:37:37 crc kubenswrapper[4827]: I0131 04:37:37.548255 4827 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-cdd9cb94b-xgkxs"] Jan 31 04:37:37 crc kubenswrapper[4827]: I0131 04:37:37.557028 4827 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/horizon-cdd9cb94b-xgkxs"] Jan 31 04:37:37 crc kubenswrapper[4827]: I0131 04:37:37.671166 4827 scope.go:117] "RemoveContainer" containerID="fa2ea3446ece2961ee86c778ecdec6b7d57be1cf1c402d004e9a43d4bbd78377" Jan 31 04:37:37 crc kubenswrapper[4827]: I0131 04:37:37.688431 4827 scope.go:117] "RemoveContainer" containerID="5bc769f4481cef184f8dca7f6178fc8af5b40a6a985196e85ae05d7b98c94b1b" Jan 31 04:37:37 crc kubenswrapper[4827]: E0131 04:37:37.688825 4827 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5bc769f4481cef184f8dca7f6178fc8af5b40a6a985196e85ae05d7b98c94b1b\": container with ID starting with 5bc769f4481cef184f8dca7f6178fc8af5b40a6a985196e85ae05d7b98c94b1b not found: ID does not exist" containerID="5bc769f4481cef184f8dca7f6178fc8af5b40a6a985196e85ae05d7b98c94b1b" Jan 31 04:37:37 crc kubenswrapper[4827]: I0131 04:37:37.688867 4827 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5bc769f4481cef184f8dca7f6178fc8af5b40a6a985196e85ae05d7b98c94b1b"} err="failed to get container status \"5bc769f4481cef184f8dca7f6178fc8af5b40a6a985196e85ae05d7b98c94b1b\": rpc error: code = NotFound desc = could not find container \"5bc769f4481cef184f8dca7f6178fc8af5b40a6a985196e85ae05d7b98c94b1b\": container with ID starting with 5bc769f4481cef184f8dca7f6178fc8af5b40a6a985196e85ae05d7b98c94b1b not found: ID does not exist" Jan 31 04:37:37 crc kubenswrapper[4827]: I0131 04:37:37.688906 4827 scope.go:117] "RemoveContainer" containerID="fa2ea3446ece2961ee86c778ecdec6b7d57be1cf1c402d004e9a43d4bbd78377" Jan 31 04:37:37 crc 
kubenswrapper[4827]: E0131 04:37:37.689266 4827 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fa2ea3446ece2961ee86c778ecdec6b7d57be1cf1c402d004e9a43d4bbd78377\": container with ID starting with fa2ea3446ece2961ee86c778ecdec6b7d57be1cf1c402d004e9a43d4bbd78377 not found: ID does not exist" containerID="fa2ea3446ece2961ee86c778ecdec6b7d57be1cf1c402d004e9a43d4bbd78377" Jan 31 04:37:37 crc kubenswrapper[4827]: I0131 04:37:37.689295 4827 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fa2ea3446ece2961ee86c778ecdec6b7d57be1cf1c402d004e9a43d4bbd78377"} err="failed to get container status \"fa2ea3446ece2961ee86c778ecdec6b7d57be1cf1c402d004e9a43d4bbd78377\": rpc error: code = NotFound desc = could not find container \"fa2ea3446ece2961ee86c778ecdec6b7d57be1cf1c402d004e9a43d4bbd78377\": container with ID starting with fa2ea3446ece2961ee86c778ecdec6b7d57be1cf1c402d004e9a43d4bbd78377 not found: ID does not exist" Jan 31 04:37:38 crc kubenswrapper[4827]: I0131 04:37:38.122875 4827 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e277e6d3-f889-425a-abd6-3344f860bfd9" path="/var/lib/kubelet/pods/e277e6d3-f889-425a-abd6-3344f860bfd9/volumes" Jan 31 04:37:38 crc kubenswrapper[4827]: I0131 04:37:38.998316 4827 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/manila-share-share1-0" Jan 31 04:37:39 crc kubenswrapper[4827]: I0131 04:37:39.529079 4827 generic.go:334] "Generic (PLEG): container finished" podID="d7f725b5-6af1-406a-8666-bdc720981006" containerID="43f9dbf506ce0895c32ae489bc2253e84bc9b42d1a9b5b0bf86a3bce147b8c42" exitCode=137 Jan 31 04:37:39 crc kubenswrapper[4827]: I0131 04:37:39.529116 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"d7f725b5-6af1-406a-8666-bdc720981006","Type":"ContainerDied","Data":"43f9dbf506ce0895c32ae489bc2253e84bc9b42d1a9b5b0bf86a3bce147b8c42"} Jan 31 04:37:39 crc kubenswrapper[4827]: I0131 04:37:39.529477 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d7f725b5-6af1-406a-8666-bdc720981006","Type":"ContainerDied","Data":"1716fa464b1431bd0f56bf5d69f65de2f2909a500067b367ed2221b93e20a0f1"} Jan 31 04:37:39 crc kubenswrapper[4827]: I0131 04:37:39.529520 4827 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1716fa464b1431bd0f56bf5d69f65de2f2909a500067b367ed2221b93e20a0f1" Jan 31 04:37:39 crc kubenswrapper[4827]: I0131 04:37:39.570227 4827 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 31 04:37:39 crc kubenswrapper[4827]: I0131 04:37:39.604978 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d7f725b5-6af1-406a-8666-bdc720981006-combined-ca-bundle\") pod \"d7f725b5-6af1-406a-8666-bdc720981006\" (UID: \"d7f725b5-6af1-406a-8666-bdc720981006\") " Jan 31 04:37:39 crc kubenswrapper[4827]: I0131 04:37:39.605217 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/d7f725b5-6af1-406a-8666-bdc720981006-ceilometer-tls-certs\") pod \"d7f725b5-6af1-406a-8666-bdc720981006\" (UID: \"d7f725b5-6af1-406a-8666-bdc720981006\") " Jan 31 04:37:39 crc kubenswrapper[4827]: I0131 04:37:39.605339 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d7f725b5-6af1-406a-8666-bdc720981006-log-httpd\") pod \"d7f725b5-6af1-406a-8666-bdc720981006\" (UID: \"d7f725b5-6af1-406a-8666-bdc720981006\") " Jan 31 04:37:39 crc kubenswrapper[4827]: I0131 04:37:39.605479 4827 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d7f725b5-6af1-406a-8666-bdc720981006-run-httpd\") pod \"d7f725b5-6af1-406a-8666-bdc720981006\" (UID: \"d7f725b5-6af1-406a-8666-bdc720981006\") " Jan 31 04:37:39 crc kubenswrapper[4827]: I0131 04:37:39.605716 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d7f725b5-6af1-406a-8666-bdc720981006-scripts\") pod \"d7f725b5-6af1-406a-8666-bdc720981006\" (UID: \"d7f725b5-6af1-406a-8666-bdc720981006\") " Jan 31 04:37:39 crc kubenswrapper[4827]: I0131 04:37:39.605806 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7cvfd\" (UniqueName: \"kubernetes.io/projected/d7f725b5-6af1-406a-8666-bdc720981006-kube-api-access-7cvfd\") pod \"d7f725b5-6af1-406a-8666-bdc720981006\" (UID: \"d7f725b5-6af1-406a-8666-bdc720981006\") " Jan 31 04:37:39 crc kubenswrapper[4827]: I0131 04:37:39.605930 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/d7f725b5-6af1-406a-8666-bdc720981006-sg-core-conf-yaml\") pod \"d7f725b5-6af1-406a-8666-bdc720981006\" (UID: \"d7f725b5-6af1-406a-8666-bdc720981006\") " Jan 31 04:37:39 crc kubenswrapper[4827]: I0131 04:37:39.606055 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d7f725b5-6af1-406a-8666-bdc720981006-config-data\") pod \"d7f725b5-6af1-406a-8666-bdc720981006\" (UID: \"d7f725b5-6af1-406a-8666-bdc720981006\") " Jan 31 04:37:39 crc kubenswrapper[4827]: I0131 04:37:39.606139 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d7f725b5-6af1-406a-8666-bdc720981006-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "d7f725b5-6af1-406a-8666-bdc720981006" (UID: "d7f725b5-6af1-406a-8666-bdc720981006"). 
InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 04:37:39 crc kubenswrapper[4827]: I0131 04:37:39.606728 4827 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d7f725b5-6af1-406a-8666-bdc720981006-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 31 04:37:39 crc kubenswrapper[4827]: I0131 04:37:39.608800 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d7f725b5-6af1-406a-8666-bdc720981006-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "d7f725b5-6af1-406a-8666-bdc720981006" (UID: "d7f725b5-6af1-406a-8666-bdc720981006"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 04:37:39 crc kubenswrapper[4827]: I0131 04:37:39.614199 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d7f725b5-6af1-406a-8666-bdc720981006-scripts" (OuterVolumeSpecName: "scripts") pod "d7f725b5-6af1-406a-8666-bdc720981006" (UID: "d7f725b5-6af1-406a-8666-bdc720981006"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 04:37:39 crc kubenswrapper[4827]: I0131 04:37:39.614352 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d7f725b5-6af1-406a-8666-bdc720981006-kube-api-access-7cvfd" (OuterVolumeSpecName: "kube-api-access-7cvfd") pod "d7f725b5-6af1-406a-8666-bdc720981006" (UID: "d7f725b5-6af1-406a-8666-bdc720981006"). InnerVolumeSpecName "kube-api-access-7cvfd". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 04:37:39 crc kubenswrapper[4827]: I0131 04:37:39.654460 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d7f725b5-6af1-406a-8666-bdc720981006-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "d7f725b5-6af1-406a-8666-bdc720981006" (UID: "d7f725b5-6af1-406a-8666-bdc720981006"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 04:37:39 crc kubenswrapper[4827]: I0131 04:37:39.700271 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d7f725b5-6af1-406a-8666-bdc720981006-ceilometer-tls-certs" (OuterVolumeSpecName: "ceilometer-tls-certs") pod "d7f725b5-6af1-406a-8666-bdc720981006" (UID: "d7f725b5-6af1-406a-8666-bdc720981006"). InnerVolumeSpecName "ceilometer-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 04:37:39 crc kubenswrapper[4827]: I0131 04:37:39.708308 4827 reconciler_common.go:293] "Volume detached for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/d7f725b5-6af1-406a-8666-bdc720981006-ceilometer-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 31 04:37:39 crc kubenswrapper[4827]: I0131 04:37:39.708341 4827 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d7f725b5-6af1-406a-8666-bdc720981006-run-httpd\") on node \"crc\" DevicePath \"\"" Jan 31 04:37:39 crc kubenswrapper[4827]: I0131 04:37:39.708354 4827 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d7f725b5-6af1-406a-8666-bdc720981006-scripts\") on node \"crc\" DevicePath \"\"" Jan 31 04:37:39 crc kubenswrapper[4827]: I0131 04:37:39.708366 4827 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7cvfd\" (UniqueName: \"kubernetes.io/projected/d7f725b5-6af1-406a-8666-bdc720981006-kube-api-access-7cvfd\") on node 
\"crc\" DevicePath \"\"" Jan 31 04:37:39 crc kubenswrapper[4827]: I0131 04:37:39.708379 4827 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/d7f725b5-6af1-406a-8666-bdc720981006-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Jan 31 04:37:39 crc kubenswrapper[4827]: I0131 04:37:39.734586 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d7f725b5-6af1-406a-8666-bdc720981006-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "d7f725b5-6af1-406a-8666-bdc720981006" (UID: "d7f725b5-6af1-406a-8666-bdc720981006"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 04:37:39 crc kubenswrapper[4827]: I0131 04:37:39.744285 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d7f725b5-6af1-406a-8666-bdc720981006-config-data" (OuterVolumeSpecName: "config-data") pod "d7f725b5-6af1-406a-8666-bdc720981006" (UID: "d7f725b5-6af1-406a-8666-bdc720981006"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 04:37:39 crc kubenswrapper[4827]: I0131 04:37:39.809638 4827 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d7f725b5-6af1-406a-8666-bdc720981006-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 31 04:37:39 crc kubenswrapper[4827]: I0131 04:37:39.809672 4827 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d7f725b5-6af1-406a-8666-bdc720981006-config-data\") on node \"crc\" DevicePath \"\"" Jan 31 04:37:40 crc kubenswrapper[4827]: I0131 04:37:40.544212 4827 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 31 04:37:40 crc kubenswrapper[4827]: I0131 04:37:40.598325 4827 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 31 04:37:40 crc kubenswrapper[4827]: I0131 04:37:40.615046 4827 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Jan 31 04:37:40 crc kubenswrapper[4827]: I0131 04:37:40.668338 4827 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Jan 31 04:37:40 crc kubenswrapper[4827]: E0131 04:37:40.673840 4827 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d7f725b5-6af1-406a-8666-bdc720981006" containerName="ceilometer-notification-agent" Jan 31 04:37:40 crc kubenswrapper[4827]: I0131 04:37:40.673958 4827 state_mem.go:107] "Deleted CPUSet assignment" podUID="d7f725b5-6af1-406a-8666-bdc720981006" containerName="ceilometer-notification-agent" Jan 31 04:37:40 crc kubenswrapper[4827]: E0131 04:37:40.674058 4827 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d7f725b5-6af1-406a-8666-bdc720981006" containerName="ceilometer-central-agent" Jan 31 04:37:40 crc kubenswrapper[4827]: I0131 04:37:40.674127 4827 state_mem.go:107] "Deleted CPUSet assignment" podUID="d7f725b5-6af1-406a-8666-bdc720981006" containerName="ceilometer-central-agent" Jan 31 04:37:40 crc kubenswrapper[4827]: E0131 04:37:40.674211 4827 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d7f725b5-6af1-406a-8666-bdc720981006" containerName="sg-core" Jan 31 04:37:40 crc kubenswrapper[4827]: I0131 04:37:40.674279 4827 state_mem.go:107] "Deleted CPUSet assignment" podUID="d7f725b5-6af1-406a-8666-bdc720981006" containerName="sg-core" Jan 31 04:37:40 crc kubenswrapper[4827]: E0131 04:37:40.674367 4827 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e277e6d3-f889-425a-abd6-3344f860bfd9" containerName="horizon-log" Jan 31 04:37:40 crc kubenswrapper[4827]: I0131 04:37:40.674431 4827 
state_mem.go:107] "Deleted CPUSet assignment" podUID="e277e6d3-f889-425a-abd6-3344f860bfd9" containerName="horizon-log" Jan 31 04:37:40 crc kubenswrapper[4827]: E0131 04:37:40.674513 4827 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e277e6d3-f889-425a-abd6-3344f860bfd9" containerName="horizon" Jan 31 04:37:40 crc kubenswrapper[4827]: I0131 04:37:40.674578 4827 state_mem.go:107] "Deleted CPUSet assignment" podUID="e277e6d3-f889-425a-abd6-3344f860bfd9" containerName="horizon" Jan 31 04:37:40 crc kubenswrapper[4827]: E0131 04:37:40.674652 4827 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d7f725b5-6af1-406a-8666-bdc720981006" containerName="proxy-httpd" Jan 31 04:37:40 crc kubenswrapper[4827]: I0131 04:37:40.674717 4827 state_mem.go:107] "Deleted CPUSet assignment" podUID="d7f725b5-6af1-406a-8666-bdc720981006" containerName="proxy-httpd" Jan 31 04:37:40 crc kubenswrapper[4827]: I0131 04:37:40.675066 4827 memory_manager.go:354] "RemoveStaleState removing state" podUID="e277e6d3-f889-425a-abd6-3344f860bfd9" containerName="horizon" Jan 31 04:37:40 crc kubenswrapper[4827]: I0131 04:37:40.675175 4827 memory_manager.go:354] "RemoveStaleState removing state" podUID="d7f725b5-6af1-406a-8666-bdc720981006" containerName="ceilometer-central-agent" Jan 31 04:37:40 crc kubenswrapper[4827]: I0131 04:37:40.675268 4827 memory_manager.go:354] "RemoveStaleState removing state" podUID="d7f725b5-6af1-406a-8666-bdc720981006" containerName="ceilometer-notification-agent" Jan 31 04:37:40 crc kubenswrapper[4827]: I0131 04:37:40.675345 4827 memory_manager.go:354] "RemoveStaleState removing state" podUID="d7f725b5-6af1-406a-8666-bdc720981006" containerName="sg-core" Jan 31 04:37:40 crc kubenswrapper[4827]: I0131 04:37:40.675413 4827 memory_manager.go:354] "RemoveStaleState removing state" podUID="d7f725b5-6af1-406a-8666-bdc720981006" containerName="proxy-httpd" Jan 31 04:37:40 crc kubenswrapper[4827]: I0131 04:37:40.675479 4827 memory_manager.go:354] 
"RemoveStaleState removing state" podUID="e277e6d3-f889-425a-abd6-3344f860bfd9" containerName="horizon-log" Jan 31 04:37:40 crc kubenswrapper[4827]: I0131 04:37:40.677682 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 31 04:37:40 crc kubenswrapper[4827]: I0131 04:37:40.678200 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 31 04:37:40 crc kubenswrapper[4827]: I0131 04:37:40.679536 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Jan 31 04:37:40 crc kubenswrapper[4827]: I0131 04:37:40.679595 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Jan 31 04:37:40 crc kubenswrapper[4827]: I0131 04:37:40.681075 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc" Jan 31 04:37:40 crc kubenswrapper[4827]: I0131 04:37:40.763598 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8c177172-a833-49fa-8448-419a0891c926-config-data\") pod \"ceilometer-0\" (UID: \"8c177172-a833-49fa-8448-419a0891c926\") " pod="openstack/ceilometer-0" Jan 31 04:37:40 crc kubenswrapper[4827]: I0131 04:37:40.763638 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8c177172-a833-49fa-8448-419a0891c926-run-httpd\") pod \"ceilometer-0\" (UID: \"8c177172-a833-49fa-8448-419a0891c926\") " pod="openstack/ceilometer-0" Jan 31 04:37:40 crc kubenswrapper[4827]: I0131 04:37:40.763669 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/8c177172-a833-49fa-8448-419a0891c926-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: 
\"8c177172-a833-49fa-8448-419a0891c926\") " pod="openstack/ceilometer-0" Jan 31 04:37:40 crc kubenswrapper[4827]: I0131 04:37:40.763689 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rz8zf\" (UniqueName: \"kubernetes.io/projected/8c177172-a833-49fa-8448-419a0891c926-kube-api-access-rz8zf\") pod \"ceilometer-0\" (UID: \"8c177172-a833-49fa-8448-419a0891c926\") " pod="openstack/ceilometer-0" Jan 31 04:37:40 crc kubenswrapper[4827]: I0131 04:37:40.763706 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/8c177172-a833-49fa-8448-419a0891c926-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"8c177172-a833-49fa-8448-419a0891c926\") " pod="openstack/ceilometer-0" Jan 31 04:37:40 crc kubenswrapper[4827]: I0131 04:37:40.763729 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8c177172-a833-49fa-8448-419a0891c926-log-httpd\") pod \"ceilometer-0\" (UID: \"8c177172-a833-49fa-8448-419a0891c926\") " pod="openstack/ceilometer-0" Jan 31 04:37:40 crc kubenswrapper[4827]: I0131 04:37:40.763754 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8c177172-a833-49fa-8448-419a0891c926-scripts\") pod \"ceilometer-0\" (UID: \"8c177172-a833-49fa-8448-419a0891c926\") " pod="openstack/ceilometer-0" Jan 31 04:37:40 crc kubenswrapper[4827]: I0131 04:37:40.763779 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8c177172-a833-49fa-8448-419a0891c926-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"8c177172-a833-49fa-8448-419a0891c926\") " pod="openstack/ceilometer-0" Jan 31 04:37:40 crc kubenswrapper[4827]: I0131 
04:37:40.866353 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8c177172-a833-49fa-8448-419a0891c926-log-httpd\") pod \"ceilometer-0\" (UID: \"8c177172-a833-49fa-8448-419a0891c926\") " pod="openstack/ceilometer-0" Jan 31 04:37:40 crc kubenswrapper[4827]: I0131 04:37:40.866857 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8c177172-a833-49fa-8448-419a0891c926-scripts\") pod \"ceilometer-0\" (UID: \"8c177172-a833-49fa-8448-419a0891c926\") " pod="openstack/ceilometer-0" Jan 31 04:37:40 crc kubenswrapper[4827]: I0131 04:37:40.866944 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8c177172-a833-49fa-8448-419a0891c926-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"8c177172-a833-49fa-8448-419a0891c926\") " pod="openstack/ceilometer-0" Jan 31 04:37:40 crc kubenswrapper[4827]: I0131 04:37:40.867201 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8c177172-a833-49fa-8448-419a0891c926-config-data\") pod \"ceilometer-0\" (UID: \"8c177172-a833-49fa-8448-419a0891c926\") " pod="openstack/ceilometer-0" Jan 31 04:37:40 crc kubenswrapper[4827]: I0131 04:37:40.867240 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8c177172-a833-49fa-8448-419a0891c926-run-httpd\") pod \"ceilometer-0\" (UID: \"8c177172-a833-49fa-8448-419a0891c926\") " pod="openstack/ceilometer-0" Jan 31 04:37:40 crc kubenswrapper[4827]: I0131 04:37:40.867296 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/8c177172-a833-49fa-8448-419a0891c926-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: 
\"8c177172-a833-49fa-8448-419a0891c926\") " pod="openstack/ceilometer-0" Jan 31 04:37:40 crc kubenswrapper[4827]: I0131 04:37:40.867336 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rz8zf\" (UniqueName: \"kubernetes.io/projected/8c177172-a833-49fa-8448-419a0891c926-kube-api-access-rz8zf\") pod \"ceilometer-0\" (UID: \"8c177172-a833-49fa-8448-419a0891c926\") " pod="openstack/ceilometer-0" Jan 31 04:37:40 crc kubenswrapper[4827]: I0131 04:37:40.867519 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/8c177172-a833-49fa-8448-419a0891c926-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"8c177172-a833-49fa-8448-419a0891c926\") " pod="openstack/ceilometer-0" Jan 31 04:37:40 crc kubenswrapper[4827]: I0131 04:37:40.869800 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8c177172-a833-49fa-8448-419a0891c926-run-httpd\") pod \"ceilometer-0\" (UID: \"8c177172-a833-49fa-8448-419a0891c926\") " pod="openstack/ceilometer-0" Jan 31 04:37:40 crc kubenswrapper[4827]: I0131 04:37:40.869852 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8c177172-a833-49fa-8448-419a0891c926-log-httpd\") pod \"ceilometer-0\" (UID: \"8c177172-a833-49fa-8448-419a0891c926\") " pod="openstack/ceilometer-0" Jan 31 04:37:40 crc kubenswrapper[4827]: I0131 04:37:40.874143 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/8c177172-a833-49fa-8448-419a0891c926-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"8c177172-a833-49fa-8448-419a0891c926\") " pod="openstack/ceilometer-0" Jan 31 04:37:40 crc kubenswrapper[4827]: I0131 04:37:40.875132 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/8c177172-a833-49fa-8448-419a0891c926-scripts\") pod \"ceilometer-0\" (UID: \"8c177172-a833-49fa-8448-419a0891c926\") " pod="openstack/ceilometer-0" Jan 31 04:37:40 crc kubenswrapper[4827]: I0131 04:37:40.875267 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/8c177172-a833-49fa-8448-419a0891c926-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"8c177172-a833-49fa-8448-419a0891c926\") " pod="openstack/ceilometer-0" Jan 31 04:37:40 crc kubenswrapper[4827]: I0131 04:37:40.876009 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8c177172-a833-49fa-8448-419a0891c926-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"8c177172-a833-49fa-8448-419a0891c926\") " pod="openstack/ceilometer-0" Jan 31 04:37:40 crc kubenswrapper[4827]: I0131 04:37:40.877766 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8c177172-a833-49fa-8448-419a0891c926-config-data\") pod \"ceilometer-0\" (UID: \"8c177172-a833-49fa-8448-419a0891c926\") " pod="openstack/ceilometer-0" Jan 31 04:37:40 crc kubenswrapper[4827]: I0131 04:37:40.909122 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rz8zf\" (UniqueName: \"kubernetes.io/projected/8c177172-a833-49fa-8448-419a0891c926-kube-api-access-rz8zf\") pod \"ceilometer-0\" (UID: \"8c177172-a833-49fa-8448-419a0891c926\") " pod="openstack/ceilometer-0" Jan 31 04:37:41 crc kubenswrapper[4827]: I0131 04:37:41.003080 4827 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 31 04:37:41 crc kubenswrapper[4827]: I0131 04:37:41.217834 4827 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/manila-scheduler-0" Jan 31 04:37:41 crc kubenswrapper[4827]: I0131 04:37:41.535030 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 31 04:37:41 crc kubenswrapper[4827]: W0131 04:37:41.541061 4827 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8c177172_a833_49fa_8448_419a0891c926.slice/crio-e196dc02c69c4ae8bdb7f28929db8497ea8688d673c1e903086c112b6bb9065f WatchSource:0}: Error finding container e196dc02c69c4ae8bdb7f28929db8497ea8688d673c1e903086c112b6bb9065f: Status 404 returned error can't find the container with id e196dc02c69c4ae8bdb7f28929db8497ea8688d673c1e903086c112b6bb9065f Jan 31 04:37:41 crc kubenswrapper[4827]: I0131 04:37:41.552212 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"8c177172-a833-49fa-8448-419a0891c926","Type":"ContainerStarted","Data":"e196dc02c69c4ae8bdb7f28929db8497ea8688d673c1e903086c112b6bb9065f"} Jan 31 04:37:42 crc kubenswrapper[4827]: I0131 04:37:42.125667 4827 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d7f725b5-6af1-406a-8666-bdc720981006" path="/var/lib/kubelet/pods/d7f725b5-6af1-406a-8666-bdc720981006/volumes" Jan 31 04:37:42 crc kubenswrapper[4827]: I0131 04:37:42.579705 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"8c177172-a833-49fa-8448-419a0891c926","Type":"ContainerStarted","Data":"092a67ffb60e5304539fe5b5e45a114c8f341a3a7fc8734ddf9767ceaa264c09"} Jan 31 04:37:43 crc kubenswrapper[4827]: I0131 04:37:43.591593 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"8c177172-a833-49fa-8448-419a0891c926","Type":"ContainerStarted","Data":"51c74c250e42f5ea67a8419c3d0d04977bab50151ca9ba3088d4a571966370f0"} Jan 31 04:37:43 crc kubenswrapper[4827]: I0131 04:37:43.591892 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"8c177172-a833-49fa-8448-419a0891c926","Type":"ContainerStarted","Data":"2d32be06865a160c53bc90abc70da2439fec78c79c1a00fcd3418c5e843c7852"} Jan 31 04:37:46 crc kubenswrapper[4827]: I0131 04:37:46.628313 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"8c177172-a833-49fa-8448-419a0891c926","Type":"ContainerStarted","Data":"127fbac1c4edc7c2c7c8c63dfd3a5195ba50cdede1df78f92a6b2327bc713807"} Jan 31 04:37:46 crc kubenswrapper[4827]: I0131 04:37:46.628993 4827 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Jan 31 04:37:46 crc kubenswrapper[4827]: I0131 04:37:46.677083 4827 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.583261127 podStartE2EDuration="6.677052843s" podCreationTimestamp="2026-01-31 04:37:40 +0000 UTC" firstStartedPulling="2026-01-31 04:37:41.542906711 +0000 UTC m=+3054.229987160" lastFinishedPulling="2026-01-31 04:37:45.636698417 +0000 UTC m=+3058.323778876" observedRunningTime="2026-01-31 04:37:46.654015976 +0000 UTC m=+3059.341096435" watchObservedRunningTime="2026-01-31 04:37:46.677052843 +0000 UTC m=+3059.364133332" Jan 31 04:37:47 crc kubenswrapper[4827]: I0131 04:37:47.110813 4827 scope.go:117] "RemoveContainer" containerID="b7f195383b718c3f3a1651e752e04a9c322b656ce5c7aba0c9ad40630165c57d" Jan 31 04:37:47 crc kubenswrapper[4827]: E0131 04:37:47.111740 4827 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-jxh94_openshift-machine-config-operator(e63dbb73-e1a2-4796-83c5-2a88e55566b5)\"" pod="openshift-machine-config-operator/machine-config-daemon-jxh94" podUID="e63dbb73-e1a2-4796-83c5-2a88e55566b5" Jan 31 04:37:59 crc kubenswrapper[4827]: I0131 04:37:59.110374 4827 scope.go:117] "RemoveContainer" containerID="b7f195383b718c3f3a1651e752e04a9c322b656ce5c7aba0c9ad40630165c57d" Jan 31 04:37:59 crc kubenswrapper[4827]: E0131 04:37:59.111430 4827 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jxh94_openshift-machine-config-operator(e63dbb73-e1a2-4796-83c5-2a88e55566b5)\"" pod="openshift-machine-config-operator/machine-config-daemon-jxh94" podUID="e63dbb73-e1a2-4796-83c5-2a88e55566b5" Jan 31 04:38:02 crc kubenswrapper[4827]: I0131 04:38:02.225431 4827 scope.go:117] "RemoveContainer" containerID="8f7e582c0a2017cb35bcae076e01f7eb52d3df0e80785708e5f57fc2038d853f" Jan 31 04:38:02 crc kubenswrapper[4827]: I0131 04:38:02.266168 4827 scope.go:117] "RemoveContainer" containerID="61482ac2645ba084bdb4df98d0152eea8035ed84ca37867a075defa3823e4b46" Jan 31 04:38:11 crc kubenswrapper[4827]: I0131 04:38:11.012558 4827 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0" Jan 31 04:38:14 crc kubenswrapper[4827]: I0131 04:38:14.110721 4827 scope.go:117] "RemoveContainer" containerID="b7f195383b718c3f3a1651e752e04a9c322b656ce5c7aba0c9ad40630165c57d" Jan 31 04:38:14 crc kubenswrapper[4827]: E0131 04:38:14.111661 4827 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jxh94_openshift-machine-config-operator(e63dbb73-e1a2-4796-83c5-2a88e55566b5)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-jxh94" podUID="e63dbb73-e1a2-4796-83c5-2a88e55566b5" Jan 31 04:38:25 crc kubenswrapper[4827]: I0131 04:38:25.110484 4827 scope.go:117] "RemoveContainer" containerID="b7f195383b718c3f3a1651e752e04a9c322b656ce5c7aba0c9ad40630165c57d" Jan 31 04:38:25 crc kubenswrapper[4827]: E0131 04:38:25.111531 4827 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jxh94_openshift-machine-config-operator(e63dbb73-e1a2-4796-83c5-2a88e55566b5)\"" pod="openshift-machine-config-operator/machine-config-daemon-jxh94" podUID="e63dbb73-e1a2-4796-83c5-2a88e55566b5" Jan 31 04:38:40 crc kubenswrapper[4827]: I0131 04:38:40.119691 4827 scope.go:117] "RemoveContainer" containerID="b7f195383b718c3f3a1651e752e04a9c322b656ce5c7aba0c9ad40630165c57d" Jan 31 04:38:40 crc kubenswrapper[4827]: E0131 04:38:40.120603 4827 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jxh94_openshift-machine-config-operator(e63dbb73-e1a2-4796-83c5-2a88e55566b5)\"" pod="openshift-machine-config-operator/machine-config-daemon-jxh94" podUID="e63dbb73-e1a2-4796-83c5-2a88e55566b5" Jan 31 04:38:54 crc kubenswrapper[4827]: I0131 04:38:54.110405 4827 scope.go:117] "RemoveContainer" containerID="b7f195383b718c3f3a1651e752e04a9c322b656ce5c7aba0c9ad40630165c57d" Jan 31 04:38:54 crc kubenswrapper[4827]: E0131 04:38:54.111317 4827 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-jxh94_openshift-machine-config-operator(e63dbb73-e1a2-4796-83c5-2a88e55566b5)\"" pod="openshift-machine-config-operator/machine-config-daemon-jxh94" podUID="e63dbb73-e1a2-4796-83c5-2a88e55566b5" Jan 31 04:39:05 crc kubenswrapper[4827]: I0131 04:39:05.110254 4827 scope.go:117] "RemoveContainer" containerID="b7f195383b718c3f3a1651e752e04a9c322b656ce5c7aba0c9ad40630165c57d" Jan 31 04:39:05 crc kubenswrapper[4827]: E0131 04:39:05.111262 4827 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jxh94_openshift-machine-config-operator(e63dbb73-e1a2-4796-83c5-2a88e55566b5)\"" pod="openshift-machine-config-operator/machine-config-daemon-jxh94" podUID="e63dbb73-e1a2-4796-83c5-2a88e55566b5" Jan 31 04:39:12 crc kubenswrapper[4827]: I0131 04:39:12.881601 4827 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/tempest-tests-tempest"] Jan 31 04:39:12 crc kubenswrapper[4827]: I0131 04:39:12.883145 4827 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/tempest-tests-tempest" Jan 31 04:39:12 crc kubenswrapper[4827]: I0131 04:39:12.884922 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"default-dockercfg-4r7pc" Jan 31 04:39:12 crc kubenswrapper[4827]: I0131 04:39:12.885307 4827 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"tempest-tests-tempest-custom-data-s0" Jan 31 04:39:12 crc kubenswrapper[4827]: I0131 04:39:12.885500 4827 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"tempest-tests-tempest-env-vars-s0" Jan 31 04:39:12 crc kubenswrapper[4827]: I0131 04:39:12.886172 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"test-operator-controller-priv-key" Jan 31 04:39:12 crc kubenswrapper[4827]: I0131 04:39:12.895346 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/9267ff6a-541b-4297-87e4-fb6095cece6e-config-data\") pod \"tempest-tests-tempest\" (UID: \"9267ff6a-541b-4297-87e4-fb6095cece6e\") " pod="openstack/tempest-tests-tempest" Jan 31 04:39:12 crc kubenswrapper[4827]: I0131 04:39:12.895383 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/9267ff6a-541b-4297-87e4-fb6095cece6e-openstack-config\") pod \"tempest-tests-tempest\" (UID: \"9267ff6a-541b-4297-87e4-fb6095cece6e\") " pod="openstack/tempest-tests-tempest" Jan 31 04:39:12 crc kubenswrapper[4827]: I0131 04:39:12.895414 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/9267ff6a-541b-4297-87e4-fb6095cece6e-openstack-config-secret\") pod \"tempest-tests-tempest\" (UID: \"9267ff6a-541b-4297-87e4-fb6095cece6e\") " pod="openstack/tempest-tests-tempest" Jan 31 04:39:12 crc 
kubenswrapper[4827]: I0131 04:39:12.901663 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/tempest-tests-tempest"] Jan 31 04:39:12 crc kubenswrapper[4827]: I0131 04:39:12.997024 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/9267ff6a-541b-4297-87e4-fb6095cece6e-ca-certs\") pod \"tempest-tests-tempest\" (UID: \"9267ff6a-541b-4297-87e4-fb6095cece6e\") " pod="openstack/tempest-tests-tempest" Jan 31 04:39:12 crc kubenswrapper[4827]: I0131 04:39:12.997079 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/9267ff6a-541b-4297-87e4-fb6095cece6e-test-operator-ephemeral-temporary\") pod \"tempest-tests-tempest\" (UID: \"9267ff6a-541b-4297-87e4-fb6095cece6e\") " pod="openstack/tempest-tests-tempest" Jan 31 04:39:12 crc kubenswrapper[4827]: I0131 04:39:12.997101 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zjmxd\" (UniqueName: \"kubernetes.io/projected/9267ff6a-541b-4297-87e4-fb6095cece6e-kube-api-access-zjmxd\") pod \"tempest-tests-tempest\" (UID: \"9267ff6a-541b-4297-87e4-fb6095cece6e\") " pod="openstack/tempest-tests-tempest" Jan 31 04:39:12 crc kubenswrapper[4827]: I0131 04:39:12.997130 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/9267ff6a-541b-4297-87e4-fb6095cece6e-ssh-key\") pod \"tempest-tests-tempest\" (UID: \"9267ff6a-541b-4297-87e4-fb6095cece6e\") " pod="openstack/tempest-tests-tempest" Jan 31 04:39:12 crc kubenswrapper[4827]: I0131 04:39:12.997148 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"test-operator-ephemeral-workdir\" (UniqueName: 
\"kubernetes.io/empty-dir/9267ff6a-541b-4297-87e4-fb6095cece6e-test-operator-ephemeral-workdir\") pod \"tempest-tests-tempest\" (UID: \"9267ff6a-541b-4297-87e4-fb6095cece6e\") " pod="openstack/tempest-tests-tempest" Jan 31 04:39:12 crc kubenswrapper[4827]: I0131 04:39:12.997179 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/9267ff6a-541b-4297-87e4-fb6095cece6e-config-data\") pod \"tempest-tests-tempest\" (UID: \"9267ff6a-541b-4297-87e4-fb6095cece6e\") " pod="openstack/tempest-tests-tempest" Jan 31 04:39:12 crc kubenswrapper[4827]: I0131 04:39:12.997194 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/9267ff6a-541b-4297-87e4-fb6095cece6e-openstack-config\") pod \"tempest-tests-tempest\" (UID: \"9267ff6a-541b-4297-87e4-fb6095cece6e\") " pod="openstack/tempest-tests-tempest" Jan 31 04:39:12 crc kubenswrapper[4827]: I0131 04:39:12.997275 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/9267ff6a-541b-4297-87e4-fb6095cece6e-openstack-config-secret\") pod \"tempest-tests-tempest\" (UID: \"9267ff6a-541b-4297-87e4-fb6095cece6e\") " pod="openstack/tempest-tests-tempest" Jan 31 04:39:12 crc kubenswrapper[4827]: I0131 04:39:12.997409 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"tempest-tests-tempest\" (UID: \"9267ff6a-541b-4297-87e4-fb6095cece6e\") " pod="openstack/tempest-tests-tempest" Jan 31 04:39:12 crc kubenswrapper[4827]: I0131 04:39:12.998189 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/9267ff6a-541b-4297-87e4-fb6095cece6e-openstack-config\") pod 
\"tempest-tests-tempest\" (UID: \"9267ff6a-541b-4297-87e4-fb6095cece6e\") " pod="openstack/tempest-tests-tempest" Jan 31 04:39:12 crc kubenswrapper[4827]: I0131 04:39:12.998497 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/9267ff6a-541b-4297-87e4-fb6095cece6e-config-data\") pod \"tempest-tests-tempest\" (UID: \"9267ff6a-541b-4297-87e4-fb6095cece6e\") " pod="openstack/tempest-tests-tempest" Jan 31 04:39:13 crc kubenswrapper[4827]: I0131 04:39:13.005018 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/9267ff6a-541b-4297-87e4-fb6095cece6e-openstack-config-secret\") pod \"tempest-tests-tempest\" (UID: \"9267ff6a-541b-4297-87e4-fb6095cece6e\") " pod="openstack/tempest-tests-tempest" Jan 31 04:39:13 crc kubenswrapper[4827]: I0131 04:39:13.098917 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/9267ff6a-541b-4297-87e4-fb6095cece6e-test-operator-ephemeral-workdir\") pod \"tempest-tests-tempest\" (UID: \"9267ff6a-541b-4297-87e4-fb6095cece6e\") " pod="openstack/tempest-tests-tempest" Jan 31 04:39:13 crc kubenswrapper[4827]: I0131 04:39:13.099421 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/9267ff6a-541b-4297-87e4-fb6095cece6e-test-operator-ephemeral-workdir\") pod \"tempest-tests-tempest\" (UID: \"9267ff6a-541b-4297-87e4-fb6095cece6e\") " pod="openstack/tempest-tests-tempest" Jan 31 04:39:13 crc kubenswrapper[4827]: I0131 04:39:13.099681 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"tempest-tests-tempest\" (UID: \"9267ff6a-541b-4297-87e4-fb6095cece6e\") " 
pod="openstack/tempest-tests-tempest" Jan 31 04:39:13 crc kubenswrapper[4827]: I0131 04:39:13.100126 4827 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"tempest-tests-tempest\" (UID: \"9267ff6a-541b-4297-87e4-fb6095cece6e\") device mount path \"/mnt/openstack/pv02\"" pod="openstack/tempest-tests-tempest" Jan 31 04:39:13 crc kubenswrapper[4827]: I0131 04:39:13.101000 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/9267ff6a-541b-4297-87e4-fb6095cece6e-ca-certs\") pod \"tempest-tests-tempest\" (UID: \"9267ff6a-541b-4297-87e4-fb6095cece6e\") " pod="openstack/tempest-tests-tempest" Jan 31 04:39:13 crc kubenswrapper[4827]: I0131 04:39:13.101103 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/9267ff6a-541b-4297-87e4-fb6095cece6e-test-operator-ephemeral-temporary\") pod \"tempest-tests-tempest\" (UID: \"9267ff6a-541b-4297-87e4-fb6095cece6e\") " pod="openstack/tempest-tests-tempest" Jan 31 04:39:13 crc kubenswrapper[4827]: I0131 04:39:13.101188 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zjmxd\" (UniqueName: \"kubernetes.io/projected/9267ff6a-541b-4297-87e4-fb6095cece6e-kube-api-access-zjmxd\") pod \"tempest-tests-tempest\" (UID: \"9267ff6a-541b-4297-87e4-fb6095cece6e\") " pod="openstack/tempest-tests-tempest" Jan 31 04:39:13 crc kubenswrapper[4827]: I0131 04:39:13.101267 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/9267ff6a-541b-4297-87e4-fb6095cece6e-ssh-key\") pod \"tempest-tests-tempest\" (UID: \"9267ff6a-541b-4297-87e4-fb6095cece6e\") " pod="openstack/tempest-tests-tempest" Jan 31 04:39:13 crc kubenswrapper[4827]: I0131 
04:39:13.101837 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/9267ff6a-541b-4297-87e4-fb6095cece6e-test-operator-ephemeral-temporary\") pod \"tempest-tests-tempest\" (UID: \"9267ff6a-541b-4297-87e4-fb6095cece6e\") " pod="openstack/tempest-tests-tempest" Jan 31 04:39:13 crc kubenswrapper[4827]: I0131 04:39:13.105946 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/9267ff6a-541b-4297-87e4-fb6095cece6e-ssh-key\") pod \"tempest-tests-tempest\" (UID: \"9267ff6a-541b-4297-87e4-fb6095cece6e\") " pod="openstack/tempest-tests-tempest" Jan 31 04:39:13 crc kubenswrapper[4827]: I0131 04:39:13.108194 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/9267ff6a-541b-4297-87e4-fb6095cece6e-ca-certs\") pod \"tempest-tests-tempest\" (UID: \"9267ff6a-541b-4297-87e4-fb6095cece6e\") " pod="openstack/tempest-tests-tempest" Jan 31 04:39:13 crc kubenswrapper[4827]: I0131 04:39:13.124643 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zjmxd\" (UniqueName: \"kubernetes.io/projected/9267ff6a-541b-4297-87e4-fb6095cece6e-kube-api-access-zjmxd\") pod \"tempest-tests-tempest\" (UID: \"9267ff6a-541b-4297-87e4-fb6095cece6e\") " pod="openstack/tempest-tests-tempest" Jan 31 04:39:13 crc kubenswrapper[4827]: I0131 04:39:13.131918 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"tempest-tests-tempest\" (UID: \"9267ff6a-541b-4297-87e4-fb6095cece6e\") " pod="openstack/tempest-tests-tempest" Jan 31 04:39:13 crc kubenswrapper[4827]: I0131 04:39:13.197868 4827 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/tempest-tests-tempest" Jan 31 04:39:13 crc kubenswrapper[4827]: I0131 04:39:13.718830 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/tempest-tests-tempest"] Jan 31 04:39:14 crc kubenswrapper[4827]: I0131 04:39:14.536551 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest" event={"ID":"9267ff6a-541b-4297-87e4-fb6095cece6e","Type":"ContainerStarted","Data":"ce46168fe5e203a4b3a2db83b36e71d70b3ec44750ccd81bc7558159b28407f7"} Jan 31 04:39:18 crc kubenswrapper[4827]: I0131 04:39:18.120091 4827 scope.go:117] "RemoveContainer" containerID="b7f195383b718c3f3a1651e752e04a9c322b656ce5c7aba0c9ad40630165c57d" Jan 31 04:39:18 crc kubenswrapper[4827]: E0131 04:39:18.120921 4827 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jxh94_openshift-machine-config-operator(e63dbb73-e1a2-4796-83c5-2a88e55566b5)\"" pod="openshift-machine-config-operator/machine-config-daemon-jxh94" podUID="e63dbb73-e1a2-4796-83c5-2a88e55566b5" Jan 31 04:39:32 crc kubenswrapper[4827]: I0131 04:39:32.110138 4827 scope.go:117] "RemoveContainer" containerID="b7f195383b718c3f3a1651e752e04a9c322b656ce5c7aba0c9ad40630165c57d" Jan 31 04:39:32 crc kubenswrapper[4827]: E0131 04:39:32.111060 4827 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jxh94_openshift-machine-config-operator(e63dbb73-e1a2-4796-83c5-2a88e55566b5)\"" pod="openshift-machine-config-operator/machine-config-daemon-jxh94" podUID="e63dbb73-e1a2-4796-83c5-2a88e55566b5" Jan 31 04:39:44 crc kubenswrapper[4827]: I0131 04:39:44.110326 4827 scope.go:117] "RemoveContainer" 
containerID="b7f195383b718c3f3a1651e752e04a9c322b656ce5c7aba0c9ad40630165c57d" Jan 31 04:39:44 crc kubenswrapper[4827]: E0131 04:39:44.111166 4827 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jxh94_openshift-machine-config-operator(e63dbb73-e1a2-4796-83c5-2a88e55566b5)\"" pod="openshift-machine-config-operator/machine-config-daemon-jxh94" podUID="e63dbb73-e1a2-4796-83c5-2a88e55566b5" Jan 31 04:39:47 crc kubenswrapper[4827]: E0131 04:39:47.905677 4827 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-tempest-all:current-podified" Jan 31 04:39:47 crc kubenswrapper[4827]: E0131 04:39:47.906466 4827 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:tempest-tests-tempest-tests-runner,Image:quay.io/podified-antelope-centos9/openstack-tempest-all:current-podified,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config-data,ReadOnly:false,MountPath:/etc/test_operator,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:test-operator-ephemeral-workdir,ReadOnly:false,MountPath:/var/lib/tempest,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:test-operator-ephemeral-temporary,ReadOnly:false,MountPath:/tmp,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:test-operator-logs,ReadOnly:false,MountPath:/var/lib/tempest/external_files,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:openstack-config,ReadOnly:true,MountPath:/etc/openstack/clouds.yaml,SubPa
th:clouds.yaml,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:openstack-config,ReadOnly:true,MountPath:/var/lib/tempest/.config/openstack/clouds.yaml,SubPath:clouds.yaml,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:openstack-config-secret,ReadOnly:false,MountPath:/etc/openstack/secure.yaml,SubPath:secure.yaml,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ca-certs,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ssh-key,ReadOnly:false,MountPath:/var/lib/tempest/id_ecdsa,SubPath:ssh_key,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-zjmxd,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42480,RunAsNonRoot:*false,ReadOnlyRootFilesystem:*false,AllowPrivilegeEscalation:*true,RunAsGroup:*42480,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{EnvFromSource{Prefix:,ConfigMapRef:&ConfigMapEnvSource{LocalObjectReference:LocalObjectReference{Name:tempest-tests-tempest-custom-data-s0,},Optional:nil,},SecretRef:nil,},EnvFromSource{Prefix:,ConfigMapRef:&ConfigMapEnvSource{LocalObjectReference:LocalObjectReference{Name:tempest-tests-tempest-env-vars-s0,},Optional:nil,},SecretRef:nil,},},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 
tempest-tests-tempest_openstack(9267ff6a-541b-4297-87e4-fb6095cece6e): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 31 04:39:47 crc kubenswrapper[4827]: E0131 04:39:47.908448 4827 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"tempest-tests-tempest-tests-runner\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/tempest-tests-tempest" podUID="9267ff6a-541b-4297-87e4-fb6095cece6e" Jan 31 04:39:47 crc kubenswrapper[4827]: E0131 04:39:47.925416 4827 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"tempest-tests-tempest-tests-runner\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-tempest-all:current-podified\\\"\"" pod="openstack/tempest-tests-tempest" podUID="9267ff6a-541b-4297-87e4-fb6095cece6e" Jan 31 04:39:57 crc kubenswrapper[4827]: I0131 04:39:57.109743 4827 scope.go:117] "RemoveContainer" containerID="b7f195383b718c3f3a1651e752e04a9c322b656ce5c7aba0c9ad40630165c57d" Jan 31 04:39:57 crc kubenswrapper[4827]: E0131 04:39:57.110546 4827 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jxh94_openshift-machine-config-operator(e63dbb73-e1a2-4796-83c5-2a88e55566b5)\"" pod="openshift-machine-config-operator/machine-config-daemon-jxh94" podUID="e63dbb73-e1a2-4796-83c5-2a88e55566b5" Jan 31 04:39:58 crc kubenswrapper[4827]: I0131 04:39:58.557738 4827 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"tempest-tests-tempest-env-vars-s0" Jan 31 04:40:00 crc kubenswrapper[4827]: I0131 04:40:00.060490 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest" 
event={"ID":"9267ff6a-541b-4297-87e4-fb6095cece6e","Type":"ContainerStarted","Data":"c51427307b06241a749e79ddc293fd458cd79d2348a99dd24788debc4639ba3f"} Jan 31 04:40:00 crc kubenswrapper[4827]: I0131 04:40:00.097507 4827 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/tempest-tests-tempest" podStartSLOduration=4.265629015 podStartE2EDuration="49.097484889s" podCreationTimestamp="2026-01-31 04:39:11 +0000 UTC" firstStartedPulling="2026-01-31 04:39:13.722723438 +0000 UTC m=+3146.409803907" lastFinishedPulling="2026-01-31 04:39:58.554579332 +0000 UTC m=+3191.241659781" observedRunningTime="2026-01-31 04:40:00.083562786 +0000 UTC m=+3192.770643265" watchObservedRunningTime="2026-01-31 04:40:00.097484889 +0000 UTC m=+3192.784565348" Jan 31 04:40:10 crc kubenswrapper[4827]: I0131 04:40:10.112044 4827 scope.go:117] "RemoveContainer" containerID="b7f195383b718c3f3a1651e752e04a9c322b656ce5c7aba0c9ad40630165c57d" Jan 31 04:40:10 crc kubenswrapper[4827]: E0131 04:40:10.113066 4827 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jxh94_openshift-machine-config-operator(e63dbb73-e1a2-4796-83c5-2a88e55566b5)\"" pod="openshift-machine-config-operator/machine-config-daemon-jxh94" podUID="e63dbb73-e1a2-4796-83c5-2a88e55566b5" Jan 31 04:40:25 crc kubenswrapper[4827]: I0131 04:40:25.110853 4827 scope.go:117] "RemoveContainer" containerID="b7f195383b718c3f3a1651e752e04a9c322b656ce5c7aba0c9ad40630165c57d" Jan 31 04:40:25 crc kubenswrapper[4827]: E0131 04:40:25.111936 4827 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-jxh94_openshift-machine-config-operator(e63dbb73-e1a2-4796-83c5-2a88e55566b5)\"" pod="openshift-machine-config-operator/machine-config-daemon-jxh94" podUID="e63dbb73-e1a2-4796-83c5-2a88e55566b5" Jan 31 04:40:38 crc kubenswrapper[4827]: I0131 04:40:38.144546 4827 scope.go:117] "RemoveContainer" containerID="b7f195383b718c3f3a1651e752e04a9c322b656ce5c7aba0c9ad40630165c57d" Jan 31 04:40:38 crc kubenswrapper[4827]: E0131 04:40:38.147638 4827 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jxh94_openshift-machine-config-operator(e63dbb73-e1a2-4796-83c5-2a88e55566b5)\"" pod="openshift-machine-config-operator/machine-config-daemon-jxh94" podUID="e63dbb73-e1a2-4796-83c5-2a88e55566b5" Jan 31 04:40:40 crc kubenswrapper[4827]: I0131 04:40:40.664476 4827 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-ccqpw"] Jan 31 04:40:40 crc kubenswrapper[4827]: I0131 04:40:40.667775 4827 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-ccqpw" Jan 31 04:40:40 crc kubenswrapper[4827]: I0131 04:40:40.706565 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-ccqpw"] Jan 31 04:40:40 crc kubenswrapper[4827]: I0131 04:40:40.821682 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9f37e97e-e149-4456-88bc-94dacf35cbe8-utilities\") pod \"community-operators-ccqpw\" (UID: \"9f37e97e-e149-4456-88bc-94dacf35cbe8\") " pod="openshift-marketplace/community-operators-ccqpw" Jan 31 04:40:40 crc kubenswrapper[4827]: I0131 04:40:40.821906 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gbssh\" (UniqueName: \"kubernetes.io/projected/9f37e97e-e149-4456-88bc-94dacf35cbe8-kube-api-access-gbssh\") pod \"community-operators-ccqpw\" (UID: \"9f37e97e-e149-4456-88bc-94dacf35cbe8\") " pod="openshift-marketplace/community-operators-ccqpw" Jan 31 04:40:40 crc kubenswrapper[4827]: I0131 04:40:40.821975 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9f37e97e-e149-4456-88bc-94dacf35cbe8-catalog-content\") pod \"community-operators-ccqpw\" (UID: \"9f37e97e-e149-4456-88bc-94dacf35cbe8\") " pod="openshift-marketplace/community-operators-ccqpw" Jan 31 04:40:40 crc kubenswrapper[4827]: I0131 04:40:40.924382 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gbssh\" (UniqueName: \"kubernetes.io/projected/9f37e97e-e149-4456-88bc-94dacf35cbe8-kube-api-access-gbssh\") pod \"community-operators-ccqpw\" (UID: \"9f37e97e-e149-4456-88bc-94dacf35cbe8\") " pod="openshift-marketplace/community-operators-ccqpw" Jan 31 04:40:40 crc kubenswrapper[4827]: I0131 04:40:40.924487 4827 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9f37e97e-e149-4456-88bc-94dacf35cbe8-catalog-content\") pod \"community-operators-ccqpw\" (UID: \"9f37e97e-e149-4456-88bc-94dacf35cbe8\") " pod="openshift-marketplace/community-operators-ccqpw" Jan 31 04:40:40 crc kubenswrapper[4827]: I0131 04:40:40.924567 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9f37e97e-e149-4456-88bc-94dacf35cbe8-utilities\") pod \"community-operators-ccqpw\" (UID: \"9f37e97e-e149-4456-88bc-94dacf35cbe8\") " pod="openshift-marketplace/community-operators-ccqpw" Jan 31 04:40:40 crc kubenswrapper[4827]: I0131 04:40:40.925020 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9f37e97e-e149-4456-88bc-94dacf35cbe8-utilities\") pod \"community-operators-ccqpw\" (UID: \"9f37e97e-e149-4456-88bc-94dacf35cbe8\") " pod="openshift-marketplace/community-operators-ccqpw" Jan 31 04:40:40 crc kubenswrapper[4827]: I0131 04:40:40.925021 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9f37e97e-e149-4456-88bc-94dacf35cbe8-catalog-content\") pod \"community-operators-ccqpw\" (UID: \"9f37e97e-e149-4456-88bc-94dacf35cbe8\") " pod="openshift-marketplace/community-operators-ccqpw" Jan 31 04:40:40 crc kubenswrapper[4827]: I0131 04:40:40.963871 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gbssh\" (UniqueName: \"kubernetes.io/projected/9f37e97e-e149-4456-88bc-94dacf35cbe8-kube-api-access-gbssh\") pod \"community-operators-ccqpw\" (UID: \"9f37e97e-e149-4456-88bc-94dacf35cbe8\") " pod="openshift-marketplace/community-operators-ccqpw" Jan 31 04:40:41 crc kubenswrapper[4827]: I0131 04:40:41.029390 4827 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-ccqpw" Jan 31 04:40:41 crc kubenswrapper[4827]: W0131 04:40:41.539259 4827 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9f37e97e_e149_4456_88bc_94dacf35cbe8.slice/crio-2b0daac01e8234e35432c0983adc8aa80346e60c7264d2086beb777758648406 WatchSource:0}: Error finding container 2b0daac01e8234e35432c0983adc8aa80346e60c7264d2086beb777758648406: Status 404 returned error can't find the container with id 2b0daac01e8234e35432c0983adc8aa80346e60c7264d2086beb777758648406 Jan 31 04:40:41 crc kubenswrapper[4827]: I0131 04:40:41.542916 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-ccqpw"] Jan 31 04:40:41 crc kubenswrapper[4827]: I0131 04:40:41.557227 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-ccqpw" event={"ID":"9f37e97e-e149-4456-88bc-94dacf35cbe8","Type":"ContainerStarted","Data":"2b0daac01e8234e35432c0983adc8aa80346e60c7264d2086beb777758648406"} Jan 31 04:40:42 crc kubenswrapper[4827]: I0131 04:40:42.569851 4827 generic.go:334] "Generic (PLEG): container finished" podID="9f37e97e-e149-4456-88bc-94dacf35cbe8" containerID="edaf6e01f8b700e017ff5c1d657d2c022e58c29b619553f07b826bf0ec2584f2" exitCode=0 Jan 31 04:40:42 crc kubenswrapper[4827]: I0131 04:40:42.569927 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-ccqpw" event={"ID":"9f37e97e-e149-4456-88bc-94dacf35cbe8","Type":"ContainerDied","Data":"edaf6e01f8b700e017ff5c1d657d2c022e58c29b619553f07b826bf0ec2584f2"} Jan 31 04:40:43 crc kubenswrapper[4827]: I0131 04:40:43.583791 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-ccqpw" 
event={"ID":"9f37e97e-e149-4456-88bc-94dacf35cbe8","Type":"ContainerStarted","Data":"3f2118b3113cbe34d12aef3d288dec99cbdf572ea9920e2d01bc3d22fbeebc1b"} Jan 31 04:40:44 crc kubenswrapper[4827]: I0131 04:40:44.596221 4827 generic.go:334] "Generic (PLEG): container finished" podID="9f37e97e-e149-4456-88bc-94dacf35cbe8" containerID="3f2118b3113cbe34d12aef3d288dec99cbdf572ea9920e2d01bc3d22fbeebc1b" exitCode=0 Jan 31 04:40:44 crc kubenswrapper[4827]: I0131 04:40:44.596267 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-ccqpw" event={"ID":"9f37e97e-e149-4456-88bc-94dacf35cbe8","Type":"ContainerDied","Data":"3f2118b3113cbe34d12aef3d288dec99cbdf572ea9920e2d01bc3d22fbeebc1b"} Jan 31 04:40:45 crc kubenswrapper[4827]: I0131 04:40:45.609650 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-ccqpw" event={"ID":"9f37e97e-e149-4456-88bc-94dacf35cbe8","Type":"ContainerStarted","Data":"e61d3829968f8f3def5fedd9394e9f160690cf40b308962665266b157891e5e7"} Jan 31 04:40:51 crc kubenswrapper[4827]: I0131 04:40:51.029819 4827 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-ccqpw" Jan 31 04:40:51 crc kubenswrapper[4827]: I0131 04:40:51.031072 4827 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-ccqpw" Jan 31 04:40:51 crc kubenswrapper[4827]: I0131 04:40:51.085205 4827 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-ccqpw" Jan 31 04:40:51 crc kubenswrapper[4827]: I0131 04:40:51.106141 4827 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-ccqpw" podStartSLOduration=8.61940862 podStartE2EDuration="11.106120734s" podCreationTimestamp="2026-01-31 04:40:40 +0000 UTC" firstStartedPulling="2026-01-31 04:40:42.575725283 +0000 UTC 
m=+3235.262805762" lastFinishedPulling="2026-01-31 04:40:45.062437417 +0000 UTC m=+3237.749517876" observedRunningTime="2026-01-31 04:40:45.654149005 +0000 UTC m=+3238.341229474" watchObservedRunningTime="2026-01-31 04:40:51.106120734 +0000 UTC m=+3243.793201193" Jan 31 04:40:51 crc kubenswrapper[4827]: I0131 04:40:51.748721 4827 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-ccqpw" Jan 31 04:40:51 crc kubenswrapper[4827]: I0131 04:40:51.813084 4827 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-ccqpw"] Jan 31 04:40:52 crc kubenswrapper[4827]: I0131 04:40:52.110048 4827 scope.go:117] "RemoveContainer" containerID="b7f195383b718c3f3a1651e752e04a9c322b656ce5c7aba0c9ad40630165c57d" Jan 31 04:40:52 crc kubenswrapper[4827]: I0131 04:40:52.685787 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-jxh94" event={"ID":"e63dbb73-e1a2-4796-83c5-2a88e55566b5","Type":"ContainerStarted","Data":"7954efcbb2345b683530b21570dd478327d15f837551cf0700c83e0134e75fef"} Jan 31 04:40:53 crc kubenswrapper[4827]: I0131 04:40:53.695059 4827 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-ccqpw" podUID="9f37e97e-e149-4456-88bc-94dacf35cbe8" containerName="registry-server" containerID="cri-o://e61d3829968f8f3def5fedd9394e9f160690cf40b308962665266b157891e5e7" gracePeriod=2 Jan 31 04:40:54 crc kubenswrapper[4827]: I0131 04:40:54.234802 4827 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-ccqpw" Jan 31 04:40:54 crc kubenswrapper[4827]: I0131 04:40:54.338718 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gbssh\" (UniqueName: \"kubernetes.io/projected/9f37e97e-e149-4456-88bc-94dacf35cbe8-kube-api-access-gbssh\") pod \"9f37e97e-e149-4456-88bc-94dacf35cbe8\" (UID: \"9f37e97e-e149-4456-88bc-94dacf35cbe8\") " Jan 31 04:40:54 crc kubenswrapper[4827]: I0131 04:40:54.338932 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9f37e97e-e149-4456-88bc-94dacf35cbe8-utilities\") pod \"9f37e97e-e149-4456-88bc-94dacf35cbe8\" (UID: \"9f37e97e-e149-4456-88bc-94dacf35cbe8\") " Jan 31 04:40:54 crc kubenswrapper[4827]: I0131 04:40:54.338981 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9f37e97e-e149-4456-88bc-94dacf35cbe8-catalog-content\") pod \"9f37e97e-e149-4456-88bc-94dacf35cbe8\" (UID: \"9f37e97e-e149-4456-88bc-94dacf35cbe8\") " Jan 31 04:40:54 crc kubenswrapper[4827]: I0131 04:40:54.339994 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9f37e97e-e149-4456-88bc-94dacf35cbe8-utilities" (OuterVolumeSpecName: "utilities") pod "9f37e97e-e149-4456-88bc-94dacf35cbe8" (UID: "9f37e97e-e149-4456-88bc-94dacf35cbe8"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 04:40:54 crc kubenswrapper[4827]: I0131 04:40:54.344637 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9f37e97e-e149-4456-88bc-94dacf35cbe8-kube-api-access-gbssh" (OuterVolumeSpecName: "kube-api-access-gbssh") pod "9f37e97e-e149-4456-88bc-94dacf35cbe8" (UID: "9f37e97e-e149-4456-88bc-94dacf35cbe8"). InnerVolumeSpecName "kube-api-access-gbssh". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 04:40:54 crc kubenswrapper[4827]: I0131 04:40:54.443921 4827 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gbssh\" (UniqueName: \"kubernetes.io/projected/9f37e97e-e149-4456-88bc-94dacf35cbe8-kube-api-access-gbssh\") on node \"crc\" DevicePath \"\"" Jan 31 04:40:54 crc kubenswrapper[4827]: I0131 04:40:54.443987 4827 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9f37e97e-e149-4456-88bc-94dacf35cbe8-utilities\") on node \"crc\" DevicePath \"\"" Jan 31 04:40:54 crc kubenswrapper[4827]: I0131 04:40:54.710919 4827 generic.go:334] "Generic (PLEG): container finished" podID="9f37e97e-e149-4456-88bc-94dacf35cbe8" containerID="e61d3829968f8f3def5fedd9394e9f160690cf40b308962665266b157891e5e7" exitCode=0 Jan 31 04:40:54 crc kubenswrapper[4827]: I0131 04:40:54.710970 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-ccqpw" event={"ID":"9f37e97e-e149-4456-88bc-94dacf35cbe8","Type":"ContainerDied","Data":"e61d3829968f8f3def5fedd9394e9f160690cf40b308962665266b157891e5e7"} Jan 31 04:40:54 crc kubenswrapper[4827]: I0131 04:40:54.711011 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-ccqpw" event={"ID":"9f37e97e-e149-4456-88bc-94dacf35cbe8","Type":"ContainerDied","Data":"2b0daac01e8234e35432c0983adc8aa80346e60c7264d2086beb777758648406"} Jan 31 04:40:54 crc kubenswrapper[4827]: I0131 04:40:54.711061 4827 scope.go:117] "RemoveContainer" containerID="e61d3829968f8f3def5fedd9394e9f160690cf40b308962665266b157891e5e7" Jan 31 04:40:54 crc kubenswrapper[4827]: I0131 04:40:54.712955 4827 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-ccqpw" Jan 31 04:40:54 crc kubenswrapper[4827]: I0131 04:40:54.749280 4827 scope.go:117] "RemoveContainer" containerID="3f2118b3113cbe34d12aef3d288dec99cbdf572ea9920e2d01bc3d22fbeebc1b" Jan 31 04:40:54 crc kubenswrapper[4827]: I0131 04:40:54.787197 4827 scope.go:117] "RemoveContainer" containerID="edaf6e01f8b700e017ff5c1d657d2c022e58c29b619553f07b826bf0ec2584f2" Jan 31 04:40:54 crc kubenswrapper[4827]: I0131 04:40:54.834346 4827 scope.go:117] "RemoveContainer" containerID="e61d3829968f8f3def5fedd9394e9f160690cf40b308962665266b157891e5e7" Jan 31 04:40:54 crc kubenswrapper[4827]: E0131 04:40:54.834974 4827 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e61d3829968f8f3def5fedd9394e9f160690cf40b308962665266b157891e5e7\": container with ID starting with e61d3829968f8f3def5fedd9394e9f160690cf40b308962665266b157891e5e7 not found: ID does not exist" containerID="e61d3829968f8f3def5fedd9394e9f160690cf40b308962665266b157891e5e7" Jan 31 04:40:54 crc kubenswrapper[4827]: I0131 04:40:54.835145 4827 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e61d3829968f8f3def5fedd9394e9f160690cf40b308962665266b157891e5e7"} err="failed to get container status \"e61d3829968f8f3def5fedd9394e9f160690cf40b308962665266b157891e5e7\": rpc error: code = NotFound desc = could not find container \"e61d3829968f8f3def5fedd9394e9f160690cf40b308962665266b157891e5e7\": container with ID starting with e61d3829968f8f3def5fedd9394e9f160690cf40b308962665266b157891e5e7 not found: ID does not exist" Jan 31 04:40:54 crc kubenswrapper[4827]: I0131 04:40:54.835315 4827 scope.go:117] "RemoveContainer" containerID="3f2118b3113cbe34d12aef3d288dec99cbdf572ea9920e2d01bc3d22fbeebc1b" Jan 31 04:40:54 crc kubenswrapper[4827]: E0131 04:40:54.835986 4827 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code 
= NotFound desc = could not find container \"3f2118b3113cbe34d12aef3d288dec99cbdf572ea9920e2d01bc3d22fbeebc1b\": container with ID starting with 3f2118b3113cbe34d12aef3d288dec99cbdf572ea9920e2d01bc3d22fbeebc1b not found: ID does not exist" containerID="3f2118b3113cbe34d12aef3d288dec99cbdf572ea9920e2d01bc3d22fbeebc1b" Jan 31 04:40:54 crc kubenswrapper[4827]: I0131 04:40:54.836061 4827 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3f2118b3113cbe34d12aef3d288dec99cbdf572ea9920e2d01bc3d22fbeebc1b"} err="failed to get container status \"3f2118b3113cbe34d12aef3d288dec99cbdf572ea9920e2d01bc3d22fbeebc1b\": rpc error: code = NotFound desc = could not find container \"3f2118b3113cbe34d12aef3d288dec99cbdf572ea9920e2d01bc3d22fbeebc1b\": container with ID starting with 3f2118b3113cbe34d12aef3d288dec99cbdf572ea9920e2d01bc3d22fbeebc1b not found: ID does not exist" Jan 31 04:40:54 crc kubenswrapper[4827]: I0131 04:40:54.836108 4827 scope.go:117] "RemoveContainer" containerID="edaf6e01f8b700e017ff5c1d657d2c022e58c29b619553f07b826bf0ec2584f2" Jan 31 04:40:54 crc kubenswrapper[4827]: E0131 04:40:54.836757 4827 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"edaf6e01f8b700e017ff5c1d657d2c022e58c29b619553f07b826bf0ec2584f2\": container with ID starting with edaf6e01f8b700e017ff5c1d657d2c022e58c29b619553f07b826bf0ec2584f2 not found: ID does not exist" containerID="edaf6e01f8b700e017ff5c1d657d2c022e58c29b619553f07b826bf0ec2584f2" Jan 31 04:40:54 crc kubenswrapper[4827]: I0131 04:40:54.836946 4827 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"edaf6e01f8b700e017ff5c1d657d2c022e58c29b619553f07b826bf0ec2584f2"} err="failed to get container status \"edaf6e01f8b700e017ff5c1d657d2c022e58c29b619553f07b826bf0ec2584f2\": rpc error: code = NotFound desc = could not find container 
\"edaf6e01f8b700e017ff5c1d657d2c022e58c29b619553f07b826bf0ec2584f2\": container with ID starting with edaf6e01f8b700e017ff5c1d657d2c022e58c29b619553f07b826bf0ec2584f2 not found: ID does not exist" Jan 31 04:40:55 crc kubenswrapper[4827]: I0131 04:40:55.439324 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9f37e97e-e149-4456-88bc-94dacf35cbe8-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "9f37e97e-e149-4456-88bc-94dacf35cbe8" (UID: "9f37e97e-e149-4456-88bc-94dacf35cbe8"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 04:40:55 crc kubenswrapper[4827]: I0131 04:40:55.463516 4827 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9f37e97e-e149-4456-88bc-94dacf35cbe8-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 31 04:40:55 crc kubenswrapper[4827]: I0131 04:40:55.662085 4827 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-ccqpw"] Jan 31 04:40:55 crc kubenswrapper[4827]: I0131 04:40:55.676058 4827 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-ccqpw"] Jan 31 04:40:56 crc kubenswrapper[4827]: I0131 04:40:56.126141 4827 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9f37e97e-e149-4456-88bc-94dacf35cbe8" path="/var/lib/kubelet/pods/9f37e97e-e149-4456-88bc-94dacf35cbe8/volumes" Jan 31 04:42:25 crc kubenswrapper[4827]: I0131 04:42:25.019693 4827 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-842np"] Jan 31 04:42:25 crc kubenswrapper[4827]: E0131 04:42:25.020704 4827 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9f37e97e-e149-4456-88bc-94dacf35cbe8" containerName="extract-utilities" Jan 31 04:42:25 crc kubenswrapper[4827]: I0131 04:42:25.020721 4827 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="9f37e97e-e149-4456-88bc-94dacf35cbe8" containerName="extract-utilities" Jan 31 04:42:25 crc kubenswrapper[4827]: E0131 04:42:25.020745 4827 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9f37e97e-e149-4456-88bc-94dacf35cbe8" containerName="extract-content" Jan 31 04:42:25 crc kubenswrapper[4827]: I0131 04:42:25.020755 4827 state_mem.go:107] "Deleted CPUSet assignment" podUID="9f37e97e-e149-4456-88bc-94dacf35cbe8" containerName="extract-content" Jan 31 04:42:25 crc kubenswrapper[4827]: E0131 04:42:25.020770 4827 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9f37e97e-e149-4456-88bc-94dacf35cbe8" containerName="registry-server" Jan 31 04:42:25 crc kubenswrapper[4827]: I0131 04:42:25.020781 4827 state_mem.go:107] "Deleted CPUSet assignment" podUID="9f37e97e-e149-4456-88bc-94dacf35cbe8" containerName="registry-server" Jan 31 04:42:25 crc kubenswrapper[4827]: I0131 04:42:25.021072 4827 memory_manager.go:354] "RemoveStaleState removing state" podUID="9f37e97e-e149-4456-88bc-94dacf35cbe8" containerName="registry-server" Jan 31 04:42:25 crc kubenswrapper[4827]: I0131 04:42:25.022638 4827 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-842np" Jan 31 04:42:25 crc kubenswrapper[4827]: I0131 04:42:25.040177 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-842np"] Jan 31 04:42:25 crc kubenswrapper[4827]: I0131 04:42:25.064455 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6dafde82-f1b0-425b-8989-ff9c7df7de1a-utilities\") pod \"redhat-operators-842np\" (UID: \"6dafde82-f1b0-425b-8989-ff9c7df7de1a\") " pod="openshift-marketplace/redhat-operators-842np" Jan 31 04:42:25 crc kubenswrapper[4827]: I0131 04:42:25.064812 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qcxw7\" (UniqueName: \"kubernetes.io/projected/6dafde82-f1b0-425b-8989-ff9c7df7de1a-kube-api-access-qcxw7\") pod \"redhat-operators-842np\" (UID: \"6dafde82-f1b0-425b-8989-ff9c7df7de1a\") " pod="openshift-marketplace/redhat-operators-842np" Jan 31 04:42:25 crc kubenswrapper[4827]: I0131 04:42:25.069685 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6dafde82-f1b0-425b-8989-ff9c7df7de1a-catalog-content\") pod \"redhat-operators-842np\" (UID: \"6dafde82-f1b0-425b-8989-ff9c7df7de1a\") " pod="openshift-marketplace/redhat-operators-842np" Jan 31 04:42:25 crc kubenswrapper[4827]: I0131 04:42:25.197063 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qcxw7\" (UniqueName: \"kubernetes.io/projected/6dafde82-f1b0-425b-8989-ff9c7df7de1a-kube-api-access-qcxw7\") pod \"redhat-operators-842np\" (UID: \"6dafde82-f1b0-425b-8989-ff9c7df7de1a\") " pod="openshift-marketplace/redhat-operators-842np" Jan 31 04:42:25 crc kubenswrapper[4827]: I0131 04:42:25.197246 4827 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6dafde82-f1b0-425b-8989-ff9c7df7de1a-catalog-content\") pod \"redhat-operators-842np\" (UID: \"6dafde82-f1b0-425b-8989-ff9c7df7de1a\") " pod="openshift-marketplace/redhat-operators-842np" Jan 31 04:42:25 crc kubenswrapper[4827]: I0131 04:42:25.197451 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6dafde82-f1b0-425b-8989-ff9c7df7de1a-utilities\") pod \"redhat-operators-842np\" (UID: \"6dafde82-f1b0-425b-8989-ff9c7df7de1a\") " pod="openshift-marketplace/redhat-operators-842np" Jan 31 04:42:25 crc kubenswrapper[4827]: I0131 04:42:25.198206 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6dafde82-f1b0-425b-8989-ff9c7df7de1a-utilities\") pod \"redhat-operators-842np\" (UID: \"6dafde82-f1b0-425b-8989-ff9c7df7de1a\") " pod="openshift-marketplace/redhat-operators-842np" Jan 31 04:42:25 crc kubenswrapper[4827]: I0131 04:42:25.198741 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6dafde82-f1b0-425b-8989-ff9c7df7de1a-catalog-content\") pod \"redhat-operators-842np\" (UID: \"6dafde82-f1b0-425b-8989-ff9c7df7de1a\") " pod="openshift-marketplace/redhat-operators-842np" Jan 31 04:42:25 crc kubenswrapper[4827]: I0131 04:42:25.217273 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qcxw7\" (UniqueName: \"kubernetes.io/projected/6dafde82-f1b0-425b-8989-ff9c7df7de1a-kube-api-access-qcxw7\") pod \"redhat-operators-842np\" (UID: \"6dafde82-f1b0-425b-8989-ff9c7df7de1a\") " pod="openshift-marketplace/redhat-operators-842np" Jan 31 04:42:25 crc kubenswrapper[4827]: I0131 04:42:25.354301 4827 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-842np" Jan 31 04:42:25 crc kubenswrapper[4827]: I0131 04:42:25.843368 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-842np"] Jan 31 04:42:26 crc kubenswrapper[4827]: I0131 04:42:26.652089 4827 generic.go:334] "Generic (PLEG): container finished" podID="6dafde82-f1b0-425b-8989-ff9c7df7de1a" containerID="1305da3637841511d23031d179cb0c452f7191d61100f98bb85172e3d299aaf5" exitCode=0 Jan 31 04:42:26 crc kubenswrapper[4827]: I0131 04:42:26.652206 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-842np" event={"ID":"6dafde82-f1b0-425b-8989-ff9c7df7de1a","Type":"ContainerDied","Data":"1305da3637841511d23031d179cb0c452f7191d61100f98bb85172e3d299aaf5"} Jan 31 04:42:26 crc kubenswrapper[4827]: I0131 04:42:26.653235 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-842np" event={"ID":"6dafde82-f1b0-425b-8989-ff9c7df7de1a","Type":"ContainerStarted","Data":"68f389fae1ef82a4eba329814ecaed66ab7db4c27b216df558dc6fdf293b74a1"} Jan 31 04:42:26 crc kubenswrapper[4827]: I0131 04:42:26.655190 4827 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 31 04:42:27 crc kubenswrapper[4827]: I0131 04:42:27.662263 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-842np" event={"ID":"6dafde82-f1b0-425b-8989-ff9c7df7de1a","Type":"ContainerStarted","Data":"1251e48728c037a04a90247b938035a299bfb0405a06b7df40cf85ecbca5c0e6"} Jan 31 04:42:28 crc kubenswrapper[4827]: I0131 04:42:28.670539 4827 generic.go:334] "Generic (PLEG): container finished" podID="6dafde82-f1b0-425b-8989-ff9c7df7de1a" containerID="1251e48728c037a04a90247b938035a299bfb0405a06b7df40cf85ecbca5c0e6" exitCode=0 Jan 31 04:42:28 crc kubenswrapper[4827]: I0131 04:42:28.670600 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/redhat-operators-842np" event={"ID":"6dafde82-f1b0-425b-8989-ff9c7df7de1a","Type":"ContainerDied","Data":"1251e48728c037a04a90247b938035a299bfb0405a06b7df40cf85ecbca5c0e6"} Jan 31 04:42:29 crc kubenswrapper[4827]: I0131 04:42:29.682672 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-842np" event={"ID":"6dafde82-f1b0-425b-8989-ff9c7df7de1a","Type":"ContainerStarted","Data":"c266e1ceefa6c0d3738fcefec1016ded1a761652bc8bd5fc334e4290a77594fb"} Jan 31 04:42:29 crc kubenswrapper[4827]: I0131 04:42:29.700602 4827 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-842np" podStartSLOduration=2.925578636 podStartE2EDuration="5.700584343s" podCreationTimestamp="2026-01-31 04:42:24 +0000 UTC" firstStartedPulling="2026-01-31 04:42:26.654931526 +0000 UTC m=+3339.342011975" lastFinishedPulling="2026-01-31 04:42:29.429937233 +0000 UTC m=+3342.117017682" observedRunningTime="2026-01-31 04:42:29.698066851 +0000 UTC m=+3342.385147310" watchObservedRunningTime="2026-01-31 04:42:29.700584343 +0000 UTC m=+3342.387664792" Jan 31 04:42:35 crc kubenswrapper[4827]: I0131 04:42:35.354940 4827 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-842np" Jan 31 04:42:35 crc kubenswrapper[4827]: I0131 04:42:35.356916 4827 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-842np" Jan 31 04:42:36 crc kubenswrapper[4827]: I0131 04:42:36.415368 4827 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-842np" podUID="6dafde82-f1b0-425b-8989-ff9c7df7de1a" containerName="registry-server" probeResult="failure" output=< Jan 31 04:42:36 crc kubenswrapper[4827]: timeout: failed to connect service ":50051" within 1s Jan 31 04:42:36 crc kubenswrapper[4827]: > Jan 31 04:42:45 crc kubenswrapper[4827]: I0131 
04:42:45.414694 4827 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-842np" Jan 31 04:42:45 crc kubenswrapper[4827]: I0131 04:42:45.482054 4827 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-842np" Jan 31 04:42:45 crc kubenswrapper[4827]: I0131 04:42:45.668197 4827 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-842np"] Jan 31 04:42:46 crc kubenswrapper[4827]: I0131 04:42:46.836565 4827 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-842np" podUID="6dafde82-f1b0-425b-8989-ff9c7df7de1a" containerName="registry-server" containerID="cri-o://c266e1ceefa6c0d3738fcefec1016ded1a761652bc8bd5fc334e4290a77594fb" gracePeriod=2 Jan 31 04:42:47 crc kubenswrapper[4827]: I0131 04:42:47.426229 4827 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-842np" Jan 31 04:42:47 crc kubenswrapper[4827]: I0131 04:42:47.493525 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6dafde82-f1b0-425b-8989-ff9c7df7de1a-catalog-content\") pod \"6dafde82-f1b0-425b-8989-ff9c7df7de1a\" (UID: \"6dafde82-f1b0-425b-8989-ff9c7df7de1a\") " Jan 31 04:42:47 crc kubenswrapper[4827]: I0131 04:42:47.493589 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6dafde82-f1b0-425b-8989-ff9c7df7de1a-utilities\") pod \"6dafde82-f1b0-425b-8989-ff9c7df7de1a\" (UID: \"6dafde82-f1b0-425b-8989-ff9c7df7de1a\") " Jan 31 04:42:47 crc kubenswrapper[4827]: I0131 04:42:47.493777 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qcxw7\" (UniqueName: 
\"kubernetes.io/projected/6dafde82-f1b0-425b-8989-ff9c7df7de1a-kube-api-access-qcxw7\") pod \"6dafde82-f1b0-425b-8989-ff9c7df7de1a\" (UID: \"6dafde82-f1b0-425b-8989-ff9c7df7de1a\") " Jan 31 04:42:47 crc kubenswrapper[4827]: I0131 04:42:47.494876 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6dafde82-f1b0-425b-8989-ff9c7df7de1a-utilities" (OuterVolumeSpecName: "utilities") pod "6dafde82-f1b0-425b-8989-ff9c7df7de1a" (UID: "6dafde82-f1b0-425b-8989-ff9c7df7de1a"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 04:42:47 crc kubenswrapper[4827]: I0131 04:42:47.499148 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6dafde82-f1b0-425b-8989-ff9c7df7de1a-kube-api-access-qcxw7" (OuterVolumeSpecName: "kube-api-access-qcxw7") pod "6dafde82-f1b0-425b-8989-ff9c7df7de1a" (UID: "6dafde82-f1b0-425b-8989-ff9c7df7de1a"). InnerVolumeSpecName "kube-api-access-qcxw7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 04:42:47 crc kubenswrapper[4827]: I0131 04:42:47.596394 4827 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qcxw7\" (UniqueName: \"kubernetes.io/projected/6dafde82-f1b0-425b-8989-ff9c7df7de1a-kube-api-access-qcxw7\") on node \"crc\" DevicePath \"\"" Jan 31 04:42:47 crc kubenswrapper[4827]: I0131 04:42:47.596439 4827 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6dafde82-f1b0-425b-8989-ff9c7df7de1a-utilities\") on node \"crc\" DevicePath \"\"" Jan 31 04:42:47 crc kubenswrapper[4827]: I0131 04:42:47.613816 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6dafde82-f1b0-425b-8989-ff9c7df7de1a-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "6dafde82-f1b0-425b-8989-ff9c7df7de1a" (UID: "6dafde82-f1b0-425b-8989-ff9c7df7de1a"). 
InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 04:42:47 crc kubenswrapper[4827]: I0131 04:42:47.697869 4827 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6dafde82-f1b0-425b-8989-ff9c7df7de1a-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 31 04:42:47 crc kubenswrapper[4827]: I0131 04:42:47.849744 4827 generic.go:334] "Generic (PLEG): container finished" podID="6dafde82-f1b0-425b-8989-ff9c7df7de1a" containerID="c266e1ceefa6c0d3738fcefec1016ded1a761652bc8bd5fc334e4290a77594fb" exitCode=0 Jan 31 04:42:47 crc kubenswrapper[4827]: I0131 04:42:47.849814 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-842np" event={"ID":"6dafde82-f1b0-425b-8989-ff9c7df7de1a","Type":"ContainerDied","Data":"c266e1ceefa6c0d3738fcefec1016ded1a761652bc8bd5fc334e4290a77594fb"} Jan 31 04:42:47 crc kubenswrapper[4827]: I0131 04:42:47.849854 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-842np" event={"ID":"6dafde82-f1b0-425b-8989-ff9c7df7de1a","Type":"ContainerDied","Data":"68f389fae1ef82a4eba329814ecaed66ab7db4c27b216df558dc6fdf293b74a1"} Jan 31 04:42:47 crc kubenswrapper[4827]: I0131 04:42:47.849917 4827 scope.go:117] "RemoveContainer" containerID="c266e1ceefa6c0d3738fcefec1016ded1a761652bc8bd5fc334e4290a77594fb" Jan 31 04:42:47 crc kubenswrapper[4827]: I0131 04:42:47.850723 4827 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-842np" Jan 31 04:42:47 crc kubenswrapper[4827]: I0131 04:42:47.869031 4827 scope.go:117] "RemoveContainer" containerID="1251e48728c037a04a90247b938035a299bfb0405a06b7df40cf85ecbca5c0e6" Jan 31 04:42:47 crc kubenswrapper[4827]: I0131 04:42:47.891558 4827 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-842np"] Jan 31 04:42:47 crc kubenswrapper[4827]: I0131 04:42:47.900642 4827 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-842np"] Jan 31 04:42:47 crc kubenswrapper[4827]: I0131 04:42:47.913078 4827 scope.go:117] "RemoveContainer" containerID="1305da3637841511d23031d179cb0c452f7191d61100f98bb85172e3d299aaf5" Jan 31 04:42:47 crc kubenswrapper[4827]: I0131 04:42:47.942311 4827 scope.go:117] "RemoveContainer" containerID="c266e1ceefa6c0d3738fcefec1016ded1a761652bc8bd5fc334e4290a77594fb" Jan 31 04:42:47 crc kubenswrapper[4827]: E0131 04:42:47.943087 4827 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c266e1ceefa6c0d3738fcefec1016ded1a761652bc8bd5fc334e4290a77594fb\": container with ID starting with c266e1ceefa6c0d3738fcefec1016ded1a761652bc8bd5fc334e4290a77594fb not found: ID does not exist" containerID="c266e1ceefa6c0d3738fcefec1016ded1a761652bc8bd5fc334e4290a77594fb" Jan 31 04:42:47 crc kubenswrapper[4827]: I0131 04:42:47.943169 4827 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c266e1ceefa6c0d3738fcefec1016ded1a761652bc8bd5fc334e4290a77594fb"} err="failed to get container status \"c266e1ceefa6c0d3738fcefec1016ded1a761652bc8bd5fc334e4290a77594fb\": rpc error: code = NotFound desc = could not find container \"c266e1ceefa6c0d3738fcefec1016ded1a761652bc8bd5fc334e4290a77594fb\": container with ID starting with c266e1ceefa6c0d3738fcefec1016ded1a761652bc8bd5fc334e4290a77594fb not found: ID does 
not exist" Jan 31 04:42:47 crc kubenswrapper[4827]: I0131 04:42:47.943200 4827 scope.go:117] "RemoveContainer" containerID="1251e48728c037a04a90247b938035a299bfb0405a06b7df40cf85ecbca5c0e6" Jan 31 04:42:47 crc kubenswrapper[4827]: E0131 04:42:47.943687 4827 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1251e48728c037a04a90247b938035a299bfb0405a06b7df40cf85ecbca5c0e6\": container with ID starting with 1251e48728c037a04a90247b938035a299bfb0405a06b7df40cf85ecbca5c0e6 not found: ID does not exist" containerID="1251e48728c037a04a90247b938035a299bfb0405a06b7df40cf85ecbca5c0e6" Jan 31 04:42:47 crc kubenswrapper[4827]: I0131 04:42:47.943771 4827 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1251e48728c037a04a90247b938035a299bfb0405a06b7df40cf85ecbca5c0e6"} err="failed to get container status \"1251e48728c037a04a90247b938035a299bfb0405a06b7df40cf85ecbca5c0e6\": rpc error: code = NotFound desc = could not find container \"1251e48728c037a04a90247b938035a299bfb0405a06b7df40cf85ecbca5c0e6\": container with ID starting with 1251e48728c037a04a90247b938035a299bfb0405a06b7df40cf85ecbca5c0e6 not found: ID does not exist" Jan 31 04:42:47 crc kubenswrapper[4827]: I0131 04:42:47.943842 4827 scope.go:117] "RemoveContainer" containerID="1305da3637841511d23031d179cb0c452f7191d61100f98bb85172e3d299aaf5" Jan 31 04:42:47 crc kubenswrapper[4827]: E0131 04:42:47.944499 4827 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1305da3637841511d23031d179cb0c452f7191d61100f98bb85172e3d299aaf5\": container with ID starting with 1305da3637841511d23031d179cb0c452f7191d61100f98bb85172e3d299aaf5 not found: ID does not exist" containerID="1305da3637841511d23031d179cb0c452f7191d61100f98bb85172e3d299aaf5" Jan 31 04:42:47 crc kubenswrapper[4827]: I0131 04:42:47.944537 4827 pod_container_deletor.go:53] 
"DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1305da3637841511d23031d179cb0c452f7191d61100f98bb85172e3d299aaf5"} err="failed to get container status \"1305da3637841511d23031d179cb0c452f7191d61100f98bb85172e3d299aaf5\": rpc error: code = NotFound desc = could not find container \"1305da3637841511d23031d179cb0c452f7191d61100f98bb85172e3d299aaf5\": container with ID starting with 1305da3637841511d23031d179cb0c452f7191d61100f98bb85172e3d299aaf5 not found: ID does not exist" Jan 31 04:42:48 crc kubenswrapper[4827]: I0131 04:42:48.122452 4827 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6dafde82-f1b0-425b-8989-ff9c7df7de1a" path="/var/lib/kubelet/pods/6dafde82-f1b0-425b-8989-ff9c7df7de1a/volumes" Jan 31 04:43:02 crc kubenswrapper[4827]: I0131 04:43:02.618000 4827 scope.go:117] "RemoveContainer" containerID="054cb93dcee9e3aa3135bd5b78636260d1cc82953b362a08e9980573a335bbfe" Jan 31 04:43:02 crc kubenswrapper[4827]: I0131 04:43:02.815721 4827 scope.go:117] "RemoveContainer" containerID="982885989dfa2eabcef1f83bdcefbb60ad897f3b61c171664f5faf3d56eb18c4" Jan 31 04:43:17 crc kubenswrapper[4827]: I0131 04:43:17.371671 4827 patch_prober.go:28] interesting pod/machine-config-daemon-jxh94 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 31 04:43:17 crc kubenswrapper[4827]: I0131 04:43:17.372375 4827 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-jxh94" podUID="e63dbb73-e1a2-4796-83c5-2a88e55566b5" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 31 04:43:47 crc kubenswrapper[4827]: I0131 04:43:47.371406 4827 patch_prober.go:28] interesting pod/machine-config-daemon-jxh94 
container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 31 04:43:47 crc kubenswrapper[4827]: I0131 04:43:47.372194 4827 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-jxh94" podUID="e63dbb73-e1a2-4796-83c5-2a88e55566b5" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 31 04:44:02 crc kubenswrapper[4827]: I0131 04:44:02.884165 4827 scope.go:117] "RemoveContainer" containerID="43f9dbf506ce0895c32ae489bc2253e84bc9b42d1a9b5b0bf86a3bce147b8c42" Jan 31 04:44:02 crc kubenswrapper[4827]: I0131 04:44:02.909950 4827 scope.go:117] "RemoveContainer" containerID="152846634bc6b97beb4860a571669f482f009ee638e76a9aaff3198ffa7f149b" Jan 31 04:44:02 crc kubenswrapper[4827]: I0131 04:44:02.936422 4827 scope.go:117] "RemoveContainer" containerID="bc15967c2856acfcaa4651e8def340a34fa9f255ab2fcf5c7886444cd6f55f7e" Jan 31 04:44:02 crc kubenswrapper[4827]: I0131 04:44:02.963642 4827 scope.go:117] "RemoveContainer" containerID="27dc8c9f8821a643091ac59a85e9ff5489a69aadd17ea1b10d9678eac4780ed6" Jan 31 04:44:17 crc kubenswrapper[4827]: I0131 04:44:17.371174 4827 patch_prober.go:28] interesting pod/machine-config-daemon-jxh94 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 31 04:44:17 crc kubenswrapper[4827]: I0131 04:44:17.371698 4827 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-jxh94" podUID="e63dbb73-e1a2-4796-83c5-2a88e55566b5" containerName="machine-config-daemon" probeResult="failure" output="Get 
\"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 31 04:44:17 crc kubenswrapper[4827]: I0131 04:44:17.371748 4827 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-jxh94" Jan 31 04:44:17 crc kubenswrapper[4827]: I0131 04:44:17.372435 4827 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"7954efcbb2345b683530b21570dd478327d15f837551cf0700c83e0134e75fef"} pod="openshift-machine-config-operator/machine-config-daemon-jxh94" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 31 04:44:17 crc kubenswrapper[4827]: I0131 04:44:17.372486 4827 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-jxh94" podUID="e63dbb73-e1a2-4796-83c5-2a88e55566b5" containerName="machine-config-daemon" containerID="cri-o://7954efcbb2345b683530b21570dd478327d15f837551cf0700c83e0134e75fef" gracePeriod=600 Jan 31 04:44:17 crc kubenswrapper[4827]: I0131 04:44:17.866490 4827 generic.go:334] "Generic (PLEG): container finished" podID="e63dbb73-e1a2-4796-83c5-2a88e55566b5" containerID="7954efcbb2345b683530b21570dd478327d15f837551cf0700c83e0134e75fef" exitCode=0 Jan 31 04:44:17 crc kubenswrapper[4827]: I0131 04:44:17.866567 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-jxh94" event={"ID":"e63dbb73-e1a2-4796-83c5-2a88e55566b5","Type":"ContainerDied","Data":"7954efcbb2345b683530b21570dd478327d15f837551cf0700c83e0134e75fef"} Jan 31 04:44:17 crc kubenswrapper[4827]: I0131 04:44:17.866935 4827 scope.go:117] "RemoveContainer" containerID="b7f195383b718c3f3a1651e752e04a9c322b656ce5c7aba0c9ad40630165c57d" Jan 31 04:44:18 crc kubenswrapper[4827]: I0131 04:44:18.877269 4827 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="openshift-machine-config-operator/machine-config-daemon-jxh94" event={"ID":"e63dbb73-e1a2-4796-83c5-2a88e55566b5","Type":"ContainerStarted","Data":"674d923fc7530a2623e23c68a51611305fa42248d203b4bf9ab96e38b4379909"} Jan 31 04:45:00 crc kubenswrapper[4827]: I0131 04:45:00.154616 4827 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29497245-wjmpg"] Jan 31 04:45:00 crc kubenswrapper[4827]: E0131 04:45:00.155832 4827 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6dafde82-f1b0-425b-8989-ff9c7df7de1a" containerName="extract-utilities" Jan 31 04:45:00 crc kubenswrapper[4827]: I0131 04:45:00.155856 4827 state_mem.go:107] "Deleted CPUSet assignment" podUID="6dafde82-f1b0-425b-8989-ff9c7df7de1a" containerName="extract-utilities" Jan 31 04:45:00 crc kubenswrapper[4827]: E0131 04:45:00.155903 4827 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6dafde82-f1b0-425b-8989-ff9c7df7de1a" containerName="extract-content" Jan 31 04:45:00 crc kubenswrapper[4827]: I0131 04:45:00.155912 4827 state_mem.go:107] "Deleted CPUSet assignment" podUID="6dafde82-f1b0-425b-8989-ff9c7df7de1a" containerName="extract-content" Jan 31 04:45:00 crc kubenswrapper[4827]: E0131 04:45:00.155923 4827 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6dafde82-f1b0-425b-8989-ff9c7df7de1a" containerName="registry-server" Jan 31 04:45:00 crc kubenswrapper[4827]: I0131 04:45:00.155932 4827 state_mem.go:107] "Deleted CPUSet assignment" podUID="6dafde82-f1b0-425b-8989-ff9c7df7de1a" containerName="registry-server" Jan 31 04:45:00 crc kubenswrapper[4827]: I0131 04:45:00.156169 4827 memory_manager.go:354] "RemoveStaleState removing state" podUID="6dafde82-f1b0-425b-8989-ff9c7df7de1a" containerName="registry-server" Jan 31 04:45:00 crc kubenswrapper[4827]: I0131 04:45:00.157008 4827 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29497245-wjmpg" Jan 31 04:45:00 crc kubenswrapper[4827]: I0131 04:45:00.159099 4827 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Jan 31 04:45:00 crc kubenswrapper[4827]: I0131 04:45:00.159229 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Jan 31 04:45:00 crc kubenswrapper[4827]: I0131 04:45:00.165472 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29497245-wjmpg"] Jan 31 04:45:00 crc kubenswrapper[4827]: I0131 04:45:00.342634 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-26wm6\" (UniqueName: \"kubernetes.io/projected/5348c7da-0d96-47ed-95e5-d6badfa59c46-kube-api-access-26wm6\") pod \"collect-profiles-29497245-wjmpg\" (UID: \"5348c7da-0d96-47ed-95e5-d6badfa59c46\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29497245-wjmpg" Jan 31 04:45:00 crc kubenswrapper[4827]: I0131 04:45:00.343423 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/5348c7da-0d96-47ed-95e5-d6badfa59c46-secret-volume\") pod \"collect-profiles-29497245-wjmpg\" (UID: \"5348c7da-0d96-47ed-95e5-d6badfa59c46\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29497245-wjmpg" Jan 31 04:45:00 crc kubenswrapper[4827]: I0131 04:45:00.343555 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/5348c7da-0d96-47ed-95e5-d6badfa59c46-config-volume\") pod \"collect-profiles-29497245-wjmpg\" (UID: \"5348c7da-0d96-47ed-95e5-d6badfa59c46\") " 
pod="openshift-operator-lifecycle-manager/collect-profiles-29497245-wjmpg" Jan 31 04:45:00 crc kubenswrapper[4827]: I0131 04:45:00.445603 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-26wm6\" (UniqueName: \"kubernetes.io/projected/5348c7da-0d96-47ed-95e5-d6badfa59c46-kube-api-access-26wm6\") pod \"collect-profiles-29497245-wjmpg\" (UID: \"5348c7da-0d96-47ed-95e5-d6badfa59c46\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29497245-wjmpg" Jan 31 04:45:00 crc kubenswrapper[4827]: I0131 04:45:00.445657 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/5348c7da-0d96-47ed-95e5-d6badfa59c46-secret-volume\") pod \"collect-profiles-29497245-wjmpg\" (UID: \"5348c7da-0d96-47ed-95e5-d6badfa59c46\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29497245-wjmpg" Jan 31 04:45:00 crc kubenswrapper[4827]: I0131 04:45:00.445738 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/5348c7da-0d96-47ed-95e5-d6badfa59c46-config-volume\") pod \"collect-profiles-29497245-wjmpg\" (UID: \"5348c7da-0d96-47ed-95e5-d6badfa59c46\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29497245-wjmpg" Jan 31 04:45:00 crc kubenswrapper[4827]: I0131 04:45:00.446658 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/5348c7da-0d96-47ed-95e5-d6badfa59c46-config-volume\") pod \"collect-profiles-29497245-wjmpg\" (UID: \"5348c7da-0d96-47ed-95e5-d6badfa59c46\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29497245-wjmpg" Jan 31 04:45:00 crc kubenswrapper[4827]: I0131 04:45:00.457753 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: 
\"kubernetes.io/secret/5348c7da-0d96-47ed-95e5-d6badfa59c46-secret-volume\") pod \"collect-profiles-29497245-wjmpg\" (UID: \"5348c7da-0d96-47ed-95e5-d6badfa59c46\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29497245-wjmpg" Jan 31 04:45:00 crc kubenswrapper[4827]: I0131 04:45:00.465048 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-26wm6\" (UniqueName: \"kubernetes.io/projected/5348c7da-0d96-47ed-95e5-d6badfa59c46-kube-api-access-26wm6\") pod \"collect-profiles-29497245-wjmpg\" (UID: \"5348c7da-0d96-47ed-95e5-d6badfa59c46\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29497245-wjmpg" Jan 31 04:45:00 crc kubenswrapper[4827]: I0131 04:45:00.478259 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29497245-wjmpg" Jan 31 04:45:00 crc kubenswrapper[4827]: I0131 04:45:00.963086 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29497245-wjmpg"] Jan 31 04:45:00 crc kubenswrapper[4827]: W0131 04:45:00.970126 4827 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5348c7da_0d96_47ed_95e5_d6badfa59c46.slice/crio-943cc7fc1b8d1cc15930a37af10caf0f2d493fdfbebaa7150b2122399f8cb094 WatchSource:0}: Error finding container 943cc7fc1b8d1cc15930a37af10caf0f2d493fdfbebaa7150b2122399f8cb094: Status 404 returned error can't find the container with id 943cc7fc1b8d1cc15930a37af10caf0f2d493fdfbebaa7150b2122399f8cb094 Jan 31 04:45:01 crc kubenswrapper[4827]: I0131 04:45:01.303787 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29497245-wjmpg" event={"ID":"5348c7da-0d96-47ed-95e5-d6badfa59c46","Type":"ContainerStarted","Data":"0474c0b132001c1ad9d9b93ce109cdfa5bdfed3629d17a82def7fbc3f05932d3"} Jan 31 04:45:01 crc 
kubenswrapper[4827]: I0131 04:45:01.304391 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29497245-wjmpg" event={"ID":"5348c7da-0d96-47ed-95e5-d6badfa59c46","Type":"ContainerStarted","Data":"943cc7fc1b8d1cc15930a37af10caf0f2d493fdfbebaa7150b2122399f8cb094"} Jan 31 04:45:01 crc kubenswrapper[4827]: I0131 04:45:01.336063 4827 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29497245-wjmpg" podStartSLOduration=1.336041073 podStartE2EDuration="1.336041073s" podCreationTimestamp="2026-01-31 04:45:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 04:45:01.323319677 +0000 UTC m=+3494.010400136" watchObservedRunningTime="2026-01-31 04:45:01.336041073 +0000 UTC m=+3494.023121522" Jan 31 04:45:02 crc kubenswrapper[4827]: I0131 04:45:02.314135 4827 generic.go:334] "Generic (PLEG): container finished" podID="5348c7da-0d96-47ed-95e5-d6badfa59c46" containerID="0474c0b132001c1ad9d9b93ce109cdfa5bdfed3629d17a82def7fbc3f05932d3" exitCode=0 Jan 31 04:45:02 crc kubenswrapper[4827]: I0131 04:45:02.314181 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29497245-wjmpg" event={"ID":"5348c7da-0d96-47ed-95e5-d6badfa59c46","Type":"ContainerDied","Data":"0474c0b132001c1ad9d9b93ce109cdfa5bdfed3629d17a82def7fbc3f05932d3"} Jan 31 04:45:03 crc kubenswrapper[4827]: I0131 04:45:03.808655 4827 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29497245-wjmpg" Jan 31 04:45:03 crc kubenswrapper[4827]: I0131 04:45:03.915568 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-26wm6\" (UniqueName: \"kubernetes.io/projected/5348c7da-0d96-47ed-95e5-d6badfa59c46-kube-api-access-26wm6\") pod \"5348c7da-0d96-47ed-95e5-d6badfa59c46\" (UID: \"5348c7da-0d96-47ed-95e5-d6badfa59c46\") " Jan 31 04:45:03 crc kubenswrapper[4827]: I0131 04:45:03.915697 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/5348c7da-0d96-47ed-95e5-d6badfa59c46-secret-volume\") pod \"5348c7da-0d96-47ed-95e5-d6badfa59c46\" (UID: \"5348c7da-0d96-47ed-95e5-d6badfa59c46\") " Jan 31 04:45:03 crc kubenswrapper[4827]: I0131 04:45:03.915838 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/5348c7da-0d96-47ed-95e5-d6badfa59c46-config-volume\") pod \"5348c7da-0d96-47ed-95e5-d6badfa59c46\" (UID: \"5348c7da-0d96-47ed-95e5-d6badfa59c46\") " Jan 31 04:45:03 crc kubenswrapper[4827]: I0131 04:45:03.916833 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5348c7da-0d96-47ed-95e5-d6badfa59c46-config-volume" (OuterVolumeSpecName: "config-volume") pod "5348c7da-0d96-47ed-95e5-d6badfa59c46" (UID: "5348c7da-0d96-47ed-95e5-d6badfa59c46"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 04:45:03 crc kubenswrapper[4827]: I0131 04:45:03.924034 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5348c7da-0d96-47ed-95e5-d6badfa59c46-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "5348c7da-0d96-47ed-95e5-d6badfa59c46" (UID: "5348c7da-0d96-47ed-95e5-d6badfa59c46"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 04:45:03 crc kubenswrapper[4827]: I0131 04:45:03.924051 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5348c7da-0d96-47ed-95e5-d6badfa59c46-kube-api-access-26wm6" (OuterVolumeSpecName: "kube-api-access-26wm6") pod "5348c7da-0d96-47ed-95e5-d6badfa59c46" (UID: "5348c7da-0d96-47ed-95e5-d6badfa59c46"). InnerVolumeSpecName "kube-api-access-26wm6". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 04:45:04 crc kubenswrapper[4827]: I0131 04:45:04.018640 4827 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/5348c7da-0d96-47ed-95e5-d6badfa59c46-config-volume\") on node \"crc\" DevicePath \"\"" Jan 31 04:45:04 crc kubenswrapper[4827]: I0131 04:45:04.019186 4827 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-26wm6\" (UniqueName: \"kubernetes.io/projected/5348c7da-0d96-47ed-95e5-d6badfa59c46-kube-api-access-26wm6\") on node \"crc\" DevicePath \"\"" Jan 31 04:45:04 crc kubenswrapper[4827]: I0131 04:45:04.019206 4827 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/5348c7da-0d96-47ed-95e5-d6badfa59c46-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 31 04:45:04 crc kubenswrapper[4827]: I0131 04:45:04.335534 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29497245-wjmpg" event={"ID":"5348c7da-0d96-47ed-95e5-d6badfa59c46","Type":"ContainerDied","Data":"943cc7fc1b8d1cc15930a37af10caf0f2d493fdfbebaa7150b2122399f8cb094"} Jan 31 04:45:04 crc kubenswrapper[4827]: I0131 04:45:04.335577 4827 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="943cc7fc1b8d1cc15930a37af10caf0f2d493fdfbebaa7150b2122399f8cb094" Jan 31 04:45:04 crc kubenswrapper[4827]: I0131 04:45:04.335618 4827 util.go:48] "No ready sandbox for pod can be 
found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29497245-wjmpg" Jan 31 04:45:04 crc kubenswrapper[4827]: I0131 04:45:04.402996 4827 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29497200-khhqw"] Jan 31 04:45:04 crc kubenswrapper[4827]: I0131 04:45:04.424852 4827 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29497200-khhqw"] Jan 31 04:45:06 crc kubenswrapper[4827]: I0131 04:45:06.129055 4827 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a5e6d27b-159b-4fb1-98d7-da2ae62fe95b" path="/var/lib/kubelet/pods/a5e6d27b-159b-4fb1-98d7-da2ae62fe95b/volumes" Jan 31 04:45:23 crc kubenswrapper[4827]: I0131 04:45:23.221609 4827 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-hf6kf"] Jan 31 04:45:23 crc kubenswrapper[4827]: E0131 04:45:23.222546 4827 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5348c7da-0d96-47ed-95e5-d6badfa59c46" containerName="collect-profiles" Jan 31 04:45:23 crc kubenswrapper[4827]: I0131 04:45:23.222558 4827 state_mem.go:107] "Deleted CPUSet assignment" podUID="5348c7da-0d96-47ed-95e5-d6badfa59c46" containerName="collect-profiles" Jan 31 04:45:23 crc kubenswrapper[4827]: I0131 04:45:23.222766 4827 memory_manager.go:354] "RemoveStaleState removing state" podUID="5348c7da-0d96-47ed-95e5-d6badfa59c46" containerName="collect-profiles" Jan 31 04:45:23 crc kubenswrapper[4827]: I0131 04:45:23.224301 4827 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-hf6kf" Jan 31 04:45:23 crc kubenswrapper[4827]: I0131 04:45:23.287455 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-hf6kf"] Jan 31 04:45:23 crc kubenswrapper[4827]: I0131 04:45:23.399893 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vdxls\" (UniqueName: \"kubernetes.io/projected/b54ea009-caad-4013-becb-1ac3e39044e0-kube-api-access-vdxls\") pod \"redhat-marketplace-hf6kf\" (UID: \"b54ea009-caad-4013-becb-1ac3e39044e0\") " pod="openshift-marketplace/redhat-marketplace-hf6kf" Jan 31 04:45:23 crc kubenswrapper[4827]: I0131 04:45:23.400168 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b54ea009-caad-4013-becb-1ac3e39044e0-catalog-content\") pod \"redhat-marketplace-hf6kf\" (UID: \"b54ea009-caad-4013-becb-1ac3e39044e0\") " pod="openshift-marketplace/redhat-marketplace-hf6kf" Jan 31 04:45:23 crc kubenswrapper[4827]: I0131 04:45:23.400231 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b54ea009-caad-4013-becb-1ac3e39044e0-utilities\") pod \"redhat-marketplace-hf6kf\" (UID: \"b54ea009-caad-4013-becb-1ac3e39044e0\") " pod="openshift-marketplace/redhat-marketplace-hf6kf" Jan 31 04:45:23 crc kubenswrapper[4827]: I0131 04:45:23.501692 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b54ea009-caad-4013-becb-1ac3e39044e0-catalog-content\") pod \"redhat-marketplace-hf6kf\" (UID: \"b54ea009-caad-4013-becb-1ac3e39044e0\") " pod="openshift-marketplace/redhat-marketplace-hf6kf" Jan 31 04:45:23 crc kubenswrapper[4827]: I0131 04:45:23.501750 4827 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b54ea009-caad-4013-becb-1ac3e39044e0-utilities\") pod \"redhat-marketplace-hf6kf\" (UID: \"b54ea009-caad-4013-becb-1ac3e39044e0\") " pod="openshift-marketplace/redhat-marketplace-hf6kf" Jan 31 04:45:23 crc kubenswrapper[4827]: I0131 04:45:23.501793 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vdxls\" (UniqueName: \"kubernetes.io/projected/b54ea009-caad-4013-becb-1ac3e39044e0-kube-api-access-vdxls\") pod \"redhat-marketplace-hf6kf\" (UID: \"b54ea009-caad-4013-becb-1ac3e39044e0\") " pod="openshift-marketplace/redhat-marketplace-hf6kf" Jan 31 04:45:23 crc kubenswrapper[4827]: I0131 04:45:23.502237 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b54ea009-caad-4013-becb-1ac3e39044e0-catalog-content\") pod \"redhat-marketplace-hf6kf\" (UID: \"b54ea009-caad-4013-becb-1ac3e39044e0\") " pod="openshift-marketplace/redhat-marketplace-hf6kf" Jan 31 04:45:23 crc kubenswrapper[4827]: I0131 04:45:23.502348 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b54ea009-caad-4013-becb-1ac3e39044e0-utilities\") pod \"redhat-marketplace-hf6kf\" (UID: \"b54ea009-caad-4013-becb-1ac3e39044e0\") " pod="openshift-marketplace/redhat-marketplace-hf6kf" Jan 31 04:45:23 crc kubenswrapper[4827]: I0131 04:45:23.523238 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vdxls\" (UniqueName: \"kubernetes.io/projected/b54ea009-caad-4013-becb-1ac3e39044e0-kube-api-access-vdxls\") pod \"redhat-marketplace-hf6kf\" (UID: \"b54ea009-caad-4013-becb-1ac3e39044e0\") " pod="openshift-marketplace/redhat-marketplace-hf6kf" Jan 31 04:45:23 crc kubenswrapper[4827]: I0131 04:45:23.546267 4827 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-hf6kf" Jan 31 04:45:24 crc kubenswrapper[4827]: I0131 04:45:24.012514 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-hf6kf"] Jan 31 04:45:24 crc kubenswrapper[4827]: W0131 04:45:24.025292 4827 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb54ea009_caad_4013_becb_1ac3e39044e0.slice/crio-6fc8c813ee8846d20b6cd41d250167f3ba08258e17b33fedbe7b6cd232315f75 WatchSource:0}: Error finding container 6fc8c813ee8846d20b6cd41d250167f3ba08258e17b33fedbe7b6cd232315f75: Status 404 returned error can't find the container with id 6fc8c813ee8846d20b6cd41d250167f3ba08258e17b33fedbe7b6cd232315f75 Jan 31 04:45:24 crc kubenswrapper[4827]: I0131 04:45:24.537137 4827 generic.go:334] "Generic (PLEG): container finished" podID="b54ea009-caad-4013-becb-1ac3e39044e0" containerID="44a25057fdf969828b6c665cf33c2bb656ef502adf683543db40f51a9de90675" exitCode=0 Jan 31 04:45:24 crc kubenswrapper[4827]: I0131 04:45:24.537234 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-hf6kf" event={"ID":"b54ea009-caad-4013-becb-1ac3e39044e0","Type":"ContainerDied","Data":"44a25057fdf969828b6c665cf33c2bb656ef502adf683543db40f51a9de90675"} Jan 31 04:45:24 crc kubenswrapper[4827]: I0131 04:45:24.537532 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-hf6kf" event={"ID":"b54ea009-caad-4013-becb-1ac3e39044e0","Type":"ContainerStarted","Data":"6fc8c813ee8846d20b6cd41d250167f3ba08258e17b33fedbe7b6cd232315f75"} Jan 31 04:45:25 crc kubenswrapper[4827]: I0131 04:45:25.547047 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-hf6kf" 
event={"ID":"b54ea009-caad-4013-becb-1ac3e39044e0","Type":"ContainerStarted","Data":"a5df495042e85b3f05f34c2a40039499b80a36a47bd0e6add52c0e7684e8b077"} Jan 31 04:45:26 crc kubenswrapper[4827]: I0131 04:45:26.559368 4827 generic.go:334] "Generic (PLEG): container finished" podID="b54ea009-caad-4013-becb-1ac3e39044e0" containerID="a5df495042e85b3f05f34c2a40039499b80a36a47bd0e6add52c0e7684e8b077" exitCode=0 Jan 31 04:45:26 crc kubenswrapper[4827]: I0131 04:45:26.559497 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-hf6kf" event={"ID":"b54ea009-caad-4013-becb-1ac3e39044e0","Type":"ContainerDied","Data":"a5df495042e85b3f05f34c2a40039499b80a36a47bd0e6add52c0e7684e8b077"} Jan 31 04:45:28 crc kubenswrapper[4827]: I0131 04:45:28.580541 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-hf6kf" event={"ID":"b54ea009-caad-4013-becb-1ac3e39044e0","Type":"ContainerStarted","Data":"87655177a5683e00b2fcbfc1c8fcf07769638b0ae1d3c955405689fbbfacd44a"} Jan 31 04:45:28 crc kubenswrapper[4827]: I0131 04:45:28.604698 4827 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-hf6kf" podStartSLOduration=3.062006244 podStartE2EDuration="5.604682888s" podCreationTimestamp="2026-01-31 04:45:23 +0000 UTC" firstStartedPulling="2026-01-31 04:45:24.540152248 +0000 UTC m=+3517.227232697" lastFinishedPulling="2026-01-31 04:45:27.082828892 +0000 UTC m=+3519.769909341" observedRunningTime="2026-01-31 04:45:28.59798022 +0000 UTC m=+3521.285060669" watchObservedRunningTime="2026-01-31 04:45:28.604682888 +0000 UTC m=+3521.291763337" Jan 31 04:45:33 crc kubenswrapper[4827]: I0131 04:45:33.546707 4827 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-hf6kf" Jan 31 04:45:33 crc kubenswrapper[4827]: I0131 04:45:33.547410 4827 kubelet.go:2542] "SyncLoop (probe)" probe="startup" 
status="unhealthy" pod="openshift-marketplace/redhat-marketplace-hf6kf" Jan 31 04:45:33 crc kubenswrapper[4827]: I0131 04:45:33.617777 4827 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-hf6kf" Jan 31 04:45:33 crc kubenswrapper[4827]: I0131 04:45:33.703099 4827 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-hf6kf" Jan 31 04:45:33 crc kubenswrapper[4827]: I0131 04:45:33.868545 4827 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-hf6kf"] Jan 31 04:45:35 crc kubenswrapper[4827]: I0131 04:45:35.654406 4827 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-hf6kf" podUID="b54ea009-caad-4013-becb-1ac3e39044e0" containerName="registry-server" containerID="cri-o://87655177a5683e00b2fcbfc1c8fcf07769638b0ae1d3c955405689fbbfacd44a" gracePeriod=2 Jan 31 04:45:36 crc kubenswrapper[4827]: I0131 04:45:36.291511 4827 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-hf6kf" Jan 31 04:45:36 crc kubenswrapper[4827]: I0131 04:45:36.376515 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vdxls\" (UniqueName: \"kubernetes.io/projected/b54ea009-caad-4013-becb-1ac3e39044e0-kube-api-access-vdxls\") pod \"b54ea009-caad-4013-becb-1ac3e39044e0\" (UID: \"b54ea009-caad-4013-becb-1ac3e39044e0\") " Jan 31 04:45:36 crc kubenswrapper[4827]: I0131 04:45:36.376592 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b54ea009-caad-4013-becb-1ac3e39044e0-catalog-content\") pod \"b54ea009-caad-4013-becb-1ac3e39044e0\" (UID: \"b54ea009-caad-4013-becb-1ac3e39044e0\") " Jan 31 04:45:36 crc kubenswrapper[4827]: I0131 04:45:36.376703 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b54ea009-caad-4013-becb-1ac3e39044e0-utilities\") pod \"b54ea009-caad-4013-becb-1ac3e39044e0\" (UID: \"b54ea009-caad-4013-becb-1ac3e39044e0\") " Jan 31 04:45:36 crc kubenswrapper[4827]: I0131 04:45:36.377852 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b54ea009-caad-4013-becb-1ac3e39044e0-utilities" (OuterVolumeSpecName: "utilities") pod "b54ea009-caad-4013-becb-1ac3e39044e0" (UID: "b54ea009-caad-4013-becb-1ac3e39044e0"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 04:45:36 crc kubenswrapper[4827]: I0131 04:45:36.388285 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b54ea009-caad-4013-becb-1ac3e39044e0-kube-api-access-vdxls" (OuterVolumeSpecName: "kube-api-access-vdxls") pod "b54ea009-caad-4013-becb-1ac3e39044e0" (UID: "b54ea009-caad-4013-becb-1ac3e39044e0"). InnerVolumeSpecName "kube-api-access-vdxls". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 04:45:36 crc kubenswrapper[4827]: I0131 04:45:36.403328 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b54ea009-caad-4013-becb-1ac3e39044e0-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b54ea009-caad-4013-becb-1ac3e39044e0" (UID: "b54ea009-caad-4013-becb-1ac3e39044e0"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 04:45:36 crc kubenswrapper[4827]: I0131 04:45:36.479052 4827 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vdxls\" (UniqueName: \"kubernetes.io/projected/b54ea009-caad-4013-becb-1ac3e39044e0-kube-api-access-vdxls\") on node \"crc\" DevicePath \"\"" Jan 31 04:45:36 crc kubenswrapper[4827]: I0131 04:45:36.479094 4827 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b54ea009-caad-4013-becb-1ac3e39044e0-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 31 04:45:36 crc kubenswrapper[4827]: I0131 04:45:36.479107 4827 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b54ea009-caad-4013-becb-1ac3e39044e0-utilities\") on node \"crc\" DevicePath \"\"" Jan 31 04:45:36 crc kubenswrapper[4827]: I0131 04:45:36.663258 4827 generic.go:334] "Generic (PLEG): container finished" podID="b54ea009-caad-4013-becb-1ac3e39044e0" containerID="87655177a5683e00b2fcbfc1c8fcf07769638b0ae1d3c955405689fbbfacd44a" exitCode=0 Jan 31 04:45:36 crc kubenswrapper[4827]: I0131 04:45:36.663297 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-hf6kf" event={"ID":"b54ea009-caad-4013-becb-1ac3e39044e0","Type":"ContainerDied","Data":"87655177a5683e00b2fcbfc1c8fcf07769638b0ae1d3c955405689fbbfacd44a"} Jan 31 04:45:36 crc kubenswrapper[4827]: I0131 04:45:36.663316 4827 util.go:48] "No ready sandbox for pod can be 
found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-hf6kf" Jan 31 04:45:36 crc kubenswrapper[4827]: I0131 04:45:36.663330 4827 scope.go:117] "RemoveContainer" containerID="87655177a5683e00b2fcbfc1c8fcf07769638b0ae1d3c955405689fbbfacd44a" Jan 31 04:45:36 crc kubenswrapper[4827]: I0131 04:45:36.663320 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-hf6kf" event={"ID":"b54ea009-caad-4013-becb-1ac3e39044e0","Type":"ContainerDied","Data":"6fc8c813ee8846d20b6cd41d250167f3ba08258e17b33fedbe7b6cd232315f75"} Jan 31 04:45:36 crc kubenswrapper[4827]: I0131 04:45:36.706232 4827 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-hf6kf"] Jan 31 04:45:36 crc kubenswrapper[4827]: I0131 04:45:36.711036 4827 scope.go:117] "RemoveContainer" containerID="a5df495042e85b3f05f34c2a40039499b80a36a47bd0e6add52c0e7684e8b077" Jan 31 04:45:36 crc kubenswrapper[4827]: I0131 04:45:36.713246 4827 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-hf6kf"] Jan 31 04:45:36 crc kubenswrapper[4827]: I0131 04:45:36.729128 4827 scope.go:117] "RemoveContainer" containerID="44a25057fdf969828b6c665cf33c2bb656ef502adf683543db40f51a9de90675" Jan 31 04:45:36 crc kubenswrapper[4827]: I0131 04:45:36.773386 4827 scope.go:117] "RemoveContainer" containerID="87655177a5683e00b2fcbfc1c8fcf07769638b0ae1d3c955405689fbbfacd44a" Jan 31 04:45:36 crc kubenswrapper[4827]: E0131 04:45:36.773774 4827 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"87655177a5683e00b2fcbfc1c8fcf07769638b0ae1d3c955405689fbbfacd44a\": container with ID starting with 87655177a5683e00b2fcbfc1c8fcf07769638b0ae1d3c955405689fbbfacd44a not found: ID does not exist" containerID="87655177a5683e00b2fcbfc1c8fcf07769638b0ae1d3c955405689fbbfacd44a" Jan 31 04:45:36 crc kubenswrapper[4827]: I0131 04:45:36.773816 4827 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"87655177a5683e00b2fcbfc1c8fcf07769638b0ae1d3c955405689fbbfacd44a"} err="failed to get container status \"87655177a5683e00b2fcbfc1c8fcf07769638b0ae1d3c955405689fbbfacd44a\": rpc error: code = NotFound desc = could not find container \"87655177a5683e00b2fcbfc1c8fcf07769638b0ae1d3c955405689fbbfacd44a\": container with ID starting with 87655177a5683e00b2fcbfc1c8fcf07769638b0ae1d3c955405689fbbfacd44a not found: ID does not exist" Jan 31 04:45:36 crc kubenswrapper[4827]: I0131 04:45:36.773842 4827 scope.go:117] "RemoveContainer" containerID="a5df495042e85b3f05f34c2a40039499b80a36a47bd0e6add52c0e7684e8b077" Jan 31 04:45:36 crc kubenswrapper[4827]: E0131 04:45:36.774063 4827 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a5df495042e85b3f05f34c2a40039499b80a36a47bd0e6add52c0e7684e8b077\": container with ID starting with a5df495042e85b3f05f34c2a40039499b80a36a47bd0e6add52c0e7684e8b077 not found: ID does not exist" containerID="a5df495042e85b3f05f34c2a40039499b80a36a47bd0e6add52c0e7684e8b077" Jan 31 04:45:36 crc kubenswrapper[4827]: I0131 04:45:36.774085 4827 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a5df495042e85b3f05f34c2a40039499b80a36a47bd0e6add52c0e7684e8b077"} err="failed to get container status \"a5df495042e85b3f05f34c2a40039499b80a36a47bd0e6add52c0e7684e8b077\": rpc error: code = NotFound desc = could not find container \"a5df495042e85b3f05f34c2a40039499b80a36a47bd0e6add52c0e7684e8b077\": container with ID starting with a5df495042e85b3f05f34c2a40039499b80a36a47bd0e6add52c0e7684e8b077 not found: ID does not exist" Jan 31 04:45:36 crc kubenswrapper[4827]: I0131 04:45:36.774098 4827 scope.go:117] "RemoveContainer" containerID="44a25057fdf969828b6c665cf33c2bb656ef502adf683543db40f51a9de90675" Jan 31 04:45:36 crc kubenswrapper[4827]: E0131 
04:45:36.774254 4827 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"44a25057fdf969828b6c665cf33c2bb656ef502adf683543db40f51a9de90675\": container with ID starting with 44a25057fdf969828b6c665cf33c2bb656ef502adf683543db40f51a9de90675 not found: ID does not exist" containerID="44a25057fdf969828b6c665cf33c2bb656ef502adf683543db40f51a9de90675" Jan 31 04:45:36 crc kubenswrapper[4827]: I0131 04:45:36.774273 4827 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"44a25057fdf969828b6c665cf33c2bb656ef502adf683543db40f51a9de90675"} err="failed to get container status \"44a25057fdf969828b6c665cf33c2bb656ef502adf683543db40f51a9de90675\": rpc error: code = NotFound desc = could not find container \"44a25057fdf969828b6c665cf33c2bb656ef502adf683543db40f51a9de90675\": container with ID starting with 44a25057fdf969828b6c665cf33c2bb656ef502adf683543db40f51a9de90675 not found: ID does not exist" Jan 31 04:45:38 crc kubenswrapper[4827]: I0131 04:45:38.119675 4827 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b54ea009-caad-4013-becb-1ac3e39044e0" path="/var/lib/kubelet/pods/b54ea009-caad-4013-becb-1ac3e39044e0/volumes" Jan 31 04:46:03 crc kubenswrapper[4827]: I0131 04:46:03.028439 4827 scope.go:117] "RemoveContainer" containerID="44c6701133eac7a009bcefa3b0f53fce3a6fe1aeb58d760fa68ca41dfa8f873b" Jan 31 04:46:17 crc kubenswrapper[4827]: I0131 04:46:17.371120 4827 patch_prober.go:28] interesting pod/machine-config-daemon-jxh94 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 31 04:46:17 crc kubenswrapper[4827]: I0131 04:46:17.372029 4827 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-jxh94" 
podUID="e63dbb73-e1a2-4796-83c5-2a88e55566b5" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 31 04:46:23 crc kubenswrapper[4827]: I0131 04:46:23.041549 4827 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/manila-db-create-p5kdd"] Jan 31 04:46:23 crc kubenswrapper[4827]: I0131 04:46:23.050301 4827 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/manila-b6b4-account-create-update-gpt9p"] Jan 31 04:46:23 crc kubenswrapper[4827]: I0131 04:46:23.060262 4827 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/manila-db-create-p5kdd"] Jan 31 04:46:23 crc kubenswrapper[4827]: I0131 04:46:23.069954 4827 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/manila-b6b4-account-create-update-gpt9p"] Jan 31 04:46:24 crc kubenswrapper[4827]: I0131 04:46:24.122985 4827 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="735db063-6703-4f20-9d2f-aa79c7c56855" path="/var/lib/kubelet/pods/735db063-6703-4f20-9d2f-aa79c7c56855/volumes" Jan 31 04:46:24 crc kubenswrapper[4827]: I0131 04:46:24.124584 4827 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b79385e0-f8b6-49d3-a1de-8a61ee7e52b4" path="/var/lib/kubelet/pods/b79385e0-f8b6-49d3-a1de-8a61ee7e52b4/volumes" Jan 31 04:46:47 crc kubenswrapper[4827]: I0131 04:46:47.371185 4827 patch_prober.go:28] interesting pod/machine-config-daemon-jxh94 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 31 04:46:47 crc kubenswrapper[4827]: I0131 04:46:47.371798 4827 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-jxh94" podUID="e63dbb73-e1a2-4796-83c5-2a88e55566b5" containerName="machine-config-daemon" 
probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 31 04:46:52 crc kubenswrapper[4827]: I0131 04:46:52.052019 4827 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/manila-db-sync-9c5ww"] Jan 31 04:46:52 crc kubenswrapper[4827]: I0131 04:46:52.062834 4827 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/manila-db-sync-9c5ww"] Jan 31 04:46:52 crc kubenswrapper[4827]: I0131 04:46:52.134140 4827 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0c4fd939-69ab-4942-b829-5b4abab385db" path="/var/lib/kubelet/pods/0c4fd939-69ab-4942-b829-5b4abab385db/volumes" Jan 31 04:47:03 crc kubenswrapper[4827]: I0131 04:47:03.165573 4827 scope.go:117] "RemoveContainer" containerID="c6d4b03242ed3d43c917ca3f38999e5747b76f4cb1881d2cdc3b3da8068cff44" Jan 31 04:47:03 crc kubenswrapper[4827]: I0131 04:47:03.214127 4827 scope.go:117] "RemoveContainer" containerID="31bc0bed8694062a025bdfb49f11e9601127f9c2de18117382dd47fd200bbbc7" Jan 31 04:47:03 crc kubenswrapper[4827]: I0131 04:47:03.249594 4827 scope.go:117] "RemoveContainer" containerID="5441d1f16461b7fafdd58520e7b18918bccd646597dbae9fdcd8b480836d5971" Jan 31 04:47:17 crc kubenswrapper[4827]: I0131 04:47:17.371765 4827 patch_prober.go:28] interesting pod/machine-config-daemon-jxh94 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 31 04:47:17 crc kubenswrapper[4827]: I0131 04:47:17.372450 4827 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-jxh94" podUID="e63dbb73-e1a2-4796-83c5-2a88e55566b5" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 31 04:47:17 crc 
kubenswrapper[4827]: I0131 04:47:17.372507 4827 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-jxh94" Jan 31 04:47:17 crc kubenswrapper[4827]: I0131 04:47:17.373380 4827 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"674d923fc7530a2623e23c68a51611305fa42248d203b4bf9ab96e38b4379909"} pod="openshift-machine-config-operator/machine-config-daemon-jxh94" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 31 04:47:17 crc kubenswrapper[4827]: I0131 04:47:17.373445 4827 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-jxh94" podUID="e63dbb73-e1a2-4796-83c5-2a88e55566b5" containerName="machine-config-daemon" containerID="cri-o://674d923fc7530a2623e23c68a51611305fa42248d203b4bf9ab96e38b4379909" gracePeriod=600 Jan 31 04:47:17 crc kubenswrapper[4827]: E0131 04:47:17.501797 4827 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jxh94_openshift-machine-config-operator(e63dbb73-e1a2-4796-83c5-2a88e55566b5)\"" pod="openshift-machine-config-operator/machine-config-daemon-jxh94" podUID="e63dbb73-e1a2-4796-83c5-2a88e55566b5" Jan 31 04:47:17 crc kubenswrapper[4827]: I0131 04:47:17.554525 4827 generic.go:334] "Generic (PLEG): container finished" podID="e63dbb73-e1a2-4796-83c5-2a88e55566b5" containerID="674d923fc7530a2623e23c68a51611305fa42248d203b4bf9ab96e38b4379909" exitCode=0 Jan 31 04:47:17 crc kubenswrapper[4827]: I0131 04:47:17.554582 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-jxh94" 
event={"ID":"e63dbb73-e1a2-4796-83c5-2a88e55566b5","Type":"ContainerDied","Data":"674d923fc7530a2623e23c68a51611305fa42248d203b4bf9ab96e38b4379909"} Jan 31 04:47:17 crc kubenswrapper[4827]: I0131 04:47:17.554628 4827 scope.go:117] "RemoveContainer" containerID="7954efcbb2345b683530b21570dd478327d15f837551cf0700c83e0134e75fef" Jan 31 04:47:17 crc kubenswrapper[4827]: I0131 04:47:17.555229 4827 scope.go:117] "RemoveContainer" containerID="674d923fc7530a2623e23c68a51611305fa42248d203b4bf9ab96e38b4379909" Jan 31 04:47:17 crc kubenswrapper[4827]: E0131 04:47:17.555626 4827 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jxh94_openshift-machine-config-operator(e63dbb73-e1a2-4796-83c5-2a88e55566b5)\"" pod="openshift-machine-config-operator/machine-config-daemon-jxh94" podUID="e63dbb73-e1a2-4796-83c5-2a88e55566b5" Jan 31 04:47:30 crc kubenswrapper[4827]: I0131 04:47:30.110447 4827 scope.go:117] "RemoveContainer" containerID="674d923fc7530a2623e23c68a51611305fa42248d203b4bf9ab96e38b4379909" Jan 31 04:47:30 crc kubenswrapper[4827]: E0131 04:47:30.111365 4827 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jxh94_openshift-machine-config-operator(e63dbb73-e1a2-4796-83c5-2a88e55566b5)\"" pod="openshift-machine-config-operator/machine-config-daemon-jxh94" podUID="e63dbb73-e1a2-4796-83c5-2a88e55566b5" Jan 31 04:47:44 crc kubenswrapper[4827]: I0131 04:47:44.110456 4827 scope.go:117] "RemoveContainer" containerID="674d923fc7530a2623e23c68a51611305fa42248d203b4bf9ab96e38b4379909" Jan 31 04:47:44 crc kubenswrapper[4827]: E0131 04:47:44.111487 4827 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jxh94_openshift-machine-config-operator(e63dbb73-e1a2-4796-83c5-2a88e55566b5)\"" pod="openshift-machine-config-operator/machine-config-daemon-jxh94" podUID="e63dbb73-e1a2-4796-83c5-2a88e55566b5" Jan 31 04:47:56 crc kubenswrapper[4827]: I0131 04:47:56.110850 4827 scope.go:117] "RemoveContainer" containerID="674d923fc7530a2623e23c68a51611305fa42248d203b4bf9ab96e38b4379909" Jan 31 04:47:56 crc kubenswrapper[4827]: E0131 04:47:56.111940 4827 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jxh94_openshift-machine-config-operator(e63dbb73-e1a2-4796-83c5-2a88e55566b5)\"" pod="openshift-machine-config-operator/machine-config-daemon-jxh94" podUID="e63dbb73-e1a2-4796-83c5-2a88e55566b5" Jan 31 04:48:07 crc kubenswrapper[4827]: I0131 04:48:07.110183 4827 scope.go:117] "RemoveContainer" containerID="674d923fc7530a2623e23c68a51611305fa42248d203b4bf9ab96e38b4379909" Jan 31 04:48:07 crc kubenswrapper[4827]: E0131 04:48:07.110940 4827 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jxh94_openshift-machine-config-operator(e63dbb73-e1a2-4796-83c5-2a88e55566b5)\"" pod="openshift-machine-config-operator/machine-config-daemon-jxh94" podUID="e63dbb73-e1a2-4796-83c5-2a88e55566b5" Jan 31 04:48:22 crc kubenswrapper[4827]: I0131 04:48:22.110651 4827 scope.go:117] "RemoveContainer" containerID="674d923fc7530a2623e23c68a51611305fa42248d203b4bf9ab96e38b4379909" Jan 31 04:48:22 crc kubenswrapper[4827]: E0131 04:48:22.112034 4827 pod_workers.go:1301] "Error syncing pod, 
skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jxh94_openshift-machine-config-operator(e63dbb73-e1a2-4796-83c5-2a88e55566b5)\"" pod="openshift-machine-config-operator/machine-config-daemon-jxh94" podUID="e63dbb73-e1a2-4796-83c5-2a88e55566b5" Jan 31 04:48:37 crc kubenswrapper[4827]: I0131 04:48:37.110296 4827 scope.go:117] "RemoveContainer" containerID="674d923fc7530a2623e23c68a51611305fa42248d203b4bf9ab96e38b4379909" Jan 31 04:48:37 crc kubenswrapper[4827]: E0131 04:48:37.111113 4827 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jxh94_openshift-machine-config-operator(e63dbb73-e1a2-4796-83c5-2a88e55566b5)\"" pod="openshift-machine-config-operator/machine-config-daemon-jxh94" podUID="e63dbb73-e1a2-4796-83c5-2a88e55566b5" Jan 31 04:48:45 crc kubenswrapper[4827]: I0131 04:48:45.212460 4827 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-tnjfj"] Jan 31 04:48:45 crc kubenswrapper[4827]: E0131 04:48:45.213650 4827 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b54ea009-caad-4013-becb-1ac3e39044e0" containerName="extract-utilities" Jan 31 04:48:45 crc kubenswrapper[4827]: I0131 04:48:45.213663 4827 state_mem.go:107] "Deleted CPUSet assignment" podUID="b54ea009-caad-4013-becb-1ac3e39044e0" containerName="extract-utilities" Jan 31 04:48:45 crc kubenswrapper[4827]: E0131 04:48:45.213685 4827 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b54ea009-caad-4013-becb-1ac3e39044e0" containerName="registry-server" Jan 31 04:48:45 crc kubenswrapper[4827]: I0131 04:48:45.213691 4827 state_mem.go:107] "Deleted CPUSet assignment" podUID="b54ea009-caad-4013-becb-1ac3e39044e0" 
containerName="registry-server" Jan 31 04:48:45 crc kubenswrapper[4827]: E0131 04:48:45.213703 4827 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b54ea009-caad-4013-becb-1ac3e39044e0" containerName="extract-content" Jan 31 04:48:45 crc kubenswrapper[4827]: I0131 04:48:45.213709 4827 state_mem.go:107] "Deleted CPUSet assignment" podUID="b54ea009-caad-4013-becb-1ac3e39044e0" containerName="extract-content" Jan 31 04:48:45 crc kubenswrapper[4827]: I0131 04:48:45.213894 4827 memory_manager.go:354] "RemoveStaleState removing state" podUID="b54ea009-caad-4013-becb-1ac3e39044e0" containerName="registry-server" Jan 31 04:48:45 crc kubenswrapper[4827]: I0131 04:48:45.215190 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-tnjfj" Jan 31 04:48:45 crc kubenswrapper[4827]: I0131 04:48:45.229232 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-tnjfj"] Jan 31 04:48:45 crc kubenswrapper[4827]: I0131 04:48:45.280540 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2df2dcd4-aaa3-42b9-8c64-12b3ca5579da-catalog-content\") pod \"certified-operators-tnjfj\" (UID: \"2df2dcd4-aaa3-42b9-8c64-12b3ca5579da\") " pod="openshift-marketplace/certified-operators-tnjfj" Jan 31 04:48:45 crc kubenswrapper[4827]: I0131 04:48:45.280596 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2df2dcd4-aaa3-42b9-8c64-12b3ca5579da-utilities\") pod \"certified-operators-tnjfj\" (UID: \"2df2dcd4-aaa3-42b9-8c64-12b3ca5579da\") " pod="openshift-marketplace/certified-operators-tnjfj" Jan 31 04:48:45 crc kubenswrapper[4827]: I0131 04:48:45.280667 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-96pf6\" (UniqueName: \"kubernetes.io/projected/2df2dcd4-aaa3-42b9-8c64-12b3ca5579da-kube-api-access-96pf6\") pod \"certified-operators-tnjfj\" (UID: \"2df2dcd4-aaa3-42b9-8c64-12b3ca5579da\") " pod="openshift-marketplace/certified-operators-tnjfj" Jan 31 04:48:45 crc kubenswrapper[4827]: I0131 04:48:45.382728 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2df2dcd4-aaa3-42b9-8c64-12b3ca5579da-catalog-content\") pod \"certified-operators-tnjfj\" (UID: \"2df2dcd4-aaa3-42b9-8c64-12b3ca5579da\") " pod="openshift-marketplace/certified-operators-tnjfj" Jan 31 04:48:45 crc kubenswrapper[4827]: I0131 04:48:45.382794 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2df2dcd4-aaa3-42b9-8c64-12b3ca5579da-utilities\") pod \"certified-operators-tnjfj\" (UID: \"2df2dcd4-aaa3-42b9-8c64-12b3ca5579da\") " pod="openshift-marketplace/certified-operators-tnjfj" Jan 31 04:48:45 crc kubenswrapper[4827]: I0131 04:48:45.382839 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-96pf6\" (UniqueName: \"kubernetes.io/projected/2df2dcd4-aaa3-42b9-8c64-12b3ca5579da-kube-api-access-96pf6\") pod \"certified-operators-tnjfj\" (UID: \"2df2dcd4-aaa3-42b9-8c64-12b3ca5579da\") " pod="openshift-marketplace/certified-operators-tnjfj" Jan 31 04:48:45 crc kubenswrapper[4827]: I0131 04:48:45.383566 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2df2dcd4-aaa3-42b9-8c64-12b3ca5579da-catalog-content\") pod \"certified-operators-tnjfj\" (UID: \"2df2dcd4-aaa3-42b9-8c64-12b3ca5579da\") " pod="openshift-marketplace/certified-operators-tnjfj" Jan 31 04:48:45 crc kubenswrapper[4827]: I0131 04:48:45.383600 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" 
(UniqueName: \"kubernetes.io/empty-dir/2df2dcd4-aaa3-42b9-8c64-12b3ca5579da-utilities\") pod \"certified-operators-tnjfj\" (UID: \"2df2dcd4-aaa3-42b9-8c64-12b3ca5579da\") " pod="openshift-marketplace/certified-operators-tnjfj" Jan 31 04:48:45 crc kubenswrapper[4827]: I0131 04:48:45.416776 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-96pf6\" (UniqueName: \"kubernetes.io/projected/2df2dcd4-aaa3-42b9-8c64-12b3ca5579da-kube-api-access-96pf6\") pod \"certified-operators-tnjfj\" (UID: \"2df2dcd4-aaa3-42b9-8c64-12b3ca5579da\") " pod="openshift-marketplace/certified-operators-tnjfj" Jan 31 04:48:45 crc kubenswrapper[4827]: I0131 04:48:45.557848 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-tnjfj" Jan 31 04:48:46 crc kubenswrapper[4827]: I0131 04:48:46.090846 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-tnjfj"] Jan 31 04:48:46 crc kubenswrapper[4827]: I0131 04:48:46.333511 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-tnjfj" event={"ID":"2df2dcd4-aaa3-42b9-8c64-12b3ca5579da","Type":"ContainerStarted","Data":"4f2430ed10d185e884d8571abeceec608229e093d9daf92845970cd825ac6ff2"} Jan 31 04:48:47 crc kubenswrapper[4827]: I0131 04:48:47.344912 4827 generic.go:334] "Generic (PLEG): container finished" podID="2df2dcd4-aaa3-42b9-8c64-12b3ca5579da" containerID="52407afe0cf76ba2e0355efa40c9b1ab1cc7fc74f1f0c13c252d0bb608f6a126" exitCode=0 Jan 31 04:48:47 crc kubenswrapper[4827]: I0131 04:48:47.345003 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-tnjfj" event={"ID":"2df2dcd4-aaa3-42b9-8c64-12b3ca5579da","Type":"ContainerDied","Data":"52407afe0cf76ba2e0355efa40c9b1ab1cc7fc74f1f0c13c252d0bb608f6a126"} Jan 31 04:48:47 crc kubenswrapper[4827]: I0131 04:48:47.348423 4827 provider.go:102] Refreshing cache 
for provider: *credentialprovider.defaultDockerConfigProvider Jan 31 04:48:48 crc kubenswrapper[4827]: I0131 04:48:48.357673 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-tnjfj" event={"ID":"2df2dcd4-aaa3-42b9-8c64-12b3ca5579da","Type":"ContainerStarted","Data":"c74f50afac218759a75448aec4e60065a29cdfb4ef2c6db1c943ceddddcb65d1"} Jan 31 04:48:50 crc kubenswrapper[4827]: I0131 04:48:50.377349 4827 generic.go:334] "Generic (PLEG): container finished" podID="2df2dcd4-aaa3-42b9-8c64-12b3ca5579da" containerID="c74f50afac218759a75448aec4e60065a29cdfb4ef2c6db1c943ceddddcb65d1" exitCode=0 Jan 31 04:48:50 crc kubenswrapper[4827]: I0131 04:48:50.377404 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-tnjfj" event={"ID":"2df2dcd4-aaa3-42b9-8c64-12b3ca5579da","Type":"ContainerDied","Data":"c74f50afac218759a75448aec4e60065a29cdfb4ef2c6db1c943ceddddcb65d1"} Jan 31 04:48:51 crc kubenswrapper[4827]: I0131 04:48:51.388124 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-tnjfj" event={"ID":"2df2dcd4-aaa3-42b9-8c64-12b3ca5579da","Type":"ContainerStarted","Data":"8610b3544d14737ed853581d723f0f99c8e72cbdabbf28055b6b41f4f17e8a42"} Jan 31 04:48:51 crc kubenswrapper[4827]: I0131 04:48:51.419018 4827 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-tnjfj" podStartSLOduration=2.993538051 podStartE2EDuration="6.418987444s" podCreationTimestamp="2026-01-31 04:48:45 +0000 UTC" firstStartedPulling="2026-01-31 04:48:47.34813799 +0000 UTC m=+3720.035218439" lastFinishedPulling="2026-01-31 04:48:50.773587383 +0000 UTC m=+3723.460667832" observedRunningTime="2026-01-31 04:48:51.410468494 +0000 UTC m=+3724.097548943" watchObservedRunningTime="2026-01-31 04:48:51.418987444 +0000 UTC m=+3724.106067903" Jan 31 04:48:52 crc kubenswrapper[4827]: I0131 04:48:52.111026 4827 
scope.go:117] "RemoveContainer" containerID="674d923fc7530a2623e23c68a51611305fa42248d203b4bf9ab96e38b4379909" Jan 31 04:48:52 crc kubenswrapper[4827]: E0131 04:48:52.111652 4827 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jxh94_openshift-machine-config-operator(e63dbb73-e1a2-4796-83c5-2a88e55566b5)\"" pod="openshift-machine-config-operator/machine-config-daemon-jxh94" podUID="e63dbb73-e1a2-4796-83c5-2a88e55566b5" Jan 31 04:48:55 crc kubenswrapper[4827]: I0131 04:48:55.559701 4827 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-tnjfj" Jan 31 04:48:55 crc kubenswrapper[4827]: I0131 04:48:55.561057 4827 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-tnjfj" Jan 31 04:48:55 crc kubenswrapper[4827]: I0131 04:48:55.621854 4827 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-tnjfj" Jan 31 04:48:56 crc kubenswrapper[4827]: I0131 04:48:56.495996 4827 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-tnjfj" Jan 31 04:48:57 crc kubenswrapper[4827]: I0131 04:48:57.604504 4827 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-tnjfj"] Jan 31 04:48:58 crc kubenswrapper[4827]: I0131 04:48:58.442044 4827 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-tnjfj" podUID="2df2dcd4-aaa3-42b9-8c64-12b3ca5579da" containerName="registry-server" containerID="cri-o://8610b3544d14737ed853581d723f0f99c8e72cbdabbf28055b6b41f4f17e8a42" gracePeriod=2 Jan 31 04:48:59 crc kubenswrapper[4827]: I0131 04:48:59.035057 4827 util.go:48] "No ready 
sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-tnjfj" Jan 31 04:48:59 crc kubenswrapper[4827]: I0131 04:48:59.171375 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2df2dcd4-aaa3-42b9-8c64-12b3ca5579da-utilities\") pod \"2df2dcd4-aaa3-42b9-8c64-12b3ca5579da\" (UID: \"2df2dcd4-aaa3-42b9-8c64-12b3ca5579da\") " Jan 31 04:48:59 crc kubenswrapper[4827]: I0131 04:48:59.171732 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2df2dcd4-aaa3-42b9-8c64-12b3ca5579da-catalog-content\") pod \"2df2dcd4-aaa3-42b9-8c64-12b3ca5579da\" (UID: \"2df2dcd4-aaa3-42b9-8c64-12b3ca5579da\") " Jan 31 04:48:59 crc kubenswrapper[4827]: I0131 04:48:59.171791 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-96pf6\" (UniqueName: \"kubernetes.io/projected/2df2dcd4-aaa3-42b9-8c64-12b3ca5579da-kube-api-access-96pf6\") pod \"2df2dcd4-aaa3-42b9-8c64-12b3ca5579da\" (UID: \"2df2dcd4-aaa3-42b9-8c64-12b3ca5579da\") " Jan 31 04:48:59 crc kubenswrapper[4827]: I0131 04:48:59.172226 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2df2dcd4-aaa3-42b9-8c64-12b3ca5579da-utilities" (OuterVolumeSpecName: "utilities") pod "2df2dcd4-aaa3-42b9-8c64-12b3ca5579da" (UID: "2df2dcd4-aaa3-42b9-8c64-12b3ca5579da"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 04:48:59 crc kubenswrapper[4827]: I0131 04:48:59.173243 4827 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2df2dcd4-aaa3-42b9-8c64-12b3ca5579da-utilities\") on node \"crc\" DevicePath \"\"" Jan 31 04:48:59 crc kubenswrapper[4827]: I0131 04:48:59.177846 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2df2dcd4-aaa3-42b9-8c64-12b3ca5579da-kube-api-access-96pf6" (OuterVolumeSpecName: "kube-api-access-96pf6") pod "2df2dcd4-aaa3-42b9-8c64-12b3ca5579da" (UID: "2df2dcd4-aaa3-42b9-8c64-12b3ca5579da"). InnerVolumeSpecName "kube-api-access-96pf6". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 04:48:59 crc kubenswrapper[4827]: I0131 04:48:59.223361 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2df2dcd4-aaa3-42b9-8c64-12b3ca5579da-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "2df2dcd4-aaa3-42b9-8c64-12b3ca5579da" (UID: "2df2dcd4-aaa3-42b9-8c64-12b3ca5579da"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 04:48:59 crc kubenswrapper[4827]: I0131 04:48:59.274267 4827 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2df2dcd4-aaa3-42b9-8c64-12b3ca5579da-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 31 04:48:59 crc kubenswrapper[4827]: I0131 04:48:59.274482 4827 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-96pf6\" (UniqueName: \"kubernetes.io/projected/2df2dcd4-aaa3-42b9-8c64-12b3ca5579da-kube-api-access-96pf6\") on node \"crc\" DevicePath \"\"" Jan 31 04:48:59 crc kubenswrapper[4827]: I0131 04:48:59.457077 4827 generic.go:334] "Generic (PLEG): container finished" podID="2df2dcd4-aaa3-42b9-8c64-12b3ca5579da" containerID="8610b3544d14737ed853581d723f0f99c8e72cbdabbf28055b6b41f4f17e8a42" exitCode=0 Jan 31 04:48:59 crc kubenswrapper[4827]: I0131 04:48:59.457126 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-tnjfj" event={"ID":"2df2dcd4-aaa3-42b9-8c64-12b3ca5579da","Type":"ContainerDied","Data":"8610b3544d14737ed853581d723f0f99c8e72cbdabbf28055b6b41f4f17e8a42"} Jan 31 04:48:59 crc kubenswrapper[4827]: I0131 04:48:59.457157 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-tnjfj" event={"ID":"2df2dcd4-aaa3-42b9-8c64-12b3ca5579da","Type":"ContainerDied","Data":"4f2430ed10d185e884d8571abeceec608229e093d9daf92845970cd825ac6ff2"} Jan 31 04:48:59 crc kubenswrapper[4827]: I0131 04:48:59.457178 4827 scope.go:117] "RemoveContainer" containerID="8610b3544d14737ed853581d723f0f99c8e72cbdabbf28055b6b41f4f17e8a42" Jan 31 04:48:59 crc kubenswrapper[4827]: I0131 04:48:59.457328 4827 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-tnjfj" Jan 31 04:48:59 crc kubenswrapper[4827]: I0131 04:48:59.499637 4827 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-tnjfj"] Jan 31 04:48:59 crc kubenswrapper[4827]: I0131 04:48:59.499855 4827 scope.go:117] "RemoveContainer" containerID="c74f50afac218759a75448aec4e60065a29cdfb4ef2c6db1c943ceddddcb65d1" Jan 31 04:48:59 crc kubenswrapper[4827]: I0131 04:48:59.507906 4827 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-tnjfj"] Jan 31 04:48:59 crc kubenswrapper[4827]: I0131 04:48:59.541725 4827 scope.go:117] "RemoveContainer" containerID="52407afe0cf76ba2e0355efa40c9b1ab1cc7fc74f1f0c13c252d0bb608f6a126" Jan 31 04:48:59 crc kubenswrapper[4827]: I0131 04:48:59.580369 4827 scope.go:117] "RemoveContainer" containerID="8610b3544d14737ed853581d723f0f99c8e72cbdabbf28055b6b41f4f17e8a42" Jan 31 04:48:59 crc kubenswrapper[4827]: E0131 04:48:59.580836 4827 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8610b3544d14737ed853581d723f0f99c8e72cbdabbf28055b6b41f4f17e8a42\": container with ID starting with 8610b3544d14737ed853581d723f0f99c8e72cbdabbf28055b6b41f4f17e8a42 not found: ID does not exist" containerID="8610b3544d14737ed853581d723f0f99c8e72cbdabbf28055b6b41f4f17e8a42" Jan 31 04:48:59 crc kubenswrapper[4827]: I0131 04:48:59.580953 4827 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8610b3544d14737ed853581d723f0f99c8e72cbdabbf28055b6b41f4f17e8a42"} err="failed to get container status \"8610b3544d14737ed853581d723f0f99c8e72cbdabbf28055b6b41f4f17e8a42\": rpc error: code = NotFound desc = could not find container \"8610b3544d14737ed853581d723f0f99c8e72cbdabbf28055b6b41f4f17e8a42\": container with ID starting with 8610b3544d14737ed853581d723f0f99c8e72cbdabbf28055b6b41f4f17e8a42 not 
found: ID does not exist" Jan 31 04:48:59 crc kubenswrapper[4827]: I0131 04:48:59.580999 4827 scope.go:117] "RemoveContainer" containerID="c74f50afac218759a75448aec4e60065a29cdfb4ef2c6db1c943ceddddcb65d1" Jan 31 04:48:59 crc kubenswrapper[4827]: E0131 04:48:59.581397 4827 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c74f50afac218759a75448aec4e60065a29cdfb4ef2c6db1c943ceddddcb65d1\": container with ID starting with c74f50afac218759a75448aec4e60065a29cdfb4ef2c6db1c943ceddddcb65d1 not found: ID does not exist" containerID="c74f50afac218759a75448aec4e60065a29cdfb4ef2c6db1c943ceddddcb65d1" Jan 31 04:48:59 crc kubenswrapper[4827]: I0131 04:48:59.581448 4827 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c74f50afac218759a75448aec4e60065a29cdfb4ef2c6db1c943ceddddcb65d1"} err="failed to get container status \"c74f50afac218759a75448aec4e60065a29cdfb4ef2c6db1c943ceddddcb65d1\": rpc error: code = NotFound desc = could not find container \"c74f50afac218759a75448aec4e60065a29cdfb4ef2c6db1c943ceddddcb65d1\": container with ID starting with c74f50afac218759a75448aec4e60065a29cdfb4ef2c6db1c943ceddddcb65d1 not found: ID does not exist" Jan 31 04:48:59 crc kubenswrapper[4827]: I0131 04:48:59.581483 4827 scope.go:117] "RemoveContainer" containerID="52407afe0cf76ba2e0355efa40c9b1ab1cc7fc74f1f0c13c252d0bb608f6a126" Jan 31 04:48:59 crc kubenswrapper[4827]: E0131 04:48:59.581948 4827 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"52407afe0cf76ba2e0355efa40c9b1ab1cc7fc74f1f0c13c252d0bb608f6a126\": container with ID starting with 52407afe0cf76ba2e0355efa40c9b1ab1cc7fc74f1f0c13c252d0bb608f6a126 not found: ID does not exist" containerID="52407afe0cf76ba2e0355efa40c9b1ab1cc7fc74f1f0c13c252d0bb608f6a126" Jan 31 04:48:59 crc kubenswrapper[4827]: I0131 04:48:59.581995 4827 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"52407afe0cf76ba2e0355efa40c9b1ab1cc7fc74f1f0c13c252d0bb608f6a126"} err="failed to get container status \"52407afe0cf76ba2e0355efa40c9b1ab1cc7fc74f1f0c13c252d0bb608f6a126\": rpc error: code = NotFound desc = could not find container \"52407afe0cf76ba2e0355efa40c9b1ab1cc7fc74f1f0c13c252d0bb608f6a126\": container with ID starting with 52407afe0cf76ba2e0355efa40c9b1ab1cc7fc74f1f0c13c252d0bb608f6a126 not found: ID does not exist" Jan 31 04:49:00 crc kubenswrapper[4827]: I0131 04:49:00.121838 4827 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2df2dcd4-aaa3-42b9-8c64-12b3ca5579da" path="/var/lib/kubelet/pods/2df2dcd4-aaa3-42b9-8c64-12b3ca5579da/volumes" Jan 31 04:49:04 crc kubenswrapper[4827]: I0131 04:49:04.110191 4827 scope.go:117] "RemoveContainer" containerID="674d923fc7530a2623e23c68a51611305fa42248d203b4bf9ab96e38b4379909" Jan 31 04:49:04 crc kubenswrapper[4827]: E0131 04:49:04.111169 4827 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jxh94_openshift-machine-config-operator(e63dbb73-e1a2-4796-83c5-2a88e55566b5)\"" pod="openshift-machine-config-operator/machine-config-daemon-jxh94" podUID="e63dbb73-e1a2-4796-83c5-2a88e55566b5" Jan 31 04:49:16 crc kubenswrapper[4827]: I0131 04:49:16.111219 4827 scope.go:117] "RemoveContainer" containerID="674d923fc7530a2623e23c68a51611305fa42248d203b4bf9ab96e38b4379909" Jan 31 04:49:16 crc kubenswrapper[4827]: E0131 04:49:16.111863 4827 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-jxh94_openshift-machine-config-operator(e63dbb73-e1a2-4796-83c5-2a88e55566b5)\"" pod="openshift-machine-config-operator/machine-config-daemon-jxh94" podUID="e63dbb73-e1a2-4796-83c5-2a88e55566b5" Jan 31 04:49:31 crc kubenswrapper[4827]: I0131 04:49:31.110576 4827 scope.go:117] "RemoveContainer" containerID="674d923fc7530a2623e23c68a51611305fa42248d203b4bf9ab96e38b4379909" Jan 31 04:49:31 crc kubenswrapper[4827]: E0131 04:49:31.111648 4827 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jxh94_openshift-machine-config-operator(e63dbb73-e1a2-4796-83c5-2a88e55566b5)\"" pod="openshift-machine-config-operator/machine-config-daemon-jxh94" podUID="e63dbb73-e1a2-4796-83c5-2a88e55566b5" Jan 31 04:49:43 crc kubenswrapper[4827]: I0131 04:49:43.110495 4827 scope.go:117] "RemoveContainer" containerID="674d923fc7530a2623e23c68a51611305fa42248d203b4bf9ab96e38b4379909" Jan 31 04:49:43 crc kubenswrapper[4827]: E0131 04:49:43.111260 4827 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jxh94_openshift-machine-config-operator(e63dbb73-e1a2-4796-83c5-2a88e55566b5)\"" pod="openshift-machine-config-operator/machine-config-daemon-jxh94" podUID="e63dbb73-e1a2-4796-83c5-2a88e55566b5" Jan 31 04:49:58 crc kubenswrapper[4827]: I0131 04:49:58.123981 4827 scope.go:117] "RemoveContainer" containerID="674d923fc7530a2623e23c68a51611305fa42248d203b4bf9ab96e38b4379909" Jan 31 04:49:58 crc kubenswrapper[4827]: E0131 04:49:58.125145 4827 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed 
container=machine-config-daemon pod=machine-config-daemon-jxh94_openshift-machine-config-operator(e63dbb73-e1a2-4796-83c5-2a88e55566b5)\"" pod="openshift-machine-config-operator/machine-config-daemon-jxh94" podUID="e63dbb73-e1a2-4796-83c5-2a88e55566b5" Jan 31 04:50:11 crc kubenswrapper[4827]: I0131 04:50:11.110359 4827 scope.go:117] "RemoveContainer" containerID="674d923fc7530a2623e23c68a51611305fa42248d203b4bf9ab96e38b4379909" Jan 31 04:50:11 crc kubenswrapper[4827]: E0131 04:50:11.111435 4827 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jxh94_openshift-machine-config-operator(e63dbb73-e1a2-4796-83c5-2a88e55566b5)\"" pod="openshift-machine-config-operator/machine-config-daemon-jxh94" podUID="e63dbb73-e1a2-4796-83c5-2a88e55566b5" Jan 31 04:50:25 crc kubenswrapper[4827]: I0131 04:50:25.110399 4827 scope.go:117] "RemoveContainer" containerID="674d923fc7530a2623e23c68a51611305fa42248d203b4bf9ab96e38b4379909" Jan 31 04:50:25 crc kubenswrapper[4827]: E0131 04:50:25.111569 4827 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jxh94_openshift-machine-config-operator(e63dbb73-e1a2-4796-83c5-2a88e55566b5)\"" pod="openshift-machine-config-operator/machine-config-daemon-jxh94" podUID="e63dbb73-e1a2-4796-83c5-2a88e55566b5" Jan 31 04:50:40 crc kubenswrapper[4827]: I0131 04:50:40.110237 4827 scope.go:117] "RemoveContainer" containerID="674d923fc7530a2623e23c68a51611305fa42248d203b4bf9ab96e38b4379909" Jan 31 04:50:40 crc kubenswrapper[4827]: E0131 04:50:40.111663 4827 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s 
restarting failed container=machine-config-daemon pod=machine-config-daemon-jxh94_openshift-machine-config-operator(e63dbb73-e1a2-4796-83c5-2a88e55566b5)\"" pod="openshift-machine-config-operator/machine-config-daemon-jxh94" podUID="e63dbb73-e1a2-4796-83c5-2a88e55566b5" Jan 31 04:50:54 crc kubenswrapper[4827]: I0131 04:50:54.110483 4827 scope.go:117] "RemoveContainer" containerID="674d923fc7530a2623e23c68a51611305fa42248d203b4bf9ab96e38b4379909" Jan 31 04:50:54 crc kubenswrapper[4827]: E0131 04:50:54.111317 4827 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jxh94_openshift-machine-config-operator(e63dbb73-e1a2-4796-83c5-2a88e55566b5)\"" pod="openshift-machine-config-operator/machine-config-daemon-jxh94" podUID="e63dbb73-e1a2-4796-83c5-2a88e55566b5" Jan 31 04:51:05 crc kubenswrapper[4827]: I0131 04:51:05.110578 4827 scope.go:117] "RemoveContainer" containerID="674d923fc7530a2623e23c68a51611305fa42248d203b4bf9ab96e38b4379909" Jan 31 04:51:05 crc kubenswrapper[4827]: E0131 04:51:05.111365 4827 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jxh94_openshift-machine-config-operator(e63dbb73-e1a2-4796-83c5-2a88e55566b5)\"" pod="openshift-machine-config-operator/machine-config-daemon-jxh94" podUID="e63dbb73-e1a2-4796-83c5-2a88e55566b5" Jan 31 04:51:17 crc kubenswrapper[4827]: I0131 04:51:17.110150 4827 scope.go:117] "RemoveContainer" containerID="674d923fc7530a2623e23c68a51611305fa42248d203b4bf9ab96e38b4379909" Jan 31 04:51:17 crc kubenswrapper[4827]: E0131 04:51:17.110997 4827 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: 
\"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jxh94_openshift-machine-config-operator(e63dbb73-e1a2-4796-83c5-2a88e55566b5)\"" pod="openshift-machine-config-operator/machine-config-daemon-jxh94" podUID="e63dbb73-e1a2-4796-83c5-2a88e55566b5" Jan 31 04:51:27 crc kubenswrapper[4827]: I0131 04:51:27.957950 4827 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-nltnp"] Jan 31 04:51:27 crc kubenswrapper[4827]: E0131 04:51:27.959074 4827 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2df2dcd4-aaa3-42b9-8c64-12b3ca5579da" containerName="extract-content" Jan 31 04:51:27 crc kubenswrapper[4827]: I0131 04:51:27.959091 4827 state_mem.go:107] "Deleted CPUSet assignment" podUID="2df2dcd4-aaa3-42b9-8c64-12b3ca5579da" containerName="extract-content" Jan 31 04:51:27 crc kubenswrapper[4827]: E0131 04:51:27.959123 4827 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2df2dcd4-aaa3-42b9-8c64-12b3ca5579da" containerName="extract-utilities" Jan 31 04:51:27 crc kubenswrapper[4827]: I0131 04:51:27.959133 4827 state_mem.go:107] "Deleted CPUSet assignment" podUID="2df2dcd4-aaa3-42b9-8c64-12b3ca5579da" containerName="extract-utilities" Jan 31 04:51:27 crc kubenswrapper[4827]: E0131 04:51:27.959143 4827 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2df2dcd4-aaa3-42b9-8c64-12b3ca5579da" containerName="registry-server" Jan 31 04:51:27 crc kubenswrapper[4827]: I0131 04:51:27.959151 4827 state_mem.go:107] "Deleted CPUSet assignment" podUID="2df2dcd4-aaa3-42b9-8c64-12b3ca5579da" containerName="registry-server" Jan 31 04:51:27 crc kubenswrapper[4827]: I0131 04:51:27.959408 4827 memory_manager.go:354] "RemoveStaleState removing state" podUID="2df2dcd4-aaa3-42b9-8c64-12b3ca5579da" containerName="registry-server" Jan 31 04:51:27 crc kubenswrapper[4827]: I0131 04:51:27.963021 4827 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-nltnp" Jan 31 04:51:27 crc kubenswrapper[4827]: I0131 04:51:27.971543 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-nltnp"] Jan 31 04:51:28 crc kubenswrapper[4827]: I0131 04:51:28.053868 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ff72b39a-0788-4c7c-bd31-910f6865788d-utilities\") pod \"community-operators-nltnp\" (UID: \"ff72b39a-0788-4c7c-bd31-910f6865788d\") " pod="openshift-marketplace/community-operators-nltnp" Jan 31 04:51:28 crc kubenswrapper[4827]: I0131 04:51:28.054012 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ff72b39a-0788-4c7c-bd31-910f6865788d-catalog-content\") pod \"community-operators-nltnp\" (UID: \"ff72b39a-0788-4c7c-bd31-910f6865788d\") " pod="openshift-marketplace/community-operators-nltnp" Jan 31 04:51:28 crc kubenswrapper[4827]: I0131 04:51:28.054040 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4rlkz\" (UniqueName: \"kubernetes.io/projected/ff72b39a-0788-4c7c-bd31-910f6865788d-kube-api-access-4rlkz\") pod \"community-operators-nltnp\" (UID: \"ff72b39a-0788-4c7c-bd31-910f6865788d\") " pod="openshift-marketplace/community-operators-nltnp" Jan 31 04:51:28 crc kubenswrapper[4827]: I0131 04:51:28.155790 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ff72b39a-0788-4c7c-bd31-910f6865788d-catalog-content\") pod \"community-operators-nltnp\" (UID: \"ff72b39a-0788-4c7c-bd31-910f6865788d\") " pod="openshift-marketplace/community-operators-nltnp" Jan 31 04:51:28 crc kubenswrapper[4827]: I0131 04:51:28.155845 4827 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-4rlkz\" (UniqueName: \"kubernetes.io/projected/ff72b39a-0788-4c7c-bd31-910f6865788d-kube-api-access-4rlkz\") pod \"community-operators-nltnp\" (UID: \"ff72b39a-0788-4c7c-bd31-910f6865788d\") " pod="openshift-marketplace/community-operators-nltnp" Jan 31 04:51:28 crc kubenswrapper[4827]: I0131 04:51:28.156107 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ff72b39a-0788-4c7c-bd31-910f6865788d-utilities\") pod \"community-operators-nltnp\" (UID: \"ff72b39a-0788-4c7c-bd31-910f6865788d\") " pod="openshift-marketplace/community-operators-nltnp" Jan 31 04:51:28 crc kubenswrapper[4827]: I0131 04:51:28.157053 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ff72b39a-0788-4c7c-bd31-910f6865788d-utilities\") pod \"community-operators-nltnp\" (UID: \"ff72b39a-0788-4c7c-bd31-910f6865788d\") " pod="openshift-marketplace/community-operators-nltnp" Jan 31 04:51:28 crc kubenswrapper[4827]: I0131 04:51:28.157251 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ff72b39a-0788-4c7c-bd31-910f6865788d-catalog-content\") pod \"community-operators-nltnp\" (UID: \"ff72b39a-0788-4c7c-bd31-910f6865788d\") " pod="openshift-marketplace/community-operators-nltnp" Jan 31 04:51:28 crc kubenswrapper[4827]: I0131 04:51:28.193205 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4rlkz\" (UniqueName: \"kubernetes.io/projected/ff72b39a-0788-4c7c-bd31-910f6865788d-kube-api-access-4rlkz\") pod \"community-operators-nltnp\" (UID: \"ff72b39a-0788-4c7c-bd31-910f6865788d\") " pod="openshift-marketplace/community-operators-nltnp" Jan 31 04:51:28 crc kubenswrapper[4827]: I0131 04:51:28.287922 4827 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-nltnp" Jan 31 04:51:28 crc kubenswrapper[4827]: I0131 04:51:28.906730 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-nltnp"] Jan 31 04:51:29 crc kubenswrapper[4827]: I0131 04:51:29.744410 4827 generic.go:334] "Generic (PLEG): container finished" podID="ff72b39a-0788-4c7c-bd31-910f6865788d" containerID="c0d1a99aec6224cf6cfe96cc4f7429f8a409821e0f144c575a811d6c9d31d402" exitCode=0 Jan 31 04:51:29 crc kubenswrapper[4827]: I0131 04:51:29.744502 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-nltnp" event={"ID":"ff72b39a-0788-4c7c-bd31-910f6865788d","Type":"ContainerDied","Data":"c0d1a99aec6224cf6cfe96cc4f7429f8a409821e0f144c575a811d6c9d31d402"} Jan 31 04:51:29 crc kubenswrapper[4827]: I0131 04:51:29.744698 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-nltnp" event={"ID":"ff72b39a-0788-4c7c-bd31-910f6865788d","Type":"ContainerStarted","Data":"6f27e3da193b8425d41cb141989a43b75f5f9d59f110b62f87bba75386e6d833"} Jan 31 04:51:31 crc kubenswrapper[4827]: I0131 04:51:31.110958 4827 scope.go:117] "RemoveContainer" containerID="674d923fc7530a2623e23c68a51611305fa42248d203b4bf9ab96e38b4379909" Jan 31 04:51:31 crc kubenswrapper[4827]: E0131 04:51:31.111864 4827 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jxh94_openshift-machine-config-operator(e63dbb73-e1a2-4796-83c5-2a88e55566b5)\"" pod="openshift-machine-config-operator/machine-config-daemon-jxh94" podUID="e63dbb73-e1a2-4796-83c5-2a88e55566b5" Jan 31 04:51:31 crc kubenswrapper[4827]: I0131 04:51:31.766126 4827 generic.go:334] "Generic (PLEG): container finished" podID="ff72b39a-0788-4c7c-bd31-910f6865788d" 
containerID="293db383a30b18db70842d1077a566b4ea48685b37705afb947bb423b87a415a" exitCode=0 Jan 31 04:51:31 crc kubenswrapper[4827]: I0131 04:51:31.766177 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-nltnp" event={"ID":"ff72b39a-0788-4c7c-bd31-910f6865788d","Type":"ContainerDied","Data":"293db383a30b18db70842d1077a566b4ea48685b37705afb947bb423b87a415a"} Jan 31 04:51:32 crc kubenswrapper[4827]: I0131 04:51:32.776261 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-nltnp" event={"ID":"ff72b39a-0788-4c7c-bd31-910f6865788d","Type":"ContainerStarted","Data":"6fd80502e1f473cf77bce41f9d852cc3b20e217b673c9bd60b8346c7ddab7ad4"} Jan 31 04:51:32 crc kubenswrapper[4827]: I0131 04:51:32.796731 4827 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-nltnp" podStartSLOduration=3.384724827 podStartE2EDuration="5.796710955s" podCreationTimestamp="2026-01-31 04:51:27 +0000 UTC" firstStartedPulling="2026-01-31 04:51:29.746329421 +0000 UTC m=+3882.433409870" lastFinishedPulling="2026-01-31 04:51:32.158315549 +0000 UTC m=+3884.845395998" observedRunningTime="2026-01-31 04:51:32.792708691 +0000 UTC m=+3885.479789140" watchObservedRunningTime="2026-01-31 04:51:32.796710955 +0000 UTC m=+3885.483791404" Jan 31 04:51:38 crc kubenswrapper[4827]: I0131 04:51:38.288391 4827 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-nltnp" Jan 31 04:51:38 crc kubenswrapper[4827]: I0131 04:51:38.288856 4827 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-nltnp" Jan 31 04:51:38 crc kubenswrapper[4827]: I0131 04:51:38.358314 4827 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-nltnp" Jan 31 04:51:38 crc kubenswrapper[4827]: I0131 
04:51:38.893730 4827 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-nltnp" Jan 31 04:51:38 crc kubenswrapper[4827]: I0131 04:51:38.952114 4827 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-nltnp"] Jan 31 04:51:40 crc kubenswrapper[4827]: I0131 04:51:40.850110 4827 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-nltnp" podUID="ff72b39a-0788-4c7c-bd31-910f6865788d" containerName="registry-server" containerID="cri-o://6fd80502e1f473cf77bce41f9d852cc3b20e217b673c9bd60b8346c7ddab7ad4" gracePeriod=2 Jan 31 04:51:41 crc kubenswrapper[4827]: I0131 04:51:41.582511 4827 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-nltnp" Jan 31 04:51:41 crc kubenswrapper[4827]: I0131 04:51:41.636369 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4rlkz\" (UniqueName: \"kubernetes.io/projected/ff72b39a-0788-4c7c-bd31-910f6865788d-kube-api-access-4rlkz\") pod \"ff72b39a-0788-4c7c-bd31-910f6865788d\" (UID: \"ff72b39a-0788-4c7c-bd31-910f6865788d\") " Jan 31 04:51:41 crc kubenswrapper[4827]: I0131 04:51:41.636561 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ff72b39a-0788-4c7c-bd31-910f6865788d-catalog-content\") pod \"ff72b39a-0788-4c7c-bd31-910f6865788d\" (UID: \"ff72b39a-0788-4c7c-bd31-910f6865788d\") " Jan 31 04:51:41 crc kubenswrapper[4827]: I0131 04:51:41.636659 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ff72b39a-0788-4c7c-bd31-910f6865788d-utilities\") pod \"ff72b39a-0788-4c7c-bd31-910f6865788d\" (UID: \"ff72b39a-0788-4c7c-bd31-910f6865788d\") " Jan 31 04:51:41 crc kubenswrapper[4827]: 
I0131 04:51:41.637622 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ff72b39a-0788-4c7c-bd31-910f6865788d-utilities" (OuterVolumeSpecName: "utilities") pod "ff72b39a-0788-4c7c-bd31-910f6865788d" (UID: "ff72b39a-0788-4c7c-bd31-910f6865788d"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 04:51:41 crc kubenswrapper[4827]: I0131 04:51:41.643959 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ff72b39a-0788-4c7c-bd31-910f6865788d-kube-api-access-4rlkz" (OuterVolumeSpecName: "kube-api-access-4rlkz") pod "ff72b39a-0788-4c7c-bd31-910f6865788d" (UID: "ff72b39a-0788-4c7c-bd31-910f6865788d"). InnerVolumeSpecName "kube-api-access-4rlkz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 04:51:41 crc kubenswrapper[4827]: I0131 04:51:41.739731 4827 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ff72b39a-0788-4c7c-bd31-910f6865788d-utilities\") on node \"crc\" DevicePath \"\"" Jan 31 04:51:41 crc kubenswrapper[4827]: I0131 04:51:41.739771 4827 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4rlkz\" (UniqueName: \"kubernetes.io/projected/ff72b39a-0788-4c7c-bd31-910f6865788d-kube-api-access-4rlkz\") on node \"crc\" DevicePath \"\"" Jan 31 04:51:41 crc kubenswrapper[4827]: I0131 04:51:41.769332 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ff72b39a-0788-4c7c-bd31-910f6865788d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "ff72b39a-0788-4c7c-bd31-910f6865788d" (UID: "ff72b39a-0788-4c7c-bd31-910f6865788d"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 04:51:41 crc kubenswrapper[4827]: I0131 04:51:41.840933 4827 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ff72b39a-0788-4c7c-bd31-910f6865788d-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 31 04:51:41 crc kubenswrapper[4827]: I0131 04:51:41.862621 4827 generic.go:334] "Generic (PLEG): container finished" podID="ff72b39a-0788-4c7c-bd31-910f6865788d" containerID="6fd80502e1f473cf77bce41f9d852cc3b20e217b673c9bd60b8346c7ddab7ad4" exitCode=0 Jan 31 04:51:41 crc kubenswrapper[4827]: I0131 04:51:41.862691 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-nltnp" event={"ID":"ff72b39a-0788-4c7c-bd31-910f6865788d","Type":"ContainerDied","Data":"6fd80502e1f473cf77bce41f9d852cc3b20e217b673c9bd60b8346c7ddab7ad4"} Jan 31 04:51:41 crc kubenswrapper[4827]: I0131 04:51:41.862727 4827 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-nltnp" Jan 31 04:51:41 crc kubenswrapper[4827]: I0131 04:51:41.862756 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-nltnp" event={"ID":"ff72b39a-0788-4c7c-bd31-910f6865788d","Type":"ContainerDied","Data":"6f27e3da193b8425d41cb141989a43b75f5f9d59f110b62f87bba75386e6d833"} Jan 31 04:51:41 crc kubenswrapper[4827]: I0131 04:51:41.862796 4827 scope.go:117] "RemoveContainer" containerID="6fd80502e1f473cf77bce41f9d852cc3b20e217b673c9bd60b8346c7ddab7ad4" Jan 31 04:51:41 crc kubenswrapper[4827]: I0131 04:51:41.883806 4827 scope.go:117] "RemoveContainer" containerID="293db383a30b18db70842d1077a566b4ea48685b37705afb947bb423b87a415a" Jan 31 04:51:41 crc kubenswrapper[4827]: I0131 04:51:41.905296 4827 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-nltnp"] Jan 31 04:51:41 crc kubenswrapper[4827]: I0131 04:51:41.916158 4827 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-nltnp"] Jan 31 04:51:41 crc kubenswrapper[4827]: I0131 04:51:41.925389 4827 scope.go:117] "RemoveContainer" containerID="c0d1a99aec6224cf6cfe96cc4f7429f8a409821e0f144c575a811d6c9d31d402" Jan 31 04:51:41 crc kubenswrapper[4827]: I0131 04:51:41.980677 4827 scope.go:117] "RemoveContainer" containerID="6fd80502e1f473cf77bce41f9d852cc3b20e217b673c9bd60b8346c7ddab7ad4" Jan 31 04:51:41 crc kubenswrapper[4827]: E0131 04:51:41.981173 4827 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6fd80502e1f473cf77bce41f9d852cc3b20e217b673c9bd60b8346c7ddab7ad4\": container with ID starting with 6fd80502e1f473cf77bce41f9d852cc3b20e217b673c9bd60b8346c7ddab7ad4 not found: ID does not exist" containerID="6fd80502e1f473cf77bce41f9d852cc3b20e217b673c9bd60b8346c7ddab7ad4" Jan 31 04:51:41 crc kubenswrapper[4827]: I0131 04:51:41.981241 4827 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6fd80502e1f473cf77bce41f9d852cc3b20e217b673c9bd60b8346c7ddab7ad4"} err="failed to get container status \"6fd80502e1f473cf77bce41f9d852cc3b20e217b673c9bd60b8346c7ddab7ad4\": rpc error: code = NotFound desc = could not find container \"6fd80502e1f473cf77bce41f9d852cc3b20e217b673c9bd60b8346c7ddab7ad4\": container with ID starting with 6fd80502e1f473cf77bce41f9d852cc3b20e217b673c9bd60b8346c7ddab7ad4 not found: ID does not exist" Jan 31 04:51:41 crc kubenswrapper[4827]: I0131 04:51:41.981280 4827 scope.go:117] "RemoveContainer" containerID="293db383a30b18db70842d1077a566b4ea48685b37705afb947bb423b87a415a" Jan 31 04:51:41 crc kubenswrapper[4827]: E0131 04:51:41.981821 4827 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"293db383a30b18db70842d1077a566b4ea48685b37705afb947bb423b87a415a\": container with ID starting with 293db383a30b18db70842d1077a566b4ea48685b37705afb947bb423b87a415a not found: ID does not exist" containerID="293db383a30b18db70842d1077a566b4ea48685b37705afb947bb423b87a415a" Jan 31 04:51:41 crc kubenswrapper[4827]: I0131 04:51:41.981868 4827 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"293db383a30b18db70842d1077a566b4ea48685b37705afb947bb423b87a415a"} err="failed to get container status \"293db383a30b18db70842d1077a566b4ea48685b37705afb947bb423b87a415a\": rpc error: code = NotFound desc = could not find container \"293db383a30b18db70842d1077a566b4ea48685b37705afb947bb423b87a415a\": container with ID starting with 293db383a30b18db70842d1077a566b4ea48685b37705afb947bb423b87a415a not found: ID does not exist" Jan 31 04:51:41 crc kubenswrapper[4827]: I0131 04:51:41.981939 4827 scope.go:117] "RemoveContainer" containerID="c0d1a99aec6224cf6cfe96cc4f7429f8a409821e0f144c575a811d6c9d31d402" Jan 31 04:51:41 crc kubenswrapper[4827]: E0131 
04:51:41.982379 4827 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c0d1a99aec6224cf6cfe96cc4f7429f8a409821e0f144c575a811d6c9d31d402\": container with ID starting with c0d1a99aec6224cf6cfe96cc4f7429f8a409821e0f144c575a811d6c9d31d402 not found: ID does not exist" containerID="c0d1a99aec6224cf6cfe96cc4f7429f8a409821e0f144c575a811d6c9d31d402" Jan 31 04:51:41 crc kubenswrapper[4827]: I0131 04:51:41.982423 4827 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c0d1a99aec6224cf6cfe96cc4f7429f8a409821e0f144c575a811d6c9d31d402"} err="failed to get container status \"c0d1a99aec6224cf6cfe96cc4f7429f8a409821e0f144c575a811d6c9d31d402\": rpc error: code = NotFound desc = could not find container \"c0d1a99aec6224cf6cfe96cc4f7429f8a409821e0f144c575a811d6c9d31d402\": container with ID starting with c0d1a99aec6224cf6cfe96cc4f7429f8a409821e0f144c575a811d6c9d31d402 not found: ID does not exist" Jan 31 04:51:42 crc kubenswrapper[4827]: I0131 04:51:42.125935 4827 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ff72b39a-0788-4c7c-bd31-910f6865788d" path="/var/lib/kubelet/pods/ff72b39a-0788-4c7c-bd31-910f6865788d/volumes" Jan 31 04:51:43 crc kubenswrapper[4827]: I0131 04:51:43.109847 4827 scope.go:117] "RemoveContainer" containerID="674d923fc7530a2623e23c68a51611305fa42248d203b4bf9ab96e38b4379909" Jan 31 04:51:43 crc kubenswrapper[4827]: E0131 04:51:43.111397 4827 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jxh94_openshift-machine-config-operator(e63dbb73-e1a2-4796-83c5-2a88e55566b5)\"" pod="openshift-machine-config-operator/machine-config-daemon-jxh94" podUID="e63dbb73-e1a2-4796-83c5-2a88e55566b5" Jan 31 04:51:56 crc kubenswrapper[4827]: I0131 04:51:56.109941 
4827 scope.go:117] "RemoveContainer" containerID="674d923fc7530a2623e23c68a51611305fa42248d203b4bf9ab96e38b4379909" Jan 31 04:51:56 crc kubenswrapper[4827]: E0131 04:51:56.110557 4827 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jxh94_openshift-machine-config-operator(e63dbb73-e1a2-4796-83c5-2a88e55566b5)\"" pod="openshift-machine-config-operator/machine-config-daemon-jxh94" podUID="e63dbb73-e1a2-4796-83c5-2a88e55566b5" Jan 31 04:52:09 crc kubenswrapper[4827]: I0131 04:52:09.109833 4827 scope.go:117] "RemoveContainer" containerID="674d923fc7530a2623e23c68a51611305fa42248d203b4bf9ab96e38b4379909" Jan 31 04:52:09 crc kubenswrapper[4827]: E0131 04:52:09.110774 4827 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jxh94_openshift-machine-config-operator(e63dbb73-e1a2-4796-83c5-2a88e55566b5)\"" pod="openshift-machine-config-operator/machine-config-daemon-jxh94" podUID="e63dbb73-e1a2-4796-83c5-2a88e55566b5" Jan 31 04:52:20 crc kubenswrapper[4827]: I0131 04:52:20.111668 4827 scope.go:117] "RemoveContainer" containerID="674d923fc7530a2623e23c68a51611305fa42248d203b4bf9ab96e38b4379909" Jan 31 04:52:21 crc kubenswrapper[4827]: I0131 04:52:21.264104 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-jxh94" event={"ID":"e63dbb73-e1a2-4796-83c5-2a88e55566b5","Type":"ContainerStarted","Data":"d56b695f7df6286e4426187ed83c5f72a0a776a98361c8e832a46f510a0923e7"} Jan 31 04:54:47 crc kubenswrapper[4827]: I0131 04:54:47.371210 4827 patch_prober.go:28] interesting pod/machine-config-daemon-jxh94 container/machine-config-daemon 
namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 31 04:54:47 crc kubenswrapper[4827]: I0131 04:54:47.371784 4827 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-jxh94" podUID="e63dbb73-e1a2-4796-83c5-2a88e55566b5" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 31 04:55:17 crc kubenswrapper[4827]: I0131 04:55:17.371463 4827 patch_prober.go:28] interesting pod/machine-config-daemon-jxh94 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 31 04:55:17 crc kubenswrapper[4827]: I0131 04:55:17.372123 4827 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-jxh94" podUID="e63dbb73-e1a2-4796-83c5-2a88e55566b5" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 31 04:55:19 crc kubenswrapper[4827]: I0131 04:55:19.770108 4827 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-vwxgb"] Jan 31 04:55:19 crc kubenswrapper[4827]: E0131 04:55:19.771218 4827 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ff72b39a-0788-4c7c-bd31-910f6865788d" containerName="extract-utilities" Jan 31 04:55:19 crc kubenswrapper[4827]: I0131 04:55:19.771242 4827 state_mem.go:107] "Deleted CPUSet assignment" podUID="ff72b39a-0788-4c7c-bd31-910f6865788d" containerName="extract-utilities" Jan 31 04:55:19 crc kubenswrapper[4827]: E0131 04:55:19.771318 4827 cpu_manager.go:410] 
"RemoveStaleState: removing container" podUID="ff72b39a-0788-4c7c-bd31-910f6865788d" containerName="extract-content" Jan 31 04:55:19 crc kubenswrapper[4827]: I0131 04:55:19.771332 4827 state_mem.go:107] "Deleted CPUSet assignment" podUID="ff72b39a-0788-4c7c-bd31-910f6865788d" containerName="extract-content" Jan 31 04:55:19 crc kubenswrapper[4827]: E0131 04:55:19.771358 4827 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ff72b39a-0788-4c7c-bd31-910f6865788d" containerName="registry-server" Jan 31 04:55:19 crc kubenswrapper[4827]: I0131 04:55:19.771371 4827 state_mem.go:107] "Deleted CPUSet assignment" podUID="ff72b39a-0788-4c7c-bd31-910f6865788d" containerName="registry-server" Jan 31 04:55:19 crc kubenswrapper[4827]: I0131 04:55:19.771651 4827 memory_manager.go:354] "RemoveStaleState removing state" podUID="ff72b39a-0788-4c7c-bd31-910f6865788d" containerName="registry-server" Jan 31 04:55:19 crc kubenswrapper[4827]: I0131 04:55:19.773472 4827 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-vwxgb" Jan 31 04:55:19 crc kubenswrapper[4827]: I0131 04:55:19.816284 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-vwxgb"] Jan 31 04:55:19 crc kubenswrapper[4827]: I0131 04:55:19.927238 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/174ec074-ed7c-4989-9d17-6efc73f14843-catalog-content\") pod \"redhat-operators-vwxgb\" (UID: \"174ec074-ed7c-4989-9d17-6efc73f14843\") " pod="openshift-marketplace/redhat-operators-vwxgb" Jan 31 04:55:19 crc kubenswrapper[4827]: I0131 04:55:19.927615 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/174ec074-ed7c-4989-9d17-6efc73f14843-utilities\") pod \"redhat-operators-vwxgb\" (UID: \"174ec074-ed7c-4989-9d17-6efc73f14843\") " pod="openshift-marketplace/redhat-operators-vwxgb" Jan 31 04:55:19 crc kubenswrapper[4827]: I0131 04:55:19.927714 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vvtq9\" (UniqueName: \"kubernetes.io/projected/174ec074-ed7c-4989-9d17-6efc73f14843-kube-api-access-vvtq9\") pod \"redhat-operators-vwxgb\" (UID: \"174ec074-ed7c-4989-9d17-6efc73f14843\") " pod="openshift-marketplace/redhat-operators-vwxgb" Jan 31 04:55:20 crc kubenswrapper[4827]: I0131 04:55:20.030207 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/174ec074-ed7c-4989-9d17-6efc73f14843-catalog-content\") pod \"redhat-operators-vwxgb\" (UID: \"174ec074-ed7c-4989-9d17-6efc73f14843\") " pod="openshift-marketplace/redhat-operators-vwxgb" Jan 31 04:55:20 crc kubenswrapper[4827]: I0131 04:55:20.030277 4827 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/174ec074-ed7c-4989-9d17-6efc73f14843-utilities\") pod \"redhat-operators-vwxgb\" (UID: \"174ec074-ed7c-4989-9d17-6efc73f14843\") " pod="openshift-marketplace/redhat-operators-vwxgb" Jan 31 04:55:20 crc kubenswrapper[4827]: I0131 04:55:20.030335 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vvtq9\" (UniqueName: \"kubernetes.io/projected/174ec074-ed7c-4989-9d17-6efc73f14843-kube-api-access-vvtq9\") pod \"redhat-operators-vwxgb\" (UID: \"174ec074-ed7c-4989-9d17-6efc73f14843\") " pod="openshift-marketplace/redhat-operators-vwxgb" Jan 31 04:55:20 crc kubenswrapper[4827]: I0131 04:55:20.030781 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/174ec074-ed7c-4989-9d17-6efc73f14843-catalog-content\") pod \"redhat-operators-vwxgb\" (UID: \"174ec074-ed7c-4989-9d17-6efc73f14843\") " pod="openshift-marketplace/redhat-operators-vwxgb" Jan 31 04:55:20 crc kubenswrapper[4827]: I0131 04:55:20.030803 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/174ec074-ed7c-4989-9d17-6efc73f14843-utilities\") pod \"redhat-operators-vwxgb\" (UID: \"174ec074-ed7c-4989-9d17-6efc73f14843\") " pod="openshift-marketplace/redhat-operators-vwxgb" Jan 31 04:55:20 crc kubenswrapper[4827]: I0131 04:55:20.288095 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vvtq9\" (UniqueName: \"kubernetes.io/projected/174ec074-ed7c-4989-9d17-6efc73f14843-kube-api-access-vvtq9\") pod \"redhat-operators-vwxgb\" (UID: \"174ec074-ed7c-4989-9d17-6efc73f14843\") " pod="openshift-marketplace/redhat-operators-vwxgb" Jan 31 04:55:20 crc kubenswrapper[4827]: I0131 04:55:20.421608 4827 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-vwxgb" Jan 31 04:55:20 crc kubenswrapper[4827]: I0131 04:55:20.927175 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-vwxgb"] Jan 31 04:55:21 crc kubenswrapper[4827]: I0131 04:55:21.926487 4827 generic.go:334] "Generic (PLEG): container finished" podID="174ec074-ed7c-4989-9d17-6efc73f14843" containerID="140d312f9d48b0cd392e7d3d5a549755e26a99c3ebf783338fb43900321a298a" exitCode=0 Jan 31 04:55:21 crc kubenswrapper[4827]: I0131 04:55:21.926683 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-vwxgb" event={"ID":"174ec074-ed7c-4989-9d17-6efc73f14843","Type":"ContainerDied","Data":"140d312f9d48b0cd392e7d3d5a549755e26a99c3ebf783338fb43900321a298a"} Jan 31 04:55:21 crc kubenswrapper[4827]: I0131 04:55:21.926931 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-vwxgb" event={"ID":"174ec074-ed7c-4989-9d17-6efc73f14843","Type":"ContainerStarted","Data":"567bc969b03207d8a230a4e71cd97c78cf9fa047103d27a39804d3b78ec0a327"} Jan 31 04:55:21 crc kubenswrapper[4827]: I0131 04:55:21.930401 4827 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 31 04:55:22 crc kubenswrapper[4827]: I0131 04:55:22.936669 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-vwxgb" event={"ID":"174ec074-ed7c-4989-9d17-6efc73f14843","Type":"ContainerStarted","Data":"da594a90beddb875b985b26f53726267dda7d76c2d959278211b8453ca41ba12"} Jan 31 04:55:25 crc kubenswrapper[4827]: I0131 04:55:25.965739 4827 generic.go:334] "Generic (PLEG): container finished" podID="174ec074-ed7c-4989-9d17-6efc73f14843" containerID="da594a90beddb875b985b26f53726267dda7d76c2d959278211b8453ca41ba12" exitCode=0 Jan 31 04:55:25 crc kubenswrapper[4827]: I0131 04:55:25.966446 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/redhat-operators-vwxgb" event={"ID":"174ec074-ed7c-4989-9d17-6efc73f14843","Type":"ContainerDied","Data":"da594a90beddb875b985b26f53726267dda7d76c2d959278211b8453ca41ba12"} Jan 31 04:55:26 crc kubenswrapper[4827]: I0131 04:55:26.975001 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-vwxgb" event={"ID":"174ec074-ed7c-4989-9d17-6efc73f14843","Type":"ContainerStarted","Data":"7d250919f8ee71b15edb065748b694b05eea0fcc233742894f563d5be3c33209"} Jan 31 04:55:27 crc kubenswrapper[4827]: I0131 04:55:27.000756 4827 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-vwxgb" podStartSLOduration=3.556025133 podStartE2EDuration="8.000733958s" podCreationTimestamp="2026-01-31 04:55:19 +0000 UTC" firstStartedPulling="2026-01-31 04:55:21.92999038 +0000 UTC m=+4114.617070859" lastFinishedPulling="2026-01-31 04:55:26.374699235 +0000 UTC m=+4119.061779684" observedRunningTime="2026-01-31 04:55:26.995429844 +0000 UTC m=+4119.682510303" watchObservedRunningTime="2026-01-31 04:55:27.000733958 +0000 UTC m=+4119.687814407" Jan 31 04:55:30 crc kubenswrapper[4827]: I0131 04:55:30.422845 4827 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-vwxgb" Jan 31 04:55:30 crc kubenswrapper[4827]: I0131 04:55:30.424313 4827 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-vwxgb" Jan 31 04:55:31 crc kubenswrapper[4827]: I0131 04:55:31.828290 4827 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-vwxgb" podUID="174ec074-ed7c-4989-9d17-6efc73f14843" containerName="registry-server" probeResult="failure" output=< Jan 31 04:55:31 crc kubenswrapper[4827]: timeout: failed to connect service ":50051" within 1s Jan 31 04:55:31 crc kubenswrapper[4827]: > Jan 31 04:55:39 crc kubenswrapper[4827]: I0131 
04:55:39.161341 4827 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-rjmfl"] Jan 31 04:55:39 crc kubenswrapper[4827]: I0131 04:55:39.165345 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rjmfl" Jan 31 04:55:39 crc kubenswrapper[4827]: I0131 04:55:39.219572 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9f7ba251-3cb7-43a5-9e0e-6de6285d56b0-catalog-content\") pod \"redhat-marketplace-rjmfl\" (UID: \"9f7ba251-3cb7-43a5-9e0e-6de6285d56b0\") " pod="openshift-marketplace/redhat-marketplace-rjmfl" Jan 31 04:55:39 crc kubenswrapper[4827]: I0131 04:55:39.220417 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9f7ba251-3cb7-43a5-9e0e-6de6285d56b0-utilities\") pod \"redhat-marketplace-rjmfl\" (UID: \"9f7ba251-3cb7-43a5-9e0e-6de6285d56b0\") " pod="openshift-marketplace/redhat-marketplace-rjmfl" Jan 31 04:55:39 crc kubenswrapper[4827]: I0131 04:55:39.220540 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9xtbx\" (UniqueName: \"kubernetes.io/projected/9f7ba251-3cb7-43a5-9e0e-6de6285d56b0-kube-api-access-9xtbx\") pod \"redhat-marketplace-rjmfl\" (UID: \"9f7ba251-3cb7-43a5-9e0e-6de6285d56b0\") " pod="openshift-marketplace/redhat-marketplace-rjmfl" Jan 31 04:55:39 crc kubenswrapper[4827]: I0131 04:55:39.229391 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-rjmfl"] Jan 31 04:55:39 crc kubenswrapper[4827]: I0131 04:55:39.322544 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9f7ba251-3cb7-43a5-9e0e-6de6285d56b0-catalog-content\") pod 
\"redhat-marketplace-rjmfl\" (UID: \"9f7ba251-3cb7-43a5-9e0e-6de6285d56b0\") " pod="openshift-marketplace/redhat-marketplace-rjmfl" Jan 31 04:55:39 crc kubenswrapper[4827]: I0131 04:55:39.322609 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9f7ba251-3cb7-43a5-9e0e-6de6285d56b0-utilities\") pod \"redhat-marketplace-rjmfl\" (UID: \"9f7ba251-3cb7-43a5-9e0e-6de6285d56b0\") " pod="openshift-marketplace/redhat-marketplace-rjmfl" Jan 31 04:55:39 crc kubenswrapper[4827]: I0131 04:55:39.322654 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9xtbx\" (UniqueName: \"kubernetes.io/projected/9f7ba251-3cb7-43a5-9e0e-6de6285d56b0-kube-api-access-9xtbx\") pod \"redhat-marketplace-rjmfl\" (UID: \"9f7ba251-3cb7-43a5-9e0e-6de6285d56b0\") " pod="openshift-marketplace/redhat-marketplace-rjmfl" Jan 31 04:55:39 crc kubenswrapper[4827]: I0131 04:55:39.323249 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9f7ba251-3cb7-43a5-9e0e-6de6285d56b0-utilities\") pod \"redhat-marketplace-rjmfl\" (UID: \"9f7ba251-3cb7-43a5-9e0e-6de6285d56b0\") " pod="openshift-marketplace/redhat-marketplace-rjmfl" Jan 31 04:55:39 crc kubenswrapper[4827]: I0131 04:55:39.323244 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9f7ba251-3cb7-43a5-9e0e-6de6285d56b0-catalog-content\") pod \"redhat-marketplace-rjmfl\" (UID: \"9f7ba251-3cb7-43a5-9e0e-6de6285d56b0\") " pod="openshift-marketplace/redhat-marketplace-rjmfl" Jan 31 04:55:39 crc kubenswrapper[4827]: I0131 04:55:39.345789 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9xtbx\" (UniqueName: \"kubernetes.io/projected/9f7ba251-3cb7-43a5-9e0e-6de6285d56b0-kube-api-access-9xtbx\") pod \"redhat-marketplace-rjmfl\" (UID: 
\"9f7ba251-3cb7-43a5-9e0e-6de6285d56b0\") " pod="openshift-marketplace/redhat-marketplace-rjmfl" Jan 31 04:55:39 crc kubenswrapper[4827]: I0131 04:55:39.526341 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rjmfl" Jan 31 04:55:40 crc kubenswrapper[4827]: I0131 04:55:40.009583 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-rjmfl"] Jan 31 04:55:40 crc kubenswrapper[4827]: I0131 04:55:40.105131 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rjmfl" event={"ID":"9f7ba251-3cb7-43a5-9e0e-6de6285d56b0","Type":"ContainerStarted","Data":"07f312c41fd18c3f6b5fae43fc4eb3b37fe26eeaafe0f35e41ce0f573a21cdc7"} Jan 31 04:55:40 crc kubenswrapper[4827]: I0131 04:55:40.970161 4827 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-vwxgb" Jan 31 04:55:41 crc kubenswrapper[4827]: I0131 04:55:41.050021 4827 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-vwxgb" Jan 31 04:55:41 crc kubenswrapper[4827]: I0131 04:55:41.116265 4827 generic.go:334] "Generic (PLEG): container finished" podID="9f7ba251-3cb7-43a5-9e0e-6de6285d56b0" containerID="509cb513fefb9a0afdccb6d25b620ff3aac5ad9eb2237574cb8bdae7f99ea617" exitCode=0 Jan 31 04:55:41 crc kubenswrapper[4827]: I0131 04:55:41.116344 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rjmfl" event={"ID":"9f7ba251-3cb7-43a5-9e0e-6de6285d56b0","Type":"ContainerDied","Data":"509cb513fefb9a0afdccb6d25b620ff3aac5ad9eb2237574cb8bdae7f99ea617"} Jan 31 04:55:42 crc kubenswrapper[4827]: I0131 04:55:42.146741 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rjmfl" 
event={"ID":"9f7ba251-3cb7-43a5-9e0e-6de6285d56b0","Type":"ContainerStarted","Data":"0bad8bc3748ef3621b7221033634e22ebce473ab19addabf9fb742bae622df47"} Jan 31 04:55:43 crc kubenswrapper[4827]: I0131 04:55:43.166763 4827 generic.go:334] "Generic (PLEG): container finished" podID="9f7ba251-3cb7-43a5-9e0e-6de6285d56b0" containerID="0bad8bc3748ef3621b7221033634e22ebce473ab19addabf9fb742bae622df47" exitCode=0 Jan 31 04:55:43 crc kubenswrapper[4827]: I0131 04:55:43.166932 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rjmfl" event={"ID":"9f7ba251-3cb7-43a5-9e0e-6de6285d56b0","Type":"ContainerDied","Data":"0bad8bc3748ef3621b7221033634e22ebce473ab19addabf9fb742bae622df47"} Jan 31 04:55:43 crc kubenswrapper[4827]: I0131 04:55:43.348723 4827 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-vwxgb"] Jan 31 04:55:43 crc kubenswrapper[4827]: I0131 04:55:43.349020 4827 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-vwxgb" podUID="174ec074-ed7c-4989-9d17-6efc73f14843" containerName="registry-server" containerID="cri-o://7d250919f8ee71b15edb065748b694b05eea0fcc233742894f563d5be3c33209" gracePeriod=2 Jan 31 04:55:43 crc kubenswrapper[4827]: I0131 04:55:43.956866 4827 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-vwxgb" Jan 31 04:55:44 crc kubenswrapper[4827]: I0131 04:55:44.017874 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/174ec074-ed7c-4989-9d17-6efc73f14843-utilities\") pod \"174ec074-ed7c-4989-9d17-6efc73f14843\" (UID: \"174ec074-ed7c-4989-9d17-6efc73f14843\") " Jan 31 04:55:44 crc kubenswrapper[4827]: I0131 04:55:44.018132 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vvtq9\" (UniqueName: \"kubernetes.io/projected/174ec074-ed7c-4989-9d17-6efc73f14843-kube-api-access-vvtq9\") pod \"174ec074-ed7c-4989-9d17-6efc73f14843\" (UID: \"174ec074-ed7c-4989-9d17-6efc73f14843\") " Jan 31 04:55:44 crc kubenswrapper[4827]: I0131 04:55:44.018215 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/174ec074-ed7c-4989-9d17-6efc73f14843-catalog-content\") pod \"174ec074-ed7c-4989-9d17-6efc73f14843\" (UID: \"174ec074-ed7c-4989-9d17-6efc73f14843\") " Jan 31 04:55:44 crc kubenswrapper[4827]: I0131 04:55:44.018931 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/174ec074-ed7c-4989-9d17-6efc73f14843-utilities" (OuterVolumeSpecName: "utilities") pod "174ec074-ed7c-4989-9d17-6efc73f14843" (UID: "174ec074-ed7c-4989-9d17-6efc73f14843"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 04:55:44 crc kubenswrapper[4827]: I0131 04:55:44.036205 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/174ec074-ed7c-4989-9d17-6efc73f14843-kube-api-access-vvtq9" (OuterVolumeSpecName: "kube-api-access-vvtq9") pod "174ec074-ed7c-4989-9d17-6efc73f14843" (UID: "174ec074-ed7c-4989-9d17-6efc73f14843"). InnerVolumeSpecName "kube-api-access-vvtq9". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 04:55:44 crc kubenswrapper[4827]: I0131 04:55:44.121175 4827 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/174ec074-ed7c-4989-9d17-6efc73f14843-utilities\") on node \"crc\" DevicePath \"\"" Jan 31 04:55:44 crc kubenswrapper[4827]: I0131 04:55:44.121198 4827 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vvtq9\" (UniqueName: \"kubernetes.io/projected/174ec074-ed7c-4989-9d17-6efc73f14843-kube-api-access-vvtq9\") on node \"crc\" DevicePath \"\"" Jan 31 04:55:44 crc kubenswrapper[4827]: I0131 04:55:44.130623 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/174ec074-ed7c-4989-9d17-6efc73f14843-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "174ec074-ed7c-4989-9d17-6efc73f14843" (UID: "174ec074-ed7c-4989-9d17-6efc73f14843"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 04:55:44 crc kubenswrapper[4827]: I0131 04:55:44.178163 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rjmfl" event={"ID":"9f7ba251-3cb7-43a5-9e0e-6de6285d56b0","Type":"ContainerStarted","Data":"5450c4fc29d0fb6ef73feda82e69c8ba354801b5935cfa2263b35bd712639d41"} Jan 31 04:55:44 crc kubenswrapper[4827]: I0131 04:55:44.185914 4827 generic.go:334] "Generic (PLEG): container finished" podID="174ec074-ed7c-4989-9d17-6efc73f14843" containerID="7d250919f8ee71b15edb065748b694b05eea0fcc233742894f563d5be3c33209" exitCode=0 Jan 31 04:55:44 crc kubenswrapper[4827]: I0131 04:55:44.185987 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-vwxgb" event={"ID":"174ec074-ed7c-4989-9d17-6efc73f14843","Type":"ContainerDied","Data":"7d250919f8ee71b15edb065748b694b05eea0fcc233742894f563d5be3c33209"} Jan 31 04:55:44 crc kubenswrapper[4827]: I0131 
04:55:44.186017 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-vwxgb" event={"ID":"174ec074-ed7c-4989-9d17-6efc73f14843","Type":"ContainerDied","Data":"567bc969b03207d8a230a4e71cd97c78cf9fa047103d27a39804d3b78ec0a327"} Jan 31 04:55:44 crc kubenswrapper[4827]: I0131 04:55:44.186035 4827 scope.go:117] "RemoveContainer" containerID="7d250919f8ee71b15edb065748b694b05eea0fcc233742894f563d5be3c33209" Jan 31 04:55:44 crc kubenswrapper[4827]: I0131 04:55:44.186052 4827 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-vwxgb" Jan 31 04:55:44 crc kubenswrapper[4827]: I0131 04:55:44.204592 4827 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-rjmfl" podStartSLOduration=2.6323600259999997 podStartE2EDuration="5.204577138s" podCreationTimestamp="2026-01-31 04:55:39 +0000 UTC" firstStartedPulling="2026-01-31 04:55:41.118975267 +0000 UTC m=+4133.806055726" lastFinishedPulling="2026-01-31 04:55:43.691192389 +0000 UTC m=+4136.378272838" observedRunningTime="2026-01-31 04:55:44.200239143 +0000 UTC m=+4136.887319612" watchObservedRunningTime="2026-01-31 04:55:44.204577138 +0000 UTC m=+4136.891657587" Jan 31 04:55:44 crc kubenswrapper[4827]: I0131 04:55:44.223057 4827 scope.go:117] "RemoveContainer" containerID="da594a90beddb875b985b26f53726267dda7d76c2d959278211b8453ca41ba12" Jan 31 04:55:44 crc kubenswrapper[4827]: I0131 04:55:44.224255 4827 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/174ec074-ed7c-4989-9d17-6efc73f14843-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 31 04:55:44 crc kubenswrapper[4827]: I0131 04:55:44.239848 4827 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-vwxgb"] Jan 31 04:55:44 crc kubenswrapper[4827]: I0131 04:55:44.247730 4827 kubelet.go:2431] 
"SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-vwxgb"] Jan 31 04:55:44 crc kubenswrapper[4827]: I0131 04:55:44.256553 4827 scope.go:117] "RemoveContainer" containerID="140d312f9d48b0cd392e7d3d5a549755e26a99c3ebf783338fb43900321a298a" Jan 31 04:55:44 crc kubenswrapper[4827]: I0131 04:55:44.282641 4827 scope.go:117] "RemoveContainer" containerID="7d250919f8ee71b15edb065748b694b05eea0fcc233742894f563d5be3c33209" Jan 31 04:55:44 crc kubenswrapper[4827]: E0131 04:55:44.292010 4827 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7d250919f8ee71b15edb065748b694b05eea0fcc233742894f563d5be3c33209\": container with ID starting with 7d250919f8ee71b15edb065748b694b05eea0fcc233742894f563d5be3c33209 not found: ID does not exist" containerID="7d250919f8ee71b15edb065748b694b05eea0fcc233742894f563d5be3c33209" Jan 31 04:55:44 crc kubenswrapper[4827]: I0131 04:55:44.292049 4827 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7d250919f8ee71b15edb065748b694b05eea0fcc233742894f563d5be3c33209"} err="failed to get container status \"7d250919f8ee71b15edb065748b694b05eea0fcc233742894f563d5be3c33209\": rpc error: code = NotFound desc = could not find container \"7d250919f8ee71b15edb065748b694b05eea0fcc233742894f563d5be3c33209\": container with ID starting with 7d250919f8ee71b15edb065748b694b05eea0fcc233742894f563d5be3c33209 not found: ID does not exist" Jan 31 04:55:44 crc kubenswrapper[4827]: I0131 04:55:44.292072 4827 scope.go:117] "RemoveContainer" containerID="da594a90beddb875b985b26f53726267dda7d76c2d959278211b8453ca41ba12" Jan 31 04:55:44 crc kubenswrapper[4827]: E0131 04:55:44.292495 4827 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"da594a90beddb875b985b26f53726267dda7d76c2d959278211b8453ca41ba12\": container with ID starting with 
da594a90beddb875b985b26f53726267dda7d76c2d959278211b8453ca41ba12 not found: ID does not exist" containerID="da594a90beddb875b985b26f53726267dda7d76c2d959278211b8453ca41ba12" Jan 31 04:55:44 crc kubenswrapper[4827]: I0131 04:55:44.292542 4827 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"da594a90beddb875b985b26f53726267dda7d76c2d959278211b8453ca41ba12"} err="failed to get container status \"da594a90beddb875b985b26f53726267dda7d76c2d959278211b8453ca41ba12\": rpc error: code = NotFound desc = could not find container \"da594a90beddb875b985b26f53726267dda7d76c2d959278211b8453ca41ba12\": container with ID starting with da594a90beddb875b985b26f53726267dda7d76c2d959278211b8453ca41ba12 not found: ID does not exist" Jan 31 04:55:44 crc kubenswrapper[4827]: I0131 04:55:44.292569 4827 scope.go:117] "RemoveContainer" containerID="140d312f9d48b0cd392e7d3d5a549755e26a99c3ebf783338fb43900321a298a" Jan 31 04:55:44 crc kubenswrapper[4827]: E0131 04:55:44.292985 4827 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"140d312f9d48b0cd392e7d3d5a549755e26a99c3ebf783338fb43900321a298a\": container with ID starting with 140d312f9d48b0cd392e7d3d5a549755e26a99c3ebf783338fb43900321a298a not found: ID does not exist" containerID="140d312f9d48b0cd392e7d3d5a549755e26a99c3ebf783338fb43900321a298a" Jan 31 04:55:44 crc kubenswrapper[4827]: I0131 04:55:44.293008 4827 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"140d312f9d48b0cd392e7d3d5a549755e26a99c3ebf783338fb43900321a298a"} err="failed to get container status \"140d312f9d48b0cd392e7d3d5a549755e26a99c3ebf783338fb43900321a298a\": rpc error: code = NotFound desc = could not find container \"140d312f9d48b0cd392e7d3d5a549755e26a99c3ebf783338fb43900321a298a\": container with ID starting with 140d312f9d48b0cd392e7d3d5a549755e26a99c3ebf783338fb43900321a298a not found: ID does not 
exist" Jan 31 04:55:44 crc kubenswrapper[4827]: E0131 04:55:44.381578 4827 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod174ec074_ed7c_4989_9d17_6efc73f14843.slice\": RecentStats: unable to find data in memory cache]" Jan 31 04:55:46 crc kubenswrapper[4827]: I0131 04:55:46.120675 4827 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="174ec074-ed7c-4989-9d17-6efc73f14843" path="/var/lib/kubelet/pods/174ec074-ed7c-4989-9d17-6efc73f14843/volumes" Jan 31 04:55:47 crc kubenswrapper[4827]: I0131 04:55:47.371639 4827 patch_prober.go:28] interesting pod/machine-config-daemon-jxh94 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 31 04:55:47 crc kubenswrapper[4827]: I0131 04:55:47.372053 4827 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-jxh94" podUID="e63dbb73-e1a2-4796-83c5-2a88e55566b5" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 31 04:55:47 crc kubenswrapper[4827]: I0131 04:55:47.372113 4827 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-jxh94" Jan 31 04:55:47 crc kubenswrapper[4827]: I0131 04:55:47.372925 4827 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"d56b695f7df6286e4426187ed83c5f72a0a776a98361c8e832a46f510a0923e7"} pod="openshift-machine-config-operator/machine-config-daemon-jxh94" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 31 04:55:47 
crc kubenswrapper[4827]: I0131 04:55:47.372980 4827 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-jxh94" podUID="e63dbb73-e1a2-4796-83c5-2a88e55566b5" containerName="machine-config-daemon" containerID="cri-o://d56b695f7df6286e4426187ed83c5f72a0a776a98361c8e832a46f510a0923e7" gracePeriod=600 Jan 31 04:55:48 crc kubenswrapper[4827]: I0131 04:55:48.221916 4827 generic.go:334] "Generic (PLEG): container finished" podID="e63dbb73-e1a2-4796-83c5-2a88e55566b5" containerID="d56b695f7df6286e4426187ed83c5f72a0a776a98361c8e832a46f510a0923e7" exitCode=0 Jan 31 04:55:48 crc kubenswrapper[4827]: I0131 04:55:48.221969 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-jxh94" event={"ID":"e63dbb73-e1a2-4796-83c5-2a88e55566b5","Type":"ContainerDied","Data":"d56b695f7df6286e4426187ed83c5f72a0a776a98361c8e832a46f510a0923e7"} Jan 31 04:55:48 crc kubenswrapper[4827]: I0131 04:55:48.222495 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-jxh94" event={"ID":"e63dbb73-e1a2-4796-83c5-2a88e55566b5","Type":"ContainerStarted","Data":"b42e19e8839f8c6eed889e7aa41e227379f950f6d8d29310a2158f9035f65c48"} Jan 31 04:55:48 crc kubenswrapper[4827]: I0131 04:55:48.222521 4827 scope.go:117] "RemoveContainer" containerID="674d923fc7530a2623e23c68a51611305fa42248d203b4bf9ab96e38b4379909" Jan 31 04:55:49 crc kubenswrapper[4827]: I0131 04:55:49.527182 4827 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-rjmfl" Jan 31 04:55:49 crc kubenswrapper[4827]: I0131 04:55:49.527857 4827 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-rjmfl" Jan 31 04:55:49 crc kubenswrapper[4827]: I0131 04:55:49.577847 4827 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" 
pod="openshift-marketplace/redhat-marketplace-rjmfl" Jan 31 04:55:50 crc kubenswrapper[4827]: I0131 04:55:50.315748 4827 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-rjmfl" Jan 31 04:55:50 crc kubenswrapper[4827]: I0131 04:55:50.819737 4827 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-rjmfl"] Jan 31 04:55:52 crc kubenswrapper[4827]: I0131 04:55:52.264508 4827 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-rjmfl" podUID="9f7ba251-3cb7-43a5-9e0e-6de6285d56b0" containerName="registry-server" containerID="cri-o://5450c4fc29d0fb6ef73feda82e69c8ba354801b5935cfa2263b35bd712639d41" gracePeriod=2 Jan 31 04:55:52 crc kubenswrapper[4827]: I0131 04:55:52.873581 4827 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rjmfl" Jan 31 04:55:52 crc kubenswrapper[4827]: I0131 04:55:52.995211 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9f7ba251-3cb7-43a5-9e0e-6de6285d56b0-utilities\") pod \"9f7ba251-3cb7-43a5-9e0e-6de6285d56b0\" (UID: \"9f7ba251-3cb7-43a5-9e0e-6de6285d56b0\") " Jan 31 04:55:52 crc kubenswrapper[4827]: I0131 04:55:52.995561 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9f7ba251-3cb7-43a5-9e0e-6de6285d56b0-catalog-content\") pod \"9f7ba251-3cb7-43a5-9e0e-6de6285d56b0\" (UID: \"9f7ba251-3cb7-43a5-9e0e-6de6285d56b0\") " Jan 31 04:55:52 crc kubenswrapper[4827]: I0131 04:55:52.995819 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9xtbx\" (UniqueName: \"kubernetes.io/projected/9f7ba251-3cb7-43a5-9e0e-6de6285d56b0-kube-api-access-9xtbx\") pod 
\"9f7ba251-3cb7-43a5-9e0e-6de6285d56b0\" (UID: \"9f7ba251-3cb7-43a5-9e0e-6de6285d56b0\") " Jan 31 04:55:52 crc kubenswrapper[4827]: I0131 04:55:52.996154 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9f7ba251-3cb7-43a5-9e0e-6de6285d56b0-utilities" (OuterVolumeSpecName: "utilities") pod "9f7ba251-3cb7-43a5-9e0e-6de6285d56b0" (UID: "9f7ba251-3cb7-43a5-9e0e-6de6285d56b0"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 04:55:52 crc kubenswrapper[4827]: I0131 04:55:52.996595 4827 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9f7ba251-3cb7-43a5-9e0e-6de6285d56b0-utilities\") on node \"crc\" DevicePath \"\"" Jan 31 04:55:53 crc kubenswrapper[4827]: I0131 04:55:53.002437 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9f7ba251-3cb7-43a5-9e0e-6de6285d56b0-kube-api-access-9xtbx" (OuterVolumeSpecName: "kube-api-access-9xtbx") pod "9f7ba251-3cb7-43a5-9e0e-6de6285d56b0" (UID: "9f7ba251-3cb7-43a5-9e0e-6de6285d56b0"). InnerVolumeSpecName "kube-api-access-9xtbx". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 04:55:53 crc kubenswrapper[4827]: I0131 04:55:53.023123 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9f7ba251-3cb7-43a5-9e0e-6de6285d56b0-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "9f7ba251-3cb7-43a5-9e0e-6de6285d56b0" (UID: "9f7ba251-3cb7-43a5-9e0e-6de6285d56b0"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 04:55:53 crc kubenswrapper[4827]: I0131 04:55:53.098868 4827 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9f7ba251-3cb7-43a5-9e0e-6de6285d56b0-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 31 04:55:53 crc kubenswrapper[4827]: I0131 04:55:53.098921 4827 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9xtbx\" (UniqueName: \"kubernetes.io/projected/9f7ba251-3cb7-43a5-9e0e-6de6285d56b0-kube-api-access-9xtbx\") on node \"crc\" DevicePath \"\"" Jan 31 04:55:53 crc kubenswrapper[4827]: I0131 04:55:53.275038 4827 generic.go:334] "Generic (PLEG): container finished" podID="9f7ba251-3cb7-43a5-9e0e-6de6285d56b0" containerID="5450c4fc29d0fb6ef73feda82e69c8ba354801b5935cfa2263b35bd712639d41" exitCode=0 Jan 31 04:55:53 crc kubenswrapper[4827]: I0131 04:55:53.275083 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rjmfl" event={"ID":"9f7ba251-3cb7-43a5-9e0e-6de6285d56b0","Type":"ContainerDied","Data":"5450c4fc29d0fb6ef73feda82e69c8ba354801b5935cfa2263b35bd712639d41"} Jan 31 04:55:53 crc kubenswrapper[4827]: I0131 04:55:53.275114 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rjmfl" event={"ID":"9f7ba251-3cb7-43a5-9e0e-6de6285d56b0","Type":"ContainerDied","Data":"07f312c41fd18c3f6b5fae43fc4eb3b37fe26eeaafe0f35e41ce0f573a21cdc7"} Jan 31 04:55:53 crc kubenswrapper[4827]: I0131 04:55:53.275133 4827 scope.go:117] "RemoveContainer" containerID="5450c4fc29d0fb6ef73feda82e69c8ba354801b5935cfa2263b35bd712639d41" Jan 31 04:55:53 crc kubenswrapper[4827]: I0131 04:55:53.275275 4827 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rjmfl" Jan 31 04:55:53 crc kubenswrapper[4827]: I0131 04:55:53.312675 4827 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-rjmfl"] Jan 31 04:55:53 crc kubenswrapper[4827]: I0131 04:55:53.323210 4827 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-rjmfl"] Jan 31 04:55:53 crc kubenswrapper[4827]: I0131 04:55:53.326121 4827 scope.go:117] "RemoveContainer" containerID="0bad8bc3748ef3621b7221033634e22ebce473ab19addabf9fb742bae622df47" Jan 31 04:55:53 crc kubenswrapper[4827]: I0131 04:55:53.356746 4827 scope.go:117] "RemoveContainer" containerID="509cb513fefb9a0afdccb6d25b620ff3aac5ad9eb2237574cb8bdae7f99ea617" Jan 31 04:55:53 crc kubenswrapper[4827]: I0131 04:55:53.406798 4827 scope.go:117] "RemoveContainer" containerID="5450c4fc29d0fb6ef73feda82e69c8ba354801b5935cfa2263b35bd712639d41" Jan 31 04:55:53 crc kubenswrapper[4827]: E0131 04:55:53.407304 4827 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5450c4fc29d0fb6ef73feda82e69c8ba354801b5935cfa2263b35bd712639d41\": container with ID starting with 5450c4fc29d0fb6ef73feda82e69c8ba354801b5935cfa2263b35bd712639d41 not found: ID does not exist" containerID="5450c4fc29d0fb6ef73feda82e69c8ba354801b5935cfa2263b35bd712639d41" Jan 31 04:55:53 crc kubenswrapper[4827]: I0131 04:55:53.407349 4827 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5450c4fc29d0fb6ef73feda82e69c8ba354801b5935cfa2263b35bd712639d41"} err="failed to get container status \"5450c4fc29d0fb6ef73feda82e69c8ba354801b5935cfa2263b35bd712639d41\": rpc error: code = NotFound desc = could not find container \"5450c4fc29d0fb6ef73feda82e69c8ba354801b5935cfa2263b35bd712639d41\": container with ID starting with 5450c4fc29d0fb6ef73feda82e69c8ba354801b5935cfa2263b35bd712639d41 not found: 
ID does not exist" Jan 31 04:55:53 crc kubenswrapper[4827]: I0131 04:55:53.407375 4827 scope.go:117] "RemoveContainer" containerID="0bad8bc3748ef3621b7221033634e22ebce473ab19addabf9fb742bae622df47" Jan 31 04:55:53 crc kubenswrapper[4827]: E0131 04:55:53.407705 4827 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0bad8bc3748ef3621b7221033634e22ebce473ab19addabf9fb742bae622df47\": container with ID starting with 0bad8bc3748ef3621b7221033634e22ebce473ab19addabf9fb742bae622df47 not found: ID does not exist" containerID="0bad8bc3748ef3621b7221033634e22ebce473ab19addabf9fb742bae622df47" Jan 31 04:55:53 crc kubenswrapper[4827]: I0131 04:55:53.407729 4827 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0bad8bc3748ef3621b7221033634e22ebce473ab19addabf9fb742bae622df47"} err="failed to get container status \"0bad8bc3748ef3621b7221033634e22ebce473ab19addabf9fb742bae622df47\": rpc error: code = NotFound desc = could not find container \"0bad8bc3748ef3621b7221033634e22ebce473ab19addabf9fb742bae622df47\": container with ID starting with 0bad8bc3748ef3621b7221033634e22ebce473ab19addabf9fb742bae622df47 not found: ID does not exist" Jan 31 04:55:53 crc kubenswrapper[4827]: I0131 04:55:53.407744 4827 scope.go:117] "RemoveContainer" containerID="509cb513fefb9a0afdccb6d25b620ff3aac5ad9eb2237574cb8bdae7f99ea617" Jan 31 04:55:53 crc kubenswrapper[4827]: E0131 04:55:53.408054 4827 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"509cb513fefb9a0afdccb6d25b620ff3aac5ad9eb2237574cb8bdae7f99ea617\": container with ID starting with 509cb513fefb9a0afdccb6d25b620ff3aac5ad9eb2237574cb8bdae7f99ea617 not found: ID does not exist" containerID="509cb513fefb9a0afdccb6d25b620ff3aac5ad9eb2237574cb8bdae7f99ea617" Jan 31 04:55:53 crc kubenswrapper[4827]: I0131 04:55:53.408085 4827 pod_container_deletor.go:53] 
"DeleteContainer returned error" containerID={"Type":"cri-o","ID":"509cb513fefb9a0afdccb6d25b620ff3aac5ad9eb2237574cb8bdae7f99ea617"} err="failed to get container status \"509cb513fefb9a0afdccb6d25b620ff3aac5ad9eb2237574cb8bdae7f99ea617\": rpc error: code = NotFound desc = could not find container \"509cb513fefb9a0afdccb6d25b620ff3aac5ad9eb2237574cb8bdae7f99ea617\": container with ID starting with 509cb513fefb9a0afdccb6d25b620ff3aac5ad9eb2237574cb8bdae7f99ea617 not found: ID does not exist" Jan 31 04:55:54 crc kubenswrapper[4827]: I0131 04:55:54.121981 4827 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9f7ba251-3cb7-43a5-9e0e-6de6285d56b0" path="/var/lib/kubelet/pods/9f7ba251-3cb7-43a5-9e0e-6de6285d56b0/volumes" Jan 31 04:57:47 crc kubenswrapper[4827]: I0131 04:57:47.371626 4827 patch_prober.go:28] interesting pod/machine-config-daemon-jxh94 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 31 04:57:47 crc kubenswrapper[4827]: I0131 04:57:47.374586 4827 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-jxh94" podUID="e63dbb73-e1a2-4796-83c5-2a88e55566b5" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 31 04:58:17 crc kubenswrapper[4827]: I0131 04:58:17.371522 4827 patch_prober.go:28] interesting pod/machine-config-daemon-jxh94 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 31 04:58:17 crc kubenswrapper[4827]: I0131 04:58:17.372187 4827 prober.go:107] "Probe failed" probeType="Liveness" 
pod="openshift-machine-config-operator/machine-config-daemon-jxh94" podUID="e63dbb73-e1a2-4796-83c5-2a88e55566b5" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 31 04:58:47 crc kubenswrapper[4827]: I0131 04:58:47.371198 4827 patch_prober.go:28] interesting pod/machine-config-daemon-jxh94 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 31 04:58:47 crc kubenswrapper[4827]: I0131 04:58:47.372015 4827 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-jxh94" podUID="e63dbb73-e1a2-4796-83c5-2a88e55566b5" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 31 04:58:47 crc kubenswrapper[4827]: I0131 04:58:47.372084 4827 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-jxh94" Jan 31 04:58:47 crc kubenswrapper[4827]: I0131 04:58:47.373171 4827 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"b42e19e8839f8c6eed889e7aa41e227379f950f6d8d29310a2158f9035f65c48"} pod="openshift-machine-config-operator/machine-config-daemon-jxh94" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 31 04:58:47 crc kubenswrapper[4827]: I0131 04:58:47.373270 4827 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-jxh94" podUID="e63dbb73-e1a2-4796-83c5-2a88e55566b5" containerName="machine-config-daemon" 
containerID="cri-o://b42e19e8839f8c6eed889e7aa41e227379f950f6d8d29310a2158f9035f65c48" gracePeriod=600 Jan 31 04:58:47 crc kubenswrapper[4827]: E0131 04:58:47.503057 4827 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jxh94_openshift-machine-config-operator(e63dbb73-e1a2-4796-83c5-2a88e55566b5)\"" pod="openshift-machine-config-operator/machine-config-daemon-jxh94" podUID="e63dbb73-e1a2-4796-83c5-2a88e55566b5" Jan 31 04:58:47 crc kubenswrapper[4827]: I0131 04:58:47.924909 4827 generic.go:334] "Generic (PLEG): container finished" podID="e63dbb73-e1a2-4796-83c5-2a88e55566b5" containerID="b42e19e8839f8c6eed889e7aa41e227379f950f6d8d29310a2158f9035f65c48" exitCode=0 Jan 31 04:58:47 crc kubenswrapper[4827]: I0131 04:58:47.924920 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-jxh94" event={"ID":"e63dbb73-e1a2-4796-83c5-2a88e55566b5","Type":"ContainerDied","Data":"b42e19e8839f8c6eed889e7aa41e227379f950f6d8d29310a2158f9035f65c48"} Jan 31 04:58:47 crc kubenswrapper[4827]: I0131 04:58:47.925089 4827 scope.go:117] "RemoveContainer" containerID="d56b695f7df6286e4426187ed83c5f72a0a776a98361c8e832a46f510a0923e7" Jan 31 04:58:47 crc kubenswrapper[4827]: I0131 04:58:47.925965 4827 scope.go:117] "RemoveContainer" containerID="b42e19e8839f8c6eed889e7aa41e227379f950f6d8d29310a2158f9035f65c48" Jan 31 04:58:47 crc kubenswrapper[4827]: E0131 04:58:47.926588 4827 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jxh94_openshift-machine-config-operator(e63dbb73-e1a2-4796-83c5-2a88e55566b5)\"" pod="openshift-machine-config-operator/machine-config-daemon-jxh94" 
podUID="e63dbb73-e1a2-4796-83c5-2a88e55566b5" Jan 31 04:58:59 crc kubenswrapper[4827]: I0131 04:58:59.110471 4827 scope.go:117] "RemoveContainer" containerID="b42e19e8839f8c6eed889e7aa41e227379f950f6d8d29310a2158f9035f65c48" Jan 31 04:58:59 crc kubenswrapper[4827]: E0131 04:58:59.112260 4827 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jxh94_openshift-machine-config-operator(e63dbb73-e1a2-4796-83c5-2a88e55566b5)\"" pod="openshift-machine-config-operator/machine-config-daemon-jxh94" podUID="e63dbb73-e1a2-4796-83c5-2a88e55566b5" Jan 31 04:59:13 crc kubenswrapper[4827]: I0131 04:59:13.439772 4827 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-5459h"] Jan 31 04:59:13 crc kubenswrapper[4827]: E0131 04:59:13.440866 4827 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="174ec074-ed7c-4989-9d17-6efc73f14843" containerName="registry-server" Jan 31 04:59:13 crc kubenswrapper[4827]: I0131 04:59:13.440903 4827 state_mem.go:107] "Deleted CPUSet assignment" podUID="174ec074-ed7c-4989-9d17-6efc73f14843" containerName="registry-server" Jan 31 04:59:13 crc kubenswrapper[4827]: E0131 04:59:13.440920 4827 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9f7ba251-3cb7-43a5-9e0e-6de6285d56b0" containerName="extract-content" Jan 31 04:59:13 crc kubenswrapper[4827]: I0131 04:59:13.440927 4827 state_mem.go:107] "Deleted CPUSet assignment" podUID="9f7ba251-3cb7-43a5-9e0e-6de6285d56b0" containerName="extract-content" Jan 31 04:59:13 crc kubenswrapper[4827]: E0131 04:59:13.440939 4827 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="174ec074-ed7c-4989-9d17-6efc73f14843" containerName="extract-content" Jan 31 04:59:13 crc kubenswrapper[4827]: I0131 04:59:13.440948 4827 state_mem.go:107] "Deleted CPUSet 
assignment" podUID="174ec074-ed7c-4989-9d17-6efc73f14843" containerName="extract-content" Jan 31 04:59:13 crc kubenswrapper[4827]: E0131 04:59:13.440968 4827 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9f7ba251-3cb7-43a5-9e0e-6de6285d56b0" containerName="registry-server" Jan 31 04:59:13 crc kubenswrapper[4827]: I0131 04:59:13.440975 4827 state_mem.go:107] "Deleted CPUSet assignment" podUID="9f7ba251-3cb7-43a5-9e0e-6de6285d56b0" containerName="registry-server" Jan 31 04:59:13 crc kubenswrapper[4827]: E0131 04:59:13.440989 4827 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9f7ba251-3cb7-43a5-9e0e-6de6285d56b0" containerName="extract-utilities" Jan 31 04:59:13 crc kubenswrapper[4827]: I0131 04:59:13.440996 4827 state_mem.go:107] "Deleted CPUSet assignment" podUID="9f7ba251-3cb7-43a5-9e0e-6de6285d56b0" containerName="extract-utilities" Jan 31 04:59:13 crc kubenswrapper[4827]: E0131 04:59:13.441016 4827 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="174ec074-ed7c-4989-9d17-6efc73f14843" containerName="extract-utilities" Jan 31 04:59:13 crc kubenswrapper[4827]: I0131 04:59:13.441023 4827 state_mem.go:107] "Deleted CPUSet assignment" podUID="174ec074-ed7c-4989-9d17-6efc73f14843" containerName="extract-utilities" Jan 31 04:59:13 crc kubenswrapper[4827]: I0131 04:59:13.441277 4827 memory_manager.go:354] "RemoveStaleState removing state" podUID="174ec074-ed7c-4989-9d17-6efc73f14843" containerName="registry-server" Jan 31 04:59:13 crc kubenswrapper[4827]: I0131 04:59:13.441303 4827 memory_manager.go:354] "RemoveStaleState removing state" podUID="9f7ba251-3cb7-43a5-9e0e-6de6285d56b0" containerName="registry-server" Jan 31 04:59:13 crc kubenswrapper[4827]: I0131 04:59:13.442863 4827 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-5459h" Jan 31 04:59:13 crc kubenswrapper[4827]: I0131 04:59:13.453692 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-5459h"] Jan 31 04:59:13 crc kubenswrapper[4827]: I0131 04:59:13.495726 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/36e287ac-0c49-4189-8f7e-7f7d9a50175c-utilities\") pod \"certified-operators-5459h\" (UID: \"36e287ac-0c49-4189-8f7e-7f7d9a50175c\") " pod="openshift-marketplace/certified-operators-5459h" Jan 31 04:59:13 crc kubenswrapper[4827]: I0131 04:59:13.495889 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kgqdp\" (UniqueName: \"kubernetes.io/projected/36e287ac-0c49-4189-8f7e-7f7d9a50175c-kube-api-access-kgqdp\") pod \"certified-operators-5459h\" (UID: \"36e287ac-0c49-4189-8f7e-7f7d9a50175c\") " pod="openshift-marketplace/certified-operators-5459h" Jan 31 04:59:13 crc kubenswrapper[4827]: I0131 04:59:13.495924 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/36e287ac-0c49-4189-8f7e-7f7d9a50175c-catalog-content\") pod \"certified-operators-5459h\" (UID: \"36e287ac-0c49-4189-8f7e-7f7d9a50175c\") " pod="openshift-marketplace/certified-operators-5459h" Jan 31 04:59:13 crc kubenswrapper[4827]: I0131 04:59:13.597796 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kgqdp\" (UniqueName: \"kubernetes.io/projected/36e287ac-0c49-4189-8f7e-7f7d9a50175c-kube-api-access-kgqdp\") pod \"certified-operators-5459h\" (UID: \"36e287ac-0c49-4189-8f7e-7f7d9a50175c\") " pod="openshift-marketplace/certified-operators-5459h" Jan 31 04:59:13 crc kubenswrapper[4827]: I0131 04:59:13.597845 4827 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/36e287ac-0c49-4189-8f7e-7f7d9a50175c-catalog-content\") pod \"certified-operators-5459h\" (UID: \"36e287ac-0c49-4189-8f7e-7f7d9a50175c\") " pod="openshift-marketplace/certified-operators-5459h" Jan 31 04:59:13 crc kubenswrapper[4827]: I0131 04:59:13.597983 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/36e287ac-0c49-4189-8f7e-7f7d9a50175c-utilities\") pod \"certified-operators-5459h\" (UID: \"36e287ac-0c49-4189-8f7e-7f7d9a50175c\") " pod="openshift-marketplace/certified-operators-5459h" Jan 31 04:59:13 crc kubenswrapper[4827]: I0131 04:59:13.598415 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/36e287ac-0c49-4189-8f7e-7f7d9a50175c-catalog-content\") pod \"certified-operators-5459h\" (UID: \"36e287ac-0c49-4189-8f7e-7f7d9a50175c\") " pod="openshift-marketplace/certified-operators-5459h" Jan 31 04:59:13 crc kubenswrapper[4827]: I0131 04:59:13.598423 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/36e287ac-0c49-4189-8f7e-7f7d9a50175c-utilities\") pod \"certified-operators-5459h\" (UID: \"36e287ac-0c49-4189-8f7e-7f7d9a50175c\") " pod="openshift-marketplace/certified-operators-5459h" Jan 31 04:59:13 crc kubenswrapper[4827]: I0131 04:59:13.615686 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kgqdp\" (UniqueName: \"kubernetes.io/projected/36e287ac-0c49-4189-8f7e-7f7d9a50175c-kube-api-access-kgqdp\") pod \"certified-operators-5459h\" (UID: \"36e287ac-0c49-4189-8f7e-7f7d9a50175c\") " pod="openshift-marketplace/certified-operators-5459h" Jan 31 04:59:13 crc kubenswrapper[4827]: I0131 04:59:13.797505 4827 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-5459h" Jan 31 04:59:14 crc kubenswrapper[4827]: I0131 04:59:14.115531 4827 scope.go:117] "RemoveContainer" containerID="b42e19e8839f8c6eed889e7aa41e227379f950f6d8d29310a2158f9035f65c48" Jan 31 04:59:14 crc kubenswrapper[4827]: E0131 04:59:14.120612 4827 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jxh94_openshift-machine-config-operator(e63dbb73-e1a2-4796-83c5-2a88e55566b5)\"" pod="openshift-machine-config-operator/machine-config-daemon-jxh94" podUID="e63dbb73-e1a2-4796-83c5-2a88e55566b5" Jan 31 04:59:14 crc kubenswrapper[4827]: I0131 04:59:14.325515 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-5459h"] Jan 31 04:59:15 crc kubenswrapper[4827]: I0131 04:59:15.190399 4827 generic.go:334] "Generic (PLEG): container finished" podID="36e287ac-0c49-4189-8f7e-7f7d9a50175c" containerID="51df938df3564e5b72020eb44c40934e00f8e44cb08fabb2b7cef40494469d3d" exitCode=0 Jan 31 04:59:15 crc kubenswrapper[4827]: I0131 04:59:15.190647 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-5459h" event={"ID":"36e287ac-0c49-4189-8f7e-7f7d9a50175c","Type":"ContainerDied","Data":"51df938df3564e5b72020eb44c40934e00f8e44cb08fabb2b7cef40494469d3d"} Jan 31 04:59:15 crc kubenswrapper[4827]: I0131 04:59:15.190746 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-5459h" event={"ID":"36e287ac-0c49-4189-8f7e-7f7d9a50175c","Type":"ContainerStarted","Data":"142f653b44eb00a8b8e445502af20afa9131ff736b9ac5a954ecd45b45b7aafd"} Jan 31 04:59:16 crc kubenswrapper[4827]: I0131 04:59:16.200734 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-5459h" 
event={"ID":"36e287ac-0c49-4189-8f7e-7f7d9a50175c","Type":"ContainerStarted","Data":"34bcde2cfedf10db4c724c3adb31d8e17d6a518c845546b76290d66b610bbca6"} Jan 31 04:59:18 crc kubenswrapper[4827]: I0131 04:59:18.219546 4827 generic.go:334] "Generic (PLEG): container finished" podID="36e287ac-0c49-4189-8f7e-7f7d9a50175c" containerID="34bcde2cfedf10db4c724c3adb31d8e17d6a518c845546b76290d66b610bbca6" exitCode=0 Jan 31 04:59:18 crc kubenswrapper[4827]: I0131 04:59:18.219612 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-5459h" event={"ID":"36e287ac-0c49-4189-8f7e-7f7d9a50175c","Type":"ContainerDied","Data":"34bcde2cfedf10db4c724c3adb31d8e17d6a518c845546b76290d66b610bbca6"} Jan 31 04:59:20 crc kubenswrapper[4827]: I0131 04:59:20.238770 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-5459h" event={"ID":"36e287ac-0c49-4189-8f7e-7f7d9a50175c","Type":"ContainerStarted","Data":"03cd1eca3d75d88910440d724527ba6bffa60b8689c89f66a15c7b2cc392cd6b"} Jan 31 04:59:20 crc kubenswrapper[4827]: I0131 04:59:20.272372 4827 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-5459h" podStartSLOduration=3.176140981 podStartE2EDuration="7.272352609s" podCreationTimestamp="2026-01-31 04:59:13 +0000 UTC" firstStartedPulling="2026-01-31 04:59:15.192108107 +0000 UTC m=+4347.879188556" lastFinishedPulling="2026-01-31 04:59:19.288319735 +0000 UTC m=+4351.975400184" observedRunningTime="2026-01-31 04:59:20.263081748 +0000 UTC m=+4352.950162217" watchObservedRunningTime="2026-01-31 04:59:20.272352609 +0000 UTC m=+4352.959433058" Jan 31 04:59:23 crc kubenswrapper[4827]: I0131 04:59:23.798653 4827 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-5459h" Jan 31 04:59:23 crc kubenswrapper[4827]: I0131 04:59:23.799368 4827 kubelet.go:2542] "SyncLoop (probe)" probe="startup" 
status="unhealthy" pod="openshift-marketplace/certified-operators-5459h" Jan 31 04:59:23 crc kubenswrapper[4827]: I0131 04:59:23.886960 4827 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-5459h" Jan 31 04:59:24 crc kubenswrapper[4827]: I0131 04:59:24.351984 4827 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-5459h" Jan 31 04:59:24 crc kubenswrapper[4827]: I0131 04:59:24.422254 4827 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-5459h"] Jan 31 04:59:25 crc kubenswrapper[4827]: I0131 04:59:25.110040 4827 scope.go:117] "RemoveContainer" containerID="b42e19e8839f8c6eed889e7aa41e227379f950f6d8d29310a2158f9035f65c48" Jan 31 04:59:25 crc kubenswrapper[4827]: E0131 04:59:25.110590 4827 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jxh94_openshift-machine-config-operator(e63dbb73-e1a2-4796-83c5-2a88e55566b5)\"" pod="openshift-machine-config-operator/machine-config-daemon-jxh94" podUID="e63dbb73-e1a2-4796-83c5-2a88e55566b5" Jan 31 04:59:26 crc kubenswrapper[4827]: I0131 04:59:26.289991 4827 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-5459h" podUID="36e287ac-0c49-4189-8f7e-7f7d9a50175c" containerName="registry-server" containerID="cri-o://03cd1eca3d75d88910440d724527ba6bffa60b8689c89f66a15c7b2cc392cd6b" gracePeriod=2 Jan 31 04:59:26 crc kubenswrapper[4827]: I0131 04:59:26.917318 4827 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-5459h" Jan 31 04:59:26 crc kubenswrapper[4827]: I0131 04:59:26.963128 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/36e287ac-0c49-4189-8f7e-7f7d9a50175c-utilities\") pod \"36e287ac-0c49-4189-8f7e-7f7d9a50175c\" (UID: \"36e287ac-0c49-4189-8f7e-7f7d9a50175c\") " Jan 31 04:59:26 crc kubenswrapper[4827]: I0131 04:59:26.963201 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kgqdp\" (UniqueName: \"kubernetes.io/projected/36e287ac-0c49-4189-8f7e-7f7d9a50175c-kube-api-access-kgqdp\") pod \"36e287ac-0c49-4189-8f7e-7f7d9a50175c\" (UID: \"36e287ac-0c49-4189-8f7e-7f7d9a50175c\") " Jan 31 04:59:26 crc kubenswrapper[4827]: I0131 04:59:26.963229 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/36e287ac-0c49-4189-8f7e-7f7d9a50175c-catalog-content\") pod \"36e287ac-0c49-4189-8f7e-7f7d9a50175c\" (UID: \"36e287ac-0c49-4189-8f7e-7f7d9a50175c\") " Jan 31 04:59:26 crc kubenswrapper[4827]: I0131 04:59:26.964078 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/36e287ac-0c49-4189-8f7e-7f7d9a50175c-utilities" (OuterVolumeSpecName: "utilities") pod "36e287ac-0c49-4189-8f7e-7f7d9a50175c" (UID: "36e287ac-0c49-4189-8f7e-7f7d9a50175c"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 04:59:26 crc kubenswrapper[4827]: I0131 04:59:26.972076 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/36e287ac-0c49-4189-8f7e-7f7d9a50175c-kube-api-access-kgqdp" (OuterVolumeSpecName: "kube-api-access-kgqdp") pod "36e287ac-0c49-4189-8f7e-7f7d9a50175c" (UID: "36e287ac-0c49-4189-8f7e-7f7d9a50175c"). InnerVolumeSpecName "kube-api-access-kgqdp". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 04:59:27 crc kubenswrapper[4827]: I0131 04:59:27.065194 4827 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/36e287ac-0c49-4189-8f7e-7f7d9a50175c-utilities\") on node \"crc\" DevicePath \"\"" Jan 31 04:59:27 crc kubenswrapper[4827]: I0131 04:59:27.065226 4827 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kgqdp\" (UniqueName: \"kubernetes.io/projected/36e287ac-0c49-4189-8f7e-7f7d9a50175c-kube-api-access-kgqdp\") on node \"crc\" DevicePath \"\"" Jan 31 04:59:27 crc kubenswrapper[4827]: I0131 04:59:27.304849 4827 generic.go:334] "Generic (PLEG): container finished" podID="36e287ac-0c49-4189-8f7e-7f7d9a50175c" containerID="03cd1eca3d75d88910440d724527ba6bffa60b8689c89f66a15c7b2cc392cd6b" exitCode=0 Jan 31 04:59:27 crc kubenswrapper[4827]: I0131 04:59:27.304919 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-5459h" event={"ID":"36e287ac-0c49-4189-8f7e-7f7d9a50175c","Type":"ContainerDied","Data":"03cd1eca3d75d88910440d724527ba6bffa60b8689c89f66a15c7b2cc392cd6b"} Jan 31 04:59:27 crc kubenswrapper[4827]: I0131 04:59:27.304952 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-5459h" event={"ID":"36e287ac-0c49-4189-8f7e-7f7d9a50175c","Type":"ContainerDied","Data":"142f653b44eb00a8b8e445502af20afa9131ff736b9ac5a954ecd45b45b7aafd"} Jan 31 04:59:27 crc kubenswrapper[4827]: I0131 04:59:27.304976 4827 scope.go:117] "RemoveContainer" containerID="03cd1eca3d75d88910440d724527ba6bffa60b8689c89f66a15c7b2cc392cd6b" Jan 31 04:59:27 crc kubenswrapper[4827]: I0131 04:59:27.305032 4827 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-5459h" Jan 31 04:59:27 crc kubenswrapper[4827]: I0131 04:59:27.333553 4827 scope.go:117] "RemoveContainer" containerID="34bcde2cfedf10db4c724c3adb31d8e17d6a518c845546b76290d66b610bbca6" Jan 31 04:59:27 crc kubenswrapper[4827]: I0131 04:59:27.372471 4827 scope.go:117] "RemoveContainer" containerID="51df938df3564e5b72020eb44c40934e00f8e44cb08fabb2b7cef40494469d3d" Jan 31 04:59:27 crc kubenswrapper[4827]: I0131 04:59:27.413217 4827 scope.go:117] "RemoveContainer" containerID="03cd1eca3d75d88910440d724527ba6bffa60b8689c89f66a15c7b2cc392cd6b" Jan 31 04:59:27 crc kubenswrapper[4827]: E0131 04:59:27.413763 4827 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"03cd1eca3d75d88910440d724527ba6bffa60b8689c89f66a15c7b2cc392cd6b\": container with ID starting with 03cd1eca3d75d88910440d724527ba6bffa60b8689c89f66a15c7b2cc392cd6b not found: ID does not exist" containerID="03cd1eca3d75d88910440d724527ba6bffa60b8689c89f66a15c7b2cc392cd6b" Jan 31 04:59:27 crc kubenswrapper[4827]: I0131 04:59:27.413810 4827 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"03cd1eca3d75d88910440d724527ba6bffa60b8689c89f66a15c7b2cc392cd6b"} err="failed to get container status \"03cd1eca3d75d88910440d724527ba6bffa60b8689c89f66a15c7b2cc392cd6b\": rpc error: code = NotFound desc = could not find container \"03cd1eca3d75d88910440d724527ba6bffa60b8689c89f66a15c7b2cc392cd6b\": container with ID starting with 03cd1eca3d75d88910440d724527ba6bffa60b8689c89f66a15c7b2cc392cd6b not found: ID does not exist" Jan 31 04:59:27 crc kubenswrapper[4827]: I0131 04:59:27.413835 4827 scope.go:117] "RemoveContainer" containerID="34bcde2cfedf10db4c724c3adb31d8e17d6a518c845546b76290d66b610bbca6" Jan 31 04:59:27 crc kubenswrapper[4827]: E0131 04:59:27.414208 4827 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code 
= NotFound desc = could not find container \"34bcde2cfedf10db4c724c3adb31d8e17d6a518c845546b76290d66b610bbca6\": container with ID starting with 34bcde2cfedf10db4c724c3adb31d8e17d6a518c845546b76290d66b610bbca6 not found: ID does not exist" containerID="34bcde2cfedf10db4c724c3adb31d8e17d6a518c845546b76290d66b610bbca6" Jan 31 04:59:27 crc kubenswrapper[4827]: I0131 04:59:27.414243 4827 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"34bcde2cfedf10db4c724c3adb31d8e17d6a518c845546b76290d66b610bbca6"} err="failed to get container status \"34bcde2cfedf10db4c724c3adb31d8e17d6a518c845546b76290d66b610bbca6\": rpc error: code = NotFound desc = could not find container \"34bcde2cfedf10db4c724c3adb31d8e17d6a518c845546b76290d66b610bbca6\": container with ID starting with 34bcde2cfedf10db4c724c3adb31d8e17d6a518c845546b76290d66b610bbca6 not found: ID does not exist" Jan 31 04:59:27 crc kubenswrapper[4827]: I0131 04:59:27.414262 4827 scope.go:117] "RemoveContainer" containerID="51df938df3564e5b72020eb44c40934e00f8e44cb08fabb2b7cef40494469d3d" Jan 31 04:59:27 crc kubenswrapper[4827]: E0131 04:59:27.414583 4827 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"51df938df3564e5b72020eb44c40934e00f8e44cb08fabb2b7cef40494469d3d\": container with ID starting with 51df938df3564e5b72020eb44c40934e00f8e44cb08fabb2b7cef40494469d3d not found: ID does not exist" containerID="51df938df3564e5b72020eb44c40934e00f8e44cb08fabb2b7cef40494469d3d" Jan 31 04:59:27 crc kubenswrapper[4827]: I0131 04:59:27.414644 4827 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"51df938df3564e5b72020eb44c40934e00f8e44cb08fabb2b7cef40494469d3d"} err="failed to get container status \"51df938df3564e5b72020eb44c40934e00f8e44cb08fabb2b7cef40494469d3d\": rpc error: code = NotFound desc = could not find container 
\"51df938df3564e5b72020eb44c40934e00f8e44cb08fabb2b7cef40494469d3d\": container with ID starting with 51df938df3564e5b72020eb44c40934e00f8e44cb08fabb2b7cef40494469d3d not found: ID does not exist" Jan 31 04:59:27 crc kubenswrapper[4827]: I0131 04:59:27.626130 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/36e287ac-0c49-4189-8f7e-7f7d9a50175c-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "36e287ac-0c49-4189-8f7e-7f7d9a50175c" (UID: "36e287ac-0c49-4189-8f7e-7f7d9a50175c"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 04:59:27 crc kubenswrapper[4827]: I0131 04:59:27.684848 4827 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/36e287ac-0c49-4189-8f7e-7f7d9a50175c-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 31 04:59:27 crc kubenswrapper[4827]: I0131 04:59:27.980015 4827 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-5459h"] Jan 31 04:59:27 crc kubenswrapper[4827]: I0131 04:59:27.991818 4827 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-5459h"] Jan 31 04:59:28 crc kubenswrapper[4827]: I0131 04:59:28.131444 4827 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="36e287ac-0c49-4189-8f7e-7f7d9a50175c" path="/var/lib/kubelet/pods/36e287ac-0c49-4189-8f7e-7f7d9a50175c/volumes" Jan 31 04:59:36 crc kubenswrapper[4827]: I0131 04:59:36.110760 4827 scope.go:117] "RemoveContainer" containerID="b42e19e8839f8c6eed889e7aa41e227379f950f6d8d29310a2158f9035f65c48" Jan 31 04:59:36 crc kubenswrapper[4827]: E0131 04:59:36.111511 4827 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-jxh94_openshift-machine-config-operator(e63dbb73-e1a2-4796-83c5-2a88e55566b5)\"" pod="openshift-machine-config-operator/machine-config-daemon-jxh94" podUID="e63dbb73-e1a2-4796-83c5-2a88e55566b5" Jan 31 04:59:49 crc kubenswrapper[4827]: I0131 04:59:49.109941 4827 scope.go:117] "RemoveContainer" containerID="b42e19e8839f8c6eed889e7aa41e227379f950f6d8d29310a2158f9035f65c48" Jan 31 04:59:49 crc kubenswrapper[4827]: E0131 04:59:49.110915 4827 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jxh94_openshift-machine-config-operator(e63dbb73-e1a2-4796-83c5-2a88e55566b5)\"" pod="openshift-machine-config-operator/machine-config-daemon-jxh94" podUID="e63dbb73-e1a2-4796-83c5-2a88e55566b5" Jan 31 05:00:00 crc kubenswrapper[4827]: I0131 05:00:00.196199 4827 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29497260-ssxmf"] Jan 31 05:00:00 crc kubenswrapper[4827]: E0131 05:00:00.197259 4827 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="36e287ac-0c49-4189-8f7e-7f7d9a50175c" containerName="extract-content" Jan 31 05:00:00 crc kubenswrapper[4827]: I0131 05:00:00.197280 4827 state_mem.go:107] "Deleted CPUSet assignment" podUID="36e287ac-0c49-4189-8f7e-7f7d9a50175c" containerName="extract-content" Jan 31 05:00:00 crc kubenswrapper[4827]: E0131 05:00:00.197317 4827 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="36e287ac-0c49-4189-8f7e-7f7d9a50175c" containerName="registry-server" Jan 31 05:00:00 crc kubenswrapper[4827]: I0131 05:00:00.197326 4827 state_mem.go:107] "Deleted CPUSet assignment" podUID="36e287ac-0c49-4189-8f7e-7f7d9a50175c" containerName="registry-server" Jan 31 05:00:00 crc kubenswrapper[4827]: E0131 05:00:00.197339 4827 cpu_manager.go:410] "RemoveStaleState: removing 
container" podUID="36e287ac-0c49-4189-8f7e-7f7d9a50175c" containerName="extract-utilities" Jan 31 05:00:00 crc kubenswrapper[4827]: I0131 05:00:00.197346 4827 state_mem.go:107] "Deleted CPUSet assignment" podUID="36e287ac-0c49-4189-8f7e-7f7d9a50175c" containerName="extract-utilities" Jan 31 05:00:00 crc kubenswrapper[4827]: I0131 05:00:00.197565 4827 memory_manager.go:354] "RemoveStaleState removing state" podUID="36e287ac-0c49-4189-8f7e-7f7d9a50175c" containerName="registry-server" Jan 31 05:00:00 crc kubenswrapper[4827]: I0131 05:00:00.198421 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29497260-ssxmf" Jan 31 05:00:00 crc kubenswrapper[4827]: I0131 05:00:00.200347 4827 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Jan 31 05:00:00 crc kubenswrapper[4827]: I0131 05:00:00.201166 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Jan 31 05:00:00 crc kubenswrapper[4827]: I0131 05:00:00.209842 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29497260-ssxmf"] Jan 31 05:00:00 crc kubenswrapper[4827]: I0131 05:00:00.250452 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/32a213de-d460-4760-bcb7-afdf18c0cd8c-secret-volume\") pod \"collect-profiles-29497260-ssxmf\" (UID: \"32a213de-d460-4760-bcb7-afdf18c0cd8c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29497260-ssxmf" Jan 31 05:00:00 crc kubenswrapper[4827]: I0131 05:00:00.250803 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: 
\"kubernetes.io/configmap/32a213de-d460-4760-bcb7-afdf18c0cd8c-config-volume\") pod \"collect-profiles-29497260-ssxmf\" (UID: \"32a213de-d460-4760-bcb7-afdf18c0cd8c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29497260-ssxmf" Jan 31 05:00:00 crc kubenswrapper[4827]: I0131 05:00:00.250855 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bn7mk\" (UniqueName: \"kubernetes.io/projected/32a213de-d460-4760-bcb7-afdf18c0cd8c-kube-api-access-bn7mk\") pod \"collect-profiles-29497260-ssxmf\" (UID: \"32a213de-d460-4760-bcb7-afdf18c0cd8c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29497260-ssxmf" Jan 31 05:00:00 crc kubenswrapper[4827]: I0131 05:00:00.352343 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/32a213de-d460-4760-bcb7-afdf18c0cd8c-config-volume\") pod \"collect-profiles-29497260-ssxmf\" (UID: \"32a213de-d460-4760-bcb7-afdf18c0cd8c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29497260-ssxmf" Jan 31 05:00:00 crc kubenswrapper[4827]: I0131 05:00:00.352413 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bn7mk\" (UniqueName: \"kubernetes.io/projected/32a213de-d460-4760-bcb7-afdf18c0cd8c-kube-api-access-bn7mk\") pod \"collect-profiles-29497260-ssxmf\" (UID: \"32a213de-d460-4760-bcb7-afdf18c0cd8c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29497260-ssxmf" Jan 31 05:00:00 crc kubenswrapper[4827]: I0131 05:00:00.352539 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/32a213de-d460-4760-bcb7-afdf18c0cd8c-secret-volume\") pod \"collect-profiles-29497260-ssxmf\" (UID: \"32a213de-d460-4760-bcb7-afdf18c0cd8c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29497260-ssxmf" Jan 31 05:00:00 crc 
kubenswrapper[4827]: I0131 05:00:00.353784 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/32a213de-d460-4760-bcb7-afdf18c0cd8c-config-volume\") pod \"collect-profiles-29497260-ssxmf\" (UID: \"32a213de-d460-4760-bcb7-afdf18c0cd8c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29497260-ssxmf" Jan 31 05:00:00 crc kubenswrapper[4827]: I0131 05:00:00.358544 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/32a213de-d460-4760-bcb7-afdf18c0cd8c-secret-volume\") pod \"collect-profiles-29497260-ssxmf\" (UID: \"32a213de-d460-4760-bcb7-afdf18c0cd8c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29497260-ssxmf" Jan 31 05:00:00 crc kubenswrapper[4827]: I0131 05:00:00.375550 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bn7mk\" (UniqueName: \"kubernetes.io/projected/32a213de-d460-4760-bcb7-afdf18c0cd8c-kube-api-access-bn7mk\") pod \"collect-profiles-29497260-ssxmf\" (UID: \"32a213de-d460-4760-bcb7-afdf18c0cd8c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29497260-ssxmf" Jan 31 05:00:00 crc kubenswrapper[4827]: I0131 05:00:00.524109 4827 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29497260-ssxmf" Jan 31 05:00:00 crc kubenswrapper[4827]: I0131 05:00:00.960169 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29497260-ssxmf"] Jan 31 05:00:01 crc kubenswrapper[4827]: I0131 05:00:01.618772 4827 generic.go:334] "Generic (PLEG): container finished" podID="32a213de-d460-4760-bcb7-afdf18c0cd8c" containerID="35f6be7770470950e0661d7b51776cdbcea8cf7c427da3ee13ea57511d77ef07" exitCode=0 Jan 31 05:00:01 crc kubenswrapper[4827]: I0131 05:00:01.619220 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29497260-ssxmf" event={"ID":"32a213de-d460-4760-bcb7-afdf18c0cd8c","Type":"ContainerDied","Data":"35f6be7770470950e0661d7b51776cdbcea8cf7c427da3ee13ea57511d77ef07"} Jan 31 05:00:01 crc kubenswrapper[4827]: I0131 05:00:01.619428 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29497260-ssxmf" event={"ID":"32a213de-d460-4760-bcb7-afdf18c0cd8c","Type":"ContainerStarted","Data":"a07affef88e0ac8465523e2f13556e2b01b9a5115d07894843fe4bd7264b8c5b"} Jan 31 05:00:03 crc kubenswrapper[4827]: I0131 05:00:03.069233 4827 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29497260-ssxmf" Jan 31 05:00:03 crc kubenswrapper[4827]: I0131 05:00:03.211737 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/32a213de-d460-4760-bcb7-afdf18c0cd8c-config-volume\") pod \"32a213de-d460-4760-bcb7-afdf18c0cd8c\" (UID: \"32a213de-d460-4760-bcb7-afdf18c0cd8c\") " Jan 31 05:00:03 crc kubenswrapper[4827]: I0131 05:00:03.211983 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bn7mk\" (UniqueName: \"kubernetes.io/projected/32a213de-d460-4760-bcb7-afdf18c0cd8c-kube-api-access-bn7mk\") pod \"32a213de-d460-4760-bcb7-afdf18c0cd8c\" (UID: \"32a213de-d460-4760-bcb7-afdf18c0cd8c\") " Jan 31 05:00:03 crc kubenswrapper[4827]: I0131 05:00:03.212031 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/32a213de-d460-4760-bcb7-afdf18c0cd8c-secret-volume\") pod \"32a213de-d460-4760-bcb7-afdf18c0cd8c\" (UID: \"32a213de-d460-4760-bcb7-afdf18c0cd8c\") " Jan 31 05:00:03 crc kubenswrapper[4827]: I0131 05:00:03.214517 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/32a213de-d460-4760-bcb7-afdf18c0cd8c-config-volume" (OuterVolumeSpecName: "config-volume") pod "32a213de-d460-4760-bcb7-afdf18c0cd8c" (UID: "32a213de-d460-4760-bcb7-afdf18c0cd8c"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 05:00:03 crc kubenswrapper[4827]: I0131 05:00:03.220029 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/32a213de-d460-4760-bcb7-afdf18c0cd8c-kube-api-access-bn7mk" (OuterVolumeSpecName: "kube-api-access-bn7mk") pod "32a213de-d460-4760-bcb7-afdf18c0cd8c" (UID: "32a213de-d460-4760-bcb7-afdf18c0cd8c"). 
InnerVolumeSpecName "kube-api-access-bn7mk". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 05:00:03 crc kubenswrapper[4827]: I0131 05:00:03.237760 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/32a213de-d460-4760-bcb7-afdf18c0cd8c-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "32a213de-d460-4760-bcb7-afdf18c0cd8c" (UID: "32a213de-d460-4760-bcb7-afdf18c0cd8c"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 05:00:03 crc kubenswrapper[4827]: I0131 05:00:03.317641 4827 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bn7mk\" (UniqueName: \"kubernetes.io/projected/32a213de-d460-4760-bcb7-afdf18c0cd8c-kube-api-access-bn7mk\") on node \"crc\" DevicePath \"\"" Jan 31 05:00:03 crc kubenswrapper[4827]: I0131 05:00:03.317686 4827 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/32a213de-d460-4760-bcb7-afdf18c0cd8c-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 31 05:00:03 crc kubenswrapper[4827]: I0131 05:00:03.317701 4827 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/32a213de-d460-4760-bcb7-afdf18c0cd8c-config-volume\") on node \"crc\" DevicePath \"\"" Jan 31 05:00:03 crc kubenswrapper[4827]: I0131 05:00:03.637806 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29497260-ssxmf" event={"ID":"32a213de-d460-4760-bcb7-afdf18c0cd8c","Type":"ContainerDied","Data":"a07affef88e0ac8465523e2f13556e2b01b9a5115d07894843fe4bd7264b8c5b"} Jan 31 05:00:03 crc kubenswrapper[4827]: I0131 05:00:03.637913 4827 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a07affef88e0ac8465523e2f13556e2b01b9a5115d07894843fe4bd7264b8c5b" Jan 31 05:00:03 crc kubenswrapper[4827]: I0131 05:00:03.637948 4827 util.go:48] "No ready 
sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29497260-ssxmf" Jan 31 05:00:04 crc kubenswrapper[4827]: I0131 05:00:04.109896 4827 scope.go:117] "RemoveContainer" containerID="b42e19e8839f8c6eed889e7aa41e227379f950f6d8d29310a2158f9035f65c48" Jan 31 05:00:04 crc kubenswrapper[4827]: E0131 05:00:04.110239 4827 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jxh94_openshift-machine-config-operator(e63dbb73-e1a2-4796-83c5-2a88e55566b5)\"" pod="openshift-machine-config-operator/machine-config-daemon-jxh94" podUID="e63dbb73-e1a2-4796-83c5-2a88e55566b5" Jan 31 05:00:04 crc kubenswrapper[4827]: I0131 05:00:04.157964 4827 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29497215-fsq2k"] Jan 31 05:00:04 crc kubenswrapper[4827]: I0131 05:00:04.167210 4827 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29497215-fsq2k"] Jan 31 05:00:06 crc kubenswrapper[4827]: I0131 05:00:06.125838 4827 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b91d70fa-004b-4553-9973-e7a67f721e9f" path="/var/lib/kubelet/pods/b91d70fa-004b-4553-9973-e7a67f721e9f/volumes" Jan 31 05:00:19 crc kubenswrapper[4827]: I0131 05:00:19.110538 4827 scope.go:117] "RemoveContainer" containerID="b42e19e8839f8c6eed889e7aa41e227379f950f6d8d29310a2158f9035f65c48" Jan 31 05:00:19 crc kubenswrapper[4827]: E0131 05:00:19.111258 4827 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jxh94_openshift-machine-config-operator(e63dbb73-e1a2-4796-83c5-2a88e55566b5)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-jxh94" podUID="e63dbb73-e1a2-4796-83c5-2a88e55566b5" Jan 31 05:00:32 crc kubenswrapper[4827]: I0131 05:00:32.117227 4827 scope.go:117] "RemoveContainer" containerID="b42e19e8839f8c6eed889e7aa41e227379f950f6d8d29310a2158f9035f65c48" Jan 31 05:00:32 crc kubenswrapper[4827]: E0131 05:00:32.118579 4827 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jxh94_openshift-machine-config-operator(e63dbb73-e1a2-4796-83c5-2a88e55566b5)\"" pod="openshift-machine-config-operator/machine-config-daemon-jxh94" podUID="e63dbb73-e1a2-4796-83c5-2a88e55566b5" Jan 31 05:00:43 crc kubenswrapper[4827]: I0131 05:00:43.110008 4827 scope.go:117] "RemoveContainer" containerID="b42e19e8839f8c6eed889e7aa41e227379f950f6d8d29310a2158f9035f65c48" Jan 31 05:00:43 crc kubenswrapper[4827]: E0131 05:00:43.110738 4827 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jxh94_openshift-machine-config-operator(e63dbb73-e1a2-4796-83c5-2a88e55566b5)\"" pod="openshift-machine-config-operator/machine-config-daemon-jxh94" podUID="e63dbb73-e1a2-4796-83c5-2a88e55566b5" Jan 31 05:00:54 crc kubenswrapper[4827]: I0131 05:00:54.109859 4827 scope.go:117] "RemoveContainer" containerID="b42e19e8839f8c6eed889e7aa41e227379f950f6d8d29310a2158f9035f65c48" Jan 31 05:00:54 crc kubenswrapper[4827]: E0131 05:00:54.111457 4827 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-jxh94_openshift-machine-config-operator(e63dbb73-e1a2-4796-83c5-2a88e55566b5)\"" pod="openshift-machine-config-operator/machine-config-daemon-jxh94" podUID="e63dbb73-e1a2-4796-83c5-2a88e55566b5" Jan 31 05:01:00 crc kubenswrapper[4827]: I0131 05:01:00.159441 4827 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-cron-29497261-5x5xw"] Jan 31 05:01:00 crc kubenswrapper[4827]: E0131 05:01:00.160337 4827 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="32a213de-d460-4760-bcb7-afdf18c0cd8c" containerName="collect-profiles" Jan 31 05:01:00 crc kubenswrapper[4827]: I0131 05:01:00.160350 4827 state_mem.go:107] "Deleted CPUSet assignment" podUID="32a213de-d460-4760-bcb7-afdf18c0cd8c" containerName="collect-profiles" Jan 31 05:01:00 crc kubenswrapper[4827]: I0131 05:01:00.160516 4827 memory_manager.go:354] "RemoveStaleState removing state" podUID="32a213de-d460-4760-bcb7-afdf18c0cd8c" containerName="collect-profiles" Jan 31 05:01:00 crc kubenswrapper[4827]: I0131 05:01:00.161320 4827 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-cron-29497261-5x5xw" Jan 31 05:01:00 crc kubenswrapper[4827]: I0131 05:01:00.167851 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-cron-29497261-5x5xw"] Jan 31 05:01:00 crc kubenswrapper[4827]: I0131 05:01:00.275817 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/40c636c4-b73c-42b5-87f0-dd2d138bf0c1-combined-ca-bundle\") pod \"keystone-cron-29497261-5x5xw\" (UID: \"40c636c4-b73c-42b5-87f0-dd2d138bf0c1\") " pod="openstack/keystone-cron-29497261-5x5xw" Jan 31 05:01:00 crc kubenswrapper[4827]: I0131 05:01:00.276213 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/40c636c4-b73c-42b5-87f0-dd2d138bf0c1-fernet-keys\") pod \"keystone-cron-29497261-5x5xw\" (UID: \"40c636c4-b73c-42b5-87f0-dd2d138bf0c1\") " pod="openstack/keystone-cron-29497261-5x5xw" Jan 31 05:01:00 crc kubenswrapper[4827]: I0131 05:01:00.276282 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/40c636c4-b73c-42b5-87f0-dd2d138bf0c1-config-data\") pod \"keystone-cron-29497261-5x5xw\" (UID: \"40c636c4-b73c-42b5-87f0-dd2d138bf0c1\") " pod="openstack/keystone-cron-29497261-5x5xw" Jan 31 05:01:00 crc kubenswrapper[4827]: I0131 05:01:00.276309 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nd8fw\" (UniqueName: \"kubernetes.io/projected/40c636c4-b73c-42b5-87f0-dd2d138bf0c1-kube-api-access-nd8fw\") pod \"keystone-cron-29497261-5x5xw\" (UID: \"40c636c4-b73c-42b5-87f0-dd2d138bf0c1\") " pod="openstack/keystone-cron-29497261-5x5xw" Jan 31 05:01:00 crc kubenswrapper[4827]: I0131 05:01:00.377436 4827 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/40c636c4-b73c-42b5-87f0-dd2d138bf0c1-combined-ca-bundle\") pod \"keystone-cron-29497261-5x5xw\" (UID: \"40c636c4-b73c-42b5-87f0-dd2d138bf0c1\") " pod="openstack/keystone-cron-29497261-5x5xw" Jan 31 05:01:00 crc kubenswrapper[4827]: I0131 05:01:00.377503 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/40c636c4-b73c-42b5-87f0-dd2d138bf0c1-fernet-keys\") pod \"keystone-cron-29497261-5x5xw\" (UID: \"40c636c4-b73c-42b5-87f0-dd2d138bf0c1\") " pod="openstack/keystone-cron-29497261-5x5xw" Jan 31 05:01:00 crc kubenswrapper[4827]: I0131 05:01:00.377568 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/40c636c4-b73c-42b5-87f0-dd2d138bf0c1-config-data\") pod \"keystone-cron-29497261-5x5xw\" (UID: \"40c636c4-b73c-42b5-87f0-dd2d138bf0c1\") " pod="openstack/keystone-cron-29497261-5x5xw" Jan 31 05:01:00 crc kubenswrapper[4827]: I0131 05:01:00.377591 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nd8fw\" (UniqueName: \"kubernetes.io/projected/40c636c4-b73c-42b5-87f0-dd2d138bf0c1-kube-api-access-nd8fw\") pod \"keystone-cron-29497261-5x5xw\" (UID: \"40c636c4-b73c-42b5-87f0-dd2d138bf0c1\") " pod="openstack/keystone-cron-29497261-5x5xw" Jan 31 05:01:00 crc kubenswrapper[4827]: I0131 05:01:00.389619 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/40c636c4-b73c-42b5-87f0-dd2d138bf0c1-fernet-keys\") pod \"keystone-cron-29497261-5x5xw\" (UID: \"40c636c4-b73c-42b5-87f0-dd2d138bf0c1\") " pod="openstack/keystone-cron-29497261-5x5xw" Jan 31 05:01:00 crc kubenswrapper[4827]: I0131 05:01:00.389658 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/40c636c4-b73c-42b5-87f0-dd2d138bf0c1-combined-ca-bundle\") pod \"keystone-cron-29497261-5x5xw\" (UID: \"40c636c4-b73c-42b5-87f0-dd2d138bf0c1\") " pod="openstack/keystone-cron-29497261-5x5xw" Jan 31 05:01:00 crc kubenswrapper[4827]: I0131 05:01:00.389664 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/40c636c4-b73c-42b5-87f0-dd2d138bf0c1-config-data\") pod \"keystone-cron-29497261-5x5xw\" (UID: \"40c636c4-b73c-42b5-87f0-dd2d138bf0c1\") " pod="openstack/keystone-cron-29497261-5x5xw" Jan 31 05:01:00 crc kubenswrapper[4827]: I0131 05:01:00.392323 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nd8fw\" (UniqueName: \"kubernetes.io/projected/40c636c4-b73c-42b5-87f0-dd2d138bf0c1-kube-api-access-nd8fw\") pod \"keystone-cron-29497261-5x5xw\" (UID: \"40c636c4-b73c-42b5-87f0-dd2d138bf0c1\") " pod="openstack/keystone-cron-29497261-5x5xw" Jan 31 05:01:00 crc kubenswrapper[4827]: I0131 05:01:00.496604 4827 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-cron-29497261-5x5xw" Jan 31 05:01:01 crc kubenswrapper[4827]: I0131 05:01:01.027385 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-cron-29497261-5x5xw"] Jan 31 05:01:01 crc kubenswrapper[4827]: I0131 05:01:01.208986 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29497261-5x5xw" event={"ID":"40c636c4-b73c-42b5-87f0-dd2d138bf0c1","Type":"ContainerStarted","Data":"c6f410daac1707ccde2fa2eaeb81eb0f9221435f529c34a88cc886d9f15702af"} Jan 31 05:01:01 crc kubenswrapper[4827]: I0131 05:01:01.210388 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29497261-5x5xw" event={"ID":"40c636c4-b73c-42b5-87f0-dd2d138bf0c1","Type":"ContainerStarted","Data":"f86c3aec48b0e3152cc82185d83f791e47bf5d767d04f5ba4ab631e11bbb5966"} Jan 31 05:01:03 crc kubenswrapper[4827]: I0131 05:01:03.664971 4827 scope.go:117] "RemoveContainer" containerID="fc627a06af1e44e13fac77f76ecf4bb630356dc9fd3df5f1ab8c154d5f3538c7" Jan 31 05:01:04 crc kubenswrapper[4827]: I0131 05:01:04.249325 4827 generic.go:334] "Generic (PLEG): container finished" podID="40c636c4-b73c-42b5-87f0-dd2d138bf0c1" containerID="c6f410daac1707ccde2fa2eaeb81eb0f9221435f529c34a88cc886d9f15702af" exitCode=0 Jan 31 05:01:04 crc kubenswrapper[4827]: I0131 05:01:04.249528 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29497261-5x5xw" event={"ID":"40c636c4-b73c-42b5-87f0-dd2d138bf0c1","Type":"ContainerDied","Data":"c6f410daac1707ccde2fa2eaeb81eb0f9221435f529c34a88cc886d9f15702af"} Jan 31 05:01:05 crc kubenswrapper[4827]: I0131 05:01:05.721745 4827 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-cron-29497261-5x5xw" Jan 31 05:01:05 crc kubenswrapper[4827]: I0131 05:01:05.913971 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/40c636c4-b73c-42b5-87f0-dd2d138bf0c1-config-data\") pod \"40c636c4-b73c-42b5-87f0-dd2d138bf0c1\" (UID: \"40c636c4-b73c-42b5-87f0-dd2d138bf0c1\") " Jan 31 05:01:05 crc kubenswrapper[4827]: I0131 05:01:05.914133 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/40c636c4-b73c-42b5-87f0-dd2d138bf0c1-fernet-keys\") pod \"40c636c4-b73c-42b5-87f0-dd2d138bf0c1\" (UID: \"40c636c4-b73c-42b5-87f0-dd2d138bf0c1\") " Jan 31 05:01:05 crc kubenswrapper[4827]: I0131 05:01:05.914358 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nd8fw\" (UniqueName: \"kubernetes.io/projected/40c636c4-b73c-42b5-87f0-dd2d138bf0c1-kube-api-access-nd8fw\") pod \"40c636c4-b73c-42b5-87f0-dd2d138bf0c1\" (UID: \"40c636c4-b73c-42b5-87f0-dd2d138bf0c1\") " Jan 31 05:01:05 crc kubenswrapper[4827]: I0131 05:01:05.914467 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/40c636c4-b73c-42b5-87f0-dd2d138bf0c1-combined-ca-bundle\") pod \"40c636c4-b73c-42b5-87f0-dd2d138bf0c1\" (UID: \"40c636c4-b73c-42b5-87f0-dd2d138bf0c1\") " Jan 31 05:01:05 crc kubenswrapper[4827]: I0131 05:01:05.926717 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/40c636c4-b73c-42b5-87f0-dd2d138bf0c1-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "40c636c4-b73c-42b5-87f0-dd2d138bf0c1" (UID: "40c636c4-b73c-42b5-87f0-dd2d138bf0c1"). InnerVolumeSpecName "fernet-keys". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 05:01:05 crc kubenswrapper[4827]: I0131 05:01:05.927250 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/40c636c4-b73c-42b5-87f0-dd2d138bf0c1-kube-api-access-nd8fw" (OuterVolumeSpecName: "kube-api-access-nd8fw") pod "40c636c4-b73c-42b5-87f0-dd2d138bf0c1" (UID: "40c636c4-b73c-42b5-87f0-dd2d138bf0c1"). InnerVolumeSpecName "kube-api-access-nd8fw". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 05:01:05 crc kubenswrapper[4827]: I0131 05:01:05.945437 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/40c636c4-b73c-42b5-87f0-dd2d138bf0c1-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "40c636c4-b73c-42b5-87f0-dd2d138bf0c1" (UID: "40c636c4-b73c-42b5-87f0-dd2d138bf0c1"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 05:01:05 crc kubenswrapper[4827]: I0131 05:01:05.988527 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/40c636c4-b73c-42b5-87f0-dd2d138bf0c1-config-data" (OuterVolumeSpecName: "config-data") pod "40c636c4-b73c-42b5-87f0-dd2d138bf0c1" (UID: "40c636c4-b73c-42b5-87f0-dd2d138bf0c1"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 05:01:06 crc kubenswrapper[4827]: I0131 05:01:06.019844 4827 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/40c636c4-b73c-42b5-87f0-dd2d138bf0c1-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 31 05:01:06 crc kubenswrapper[4827]: I0131 05:01:06.020493 4827 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/40c636c4-b73c-42b5-87f0-dd2d138bf0c1-config-data\") on node \"crc\" DevicePath \"\"" Jan 31 05:01:06 crc kubenswrapper[4827]: I0131 05:01:06.020529 4827 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/40c636c4-b73c-42b5-87f0-dd2d138bf0c1-fernet-keys\") on node \"crc\" DevicePath \"\"" Jan 31 05:01:06 crc kubenswrapper[4827]: I0131 05:01:06.020547 4827 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nd8fw\" (UniqueName: \"kubernetes.io/projected/40c636c4-b73c-42b5-87f0-dd2d138bf0c1-kube-api-access-nd8fw\") on node \"crc\" DevicePath \"\"" Jan 31 05:01:06 crc kubenswrapper[4827]: I0131 05:01:06.265091 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29497261-5x5xw" event={"ID":"40c636c4-b73c-42b5-87f0-dd2d138bf0c1","Type":"ContainerDied","Data":"f86c3aec48b0e3152cc82185d83f791e47bf5d767d04f5ba4ab631e11bbb5966"} Jan 31 05:01:06 crc kubenswrapper[4827]: I0131 05:01:06.265393 4827 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f86c3aec48b0e3152cc82185d83f791e47bf5d767d04f5ba4ab631e11bbb5966" Jan 31 05:01:06 crc kubenswrapper[4827]: I0131 05:01:06.265138 4827 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-cron-29497261-5x5xw" Jan 31 05:01:08 crc kubenswrapper[4827]: I0131 05:01:08.115247 4827 scope.go:117] "RemoveContainer" containerID="b42e19e8839f8c6eed889e7aa41e227379f950f6d8d29310a2158f9035f65c48" Jan 31 05:01:08 crc kubenswrapper[4827]: E0131 05:01:08.115525 4827 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jxh94_openshift-machine-config-operator(e63dbb73-e1a2-4796-83c5-2a88e55566b5)\"" pod="openshift-machine-config-operator/machine-config-daemon-jxh94" podUID="e63dbb73-e1a2-4796-83c5-2a88e55566b5" Jan 31 05:01:22 crc kubenswrapper[4827]: I0131 05:01:22.110616 4827 scope.go:117] "RemoveContainer" containerID="b42e19e8839f8c6eed889e7aa41e227379f950f6d8d29310a2158f9035f65c48" Jan 31 05:01:22 crc kubenswrapper[4827]: E0131 05:01:22.111585 4827 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jxh94_openshift-machine-config-operator(e63dbb73-e1a2-4796-83c5-2a88e55566b5)\"" pod="openshift-machine-config-operator/machine-config-daemon-jxh94" podUID="e63dbb73-e1a2-4796-83c5-2a88e55566b5" Jan 31 05:01:36 crc kubenswrapper[4827]: I0131 05:01:36.110168 4827 scope.go:117] "RemoveContainer" containerID="b42e19e8839f8c6eed889e7aa41e227379f950f6d8d29310a2158f9035f65c48" Jan 31 05:01:36 crc kubenswrapper[4827]: E0131 05:01:36.111129 4827 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jxh94_openshift-machine-config-operator(e63dbb73-e1a2-4796-83c5-2a88e55566b5)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-jxh94" podUID="e63dbb73-e1a2-4796-83c5-2a88e55566b5" Jan 31 05:01:49 crc kubenswrapper[4827]: I0131 05:01:49.110407 4827 scope.go:117] "RemoveContainer" containerID="b42e19e8839f8c6eed889e7aa41e227379f950f6d8d29310a2158f9035f65c48" Jan 31 05:01:49 crc kubenswrapper[4827]: E0131 05:01:49.111406 4827 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jxh94_openshift-machine-config-operator(e63dbb73-e1a2-4796-83c5-2a88e55566b5)\"" pod="openshift-machine-config-operator/machine-config-daemon-jxh94" podUID="e63dbb73-e1a2-4796-83c5-2a88e55566b5" Jan 31 05:02:02 crc kubenswrapper[4827]: I0131 05:02:02.110736 4827 scope.go:117] "RemoveContainer" containerID="b42e19e8839f8c6eed889e7aa41e227379f950f6d8d29310a2158f9035f65c48" Jan 31 05:02:02 crc kubenswrapper[4827]: E0131 05:02:02.111610 4827 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jxh94_openshift-machine-config-operator(e63dbb73-e1a2-4796-83c5-2a88e55566b5)\"" pod="openshift-machine-config-operator/machine-config-daemon-jxh94" podUID="e63dbb73-e1a2-4796-83c5-2a88e55566b5" Jan 31 05:02:16 crc kubenswrapper[4827]: I0131 05:02:16.111595 4827 scope.go:117] "RemoveContainer" containerID="b42e19e8839f8c6eed889e7aa41e227379f950f6d8d29310a2158f9035f65c48" Jan 31 05:02:16 crc kubenswrapper[4827]: E0131 05:02:16.112286 4827 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-jxh94_openshift-machine-config-operator(e63dbb73-e1a2-4796-83c5-2a88e55566b5)\"" pod="openshift-machine-config-operator/machine-config-daemon-jxh94" podUID="e63dbb73-e1a2-4796-83c5-2a88e55566b5" Jan 31 05:02:21 crc kubenswrapper[4827]: I0131 05:02:21.866191 4827 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-nsjlb"] Jan 31 05:02:21 crc kubenswrapper[4827]: E0131 05:02:21.867367 4827 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="40c636c4-b73c-42b5-87f0-dd2d138bf0c1" containerName="keystone-cron" Jan 31 05:02:21 crc kubenswrapper[4827]: I0131 05:02:21.867388 4827 state_mem.go:107] "Deleted CPUSet assignment" podUID="40c636c4-b73c-42b5-87f0-dd2d138bf0c1" containerName="keystone-cron" Jan 31 05:02:21 crc kubenswrapper[4827]: I0131 05:02:21.867643 4827 memory_manager.go:354] "RemoveStaleState removing state" podUID="40c636c4-b73c-42b5-87f0-dd2d138bf0c1" containerName="keystone-cron" Jan 31 05:02:21 crc kubenswrapper[4827]: I0131 05:02:21.869593 4827 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-nsjlb" Jan 31 05:02:21 crc kubenswrapper[4827]: I0131 05:02:21.879122 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-nsjlb"] Jan 31 05:02:21 crc kubenswrapper[4827]: I0131 05:02:21.997930 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0293800b-594a-4021-8e11-77100a2ee7b4-catalog-content\") pod \"community-operators-nsjlb\" (UID: \"0293800b-594a-4021-8e11-77100a2ee7b4\") " pod="openshift-marketplace/community-operators-nsjlb" Jan 31 05:02:21 crc kubenswrapper[4827]: I0131 05:02:21.998025 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0293800b-594a-4021-8e11-77100a2ee7b4-utilities\") pod \"community-operators-nsjlb\" (UID: \"0293800b-594a-4021-8e11-77100a2ee7b4\") " pod="openshift-marketplace/community-operators-nsjlb" Jan 31 05:02:21 crc kubenswrapper[4827]: I0131 05:02:21.998162 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sg57x\" (UniqueName: \"kubernetes.io/projected/0293800b-594a-4021-8e11-77100a2ee7b4-kube-api-access-sg57x\") pod \"community-operators-nsjlb\" (UID: \"0293800b-594a-4021-8e11-77100a2ee7b4\") " pod="openshift-marketplace/community-operators-nsjlb" Jan 31 05:02:22 crc kubenswrapper[4827]: I0131 05:02:22.099616 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0293800b-594a-4021-8e11-77100a2ee7b4-catalog-content\") pod \"community-operators-nsjlb\" (UID: \"0293800b-594a-4021-8e11-77100a2ee7b4\") " pod="openshift-marketplace/community-operators-nsjlb" Jan 31 05:02:22 crc kubenswrapper[4827]: I0131 05:02:22.099668 4827 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0293800b-594a-4021-8e11-77100a2ee7b4-utilities\") pod \"community-operators-nsjlb\" (UID: \"0293800b-594a-4021-8e11-77100a2ee7b4\") " pod="openshift-marketplace/community-operators-nsjlb" Jan 31 05:02:22 crc kubenswrapper[4827]: I0131 05:02:22.099724 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sg57x\" (UniqueName: \"kubernetes.io/projected/0293800b-594a-4021-8e11-77100a2ee7b4-kube-api-access-sg57x\") pod \"community-operators-nsjlb\" (UID: \"0293800b-594a-4021-8e11-77100a2ee7b4\") " pod="openshift-marketplace/community-operators-nsjlb" Jan 31 05:02:22 crc kubenswrapper[4827]: I0131 05:02:22.100675 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0293800b-594a-4021-8e11-77100a2ee7b4-catalog-content\") pod \"community-operators-nsjlb\" (UID: \"0293800b-594a-4021-8e11-77100a2ee7b4\") " pod="openshift-marketplace/community-operators-nsjlb" Jan 31 05:02:22 crc kubenswrapper[4827]: I0131 05:02:22.100732 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0293800b-594a-4021-8e11-77100a2ee7b4-utilities\") pod \"community-operators-nsjlb\" (UID: \"0293800b-594a-4021-8e11-77100a2ee7b4\") " pod="openshift-marketplace/community-operators-nsjlb" Jan 31 05:02:22 crc kubenswrapper[4827]: I0131 05:02:22.127798 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sg57x\" (UniqueName: \"kubernetes.io/projected/0293800b-594a-4021-8e11-77100a2ee7b4-kube-api-access-sg57x\") pod \"community-operators-nsjlb\" (UID: \"0293800b-594a-4021-8e11-77100a2ee7b4\") " pod="openshift-marketplace/community-operators-nsjlb" Jan 31 05:02:22 crc kubenswrapper[4827]: I0131 05:02:22.193741 4827 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-nsjlb" Jan 31 05:02:22 crc kubenswrapper[4827]: I0131 05:02:22.731802 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-nsjlb"] Jan 31 05:02:22 crc kubenswrapper[4827]: I0131 05:02:22.949234 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-nsjlb" event={"ID":"0293800b-594a-4021-8e11-77100a2ee7b4","Type":"ContainerStarted","Data":"5461085b605ac0eed508517472e20c79190cb9596f548ea2cfa074c7e2ec6ed5"} Jan 31 05:02:22 crc kubenswrapper[4827]: I0131 05:02:22.949280 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-nsjlb" event={"ID":"0293800b-594a-4021-8e11-77100a2ee7b4","Type":"ContainerStarted","Data":"d491fa5186a8ee911007d53daae4b695996de6f452f67039e8aff8d4a083cebb"} Jan 31 05:02:23 crc kubenswrapper[4827]: I0131 05:02:23.957482 4827 generic.go:334] "Generic (PLEG): container finished" podID="0293800b-594a-4021-8e11-77100a2ee7b4" containerID="5461085b605ac0eed508517472e20c79190cb9596f548ea2cfa074c7e2ec6ed5" exitCode=0 Jan 31 05:02:23 crc kubenswrapper[4827]: I0131 05:02:23.957535 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-nsjlb" event={"ID":"0293800b-594a-4021-8e11-77100a2ee7b4","Type":"ContainerDied","Data":"5461085b605ac0eed508517472e20c79190cb9596f548ea2cfa074c7e2ec6ed5"} Jan 31 05:02:23 crc kubenswrapper[4827]: I0131 05:02:23.959227 4827 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 31 05:02:25 crc kubenswrapper[4827]: I0131 05:02:25.977232 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-nsjlb" event={"ID":"0293800b-594a-4021-8e11-77100a2ee7b4","Type":"ContainerStarted","Data":"d4b9bec8d05d27faebe2a86b9da0bfd60ee9ccc37f7166e43c484bf1cf8a38b9"} Jan 31 05:02:26 crc 
kubenswrapper[4827]: I0131 05:02:26.987605 4827 generic.go:334] "Generic (PLEG): container finished" podID="0293800b-594a-4021-8e11-77100a2ee7b4" containerID="d4b9bec8d05d27faebe2a86b9da0bfd60ee9ccc37f7166e43c484bf1cf8a38b9" exitCode=0 Jan 31 05:02:26 crc kubenswrapper[4827]: I0131 05:02:26.987719 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-nsjlb" event={"ID":"0293800b-594a-4021-8e11-77100a2ee7b4","Type":"ContainerDied","Data":"d4b9bec8d05d27faebe2a86b9da0bfd60ee9ccc37f7166e43c484bf1cf8a38b9"} Jan 31 05:02:27 crc kubenswrapper[4827]: I0131 05:02:27.997668 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-nsjlb" event={"ID":"0293800b-594a-4021-8e11-77100a2ee7b4","Type":"ContainerStarted","Data":"ccddde922311e8f49ed56e498fb634d8c80337fa812802fee33dee46aeaa8277"} Jan 31 05:02:28 crc kubenswrapper[4827]: I0131 05:02:28.020837 4827 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-nsjlb" podStartSLOduration=3.577131331 podStartE2EDuration="7.020813784s" podCreationTimestamp="2026-01-31 05:02:21 +0000 UTC" firstStartedPulling="2026-01-31 05:02:23.95893891 +0000 UTC m=+4536.646019369" lastFinishedPulling="2026-01-31 05:02:27.402621373 +0000 UTC m=+4540.089701822" observedRunningTime="2026-01-31 05:02:28.016654693 +0000 UTC m=+4540.703735142" watchObservedRunningTime="2026-01-31 05:02:28.020813784 +0000 UTC m=+4540.707894243" Jan 31 05:02:30 crc kubenswrapper[4827]: I0131 05:02:30.110756 4827 scope.go:117] "RemoveContainer" containerID="b42e19e8839f8c6eed889e7aa41e227379f950f6d8d29310a2158f9035f65c48" Jan 31 05:02:30 crc kubenswrapper[4827]: E0131 05:02:30.111574 4827 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-jxh94_openshift-machine-config-operator(e63dbb73-e1a2-4796-83c5-2a88e55566b5)\"" pod="openshift-machine-config-operator/machine-config-daemon-jxh94" podUID="e63dbb73-e1a2-4796-83c5-2a88e55566b5" Jan 31 05:02:32 crc kubenswrapper[4827]: I0131 05:02:32.193875 4827 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-nsjlb" Jan 31 05:02:32 crc kubenswrapper[4827]: I0131 05:02:32.194390 4827 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-nsjlb" Jan 31 05:02:32 crc kubenswrapper[4827]: I0131 05:02:32.237456 4827 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-nsjlb" Jan 31 05:02:33 crc kubenswrapper[4827]: I0131 05:02:33.098354 4827 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-nsjlb" Jan 31 05:02:33 crc kubenswrapper[4827]: I0131 05:02:33.150149 4827 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-nsjlb"] Jan 31 05:02:35 crc kubenswrapper[4827]: I0131 05:02:35.059123 4827 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-nsjlb" podUID="0293800b-594a-4021-8e11-77100a2ee7b4" containerName="registry-server" containerID="cri-o://ccddde922311e8f49ed56e498fb634d8c80337fa812802fee33dee46aeaa8277" gracePeriod=2 Jan 31 05:02:36 crc kubenswrapper[4827]: I0131 05:02:36.067992 4827 generic.go:334] "Generic (PLEG): container finished" podID="0293800b-594a-4021-8e11-77100a2ee7b4" containerID="ccddde922311e8f49ed56e498fb634d8c80337fa812802fee33dee46aeaa8277" exitCode=0 Jan 31 05:02:36 crc kubenswrapper[4827]: I0131 05:02:36.068055 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-nsjlb" 
event={"ID":"0293800b-594a-4021-8e11-77100a2ee7b4","Type":"ContainerDied","Data":"ccddde922311e8f49ed56e498fb634d8c80337fa812802fee33dee46aeaa8277"} Jan 31 05:02:36 crc kubenswrapper[4827]: I0131 05:02:36.068509 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-nsjlb" event={"ID":"0293800b-594a-4021-8e11-77100a2ee7b4","Type":"ContainerDied","Data":"d491fa5186a8ee911007d53daae4b695996de6f452f67039e8aff8d4a083cebb"} Jan 31 05:02:36 crc kubenswrapper[4827]: I0131 05:02:36.068525 4827 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d491fa5186a8ee911007d53daae4b695996de6f452f67039e8aff8d4a083cebb" Jan 31 05:02:36 crc kubenswrapper[4827]: I0131 05:02:36.108441 4827 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-nsjlb" Jan 31 05:02:36 crc kubenswrapper[4827]: I0131 05:02:36.198621 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0293800b-594a-4021-8e11-77100a2ee7b4-catalog-content\") pod \"0293800b-594a-4021-8e11-77100a2ee7b4\" (UID: \"0293800b-594a-4021-8e11-77100a2ee7b4\") " Jan 31 05:02:36 crc kubenswrapper[4827]: I0131 05:02:36.198676 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sg57x\" (UniqueName: \"kubernetes.io/projected/0293800b-594a-4021-8e11-77100a2ee7b4-kube-api-access-sg57x\") pod \"0293800b-594a-4021-8e11-77100a2ee7b4\" (UID: \"0293800b-594a-4021-8e11-77100a2ee7b4\") " Jan 31 05:02:36 crc kubenswrapper[4827]: I0131 05:02:36.198817 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0293800b-594a-4021-8e11-77100a2ee7b4-utilities\") pod \"0293800b-594a-4021-8e11-77100a2ee7b4\" (UID: \"0293800b-594a-4021-8e11-77100a2ee7b4\") " Jan 31 05:02:36 crc kubenswrapper[4827]: 
I0131 05:02:36.199788 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0293800b-594a-4021-8e11-77100a2ee7b4-utilities" (OuterVolumeSpecName: "utilities") pod "0293800b-594a-4021-8e11-77100a2ee7b4" (UID: "0293800b-594a-4021-8e11-77100a2ee7b4"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 05:02:36 crc kubenswrapper[4827]: I0131 05:02:36.205404 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0293800b-594a-4021-8e11-77100a2ee7b4-kube-api-access-sg57x" (OuterVolumeSpecName: "kube-api-access-sg57x") pod "0293800b-594a-4021-8e11-77100a2ee7b4" (UID: "0293800b-594a-4021-8e11-77100a2ee7b4"). InnerVolumeSpecName "kube-api-access-sg57x". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 05:02:36 crc kubenswrapper[4827]: I0131 05:02:36.249626 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0293800b-594a-4021-8e11-77100a2ee7b4-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "0293800b-594a-4021-8e11-77100a2ee7b4" (UID: "0293800b-594a-4021-8e11-77100a2ee7b4"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 05:02:36 crc kubenswrapper[4827]: I0131 05:02:36.301063 4827 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0293800b-594a-4021-8e11-77100a2ee7b4-utilities\") on node \"crc\" DevicePath \"\"" Jan 31 05:02:36 crc kubenswrapper[4827]: I0131 05:02:36.301102 4827 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0293800b-594a-4021-8e11-77100a2ee7b4-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 31 05:02:36 crc kubenswrapper[4827]: I0131 05:02:36.301113 4827 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sg57x\" (UniqueName: \"kubernetes.io/projected/0293800b-594a-4021-8e11-77100a2ee7b4-kube-api-access-sg57x\") on node \"crc\" DevicePath \"\"" Jan 31 05:02:37 crc kubenswrapper[4827]: I0131 05:02:37.077312 4827 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-nsjlb" Jan 31 05:02:37 crc kubenswrapper[4827]: I0131 05:02:37.122546 4827 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-nsjlb"] Jan 31 05:02:37 crc kubenswrapper[4827]: I0131 05:02:37.129556 4827 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-nsjlb"] Jan 31 05:02:38 crc kubenswrapper[4827]: I0131 05:02:38.127865 4827 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0293800b-594a-4021-8e11-77100a2ee7b4" path="/var/lib/kubelet/pods/0293800b-594a-4021-8e11-77100a2ee7b4/volumes" Jan 31 05:02:42 crc kubenswrapper[4827]: I0131 05:02:42.111611 4827 scope.go:117] "RemoveContainer" containerID="b42e19e8839f8c6eed889e7aa41e227379f950f6d8d29310a2158f9035f65c48" Jan 31 05:02:42 crc kubenswrapper[4827]: E0131 05:02:42.112246 4827 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jxh94_openshift-machine-config-operator(e63dbb73-e1a2-4796-83c5-2a88e55566b5)\"" pod="openshift-machine-config-operator/machine-config-daemon-jxh94" podUID="e63dbb73-e1a2-4796-83c5-2a88e55566b5" Jan 31 05:02:54 crc kubenswrapper[4827]: I0131 05:02:54.110371 4827 scope.go:117] "RemoveContainer" containerID="b42e19e8839f8c6eed889e7aa41e227379f950f6d8d29310a2158f9035f65c48" Jan 31 05:02:54 crc kubenswrapper[4827]: E0131 05:02:54.111110 4827 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jxh94_openshift-machine-config-operator(e63dbb73-e1a2-4796-83c5-2a88e55566b5)\"" pod="openshift-machine-config-operator/machine-config-daemon-jxh94" podUID="e63dbb73-e1a2-4796-83c5-2a88e55566b5" Jan 31 05:03:07 crc kubenswrapper[4827]: I0131 05:03:07.110123 4827 scope.go:117] "RemoveContainer" containerID="b42e19e8839f8c6eed889e7aa41e227379f950f6d8d29310a2158f9035f65c48" Jan 31 05:03:07 crc kubenswrapper[4827]: E0131 05:03:07.110924 4827 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jxh94_openshift-machine-config-operator(e63dbb73-e1a2-4796-83c5-2a88e55566b5)\"" pod="openshift-machine-config-operator/machine-config-daemon-jxh94" podUID="e63dbb73-e1a2-4796-83c5-2a88e55566b5" Jan 31 05:03:20 crc kubenswrapper[4827]: I0131 05:03:20.110089 4827 scope.go:117] "RemoveContainer" containerID="b42e19e8839f8c6eed889e7aa41e227379f950f6d8d29310a2158f9035f65c48" Jan 31 05:03:20 crc kubenswrapper[4827]: E0131 05:03:20.111135 4827 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jxh94_openshift-machine-config-operator(e63dbb73-e1a2-4796-83c5-2a88e55566b5)\"" pod="openshift-machine-config-operator/machine-config-daemon-jxh94" podUID="e63dbb73-e1a2-4796-83c5-2a88e55566b5" Jan 31 05:03:34 crc kubenswrapper[4827]: I0131 05:03:34.111564 4827 scope.go:117] "RemoveContainer" containerID="b42e19e8839f8c6eed889e7aa41e227379f950f6d8d29310a2158f9035f65c48" Jan 31 05:03:34 crc kubenswrapper[4827]: E0131 05:03:34.113082 4827 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jxh94_openshift-machine-config-operator(e63dbb73-e1a2-4796-83c5-2a88e55566b5)\"" pod="openshift-machine-config-operator/machine-config-daemon-jxh94" podUID="e63dbb73-e1a2-4796-83c5-2a88e55566b5" Jan 31 05:03:43 crc kubenswrapper[4827]: I0131 05:03:43.707735 4827 generic.go:334] "Generic (PLEG): container finished" podID="9267ff6a-541b-4297-87e4-fb6095cece6e" containerID="c51427307b06241a749e79ddc293fd458cd79d2348a99dd24788debc4639ba3f" exitCode=0 Jan 31 05:03:43 crc kubenswrapper[4827]: I0131 05:03:43.707798 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest" event={"ID":"9267ff6a-541b-4297-87e4-fb6095cece6e","Type":"ContainerDied","Data":"c51427307b06241a749e79ddc293fd458cd79d2348a99dd24788debc4639ba3f"} Jan 31 05:03:45 crc kubenswrapper[4827]: I0131 05:03:45.169777 4827 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/tempest-tests-tempest" Jan 31 05:03:45 crc kubenswrapper[4827]: I0131 05:03:45.268545 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"test-operator-logs\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"9267ff6a-541b-4297-87e4-fb6095cece6e\" (UID: \"9267ff6a-541b-4297-87e4-fb6095cece6e\") " Jan 31 05:03:45 crc kubenswrapper[4827]: I0131 05:03:45.268675 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/9267ff6a-541b-4297-87e4-fb6095cece6e-ca-certs\") pod \"9267ff6a-541b-4297-87e4-fb6095cece6e\" (UID: \"9267ff6a-541b-4297-87e4-fb6095cece6e\") " Jan 31 05:03:45 crc kubenswrapper[4827]: I0131 05:03:45.268706 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/9267ff6a-541b-4297-87e4-fb6095cece6e-test-operator-ephemeral-workdir\") pod \"9267ff6a-541b-4297-87e4-fb6095cece6e\" (UID: \"9267ff6a-541b-4297-87e4-fb6095cece6e\") " Jan 31 05:03:45 crc kubenswrapper[4827]: I0131 05:03:45.268755 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zjmxd\" (UniqueName: \"kubernetes.io/projected/9267ff6a-541b-4297-87e4-fb6095cece6e-kube-api-access-zjmxd\") pod \"9267ff6a-541b-4297-87e4-fb6095cece6e\" (UID: \"9267ff6a-541b-4297-87e4-fb6095cece6e\") " Jan 31 05:03:45 crc kubenswrapper[4827]: I0131 05:03:45.268843 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/9267ff6a-541b-4297-87e4-fb6095cece6e-openstack-config\") pod \"9267ff6a-541b-4297-87e4-fb6095cece6e\" (UID: \"9267ff6a-541b-4297-87e4-fb6095cece6e\") " Jan 31 05:03:45 crc kubenswrapper[4827]: I0131 05:03:45.268905 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/9267ff6a-541b-4297-87e4-fb6095cece6e-openstack-config-secret\") pod \"9267ff6a-541b-4297-87e4-fb6095cece6e\" (UID: \"9267ff6a-541b-4297-87e4-fb6095cece6e\") " Jan 31 05:03:45 crc kubenswrapper[4827]: I0131 05:03:45.268952 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/9267ff6a-541b-4297-87e4-fb6095cece6e-config-data\") pod \"9267ff6a-541b-4297-87e4-fb6095cece6e\" (UID: \"9267ff6a-541b-4297-87e4-fb6095cece6e\") " Jan 31 05:03:45 crc kubenswrapper[4827]: I0131 05:03:45.269002 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/9267ff6a-541b-4297-87e4-fb6095cece6e-ssh-key\") pod \"9267ff6a-541b-4297-87e4-fb6095cece6e\" (UID: \"9267ff6a-541b-4297-87e4-fb6095cece6e\") " Jan 31 05:03:45 crc kubenswrapper[4827]: I0131 05:03:45.269056 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/9267ff6a-541b-4297-87e4-fb6095cece6e-test-operator-ephemeral-temporary\") pod \"9267ff6a-541b-4297-87e4-fb6095cece6e\" (UID: \"9267ff6a-541b-4297-87e4-fb6095cece6e\") " Jan 31 05:03:45 crc kubenswrapper[4827]: I0131 05:03:45.270228 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9267ff6a-541b-4297-87e4-fb6095cece6e-test-operator-ephemeral-temporary" (OuterVolumeSpecName: "test-operator-ephemeral-temporary") pod "9267ff6a-541b-4297-87e4-fb6095cece6e" (UID: "9267ff6a-541b-4297-87e4-fb6095cece6e"). InnerVolumeSpecName "test-operator-ephemeral-temporary". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 05:03:45 crc kubenswrapper[4827]: I0131 05:03:45.270647 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9267ff6a-541b-4297-87e4-fb6095cece6e-config-data" (OuterVolumeSpecName: "config-data") pod "9267ff6a-541b-4297-87e4-fb6095cece6e" (UID: "9267ff6a-541b-4297-87e4-fb6095cece6e"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 05:03:45 crc kubenswrapper[4827]: I0131 05:03:45.274586 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9267ff6a-541b-4297-87e4-fb6095cece6e-test-operator-ephemeral-workdir" (OuterVolumeSpecName: "test-operator-ephemeral-workdir") pod "9267ff6a-541b-4297-87e4-fb6095cece6e" (UID: "9267ff6a-541b-4297-87e4-fb6095cece6e"). InnerVolumeSpecName "test-operator-ephemeral-workdir". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 05:03:45 crc kubenswrapper[4827]: I0131 05:03:45.274992 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage02-crc" (OuterVolumeSpecName: "test-operator-logs") pod "9267ff6a-541b-4297-87e4-fb6095cece6e" (UID: "9267ff6a-541b-4297-87e4-fb6095cece6e"). InnerVolumeSpecName "local-storage02-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 31 05:03:45 crc kubenswrapper[4827]: I0131 05:03:45.290115 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9267ff6a-541b-4297-87e4-fb6095cece6e-kube-api-access-zjmxd" (OuterVolumeSpecName: "kube-api-access-zjmxd") pod "9267ff6a-541b-4297-87e4-fb6095cece6e" (UID: "9267ff6a-541b-4297-87e4-fb6095cece6e"). InnerVolumeSpecName "kube-api-access-zjmxd". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 05:03:45 crc kubenswrapper[4827]: I0131 05:03:45.300308 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9267ff6a-541b-4297-87e4-fb6095cece6e-openstack-config-secret" (OuterVolumeSpecName: "openstack-config-secret") pod "9267ff6a-541b-4297-87e4-fb6095cece6e" (UID: "9267ff6a-541b-4297-87e4-fb6095cece6e"). InnerVolumeSpecName "openstack-config-secret". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 05:03:45 crc kubenswrapper[4827]: I0131 05:03:45.304666 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9267ff6a-541b-4297-87e4-fb6095cece6e-ca-certs" (OuterVolumeSpecName: "ca-certs") pod "9267ff6a-541b-4297-87e4-fb6095cece6e" (UID: "9267ff6a-541b-4297-87e4-fb6095cece6e"). InnerVolumeSpecName "ca-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 05:03:45 crc kubenswrapper[4827]: I0131 05:03:45.318865 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9267ff6a-541b-4297-87e4-fb6095cece6e-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "9267ff6a-541b-4297-87e4-fb6095cece6e" (UID: "9267ff6a-541b-4297-87e4-fb6095cece6e"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 05:03:45 crc kubenswrapper[4827]: I0131 05:03:45.336389 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9267ff6a-541b-4297-87e4-fb6095cece6e-openstack-config" (OuterVolumeSpecName: "openstack-config") pod "9267ff6a-541b-4297-87e4-fb6095cece6e" (UID: "9267ff6a-541b-4297-87e4-fb6095cece6e"). InnerVolumeSpecName "openstack-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 05:03:45 crc kubenswrapper[4827]: I0131 05:03:45.371438 4827 reconciler_common.go:293] "Volume detached for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/9267ff6a-541b-4297-87e4-fb6095cece6e-test-operator-ephemeral-temporary\") on node \"crc\" DevicePath \"\"" Jan 31 05:03:45 crc kubenswrapper[4827]: I0131 05:03:45.371710 4827 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") on node \"crc\" " Jan 31 05:03:45 crc kubenswrapper[4827]: I0131 05:03:45.371801 4827 reconciler_common.go:293] "Volume detached for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/9267ff6a-541b-4297-87e4-fb6095cece6e-ca-certs\") on node \"crc\" DevicePath \"\"" Jan 31 05:03:45 crc kubenswrapper[4827]: I0131 05:03:45.371896 4827 reconciler_common.go:293] "Volume detached for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/9267ff6a-541b-4297-87e4-fb6095cece6e-test-operator-ephemeral-workdir\") on node \"crc\" DevicePath \"\"" Jan 31 05:03:45 crc kubenswrapper[4827]: I0131 05:03:45.371995 4827 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zjmxd\" (UniqueName: \"kubernetes.io/projected/9267ff6a-541b-4297-87e4-fb6095cece6e-kube-api-access-zjmxd\") on node \"crc\" DevicePath \"\"" Jan 31 05:03:45 crc kubenswrapper[4827]: I0131 05:03:45.372074 4827 reconciler_common.go:293] "Volume detached for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/9267ff6a-541b-4297-87e4-fb6095cece6e-openstack-config\") on node \"crc\" DevicePath \"\"" Jan 31 05:03:45 crc kubenswrapper[4827]: I0131 05:03:45.372148 4827 reconciler_common.go:293] "Volume detached for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/9267ff6a-541b-4297-87e4-fb6095cece6e-openstack-config-secret\") on node \"crc\" 
DevicePath \"\"" Jan 31 05:03:45 crc kubenswrapper[4827]: I0131 05:03:45.372220 4827 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/9267ff6a-541b-4297-87e4-fb6095cece6e-config-data\") on node \"crc\" DevicePath \"\"" Jan 31 05:03:45 crc kubenswrapper[4827]: I0131 05:03:45.372293 4827 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/9267ff6a-541b-4297-87e4-fb6095cece6e-ssh-key\") on node \"crc\" DevicePath \"\"" Jan 31 05:03:45 crc kubenswrapper[4827]: I0131 05:03:45.393580 4827 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage02-crc" (UniqueName: "kubernetes.io/local-volume/local-storage02-crc") on node "crc" Jan 31 05:03:45 crc kubenswrapper[4827]: I0131 05:03:45.474041 4827 reconciler_common.go:293] "Volume detached for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") on node \"crc\" DevicePath \"\"" Jan 31 05:03:45 crc kubenswrapper[4827]: I0131 05:03:45.733045 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest" event={"ID":"9267ff6a-541b-4297-87e4-fb6095cece6e","Type":"ContainerDied","Data":"ce46168fe5e203a4b3a2db83b36e71d70b3ec44750ccd81bc7558159b28407f7"} Jan 31 05:03:45 crc kubenswrapper[4827]: I0131 05:03:45.733087 4827 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ce46168fe5e203a4b3a2db83b36e71d70b3ec44750ccd81bc7558159b28407f7" Jan 31 05:03:45 crc kubenswrapper[4827]: I0131 05:03:45.733110 4827 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/tempest-tests-tempest" Jan 31 05:03:46 crc kubenswrapper[4827]: I0131 05:03:46.110678 4827 scope.go:117] "RemoveContainer" containerID="b42e19e8839f8c6eed889e7aa41e227379f950f6d8d29310a2158f9035f65c48" Jan 31 05:03:46 crc kubenswrapper[4827]: E0131 05:03:46.111634 4827 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jxh94_openshift-machine-config-operator(e63dbb73-e1a2-4796-83c5-2a88e55566b5)\"" pod="openshift-machine-config-operator/machine-config-daemon-jxh94" podUID="e63dbb73-e1a2-4796-83c5-2a88e55566b5" Jan 31 05:03:55 crc kubenswrapper[4827]: I0131 05:03:55.665974 4827 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/test-operator-logs-pod-tempest-tempest-tests-tempest"] Jan 31 05:03:55 crc kubenswrapper[4827]: E0131 05:03:55.667315 4827 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0293800b-594a-4021-8e11-77100a2ee7b4" containerName="extract-utilities" Jan 31 05:03:55 crc kubenswrapper[4827]: I0131 05:03:55.667341 4827 state_mem.go:107] "Deleted CPUSet assignment" podUID="0293800b-594a-4021-8e11-77100a2ee7b4" containerName="extract-utilities" Jan 31 05:03:55 crc kubenswrapper[4827]: E0131 05:03:55.667376 4827 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0293800b-594a-4021-8e11-77100a2ee7b4" containerName="registry-server" Jan 31 05:03:55 crc kubenswrapper[4827]: I0131 05:03:55.667389 4827 state_mem.go:107] "Deleted CPUSet assignment" podUID="0293800b-594a-4021-8e11-77100a2ee7b4" containerName="registry-server" Jan 31 05:03:55 crc kubenswrapper[4827]: E0131 05:03:55.667409 4827 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9267ff6a-541b-4297-87e4-fb6095cece6e" containerName="tempest-tests-tempest-tests-runner" Jan 31 05:03:55 crc kubenswrapper[4827]: I0131 
05:03:55.667422 4827 state_mem.go:107] "Deleted CPUSet assignment" podUID="9267ff6a-541b-4297-87e4-fb6095cece6e" containerName="tempest-tests-tempest-tests-runner" Jan 31 05:03:55 crc kubenswrapper[4827]: E0131 05:03:55.667442 4827 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0293800b-594a-4021-8e11-77100a2ee7b4" containerName="extract-content" Jan 31 05:03:55 crc kubenswrapper[4827]: I0131 05:03:55.667453 4827 state_mem.go:107] "Deleted CPUSet assignment" podUID="0293800b-594a-4021-8e11-77100a2ee7b4" containerName="extract-content" Jan 31 05:03:55 crc kubenswrapper[4827]: I0131 05:03:55.667770 4827 memory_manager.go:354] "RemoveStaleState removing state" podUID="9267ff6a-541b-4297-87e4-fb6095cece6e" containerName="tempest-tests-tempest-tests-runner" Jan 31 05:03:55 crc kubenswrapper[4827]: I0131 05:03:55.667821 4827 memory_manager.go:354] "RemoveStaleState removing state" podUID="0293800b-594a-4021-8e11-77100a2ee7b4" containerName="registry-server" Jan 31 05:03:55 crc kubenswrapper[4827]: I0131 05:03:55.668753 4827 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Jan 31 05:03:55 crc kubenswrapper[4827]: I0131 05:03:55.679509 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"default-dockercfg-4r7pc" Jan 31 05:03:55 crc kubenswrapper[4827]: I0131 05:03:55.680305 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/test-operator-logs-pod-tempest-tempest-tests-tempest"] Jan 31 05:03:55 crc kubenswrapper[4827]: I0131 05:03:55.804527 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-shbs9\" (UniqueName: \"kubernetes.io/projected/eb9cc63c-f48f-4ded-9ca7-b7167b27a5ad-kube-api-access-shbs9\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"eb9cc63c-f48f-4ded-9ca7-b7167b27a5ad\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Jan 31 05:03:55 crc kubenswrapper[4827]: I0131 05:03:55.805020 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"eb9cc63c-f48f-4ded-9ca7-b7167b27a5ad\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Jan 31 05:03:55 crc kubenswrapper[4827]: I0131 05:03:55.907462 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-shbs9\" (UniqueName: \"kubernetes.io/projected/eb9cc63c-f48f-4ded-9ca7-b7167b27a5ad-kube-api-access-shbs9\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"eb9cc63c-f48f-4ded-9ca7-b7167b27a5ad\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Jan 31 05:03:55 crc kubenswrapper[4827]: I0131 05:03:55.907734 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage02-crc\" (UniqueName: 
\"kubernetes.io/local-volume/local-storage02-crc\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"eb9cc63c-f48f-4ded-9ca7-b7167b27a5ad\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Jan 31 05:03:55 crc kubenswrapper[4827]: I0131 05:03:55.908519 4827 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"eb9cc63c-f48f-4ded-9ca7-b7167b27a5ad\") device mount path \"/mnt/openstack/pv02\"" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Jan 31 05:03:55 crc kubenswrapper[4827]: I0131 05:03:55.931759 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-shbs9\" (UniqueName: \"kubernetes.io/projected/eb9cc63c-f48f-4ded-9ca7-b7167b27a5ad-kube-api-access-shbs9\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"eb9cc63c-f48f-4ded-9ca7-b7167b27a5ad\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Jan 31 05:03:55 crc kubenswrapper[4827]: I0131 05:03:55.976224 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"eb9cc63c-f48f-4ded-9ca7-b7167b27a5ad\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Jan 31 05:03:56 crc kubenswrapper[4827]: I0131 05:03:56.003851 4827 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Jan 31 05:03:56 crc kubenswrapper[4827]: I0131 05:03:56.451519 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/test-operator-logs-pod-tempest-tempest-tests-tempest"] Jan 31 05:03:56 crc kubenswrapper[4827]: I0131 05:03:56.851412 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" event={"ID":"eb9cc63c-f48f-4ded-9ca7-b7167b27a5ad","Type":"ContainerStarted","Data":"c3ba9091b76980b1f25b6e2f5ef47e7f03e4bbfbb641e31e8651ffe9a8011b73"} Jan 31 05:03:57 crc kubenswrapper[4827]: I0131 05:03:57.864437 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" event={"ID":"eb9cc63c-f48f-4ded-9ca7-b7167b27a5ad","Type":"ContainerStarted","Data":"d5bd16d90d49bdbb27939862c9f26f3b07158990944caed71173305d1e4089e9"} Jan 31 05:03:57 crc kubenswrapper[4827]: I0131 05:03:57.886031 4827 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" podStartSLOduration=2.033821872 podStartE2EDuration="2.886012999s" podCreationTimestamp="2026-01-31 05:03:55 +0000 UTC" firstStartedPulling="2026-01-31 05:03:56.454865638 +0000 UTC m=+4629.141946087" lastFinishedPulling="2026-01-31 05:03:57.307056755 +0000 UTC m=+4629.994137214" observedRunningTime="2026-01-31 05:03:57.884413099 +0000 UTC m=+4630.571493578" watchObservedRunningTime="2026-01-31 05:03:57.886012999 +0000 UTC m=+4630.573093448" Jan 31 05:03:59 crc kubenswrapper[4827]: I0131 05:03:59.111049 4827 scope.go:117] "RemoveContainer" containerID="b42e19e8839f8c6eed889e7aa41e227379f950f6d8d29310a2158f9035f65c48" Jan 31 05:03:59 crc kubenswrapper[4827]: I0131 05:03:59.890670 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-jxh94" 
event={"ID":"e63dbb73-e1a2-4796-83c5-2a88e55566b5","Type":"ContainerStarted","Data":"8b862b8b1b774ed91171674cc9d08491ecba45701fdc6d37af4df3acdb0e0dc6"} Jan 31 05:04:20 crc kubenswrapper[4827]: I0131 05:04:20.029973 4827 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-tfhkk/must-gather-vnjmk"] Jan 31 05:04:20 crc kubenswrapper[4827]: I0131 05:04:20.032874 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-tfhkk/must-gather-vnjmk" Jan 31 05:04:20 crc kubenswrapper[4827]: I0131 05:04:20.034860 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-must-gather-tfhkk"/"default-dockercfg-85mhz" Jan 31 05:04:20 crc kubenswrapper[4827]: I0131 05:04:20.035477 4827 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-tfhkk"/"openshift-service-ca.crt" Jan 31 05:04:20 crc kubenswrapper[4827]: I0131 05:04:20.035650 4827 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-tfhkk"/"kube-root-ca.crt" Jan 31 05:04:20 crc kubenswrapper[4827]: I0131 05:04:20.041814 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-tfhkk/must-gather-vnjmk"] Jan 31 05:04:20 crc kubenswrapper[4827]: I0131 05:04:20.172788 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/c05540c0-c586-4204-988e-1d8e2a23bae6-must-gather-output\") pod \"must-gather-vnjmk\" (UID: \"c05540c0-c586-4204-988e-1d8e2a23bae6\") " pod="openshift-must-gather-tfhkk/must-gather-vnjmk" Jan 31 05:04:20 crc kubenswrapper[4827]: I0131 05:04:20.172875 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kqfwd\" (UniqueName: \"kubernetes.io/projected/c05540c0-c586-4204-988e-1d8e2a23bae6-kube-api-access-kqfwd\") pod \"must-gather-vnjmk\" (UID: 
\"c05540c0-c586-4204-988e-1d8e2a23bae6\") " pod="openshift-must-gather-tfhkk/must-gather-vnjmk" Jan 31 05:04:20 crc kubenswrapper[4827]: I0131 05:04:20.274741 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/c05540c0-c586-4204-988e-1d8e2a23bae6-must-gather-output\") pod \"must-gather-vnjmk\" (UID: \"c05540c0-c586-4204-988e-1d8e2a23bae6\") " pod="openshift-must-gather-tfhkk/must-gather-vnjmk" Jan 31 05:04:20 crc kubenswrapper[4827]: I0131 05:04:20.274852 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kqfwd\" (UniqueName: \"kubernetes.io/projected/c05540c0-c586-4204-988e-1d8e2a23bae6-kube-api-access-kqfwd\") pod \"must-gather-vnjmk\" (UID: \"c05540c0-c586-4204-988e-1d8e2a23bae6\") " pod="openshift-must-gather-tfhkk/must-gather-vnjmk" Jan 31 05:04:20 crc kubenswrapper[4827]: I0131 05:04:20.275413 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/c05540c0-c586-4204-988e-1d8e2a23bae6-must-gather-output\") pod \"must-gather-vnjmk\" (UID: \"c05540c0-c586-4204-988e-1d8e2a23bae6\") " pod="openshift-must-gather-tfhkk/must-gather-vnjmk" Jan 31 05:04:20 crc kubenswrapper[4827]: I0131 05:04:20.588752 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kqfwd\" (UniqueName: \"kubernetes.io/projected/c05540c0-c586-4204-988e-1d8e2a23bae6-kube-api-access-kqfwd\") pod \"must-gather-vnjmk\" (UID: \"c05540c0-c586-4204-988e-1d8e2a23bae6\") " pod="openshift-must-gather-tfhkk/must-gather-vnjmk" Jan 31 05:04:20 crc kubenswrapper[4827]: I0131 05:04:20.650525 4827 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-tfhkk/must-gather-vnjmk" Jan 31 05:04:21 crc kubenswrapper[4827]: I0131 05:04:21.095128 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-tfhkk/must-gather-vnjmk"] Jan 31 05:04:22 crc kubenswrapper[4827]: I0131 05:04:22.133704 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-tfhkk/must-gather-vnjmk" event={"ID":"c05540c0-c586-4204-988e-1d8e2a23bae6","Type":"ContainerStarted","Data":"5902624d24a14ce0b56af1294529df1afacda7f72303df462981ee67eaadb5af"} Jan 31 05:04:26 crc kubenswrapper[4827]: I0131 05:04:26.153551 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-tfhkk/must-gather-vnjmk" event={"ID":"c05540c0-c586-4204-988e-1d8e2a23bae6","Type":"ContainerStarted","Data":"7ce5705fbdd2a4c3543efe9d4a26a6a28159307dd987f002e3f9cbfacf0626eb"} Jan 31 05:04:26 crc kubenswrapper[4827]: I0131 05:04:26.154099 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-tfhkk/must-gather-vnjmk" event={"ID":"c05540c0-c586-4204-988e-1d8e2a23bae6","Type":"ContainerStarted","Data":"4bc76f2204eeef43d15897ab141f767eb8e5948f318da858f970be14e6a08e21"} Jan 31 05:04:26 crc kubenswrapper[4827]: I0131 05:04:26.181013 4827 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-tfhkk/must-gather-vnjmk" podStartSLOduration=2.400076589 podStartE2EDuration="6.180995854s" podCreationTimestamp="2026-01-31 05:04:20 +0000 UTC" firstStartedPulling="2026-01-31 05:04:21.12128786 +0000 UTC m=+4653.808368319" lastFinishedPulling="2026-01-31 05:04:24.902207135 +0000 UTC m=+4657.589287584" observedRunningTime="2026-01-31 05:04:26.174801679 +0000 UTC m=+4658.861882168" watchObservedRunningTime="2026-01-31 05:04:26.180995854 +0000 UTC m=+4658.868076303" Jan 31 05:04:30 crc kubenswrapper[4827]: I0131 05:04:30.493802 4827 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openshift-must-gather-tfhkk/crc-debug-pcn5q"] Jan 31 05:04:30 crc kubenswrapper[4827]: I0131 05:04:30.495322 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-tfhkk/crc-debug-pcn5q" Jan 31 05:04:30 crc kubenswrapper[4827]: I0131 05:04:30.552407 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dkf75\" (UniqueName: \"kubernetes.io/projected/32d39df1-5ebb-4468-b6ed-e107ecdf1542-kube-api-access-dkf75\") pod \"crc-debug-pcn5q\" (UID: \"32d39df1-5ebb-4468-b6ed-e107ecdf1542\") " pod="openshift-must-gather-tfhkk/crc-debug-pcn5q" Jan 31 05:04:30 crc kubenswrapper[4827]: I0131 05:04:30.552560 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/32d39df1-5ebb-4468-b6ed-e107ecdf1542-host\") pod \"crc-debug-pcn5q\" (UID: \"32d39df1-5ebb-4468-b6ed-e107ecdf1542\") " pod="openshift-must-gather-tfhkk/crc-debug-pcn5q" Jan 31 05:04:30 crc kubenswrapper[4827]: I0131 05:04:30.654463 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/32d39df1-5ebb-4468-b6ed-e107ecdf1542-host\") pod \"crc-debug-pcn5q\" (UID: \"32d39df1-5ebb-4468-b6ed-e107ecdf1542\") " pod="openshift-must-gather-tfhkk/crc-debug-pcn5q" Jan 31 05:04:30 crc kubenswrapper[4827]: I0131 05:04:30.654541 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dkf75\" (UniqueName: \"kubernetes.io/projected/32d39df1-5ebb-4468-b6ed-e107ecdf1542-kube-api-access-dkf75\") pod \"crc-debug-pcn5q\" (UID: \"32d39df1-5ebb-4468-b6ed-e107ecdf1542\") " pod="openshift-must-gather-tfhkk/crc-debug-pcn5q" Jan 31 05:04:30 crc kubenswrapper[4827]: I0131 05:04:30.654647 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: 
\"kubernetes.io/host-path/32d39df1-5ebb-4468-b6ed-e107ecdf1542-host\") pod \"crc-debug-pcn5q\" (UID: \"32d39df1-5ebb-4468-b6ed-e107ecdf1542\") " pod="openshift-must-gather-tfhkk/crc-debug-pcn5q" Jan 31 05:04:30 crc kubenswrapper[4827]: I0131 05:04:30.686811 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dkf75\" (UniqueName: \"kubernetes.io/projected/32d39df1-5ebb-4468-b6ed-e107ecdf1542-kube-api-access-dkf75\") pod \"crc-debug-pcn5q\" (UID: \"32d39df1-5ebb-4468-b6ed-e107ecdf1542\") " pod="openshift-must-gather-tfhkk/crc-debug-pcn5q" Jan 31 05:04:30 crc kubenswrapper[4827]: I0131 05:04:30.817019 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-tfhkk/crc-debug-pcn5q" Jan 31 05:04:31 crc kubenswrapper[4827]: I0131 05:04:31.200289 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-tfhkk/crc-debug-pcn5q" event={"ID":"32d39df1-5ebb-4468-b6ed-e107ecdf1542","Type":"ContainerStarted","Data":"2e3e56afdf382d2bc113782f65f6e0b477dbd198bbf8e68de7ea99759bef055c"} Jan 31 05:04:41 crc kubenswrapper[4827]: I0131 05:04:41.297026 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-tfhkk/crc-debug-pcn5q" event={"ID":"32d39df1-5ebb-4468-b6ed-e107ecdf1542","Type":"ContainerStarted","Data":"f2842b823857e429033bd1309bfcde7d561340ec8c746e2c5a9baec5b75ca55a"} Jan 31 05:04:41 crc kubenswrapper[4827]: I0131 05:04:41.321499 4827 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-tfhkk/crc-debug-pcn5q" podStartSLOduration=1.285101077 podStartE2EDuration="11.321481926s" podCreationTimestamp="2026-01-31 05:04:30 +0000 UTC" firstStartedPulling="2026-01-31 05:04:30.866993426 +0000 UTC m=+4663.554073875" lastFinishedPulling="2026-01-31 05:04:40.903374275 +0000 UTC m=+4673.590454724" observedRunningTime="2026-01-31 05:04:41.314568599 +0000 UTC m=+4674.001649078" watchObservedRunningTime="2026-01-31 
05:04:41.321481926 +0000 UTC m=+4674.008562375" Jan 31 05:05:28 crc kubenswrapper[4827]: I0131 05:05:28.680392 4827 generic.go:334] "Generic (PLEG): container finished" podID="32d39df1-5ebb-4468-b6ed-e107ecdf1542" containerID="f2842b823857e429033bd1309bfcde7d561340ec8c746e2c5a9baec5b75ca55a" exitCode=0 Jan 31 05:05:28 crc kubenswrapper[4827]: I0131 05:05:28.680466 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-tfhkk/crc-debug-pcn5q" event={"ID":"32d39df1-5ebb-4468-b6ed-e107ecdf1542","Type":"ContainerDied","Data":"f2842b823857e429033bd1309bfcde7d561340ec8c746e2c5a9baec5b75ca55a"} Jan 31 05:05:29 crc kubenswrapper[4827]: I0131 05:05:29.806343 4827 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-tfhkk/crc-debug-pcn5q" Jan 31 05:05:29 crc kubenswrapper[4827]: I0131 05:05:29.849100 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dkf75\" (UniqueName: \"kubernetes.io/projected/32d39df1-5ebb-4468-b6ed-e107ecdf1542-kube-api-access-dkf75\") pod \"32d39df1-5ebb-4468-b6ed-e107ecdf1542\" (UID: \"32d39df1-5ebb-4468-b6ed-e107ecdf1542\") " Jan 31 05:05:29 crc kubenswrapper[4827]: I0131 05:05:29.849244 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/32d39df1-5ebb-4468-b6ed-e107ecdf1542-host\") pod \"32d39df1-5ebb-4468-b6ed-e107ecdf1542\" (UID: \"32d39df1-5ebb-4468-b6ed-e107ecdf1542\") " Jan 31 05:05:29 crc kubenswrapper[4827]: I0131 05:05:29.849397 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/32d39df1-5ebb-4468-b6ed-e107ecdf1542-host" (OuterVolumeSpecName: "host") pod "32d39df1-5ebb-4468-b6ed-e107ecdf1542" (UID: "32d39df1-5ebb-4468-b6ed-e107ecdf1542"). InnerVolumeSpecName "host". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 31 05:05:29 crc kubenswrapper[4827]: I0131 05:05:29.849680 4827 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/32d39df1-5ebb-4468-b6ed-e107ecdf1542-host\") on node \"crc\" DevicePath \"\"" Jan 31 05:05:29 crc kubenswrapper[4827]: I0131 05:05:29.854497 4827 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-tfhkk/crc-debug-pcn5q"] Jan 31 05:05:29 crc kubenswrapper[4827]: I0131 05:05:29.859298 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/32d39df1-5ebb-4468-b6ed-e107ecdf1542-kube-api-access-dkf75" (OuterVolumeSpecName: "kube-api-access-dkf75") pod "32d39df1-5ebb-4468-b6ed-e107ecdf1542" (UID: "32d39df1-5ebb-4468-b6ed-e107ecdf1542"). InnerVolumeSpecName "kube-api-access-dkf75". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 05:05:29 crc kubenswrapper[4827]: I0131 05:05:29.863474 4827 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-tfhkk/crc-debug-pcn5q"] Jan 31 05:05:29 crc kubenswrapper[4827]: I0131 05:05:29.952078 4827 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dkf75\" (UniqueName: \"kubernetes.io/projected/32d39df1-5ebb-4468-b6ed-e107ecdf1542-kube-api-access-dkf75\") on node \"crc\" DevicePath \"\"" Jan 31 05:05:30 crc kubenswrapper[4827]: I0131 05:05:30.126633 4827 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="32d39df1-5ebb-4468-b6ed-e107ecdf1542" path="/var/lib/kubelet/pods/32d39df1-5ebb-4468-b6ed-e107ecdf1542/volumes" Jan 31 05:05:30 crc kubenswrapper[4827]: I0131 05:05:30.701738 4827 scope.go:117] "RemoveContainer" containerID="f2842b823857e429033bd1309bfcde7d561340ec8c746e2c5a9baec5b75ca55a" Jan 31 05:05:30 crc kubenswrapper[4827]: I0131 05:05:30.701848 4827 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-tfhkk/crc-debug-pcn5q" Jan 31 05:05:31 crc kubenswrapper[4827]: I0131 05:05:31.046423 4827 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-tfhkk/crc-debug-7z2j8"] Jan 31 05:05:31 crc kubenswrapper[4827]: E0131 05:05:31.046910 4827 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="32d39df1-5ebb-4468-b6ed-e107ecdf1542" containerName="container-00" Jan 31 05:05:31 crc kubenswrapper[4827]: I0131 05:05:31.046931 4827 state_mem.go:107] "Deleted CPUSet assignment" podUID="32d39df1-5ebb-4468-b6ed-e107ecdf1542" containerName="container-00" Jan 31 05:05:31 crc kubenswrapper[4827]: I0131 05:05:31.047205 4827 memory_manager.go:354] "RemoveStaleState removing state" podUID="32d39df1-5ebb-4468-b6ed-e107ecdf1542" containerName="container-00" Jan 31 05:05:31 crc kubenswrapper[4827]: I0131 05:05:31.047983 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-tfhkk/crc-debug-7z2j8" Jan 31 05:05:31 crc kubenswrapper[4827]: I0131 05:05:31.075624 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/5c55e23b-2feb-45f5-866b-aa0622b29a8d-host\") pod \"crc-debug-7z2j8\" (UID: \"5c55e23b-2feb-45f5-866b-aa0622b29a8d\") " pod="openshift-must-gather-tfhkk/crc-debug-7z2j8" Jan 31 05:05:31 crc kubenswrapper[4827]: I0131 05:05:31.075803 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-phc7v\" (UniqueName: \"kubernetes.io/projected/5c55e23b-2feb-45f5-866b-aa0622b29a8d-kube-api-access-phc7v\") pod \"crc-debug-7z2j8\" (UID: \"5c55e23b-2feb-45f5-866b-aa0622b29a8d\") " pod="openshift-must-gather-tfhkk/crc-debug-7z2j8" Jan 31 05:05:31 crc kubenswrapper[4827]: I0131 05:05:31.177694 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-phc7v\" (UniqueName: 
\"kubernetes.io/projected/5c55e23b-2feb-45f5-866b-aa0622b29a8d-kube-api-access-phc7v\") pod \"crc-debug-7z2j8\" (UID: \"5c55e23b-2feb-45f5-866b-aa0622b29a8d\") " pod="openshift-must-gather-tfhkk/crc-debug-7z2j8" Jan 31 05:05:31 crc kubenswrapper[4827]: I0131 05:05:31.177794 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/5c55e23b-2feb-45f5-866b-aa0622b29a8d-host\") pod \"crc-debug-7z2j8\" (UID: \"5c55e23b-2feb-45f5-866b-aa0622b29a8d\") " pod="openshift-must-gather-tfhkk/crc-debug-7z2j8" Jan 31 05:05:31 crc kubenswrapper[4827]: I0131 05:05:31.177952 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/5c55e23b-2feb-45f5-866b-aa0622b29a8d-host\") pod \"crc-debug-7z2j8\" (UID: \"5c55e23b-2feb-45f5-866b-aa0622b29a8d\") " pod="openshift-must-gather-tfhkk/crc-debug-7z2j8" Jan 31 05:05:31 crc kubenswrapper[4827]: I0131 05:05:31.195306 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-phc7v\" (UniqueName: \"kubernetes.io/projected/5c55e23b-2feb-45f5-866b-aa0622b29a8d-kube-api-access-phc7v\") pod \"crc-debug-7z2j8\" (UID: \"5c55e23b-2feb-45f5-866b-aa0622b29a8d\") " pod="openshift-must-gather-tfhkk/crc-debug-7z2j8" Jan 31 05:05:31 crc kubenswrapper[4827]: I0131 05:05:31.362866 4827 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-tfhkk/crc-debug-7z2j8" Jan 31 05:05:31 crc kubenswrapper[4827]: I0131 05:05:31.709253 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-tfhkk/crc-debug-7z2j8" event={"ID":"5c55e23b-2feb-45f5-866b-aa0622b29a8d","Type":"ContainerStarted","Data":"cc46c1d253085dfb697e7628a56a510cc53bc2059e29092a5521a8b085b3f416"} Jan 31 05:05:31 crc kubenswrapper[4827]: I0131 05:05:31.709600 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-tfhkk/crc-debug-7z2j8" event={"ID":"5c55e23b-2feb-45f5-866b-aa0622b29a8d","Type":"ContainerStarted","Data":"e57927dee55ab300fcec67f5a0f421fa7d6ac89c5f79b2c929b5ef2166265537"} Jan 31 05:05:31 crc kubenswrapper[4827]: I0131 05:05:31.728521 4827 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-tfhkk/crc-debug-7z2j8" podStartSLOduration=0.728501871 podStartE2EDuration="728.501871ms" podCreationTimestamp="2026-01-31 05:05:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 05:05:31.725346163 +0000 UTC m=+4724.412426652" watchObservedRunningTime="2026-01-31 05:05:31.728501871 +0000 UTC m=+4724.415582320" Jan 31 05:05:32 crc kubenswrapper[4827]: I0131 05:05:32.723085 4827 generic.go:334] "Generic (PLEG): container finished" podID="5c55e23b-2feb-45f5-866b-aa0622b29a8d" containerID="cc46c1d253085dfb697e7628a56a510cc53bc2059e29092a5521a8b085b3f416" exitCode=0 Jan 31 05:05:32 crc kubenswrapper[4827]: I0131 05:05:32.723147 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-tfhkk/crc-debug-7z2j8" event={"ID":"5c55e23b-2feb-45f5-866b-aa0622b29a8d","Type":"ContainerDied","Data":"cc46c1d253085dfb697e7628a56a510cc53bc2059e29092a5521a8b085b3f416"} Jan 31 05:05:33 crc kubenswrapper[4827]: I0131 05:05:33.849416 4827 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-tfhkk/crc-debug-7z2j8" Jan 31 05:05:33 crc kubenswrapper[4827]: I0131 05:05:33.922080 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/5c55e23b-2feb-45f5-866b-aa0622b29a8d-host\") pod \"5c55e23b-2feb-45f5-866b-aa0622b29a8d\" (UID: \"5c55e23b-2feb-45f5-866b-aa0622b29a8d\") " Jan 31 05:05:33 crc kubenswrapper[4827]: I0131 05:05:33.922168 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-phc7v\" (UniqueName: \"kubernetes.io/projected/5c55e23b-2feb-45f5-866b-aa0622b29a8d-kube-api-access-phc7v\") pod \"5c55e23b-2feb-45f5-866b-aa0622b29a8d\" (UID: \"5c55e23b-2feb-45f5-866b-aa0622b29a8d\") " Jan 31 05:05:33 crc kubenswrapper[4827]: I0131 05:05:33.922382 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5c55e23b-2feb-45f5-866b-aa0622b29a8d-host" (OuterVolumeSpecName: "host") pod "5c55e23b-2feb-45f5-866b-aa0622b29a8d" (UID: "5c55e23b-2feb-45f5-866b-aa0622b29a8d"). InnerVolumeSpecName "host". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 31 05:05:33 crc kubenswrapper[4827]: I0131 05:05:33.922873 4827 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/5c55e23b-2feb-45f5-866b-aa0622b29a8d-host\") on node \"crc\" DevicePath \"\"" Jan 31 05:05:33 crc kubenswrapper[4827]: I0131 05:05:33.928111 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5c55e23b-2feb-45f5-866b-aa0622b29a8d-kube-api-access-phc7v" (OuterVolumeSpecName: "kube-api-access-phc7v") pod "5c55e23b-2feb-45f5-866b-aa0622b29a8d" (UID: "5c55e23b-2feb-45f5-866b-aa0622b29a8d"). InnerVolumeSpecName "kube-api-access-phc7v". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 05:05:34 crc kubenswrapper[4827]: I0131 05:05:34.024554 4827 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-phc7v\" (UniqueName: \"kubernetes.io/projected/5c55e23b-2feb-45f5-866b-aa0622b29a8d-kube-api-access-phc7v\") on node \"crc\" DevicePath \"\"" Jan 31 05:05:34 crc kubenswrapper[4827]: I0131 05:05:34.057767 4827 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-tfhkk/crc-debug-7z2j8"] Jan 31 05:05:34 crc kubenswrapper[4827]: I0131 05:05:34.068226 4827 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-tfhkk/crc-debug-7z2j8"] Jan 31 05:05:34 crc kubenswrapper[4827]: I0131 05:05:34.120961 4827 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5c55e23b-2feb-45f5-866b-aa0622b29a8d" path="/var/lib/kubelet/pods/5c55e23b-2feb-45f5-866b-aa0622b29a8d/volumes" Jan 31 05:05:34 crc kubenswrapper[4827]: I0131 05:05:34.745599 4827 scope.go:117] "RemoveContainer" containerID="cc46c1d253085dfb697e7628a56a510cc53bc2059e29092a5521a8b085b3f416" Jan 31 05:05:34 crc kubenswrapper[4827]: I0131 05:05:34.745652 4827 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-tfhkk/crc-debug-7z2j8" Jan 31 05:05:35 crc kubenswrapper[4827]: I0131 05:05:35.258733 4827 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-tfhkk/crc-debug-cp8bt"] Jan 31 05:05:35 crc kubenswrapper[4827]: E0131 05:05:35.259301 4827 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5c55e23b-2feb-45f5-866b-aa0622b29a8d" containerName="container-00" Jan 31 05:05:35 crc kubenswrapper[4827]: I0131 05:05:35.259320 4827 state_mem.go:107] "Deleted CPUSet assignment" podUID="5c55e23b-2feb-45f5-866b-aa0622b29a8d" containerName="container-00" Jan 31 05:05:35 crc kubenswrapper[4827]: I0131 05:05:35.259666 4827 memory_manager.go:354] "RemoveStaleState removing state" podUID="5c55e23b-2feb-45f5-866b-aa0622b29a8d" containerName="container-00" Jan 31 05:05:35 crc kubenswrapper[4827]: I0131 05:05:35.260683 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-tfhkk/crc-debug-cp8bt" Jan 31 05:05:35 crc kubenswrapper[4827]: I0131 05:05:35.353027 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/0431cc8e-675c-4cb5-8e39-27e7ed72e614-host\") pod \"crc-debug-cp8bt\" (UID: \"0431cc8e-675c-4cb5-8e39-27e7ed72e614\") " pod="openshift-must-gather-tfhkk/crc-debug-cp8bt" Jan 31 05:05:35 crc kubenswrapper[4827]: I0131 05:05:35.353725 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4v8k7\" (UniqueName: \"kubernetes.io/projected/0431cc8e-675c-4cb5-8e39-27e7ed72e614-kube-api-access-4v8k7\") pod \"crc-debug-cp8bt\" (UID: \"0431cc8e-675c-4cb5-8e39-27e7ed72e614\") " pod="openshift-must-gather-tfhkk/crc-debug-cp8bt" Jan 31 05:05:35 crc kubenswrapper[4827]: I0131 05:05:35.456784 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4v8k7\" (UniqueName: 
\"kubernetes.io/projected/0431cc8e-675c-4cb5-8e39-27e7ed72e614-kube-api-access-4v8k7\") pod \"crc-debug-cp8bt\" (UID: \"0431cc8e-675c-4cb5-8e39-27e7ed72e614\") " pod="openshift-must-gather-tfhkk/crc-debug-cp8bt" Jan 31 05:05:35 crc kubenswrapper[4827]: I0131 05:05:35.456968 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/0431cc8e-675c-4cb5-8e39-27e7ed72e614-host\") pod \"crc-debug-cp8bt\" (UID: \"0431cc8e-675c-4cb5-8e39-27e7ed72e614\") " pod="openshift-must-gather-tfhkk/crc-debug-cp8bt" Jan 31 05:05:35 crc kubenswrapper[4827]: I0131 05:05:35.457186 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/0431cc8e-675c-4cb5-8e39-27e7ed72e614-host\") pod \"crc-debug-cp8bt\" (UID: \"0431cc8e-675c-4cb5-8e39-27e7ed72e614\") " pod="openshift-must-gather-tfhkk/crc-debug-cp8bt" Jan 31 05:05:35 crc kubenswrapper[4827]: I0131 05:05:35.486634 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4v8k7\" (UniqueName: \"kubernetes.io/projected/0431cc8e-675c-4cb5-8e39-27e7ed72e614-kube-api-access-4v8k7\") pod \"crc-debug-cp8bt\" (UID: \"0431cc8e-675c-4cb5-8e39-27e7ed72e614\") " pod="openshift-must-gather-tfhkk/crc-debug-cp8bt" Jan 31 05:05:35 crc kubenswrapper[4827]: I0131 05:05:35.589055 4827 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-tfhkk/crc-debug-cp8bt" Jan 31 05:05:36 crc kubenswrapper[4827]: I0131 05:05:36.767312 4827 generic.go:334] "Generic (PLEG): container finished" podID="0431cc8e-675c-4cb5-8e39-27e7ed72e614" containerID="0f8376bb689d099caec085a2ddcebb0d72fb677dffae5fa5fe177579e9d547f8" exitCode=0 Jan 31 05:05:36 crc kubenswrapper[4827]: I0131 05:05:36.767425 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-tfhkk/crc-debug-cp8bt" event={"ID":"0431cc8e-675c-4cb5-8e39-27e7ed72e614","Type":"ContainerDied","Data":"0f8376bb689d099caec085a2ddcebb0d72fb677dffae5fa5fe177579e9d547f8"} Jan 31 05:05:36 crc kubenswrapper[4827]: I0131 05:05:36.767581 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-tfhkk/crc-debug-cp8bt" event={"ID":"0431cc8e-675c-4cb5-8e39-27e7ed72e614","Type":"ContainerStarted","Data":"3a5a68c113808593db55b84c5a8d78b0531c403a416a10ec994b021452170b0b"} Jan 31 05:05:36 crc kubenswrapper[4827]: I0131 05:05:36.864152 4827 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-tfhkk/crc-debug-cp8bt"] Jan 31 05:05:36 crc kubenswrapper[4827]: I0131 05:05:36.883609 4827 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-tfhkk/crc-debug-cp8bt"] Jan 31 05:05:37 crc kubenswrapper[4827]: I0131 05:05:37.875994 4827 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-tfhkk/crc-debug-cp8bt" Jan 31 05:05:37 crc kubenswrapper[4827]: I0131 05:05:37.908636 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/0431cc8e-675c-4cb5-8e39-27e7ed72e614-host\") pod \"0431cc8e-675c-4cb5-8e39-27e7ed72e614\" (UID: \"0431cc8e-675c-4cb5-8e39-27e7ed72e614\") " Jan 31 05:05:37 crc kubenswrapper[4827]: I0131 05:05:37.908791 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0431cc8e-675c-4cb5-8e39-27e7ed72e614-host" (OuterVolumeSpecName: "host") pod "0431cc8e-675c-4cb5-8e39-27e7ed72e614" (UID: "0431cc8e-675c-4cb5-8e39-27e7ed72e614"). InnerVolumeSpecName "host". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 31 05:05:37 crc kubenswrapper[4827]: I0131 05:05:37.908872 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4v8k7\" (UniqueName: \"kubernetes.io/projected/0431cc8e-675c-4cb5-8e39-27e7ed72e614-kube-api-access-4v8k7\") pod \"0431cc8e-675c-4cb5-8e39-27e7ed72e614\" (UID: \"0431cc8e-675c-4cb5-8e39-27e7ed72e614\") " Jan 31 05:05:37 crc kubenswrapper[4827]: I0131 05:05:37.909699 4827 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/0431cc8e-675c-4cb5-8e39-27e7ed72e614-host\") on node \"crc\" DevicePath \"\"" Jan 31 05:05:37 crc kubenswrapper[4827]: I0131 05:05:37.956758 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0431cc8e-675c-4cb5-8e39-27e7ed72e614-kube-api-access-4v8k7" (OuterVolumeSpecName: "kube-api-access-4v8k7") pod "0431cc8e-675c-4cb5-8e39-27e7ed72e614" (UID: "0431cc8e-675c-4cb5-8e39-27e7ed72e614"). InnerVolumeSpecName "kube-api-access-4v8k7". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 05:05:38 crc kubenswrapper[4827]: I0131 05:05:38.011460 4827 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4v8k7\" (UniqueName: \"kubernetes.io/projected/0431cc8e-675c-4cb5-8e39-27e7ed72e614-kube-api-access-4v8k7\") on node \"crc\" DevicePath \"\"" Jan 31 05:05:38 crc kubenswrapper[4827]: I0131 05:05:38.125452 4827 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0431cc8e-675c-4cb5-8e39-27e7ed72e614" path="/var/lib/kubelet/pods/0431cc8e-675c-4cb5-8e39-27e7ed72e614/volumes" Jan 31 05:05:38 crc kubenswrapper[4827]: I0131 05:05:38.786072 4827 scope.go:117] "RemoveContainer" containerID="0f8376bb689d099caec085a2ddcebb0d72fb677dffae5fa5fe177579e9d547f8" Jan 31 05:05:38 crc kubenswrapper[4827]: I0131 05:05:38.786201 4827 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-tfhkk/crc-debug-cp8bt" Jan 31 05:05:52 crc kubenswrapper[4827]: I0131 05:05:52.354475 4827 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-bbzj8"] Jan 31 05:05:52 crc kubenswrapper[4827]: E0131 05:05:52.357612 4827 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0431cc8e-675c-4cb5-8e39-27e7ed72e614" containerName="container-00" Jan 31 05:05:52 crc kubenswrapper[4827]: I0131 05:05:52.357659 4827 state_mem.go:107] "Deleted CPUSet assignment" podUID="0431cc8e-675c-4cb5-8e39-27e7ed72e614" containerName="container-00" Jan 31 05:05:52 crc kubenswrapper[4827]: I0131 05:05:52.358012 4827 memory_manager.go:354] "RemoveStaleState removing state" podUID="0431cc8e-675c-4cb5-8e39-27e7ed72e614" containerName="container-00" Jan 31 05:05:52 crc kubenswrapper[4827]: I0131 05:05:52.360336 4827 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-bbzj8" Jan 31 05:05:52 crc kubenswrapper[4827]: I0131 05:05:52.373067 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-bbzj8"] Jan 31 05:05:52 crc kubenswrapper[4827]: I0131 05:05:52.456348 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/853f00cc-27e7-4c7f-9582-19e5e1783126-catalog-content\") pod \"redhat-operators-bbzj8\" (UID: \"853f00cc-27e7-4c7f-9582-19e5e1783126\") " pod="openshift-marketplace/redhat-operators-bbzj8" Jan 31 05:05:52 crc kubenswrapper[4827]: I0131 05:05:52.456479 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/853f00cc-27e7-4c7f-9582-19e5e1783126-utilities\") pod \"redhat-operators-bbzj8\" (UID: \"853f00cc-27e7-4c7f-9582-19e5e1783126\") " pod="openshift-marketplace/redhat-operators-bbzj8" Jan 31 05:05:52 crc kubenswrapper[4827]: I0131 05:05:52.456521 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l8m6n\" (UniqueName: \"kubernetes.io/projected/853f00cc-27e7-4c7f-9582-19e5e1783126-kube-api-access-l8m6n\") pod \"redhat-operators-bbzj8\" (UID: \"853f00cc-27e7-4c7f-9582-19e5e1783126\") " pod="openshift-marketplace/redhat-operators-bbzj8" Jan 31 05:05:52 crc kubenswrapper[4827]: I0131 05:05:52.558086 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/853f00cc-27e7-4c7f-9582-19e5e1783126-utilities\") pod \"redhat-operators-bbzj8\" (UID: \"853f00cc-27e7-4c7f-9582-19e5e1783126\") " pod="openshift-marketplace/redhat-operators-bbzj8" Jan 31 05:05:52 crc kubenswrapper[4827]: I0131 05:05:52.558168 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-l8m6n\" (UniqueName: \"kubernetes.io/projected/853f00cc-27e7-4c7f-9582-19e5e1783126-kube-api-access-l8m6n\") pod \"redhat-operators-bbzj8\" (UID: \"853f00cc-27e7-4c7f-9582-19e5e1783126\") " pod="openshift-marketplace/redhat-operators-bbzj8" Jan 31 05:05:52 crc kubenswrapper[4827]: I0131 05:05:52.558272 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/853f00cc-27e7-4c7f-9582-19e5e1783126-catalog-content\") pod \"redhat-operators-bbzj8\" (UID: \"853f00cc-27e7-4c7f-9582-19e5e1783126\") " pod="openshift-marketplace/redhat-operators-bbzj8" Jan 31 05:05:52 crc kubenswrapper[4827]: I0131 05:05:52.558656 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/853f00cc-27e7-4c7f-9582-19e5e1783126-utilities\") pod \"redhat-operators-bbzj8\" (UID: \"853f00cc-27e7-4c7f-9582-19e5e1783126\") " pod="openshift-marketplace/redhat-operators-bbzj8" Jan 31 05:05:52 crc kubenswrapper[4827]: I0131 05:05:52.558714 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/853f00cc-27e7-4c7f-9582-19e5e1783126-catalog-content\") pod \"redhat-operators-bbzj8\" (UID: \"853f00cc-27e7-4c7f-9582-19e5e1783126\") " pod="openshift-marketplace/redhat-operators-bbzj8" Jan 31 05:05:52 crc kubenswrapper[4827]: I0131 05:05:52.585193 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l8m6n\" (UniqueName: \"kubernetes.io/projected/853f00cc-27e7-4c7f-9582-19e5e1783126-kube-api-access-l8m6n\") pod \"redhat-operators-bbzj8\" (UID: \"853f00cc-27e7-4c7f-9582-19e5e1783126\") " pod="openshift-marketplace/redhat-operators-bbzj8" Jan 31 05:05:52 crc kubenswrapper[4827]: I0131 05:05:52.680852 4827 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-bbzj8" Jan 31 05:05:53 crc kubenswrapper[4827]: I0131 05:05:53.224847 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-bbzj8"] Jan 31 05:05:53 crc kubenswrapper[4827]: I0131 05:05:53.959913 4827 generic.go:334] "Generic (PLEG): container finished" podID="853f00cc-27e7-4c7f-9582-19e5e1783126" containerID="16e6c713939356455dc4d447f3d08b1d2c56ddab9087c3ef7d22aac32af950fc" exitCode=0 Jan 31 05:05:53 crc kubenswrapper[4827]: I0131 05:05:53.959980 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-bbzj8" event={"ID":"853f00cc-27e7-4c7f-9582-19e5e1783126","Type":"ContainerDied","Data":"16e6c713939356455dc4d447f3d08b1d2c56ddab9087c3ef7d22aac32af950fc"} Jan 31 05:05:53 crc kubenswrapper[4827]: I0131 05:05:53.960173 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-bbzj8" event={"ID":"853f00cc-27e7-4c7f-9582-19e5e1783126","Type":"ContainerStarted","Data":"b71ccdb02e3dd0e41cad8b8bdeb6699b262db54f8ed11b143f63e3615f3a707c"} Jan 31 05:05:54 crc kubenswrapper[4827]: I0131 05:05:54.969939 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-bbzj8" event={"ID":"853f00cc-27e7-4c7f-9582-19e5e1783126","Type":"ContainerStarted","Data":"547a1dee87ebaea87e698477a52b5b33dc0b5834829063db7628676b6b45f5e7"} Jan 31 05:06:00 crc kubenswrapper[4827]: I0131 05:06:00.019983 4827 generic.go:334] "Generic (PLEG): container finished" podID="853f00cc-27e7-4c7f-9582-19e5e1783126" containerID="547a1dee87ebaea87e698477a52b5b33dc0b5834829063db7628676b6b45f5e7" exitCode=0 Jan 31 05:06:00 crc kubenswrapper[4827]: I0131 05:06:00.020070 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-bbzj8" 
event={"ID":"853f00cc-27e7-4c7f-9582-19e5e1783126","Type":"ContainerDied","Data":"547a1dee87ebaea87e698477a52b5b33dc0b5834829063db7628676b6b45f5e7"} Jan 31 05:06:01 crc kubenswrapper[4827]: I0131 05:06:01.033551 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-bbzj8" event={"ID":"853f00cc-27e7-4c7f-9582-19e5e1783126","Type":"ContainerStarted","Data":"15545e790592a28956be1fab9ce70c4aea57294b159f7217e3e3886958f354e8"} Jan 31 05:06:01 crc kubenswrapper[4827]: I0131 05:06:01.057038 4827 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-bbzj8" podStartSLOduration=2.470794195 podStartE2EDuration="9.057022401s" podCreationTimestamp="2026-01-31 05:05:52 +0000 UTC" firstStartedPulling="2026-01-31 05:05:53.961874344 +0000 UTC m=+4746.648954793" lastFinishedPulling="2026-01-31 05:06:00.54810255 +0000 UTC m=+4753.235182999" observedRunningTime="2026-01-31 05:06:01.053096699 +0000 UTC m=+4753.740177148" watchObservedRunningTime="2026-01-31 05:06:01.057022401 +0000 UTC m=+4753.744102850" Jan 31 05:06:02 crc kubenswrapper[4827]: I0131 05:06:02.680947 4827 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-bbzj8" Jan 31 05:06:02 crc kubenswrapper[4827]: I0131 05:06:02.681257 4827 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-bbzj8" Jan 31 05:06:03 crc kubenswrapper[4827]: I0131 05:06:03.728976 4827 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-bbzj8" podUID="853f00cc-27e7-4c7f-9582-19e5e1783126" containerName="registry-server" probeResult="failure" output=< Jan 31 05:06:03 crc kubenswrapper[4827]: timeout: failed to connect service ":50051" within 1s Jan 31 05:06:03 crc kubenswrapper[4827]: > Jan 31 05:06:12 crc kubenswrapper[4827]: I0131 05:06:12.727909 4827 kubelet.go:2542] "SyncLoop (probe)" 
probe="startup" status="started" pod="openshift-marketplace/redhat-operators-bbzj8" Jan 31 05:06:12 crc kubenswrapper[4827]: I0131 05:06:12.793522 4827 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-bbzj8" Jan 31 05:06:12 crc kubenswrapper[4827]: I0131 05:06:12.951439 4827 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-api-74bf46887d-nb5df_d5530571-a0ae-4835-809e-0dab61573e8c/barbican-api/0.log" Jan 31 05:06:12 crc kubenswrapper[4827]: I0131 05:06:12.963405 4827 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-bbzj8"] Jan 31 05:06:13 crc kubenswrapper[4827]: I0131 05:06:13.984997 4827 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-api-74bf46887d-nb5df_d5530571-a0ae-4835-809e-0dab61573e8c/barbican-api-log/0.log" Jan 31 05:06:13 crc kubenswrapper[4827]: I0131 05:06:13.985265 4827 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-keystone-listener-649fbbf9d6-vkg84_ac1aadec-fcb7-428e-9020-e424c393f018/barbican-keystone-listener/0.log" Jan 31 05:06:14 crc kubenswrapper[4827]: I0131 05:06:14.165049 4827 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-bbzj8" podUID="853f00cc-27e7-4c7f-9582-19e5e1783126" containerName="registry-server" containerID="cri-o://15545e790592a28956be1fab9ce70c4aea57294b159f7217e3e3886958f354e8" gracePeriod=2 Jan 31 05:06:14 crc kubenswrapper[4827]: I0131 05:06:14.179747 4827 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-keystone-listener-649fbbf9d6-vkg84_ac1aadec-fcb7-428e-9020-e424c393f018/barbican-keystone-listener-log/0.log" Jan 31 05:06:14 crc kubenswrapper[4827]: I0131 05:06:14.229717 4827 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_barbican-worker-7c79fbcb95-qrncz_e83ad5c9-edbb-4764-b932-52810f0f57ac/barbican-worker-log/0.log" Jan 31 05:06:14 crc kubenswrapper[4827]: I0131 05:06:14.246689 4827 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-worker-7c79fbcb95-qrncz_e83ad5c9-edbb-4764-b932-52810f0f57ac/barbican-worker/0.log" Jan 31 05:06:14 crc kubenswrapper[4827]: I0131 05:06:14.488364 4827 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_bootstrap-edpm-deployment-openstack-edpm-ipam-gr5pm_fde30814-9dd3-4c47-b7b2-cda3221d27e6/bootstrap-edpm-deployment-openstack-edpm-ipam/0.log" Jan 31 05:06:14 crc kubenswrapper[4827]: I0131 05:06:14.543779 4827 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_8c177172-a833-49fa-8448-419a0891c926/ceilometer-central-agent/0.log" Jan 31 05:06:14 crc kubenswrapper[4827]: I0131 05:06:14.727026 4827 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_8c177172-a833-49fa-8448-419a0891c926/ceilometer-notification-agent/0.log" Jan 31 05:06:14 crc kubenswrapper[4827]: I0131 05:06:14.772489 4827 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_8c177172-a833-49fa-8448-419a0891c926/sg-core/0.log" Jan 31 05:06:14 crc kubenswrapper[4827]: I0131 05:06:14.783339 4827 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-bbzj8" Jan 31 05:06:14 crc kubenswrapper[4827]: I0131 05:06:14.795074 4827 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_8c177172-a833-49fa-8448-419a0891c926/proxy-httpd/0.log" Jan 31 05:06:14 crc kubenswrapper[4827]: I0131 05:06:14.850982 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/853f00cc-27e7-4c7f-9582-19e5e1783126-catalog-content\") pod \"853f00cc-27e7-4c7f-9582-19e5e1783126\" (UID: \"853f00cc-27e7-4c7f-9582-19e5e1783126\") " Jan 31 05:06:14 crc kubenswrapper[4827]: I0131 05:06:14.851212 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-l8m6n\" (UniqueName: \"kubernetes.io/projected/853f00cc-27e7-4c7f-9582-19e5e1783126-kube-api-access-l8m6n\") pod \"853f00cc-27e7-4c7f-9582-19e5e1783126\" (UID: \"853f00cc-27e7-4c7f-9582-19e5e1783126\") " Jan 31 05:06:14 crc kubenswrapper[4827]: I0131 05:06:14.851300 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/853f00cc-27e7-4c7f-9582-19e5e1783126-utilities\") pod \"853f00cc-27e7-4c7f-9582-19e5e1783126\" (UID: \"853f00cc-27e7-4c7f-9582-19e5e1783126\") " Jan 31 05:06:14 crc kubenswrapper[4827]: I0131 05:06:14.852319 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/853f00cc-27e7-4c7f-9582-19e5e1783126-utilities" (OuterVolumeSpecName: "utilities") pod "853f00cc-27e7-4c7f-9582-19e5e1783126" (UID: "853f00cc-27e7-4c7f-9582-19e5e1783126"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 05:06:14 crc kubenswrapper[4827]: I0131 05:06:14.858092 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/853f00cc-27e7-4c7f-9582-19e5e1783126-kube-api-access-l8m6n" (OuterVolumeSpecName: "kube-api-access-l8m6n") pod "853f00cc-27e7-4c7f-9582-19e5e1783126" (UID: "853f00cc-27e7-4c7f-9582-19e5e1783126"). InnerVolumeSpecName "kube-api-access-l8m6n". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 05:06:14 crc kubenswrapper[4827]: I0131 05:06:14.954075 4827 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-l8m6n\" (UniqueName: \"kubernetes.io/projected/853f00cc-27e7-4c7f-9582-19e5e1783126-kube-api-access-l8m6n\") on node \"crc\" DevicePath \"\"" Jan 31 05:06:14 crc kubenswrapper[4827]: I0131 05:06:14.954111 4827 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/853f00cc-27e7-4c7f-9582-19e5e1783126-utilities\") on node \"crc\" DevicePath \"\"" Jan 31 05:06:14 crc kubenswrapper[4827]: I0131 05:06:14.980459 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/853f00cc-27e7-4c7f-9582-19e5e1783126-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "853f00cc-27e7-4c7f-9582-19e5e1783126" (UID: "853f00cc-27e7-4c7f-9582-19e5e1783126"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 05:06:14 crc kubenswrapper[4827]: I0131 05:06:14.987295 4827 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-vnjj9_d1104797-b1ab-4987-9d02-b19197f94eb5/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam/0.log" Jan 31 05:06:15 crc kubenswrapper[4827]: I0131 05:06:15.007917 4827 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceph-client-edpm-deployment-openstack-edpm-ipam-dr2nb_e8b7f56f-cdfd-483b-8759-e869bedfd461/ceph-client-edpm-deployment-openstack-edpm-ipam/0.log" Jan 31 05:06:15 crc kubenswrapper[4827]: I0131 05:06:15.055424 4827 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/853f00cc-27e7-4c7f-9582-19e5e1783126-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 31 05:06:15 crc kubenswrapper[4827]: I0131 05:06:15.175188 4827 generic.go:334] "Generic (PLEG): container finished" podID="853f00cc-27e7-4c7f-9582-19e5e1783126" containerID="15545e790592a28956be1fab9ce70c4aea57294b159f7217e3e3886958f354e8" exitCode=0 Jan 31 05:06:15 crc kubenswrapper[4827]: I0131 05:06:15.175227 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-bbzj8" event={"ID":"853f00cc-27e7-4c7f-9582-19e5e1783126","Type":"ContainerDied","Data":"15545e790592a28956be1fab9ce70c4aea57294b159f7217e3e3886958f354e8"} Jan 31 05:06:15 crc kubenswrapper[4827]: I0131 05:06:15.175251 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-bbzj8" event={"ID":"853f00cc-27e7-4c7f-9582-19e5e1783126","Type":"ContainerDied","Data":"b71ccdb02e3dd0e41cad8b8bdeb6699b262db54f8ed11b143f63e3615f3a707c"} Jan 31 05:06:15 crc kubenswrapper[4827]: I0131 05:06:15.175266 4827 scope.go:117] "RemoveContainer" containerID="15545e790592a28956be1fab9ce70c4aea57294b159f7217e3e3886958f354e8" Jan 31 05:06:15 crc 
kubenswrapper[4827]: I0131 05:06:15.175299 4827 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-bbzj8" Jan 31 05:06:15 crc kubenswrapper[4827]: I0131 05:06:15.203767 4827 scope.go:117] "RemoveContainer" containerID="547a1dee87ebaea87e698477a52b5b33dc0b5834829063db7628676b6b45f5e7" Jan 31 05:06:15 crc kubenswrapper[4827]: I0131 05:06:15.227245 4827 scope.go:117] "RemoveContainer" containerID="16e6c713939356455dc4d447f3d08b1d2c56ddab9087c3ef7d22aac32af950fc" Jan 31 05:06:15 crc kubenswrapper[4827]: I0131 05:06:15.234192 4827 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-bbzj8"] Jan 31 05:06:15 crc kubenswrapper[4827]: I0131 05:06:15.253642 4827 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-bbzj8"] Jan 31 05:06:15 crc kubenswrapper[4827]: I0131 05:06:15.282553 4827 scope.go:117] "RemoveContainer" containerID="15545e790592a28956be1fab9ce70c4aea57294b159f7217e3e3886958f354e8" Jan 31 05:06:15 crc kubenswrapper[4827]: E0131 05:06:15.286173 4827 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"15545e790592a28956be1fab9ce70c4aea57294b159f7217e3e3886958f354e8\": container with ID starting with 15545e790592a28956be1fab9ce70c4aea57294b159f7217e3e3886958f354e8 not found: ID does not exist" containerID="15545e790592a28956be1fab9ce70c4aea57294b159f7217e3e3886958f354e8" Jan 31 05:06:15 crc kubenswrapper[4827]: I0131 05:06:15.286208 4827 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"15545e790592a28956be1fab9ce70c4aea57294b159f7217e3e3886958f354e8"} err="failed to get container status \"15545e790592a28956be1fab9ce70c4aea57294b159f7217e3e3886958f354e8\": rpc error: code = NotFound desc = could not find container \"15545e790592a28956be1fab9ce70c4aea57294b159f7217e3e3886958f354e8\": container with 
ID starting with 15545e790592a28956be1fab9ce70c4aea57294b159f7217e3e3886958f354e8 not found: ID does not exist" Jan 31 05:06:15 crc kubenswrapper[4827]: I0131 05:06:15.286227 4827 scope.go:117] "RemoveContainer" containerID="547a1dee87ebaea87e698477a52b5b33dc0b5834829063db7628676b6b45f5e7" Jan 31 05:06:15 crc kubenswrapper[4827]: E0131 05:06:15.296625 4827 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"547a1dee87ebaea87e698477a52b5b33dc0b5834829063db7628676b6b45f5e7\": container with ID starting with 547a1dee87ebaea87e698477a52b5b33dc0b5834829063db7628676b6b45f5e7 not found: ID does not exist" containerID="547a1dee87ebaea87e698477a52b5b33dc0b5834829063db7628676b6b45f5e7" Jan 31 05:06:15 crc kubenswrapper[4827]: I0131 05:06:15.296671 4827 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"547a1dee87ebaea87e698477a52b5b33dc0b5834829063db7628676b6b45f5e7"} err="failed to get container status \"547a1dee87ebaea87e698477a52b5b33dc0b5834829063db7628676b6b45f5e7\": rpc error: code = NotFound desc = could not find container \"547a1dee87ebaea87e698477a52b5b33dc0b5834829063db7628676b6b45f5e7\": container with ID starting with 547a1dee87ebaea87e698477a52b5b33dc0b5834829063db7628676b6b45f5e7 not found: ID does not exist" Jan 31 05:06:15 crc kubenswrapper[4827]: I0131 05:06:15.296706 4827 scope.go:117] "RemoveContainer" containerID="16e6c713939356455dc4d447f3d08b1d2c56ddab9087c3ef7d22aac32af950fc" Jan 31 05:06:15 crc kubenswrapper[4827]: E0131 05:06:15.300447 4827 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"16e6c713939356455dc4d447f3d08b1d2c56ddab9087c3ef7d22aac32af950fc\": container with ID starting with 16e6c713939356455dc4d447f3d08b1d2c56ddab9087c3ef7d22aac32af950fc not found: ID does not exist" containerID="16e6c713939356455dc4d447f3d08b1d2c56ddab9087c3ef7d22aac32af950fc" Jan 31 
05:06:15 crc kubenswrapper[4827]: I0131 05:06:15.300622 4827 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"16e6c713939356455dc4d447f3d08b1d2c56ddab9087c3ef7d22aac32af950fc"} err="failed to get container status \"16e6c713939356455dc4d447f3d08b1d2c56ddab9087c3ef7d22aac32af950fc\": rpc error: code = NotFound desc = could not find container \"16e6c713939356455dc4d447f3d08b1d2c56ddab9087c3ef7d22aac32af950fc\": container with ID starting with 16e6c713939356455dc4d447f3d08b1d2c56ddab9087c3ef7d22aac32af950fc not found: ID does not exist" Jan 31 05:06:15 crc kubenswrapper[4827]: I0131 05:06:15.606093 4827 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-backup-0_ceec568e-c3e2-4b44-b2f9-b90d9334667f/probe/0.log" Jan 31 05:06:15 crc kubenswrapper[4827]: I0131 05:06:15.907183 4827 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-api-0_5dc884e0-8eda-432c-a19f-2f1f4202ed2f/cinder-api-log/0.log" Jan 31 05:06:15 crc kubenswrapper[4827]: I0131 05:06:15.931736 4827 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-api-0_5dc884e0-8eda-432c-a19f-2f1f4202ed2f/cinder-api/0.log" Jan 31 05:06:16 crc kubenswrapper[4827]: I0131 05:06:16.124459 4827 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="853f00cc-27e7-4c7f-9582-19e5e1783126" path="/var/lib/kubelet/pods/853f00cc-27e7-4c7f-9582-19e5e1783126/volumes" Jan 31 05:06:16 crc kubenswrapper[4827]: I0131 05:06:16.145447 4827 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-scheduler-0_d3a82636-a800-49e2-b3f7-f253d069722c/cinder-scheduler/0.log" Jan 31 05:06:16 crc kubenswrapper[4827]: I0131 05:06:16.198400 4827 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-scheduler-0_d3a82636-a800-49e2-b3f7-f253d069722c/probe/0.log" Jan 31 05:06:16 crc kubenswrapper[4827]: I0131 05:06:16.365393 4827 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_cinder-volume-volume1-0_7c359669-c94b-42d4-9b63-de6d4812e598/probe/0.log" Jan 31 05:06:16 crc kubenswrapper[4827]: I0131 05:06:16.462074 4827 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_configure-network-edpm-deployment-openstack-edpm-ipam-dn499_746484a7-e256-43ec-8a25-6d4ef96aa9e0/configure-network-edpm-deployment-openstack-edpm-ipam/0.log" Jan 31 05:06:16 crc kubenswrapper[4827]: I0131 05:06:16.672217 4827 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-backup-0_ceec568e-c3e2-4b44-b2f9-b90d9334667f/cinder-backup/0.log" Jan 31 05:06:16 crc kubenswrapper[4827]: I0131 05:06:16.694200 4827 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_configure-os-edpm-deployment-openstack-edpm-ipam-lnmq8_5acc6b04-9b48-4dfa-9fbb-9f5520fe7cea/configure-os-edpm-deployment-openstack-edpm-ipam/0.log" Jan 31 05:06:16 crc kubenswrapper[4827]: I0131 05:06:16.878098 4827 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-76b5fdb995-b4n6f_8921f642-11e6-4efc-9441-9f3ee68ed074/init/0.log" Jan 31 05:06:17 crc kubenswrapper[4827]: I0131 05:06:17.077010 4827 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-76b5fdb995-b4n6f_8921f642-11e6-4efc-9441-9f3ee68ed074/init/0.log" Jan 31 05:06:17 crc kubenswrapper[4827]: I0131 05:06:17.116231 4827 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-external-api-0_c51b9a5d-c011-422c-8b29-39a9d4355659/glance-httpd/0.log" Jan 31 05:06:17 crc kubenswrapper[4827]: I0131 05:06:17.127970 4827 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-76b5fdb995-b4n6f_8921f642-11e6-4efc-9441-9f3ee68ed074/dnsmasq-dns/0.log" Jan 31 05:06:17 crc kubenswrapper[4827]: I0131 05:06:17.276085 4827 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-external-api-0_c51b9a5d-c011-422c-8b29-39a9d4355659/glance-log/0.log" Jan 31 
05:06:17 crc kubenswrapper[4827]: I0131 05:06:17.372008 4827 patch_prober.go:28] interesting pod/machine-config-daemon-jxh94 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 31 05:06:17 crc kubenswrapper[4827]: I0131 05:06:17.372085 4827 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-jxh94" podUID="e63dbb73-e1a2-4796-83c5-2a88e55566b5" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 31 05:06:17 crc kubenswrapper[4827]: I0131 05:06:17.431592 4827 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-internal-api-0_07de9274-858d-4f45-bb4e-f064e62260c8/glance-log/0.log" Jan 31 05:06:17 crc kubenswrapper[4827]: I0131 05:06:17.506960 4827 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-internal-api-0_07de9274-858d-4f45-bb4e-f064e62260c8/glance-httpd/0.log" Jan 31 05:06:17 crc kubenswrapper[4827]: I0131 05:06:17.699259 4827 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_horizon-5dd8c8746d-r25sr_19b64bcf-afad-4b01-8d57-1c1b56bb170f/horizon/0.log" Jan 31 05:06:17 crc kubenswrapper[4827]: I0131 05:06:17.798760 4827 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_install-certs-edpm-deployment-openstack-edpm-ipam-967vq_d8ff53f0-3c1e-4bdf-8c0b-e3aca738f080/install-certs-edpm-deployment-openstack-edpm-ipam/0.log" Jan 31 05:06:18 crc kubenswrapper[4827]: I0131 05:06:18.013333 4827 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_horizon-5dd8c8746d-r25sr_19b64bcf-afad-4b01-8d57-1c1b56bb170f/horizon-log/0.log" Jan 31 05:06:18 crc kubenswrapper[4827]: I0131 05:06:18.082636 4827 log.go:25] "Finished 
parsing log file" path="/var/log/pods/openstack_install-os-edpm-deployment-openstack-edpm-ipam-x9cnd_83978973-9bf3-4c9a-9689-d47fd0a7aac4/install-os-edpm-deployment-openstack-edpm-ipam/0.log" Jan 31 05:06:18 crc kubenswrapper[4827]: I0131 05:06:18.366505 4827 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_keystone-cron-29497261-5x5xw_40c636c4-b73c-42b5-87f0-dd2d138bf0c1/keystone-cron/0.log" Jan 31 05:06:18 crc kubenswrapper[4827]: I0131 05:06:18.497160 4827 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_kube-state-metrics-0_41b5f4d3-c8f6-44ee-8edf-dc49c5b9698a/kube-state-metrics/0.log" Jan 31 05:06:18 crc kubenswrapper[4827]: I0131 05:06:18.616101 4827 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_libvirt-edpm-deployment-openstack-edpm-ipam-hmq9l_7ae3b57a-e575-49ac-b40f-276d244a1855/libvirt-edpm-deployment-openstack-edpm-ipam/0.log" Jan 31 05:06:18 crc kubenswrapper[4827]: I0131 05:06:18.938821 4827 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_manila-api-0_1a40e96a-328e-449e-b6a8-39c6f6ed0aa2/manila-api-log/0.log" Jan 31 05:06:19 crc kubenswrapper[4827]: I0131 05:06:19.082755 4827 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_manila-api-0_1a40e96a-328e-449e-b6a8-39c6f6ed0aa2/manila-api/0.log" Jan 31 05:06:19 crc kubenswrapper[4827]: I0131 05:06:19.122455 4827 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_keystone-b7768667c-2kxv5_325f82ae-928b-44ea-bef3-e567002d4814/keystone-api/0.log" Jan 31 05:06:19 crc kubenswrapper[4827]: I0131 05:06:19.179289 4827 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_manila-scheduler-0_5cdfe2e4-b566-4369-9354-42494e23eb46/probe/0.log" Jan 31 05:06:19 crc kubenswrapper[4827]: I0131 05:06:19.249709 4827 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_manila-scheduler-0_5cdfe2e4-b566-4369-9354-42494e23eb46/manila-scheduler/0.log" Jan 31 05:06:19 crc 
kubenswrapper[4827]: I0131 05:06:19.370968 4827 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_manila-share-share1-0_79a2680f-4176-4fe5-9952-c6e74f2c57d6/manila-share/0.log" Jan 31 05:06:19 crc kubenswrapper[4827]: I0131 05:06:19.380590 4827 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_manila-share-share1-0_79a2680f-4176-4fe5-9952-c6e74f2c57d6/probe/0.log" Jan 31 05:06:19 crc kubenswrapper[4827]: I0131 05:06:19.870638 4827 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-7969d585-whgv9_aae071c1-75f9-40e2-aa1a-69aa0afba58d/neutron-httpd/0.log" Jan 31 05:06:20 crc kubenswrapper[4827]: I0131 05:06:20.005851 4827 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-7969d585-whgv9_aae071c1-75f9-40e2-aa1a-69aa0afba58d/neutron-api/0.log" Jan 31 05:06:20 crc kubenswrapper[4827]: I0131 05:06:20.098035 4827 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-metadata-edpm-deployment-openstack-edpm-ipam-ttdms_4aeddc0e-5ddf-42a0-8c89-e840171e5c7b/neutron-metadata-edpm-deployment-openstack-edpm-ipam/0.log" Jan 31 05:06:21 crc kubenswrapper[4827]: I0131 05:06:21.017682 4827 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell0-conductor-0_1af38ab0-fbfd-463c-8349-39b3ca0d7f9e/nova-cell0-conductor-conductor/0.log" Jan 31 05:06:21 crc kubenswrapper[4827]: I0131 05:06:21.045110 4827 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-api-0_461e6e42-8412-4ab9-aa9a-02b27965961d/nova-api-log/0.log" Jan 31 05:06:21 crc kubenswrapper[4827]: I0131 05:06:21.440234 4827 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-api-0_461e6e42-8412-4ab9-aa9a-02b27965961d/nova-api-api/0.log" Jan 31 05:06:21 crc kubenswrapper[4827]: I0131 05:06:21.665633 4827 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_nova-cell1-conductor-0_70377532-a0c3-4b3f-abda-b712b33df5e5/nova-cell1-conductor-conductor/0.log" Jan 31 05:06:21 crc kubenswrapper[4827]: I0131 05:06:21.963994 4827 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell1-novncproxy-0_61a51a6c-ddc6-4da3-8fcf-4ddb8e50fc14/nova-cell1-novncproxy-novncproxy/0.log" Jan 31 05:06:22 crc kubenswrapper[4827]: I0131 05:06:22.015274 4827 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-rb9w9_84a5bf6e-ec6d-457f-a76f-5566a8f4f2f8/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam/0.log" Jan 31 05:06:22 crc kubenswrapper[4827]: I0131 05:06:22.248613 4827 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-metadata-0_2bfcb9f2-5385-4257-8277-45f3c3af8582/nova-metadata-log/0.log" Jan 31 05:06:22 crc kubenswrapper[4827]: I0131 05:06:22.694213 4827 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-scheduler-0_a1d91b03-3afc-4a12-a489-d1b97ec8d5fe/nova-scheduler-scheduler/0.log" Jan 31 05:06:23 crc kubenswrapper[4827]: I0131 05:06:23.179008 4827 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_a0d3d60f-16f2-469d-8314-9055bb91a9ce/mysql-bootstrap/0.log" Jan 31 05:06:23 crc kubenswrapper[4827]: I0131 05:06:23.399110 4827 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_a0d3d60f-16f2-469d-8314-9055bb91a9ce/mysql-bootstrap/0.log" Jan 31 05:06:23 crc kubenswrapper[4827]: I0131 05:06:23.431511 4827 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_a0d3d60f-16f2-469d-8314-9055bb91a9ce/galera/0.log" Jan 31 05:06:23 crc kubenswrapper[4827]: I0131 05:06:23.684475 4827 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_f66333b7-3406-4a69-85f5-0806b992a625/mysql-bootstrap/0.log" Jan 31 05:06:23 crc 
kubenswrapper[4827]: I0131 05:06:23.935258 4827 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_f66333b7-3406-4a69-85f5-0806b992a625/mysql-bootstrap/0.log" Jan 31 05:06:23 crc kubenswrapper[4827]: I0131 05:06:23.946510 4827 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_f66333b7-3406-4a69-85f5-0806b992a625/galera/0.log" Jan 31 05:06:24 crc kubenswrapper[4827]: I0131 05:06:24.162710 4827 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstackclient_1e87305f-99c7-4ee4-9813-973bb0a259af/openstackclient/0.log" Jan 31 05:06:24 crc kubenswrapper[4827]: I0131 05:06:24.368808 4827 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-jrrb4_b0a1bcac-47e2-4089-ae1e-98a2dc41d270/ovn-controller/0.log" Jan 31 05:06:24 crc kubenswrapper[4827]: I0131 05:06:24.415656 4827 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-metadata-0_2bfcb9f2-5385-4257-8277-45f3c3af8582/nova-metadata-metadata/0.log" Jan 31 05:06:24 crc kubenswrapper[4827]: I0131 05:06:24.621766 4827 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-metrics-mkbw4_b818dd8b-a3fb-46fa-a8b2-784fb2d3169d/openstack-network-exporter/0.log" Jan 31 05:06:24 crc kubenswrapper[4827]: I0131 05:06:24.801004 4827 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-vhmf9_35c80c3a-29fe-4992-a421-f5ce7704ff53/ovsdb-server-init/0.log" Jan 31 05:06:25 crc kubenswrapper[4827]: I0131 05:06:25.053241 4827 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-vhmf9_35c80c3a-29fe-4992-a421-f5ce7704ff53/ovs-vswitchd/0.log" Jan 31 05:06:25 crc kubenswrapper[4827]: I0131 05:06:25.067220 4827 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-vhmf9_35c80c3a-29fe-4992-a421-f5ce7704ff53/ovsdb-server-init/0.log" Jan 31 05:06:25 crc kubenswrapper[4827]: 
I0131 05:06:25.092824 4827 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-vhmf9_35c80c3a-29fe-4992-a421-f5ce7704ff53/ovsdb-server/0.log" Jan 31 05:06:25 crc kubenswrapper[4827]: I0131 05:06:25.329789 4827 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-edpm-deployment-openstack-edpm-ipam-52fdv_d2764df8-6296-4363-9a0a-bad8253a8942/ovn-edpm-deployment-openstack-edpm-ipam/0.log" Jan 31 05:06:25 crc kubenswrapper[4827]: I0131 05:06:25.344119 4827 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-volume-volume1-0_7c359669-c94b-42d4-9b63-de6d4812e598/cinder-volume/0.log" Jan 31 05:06:25 crc kubenswrapper[4827]: I0131 05:06:25.435580 4827 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_e08df3ea-bbcb-4a8e-9de0-39b86fa6672d/openstack-network-exporter/0.log" Jan 31 05:06:25 crc kubenswrapper[4827]: I0131 05:06:25.513855 4827 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_e08df3ea-bbcb-4a8e-9de0-39b86fa6672d/ovn-northd/0.log" Jan 31 05:06:25 crc kubenswrapper[4827]: I0131 05:06:25.564017 4827 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_a5961815-808d-4f79-867c-763e2946d47f/openstack-network-exporter/0.log" Jan 31 05:06:25 crc kubenswrapper[4827]: I0131 05:06:25.685274 4827 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_a5961815-808d-4f79-867c-763e2946d47f/ovsdbserver-nb/0.log" Jan 31 05:06:25 crc kubenswrapper[4827]: I0131 05:06:25.770274 4827 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_1f6f952c-d09b-4584-b231-3fb87e5622fd/ovsdbserver-sb/0.log" Jan 31 05:06:25 crc kubenswrapper[4827]: I0131 05:06:25.799343 4827 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_1f6f952c-d09b-4584-b231-3fb87e5622fd/openstack-network-exporter/0.log" Jan 31 05:06:26 crc 
kubenswrapper[4827]: I0131 05:06:26.022406 4827 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_placement-77b487f776-cjb8n_c95d0b83-4630-47c3-ae7b-dae07d072e38/placement-api/0.log" Jan 31 05:06:26 crc kubenswrapper[4827]: I0131 05:06:26.182943 4827 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_placement-77b487f776-cjb8n_c95d0b83-4630-47c3-ae7b-dae07d072e38/placement-log/0.log" Jan 31 05:06:26 crc kubenswrapper[4827]: I0131 05:06:26.219438 4827 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_92323497-4fa1-43f6-98b0-08fa31c47d3a/setup-container/0.log" Jan 31 05:06:26 crc kubenswrapper[4827]: I0131 05:06:26.352525 4827 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_92323497-4fa1-43f6-98b0-08fa31c47d3a/setup-container/0.log" Jan 31 05:06:26 crc kubenswrapper[4827]: I0131 05:06:26.387220 4827 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_92323497-4fa1-43f6-98b0-08fa31c47d3a/rabbitmq/0.log" Jan 31 05:06:26 crc kubenswrapper[4827]: I0131 05:06:26.469237 4827 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_bd61984d-518c-44f2-8a18-8bda81bb6af3/setup-container/0.log" Jan 31 05:06:26 crc kubenswrapper[4827]: I0131 05:06:26.666377 4827 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_bd61984d-518c-44f2-8a18-8bda81bb6af3/setup-container/0.log" Jan 31 05:06:26 crc kubenswrapper[4827]: I0131 05:06:26.725888 4827 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_bd61984d-518c-44f2-8a18-8bda81bb6af3/rabbitmq/0.log" Jan 31 05:06:26 crc kubenswrapper[4827]: I0131 05:06:26.766103 4827 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_reboot-os-edpm-deployment-openstack-edpm-ipam-pn2hv_65c68493-a927-4bb7-b013-664e9ae73443/reboot-os-edpm-deployment-openstack-edpm-ipam/0.log" 
Jan 31 05:06:26 crc kubenswrapper[4827]: I0131 05:06:26.909608 4827 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_repo-setup-edpm-deployment-openstack-edpm-ipam-scb89_4dec9a4b-08f9-45be-85aa-10bb2a48cdaf/repo-setup-edpm-deployment-openstack-edpm-ipam/0.log" Jan 31 05:06:26 crc kubenswrapper[4827]: I0131 05:06:26.999651 4827 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_run-os-edpm-deployment-openstack-edpm-ipam-9tz78_9c9c5b12-150d-4448-9381-55de889ae8c4/run-os-edpm-deployment-openstack-edpm-ipam/0.log" Jan 31 05:06:27 crc kubenswrapper[4827]: I0131 05:06:27.209760 4827 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ssh-known-hosts-edpm-deployment-42k96_0d5f4456-d112-4cf0-ac82-fc6f693b42ae/ssh-known-hosts-edpm-deployment/0.log" Jan 31 05:06:27 crc kubenswrapper[4827]: I0131 05:06:27.309821 4827 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_tempest-tests-tempest_9267ff6a-541b-4297-87e4-fb6095cece6e/tempest-tests-tempest-tests-runner/0.log" Jan 31 05:06:27 crc kubenswrapper[4827]: I0131 05:06:27.466556 4827 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_test-operator-logs-pod-tempest-tempest-tests-tempest_eb9cc63c-f48f-4ded-9ca7-b7167b27a5ad/test-operator-logs-container/0.log" Jan 31 05:06:27 crc kubenswrapper[4827]: I0131 05:06:27.566091 4827 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_validate-network-edpm-deployment-openstack-edpm-ipam-gzznq_60326eb4-1b0c-420c-a0f1-e41d58f386a7/validate-network-edpm-deployment-openstack-edpm-ipam/0.log" Jan 31 05:06:37 crc kubenswrapper[4827]: I0131 05:06:37.207128 4827 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_memcached-0_220c4c53-ac13-4f85-88da-38fef6ce70b1/memcached/0.log" Jan 31 05:06:47 crc kubenswrapper[4827]: I0131 05:06:47.371438 4827 patch_prober.go:28] interesting pod/machine-config-daemon-jxh94 container/machine-config-daemon 
namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 31 05:06:47 crc kubenswrapper[4827]: I0131 05:06:47.372125 4827 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-jxh94" podUID="e63dbb73-e1a2-4796-83c5-2a88e55566b5" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 31 05:06:56 crc kubenswrapper[4827]: I0131 05:06:56.951933 4827 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_99d2a1aa85d4e185bb5329cab20b126e8449fde1ef95df30fb4ec402714cts9_00650213-91a7-4da4-956e-500845f8ec0d/util/0.log" Jan 31 05:06:57 crc kubenswrapper[4827]: I0131 05:06:57.158985 4827 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_99d2a1aa85d4e185bb5329cab20b126e8449fde1ef95df30fb4ec402714cts9_00650213-91a7-4da4-956e-500845f8ec0d/pull/0.log" Jan 31 05:06:57 crc kubenswrapper[4827]: I0131 05:06:57.160742 4827 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_99d2a1aa85d4e185bb5329cab20b126e8449fde1ef95df30fb4ec402714cts9_00650213-91a7-4da4-956e-500845f8ec0d/util/0.log" Jan 31 05:06:57 crc kubenswrapper[4827]: I0131 05:06:57.226510 4827 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_99d2a1aa85d4e185bb5329cab20b126e8449fde1ef95df30fb4ec402714cts9_00650213-91a7-4da4-956e-500845f8ec0d/pull/0.log" Jan 31 05:06:57 crc kubenswrapper[4827]: I0131 05:06:57.361061 4827 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_99d2a1aa85d4e185bb5329cab20b126e8449fde1ef95df30fb4ec402714cts9_00650213-91a7-4da4-956e-500845f8ec0d/pull/0.log" Jan 31 05:06:57 crc kubenswrapper[4827]: I0131 05:06:57.381409 4827 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack-operators_99d2a1aa85d4e185bb5329cab20b126e8449fde1ef95df30fb4ec402714cts9_00650213-91a7-4da4-956e-500845f8ec0d/util/0.log" Jan 31 05:06:57 crc kubenswrapper[4827]: I0131 05:06:57.390725 4827 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_99d2a1aa85d4e185bb5329cab20b126e8449fde1ef95df30fb4ec402714cts9_00650213-91a7-4da4-956e-500845f8ec0d/extract/0.log" Jan 31 05:06:57 crc kubenswrapper[4827]: I0131 05:06:57.590079 4827 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_barbican-operator-controller-manager-7b6c4d8c5f-k469j_1ee58492-27e7-446f-84c8-c3b0b74884fa/manager/0.log" Jan 31 05:06:57 crc kubenswrapper[4827]: I0131 05:06:57.653207 4827 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_cinder-operator-controller-manager-7489d7c99b-75s7f_74e68a52-8f24-4ff0-a160-8a1ad61238c9/manager/0.log" Jan 31 05:06:57 crc kubenswrapper[4827]: I0131 05:06:57.743498 4827 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_designate-operator-controller-manager-6d9697b7f4-9k4dq_60792734-916b-4bb7-a17f-45a03be036c8/manager/0.log" Jan 31 05:06:57 crc kubenswrapper[4827]: I0131 05:06:57.915025 4827 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_glance-operator-controller-manager-8886f4c47-zdtlh_c0c17a5a-5f0d-421e-b29c-56c4f2626a7b/manager/0.log" Jan 31 05:06:57 crc kubenswrapper[4827]: I0131 05:06:57.965111 4827 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_heat-operator-controller-manager-69d6db494d-hprpc_fe50fb01-1097-4ac9-81ae-fdfc96842f68/manager/0.log" Jan 31 05:06:58 crc kubenswrapper[4827]: I0131 05:06:58.062198 4827 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_horizon-operator-controller-manager-5fb775575f-wwvbx_bbf882c7-842b-46eb-a459-bb628db2598f/manager/0.log" Jan 31 05:06:58 crc kubenswrapper[4827]: I0131 05:06:58.329970 4827 
log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ironic-operator-controller-manager-5f4b8bd54d-r2ljw_adfd32af-9db4-468a-bac1-d33f11930922/manager/0.log" Jan 31 05:06:58 crc kubenswrapper[4827]: I0131 05:06:58.443324 4827 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_infra-operator-controller-manager-79955696d6-gcs7k_00f00c32-1e04-42e4-95b4-923c6b57386e/manager/0.log" Jan 31 05:06:58 crc kubenswrapper[4827]: I0131 05:06:58.544102 4827 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_keystone-operator-controller-manager-84f48565d4-8hvrl_efcd65a1-b55c-4cf6-bfe7-5e888e2bc7f0/manager/0.log" Jan 31 05:06:58 crc kubenswrapper[4827]: I0131 05:06:58.624728 4827 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_manila-operator-controller-manager-7dd968899f-2z575_fe5adffe-e198-4d4f-815d-02333b3a1853/manager/0.log" Jan 31 05:06:58 crc kubenswrapper[4827]: I0131 05:06:58.784654 4827 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_mariadb-operator-controller-manager-67bf948998-dvj6j_7f0021a0-f8df-42fa-8ef0-34653130a6e9/manager/0.log" Jan 31 05:06:58 crc kubenswrapper[4827]: I0131 05:06:58.870176 4827 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_neutron-operator-controller-manager-585dbc889-wdrl7_ea6ee14b-2acc-4894-8d63-57ad4a6a170a/manager/0.log" Jan 31 05:06:59 crc kubenswrapper[4827]: I0131 05:06:59.081397 4827 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_octavia-operator-controller-manager-6687f8d877-k7f4f_b3c58b9c-4561-49ae-a23c-a77a34b8cfb5/manager/0.log" Jan 31 05:06:59 crc kubenswrapper[4827]: I0131 05:06:59.102969 4827 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_nova-operator-controller-manager-55bff696bd-4snkb_8d904b59-3b07-422e-a83b-a02ac443d6eb/manager/0.log" Jan 31 05:06:59 crc kubenswrapper[4827]: I0131 
05:06:59.278526 4827 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-baremetal-operator-controller-manager-59c4b45c4dg9krf_ff81629a-d048-4c5d-b3a4-b892310ceff7/manager/0.log" Jan 31 05:06:59 crc kubenswrapper[4827]: I0131 05:06:59.504681 4827 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-init-68ffdbb6cf-tmt7z_062d81b0-3054-4387-9b68-716c6b57c850/operator/0.log" Jan 31 05:07:00 crc kubenswrapper[4827]: I0131 05:07:00.485368 4827 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-index-9l7pz_2ddccb17-c139-46fa-a62e-efdc15bbab1b/registry-server/0.log" Jan 31 05:07:00 crc kubenswrapper[4827]: I0131 05:07:00.542918 4827 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ovn-operator-controller-manager-788c46999f-782zz_fb454f09-c6b8-41f4-b69f-3125e8d4d79f/manager/0.log" Jan 31 05:07:00 crc kubenswrapper[4827]: I0131 05:07:00.818370 4827 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_placement-operator-controller-manager-5b964cf4cd-9gs2r_0af88c77-1c9c-4072-b0da-707bca0f4f12/manager/0.log" Jan 31 05:07:00 crc kubenswrapper[4827]: I0131 05:07:00.851647 4827 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_rabbitmq-cluster-operator-manager-668c99d594-zdjjp_0d85c53f-5192-4621-86cc-d9403773713b/operator/0.log" Jan 31 05:07:01 crc kubenswrapper[4827]: I0131 05:07:01.064775 4827 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_swift-operator-controller-manager-68fc8c869-6jhd8_ddb4ccbd-d7ed-4c26-97c4-22ce6c38b431/manager/0.log" Jan 31 05:07:01 crc kubenswrapper[4827]: I0131 05:07:01.136851 4827 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_telemetry-operator-controller-manager-64b5b76f97-plj6q_4d581cf6-c77f-4757-9091-cb1e23bfbcda/manager/0.log" Jan 31 05:07:01 crc 
kubenswrapper[4827]: I0131 05:07:01.461792 4827 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_test-operator-controller-manager-56f8bfcd9f-fr6qf_0d53929a-c249-47fa-9d02-98021a8bcf2a/manager/0.log" Jan 31 05:07:01 crc kubenswrapper[4827]: I0131 05:07:01.531638 4827 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_watcher-operator-controller-manager-564965969-m97nw_5666901d-66a6-4282-b44c-c39a0721faa2/manager/0.log" Jan 31 05:07:01 crc kubenswrapper[4827]: I0131 05:07:01.705212 4827 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-manager-794bbdbc56-fvlbd_a7d7d7a5-296a-43d3-8c15-906a257549c2/manager/0.log" Jan 31 05:07:17 crc kubenswrapper[4827]: I0131 05:07:17.371638 4827 patch_prober.go:28] interesting pod/machine-config-daemon-jxh94 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 31 05:07:17 crc kubenswrapper[4827]: I0131 05:07:17.372346 4827 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-jxh94" podUID="e63dbb73-e1a2-4796-83c5-2a88e55566b5" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 31 05:07:17 crc kubenswrapper[4827]: I0131 05:07:17.372401 4827 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-jxh94" Jan 31 05:07:17 crc kubenswrapper[4827]: I0131 05:07:17.373281 4827 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"8b862b8b1b774ed91171674cc9d08491ecba45701fdc6d37af4df3acdb0e0dc6"} 
pod="openshift-machine-config-operator/machine-config-daemon-jxh94" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 31 05:07:17 crc kubenswrapper[4827]: I0131 05:07:17.373335 4827 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-jxh94" podUID="e63dbb73-e1a2-4796-83c5-2a88e55566b5" containerName="machine-config-daemon" containerID="cri-o://8b862b8b1b774ed91171674cc9d08491ecba45701fdc6d37af4df3acdb0e0dc6" gracePeriod=600 Jan 31 05:07:17 crc kubenswrapper[4827]: I0131 05:07:17.708541 4827 generic.go:334] "Generic (PLEG): container finished" podID="e63dbb73-e1a2-4796-83c5-2a88e55566b5" containerID="8b862b8b1b774ed91171674cc9d08491ecba45701fdc6d37af4df3acdb0e0dc6" exitCode=0 Jan 31 05:07:17 crc kubenswrapper[4827]: I0131 05:07:17.708860 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-jxh94" event={"ID":"e63dbb73-e1a2-4796-83c5-2a88e55566b5","Type":"ContainerDied","Data":"8b862b8b1b774ed91171674cc9d08491ecba45701fdc6d37af4df3acdb0e0dc6"} Jan 31 05:07:17 crc kubenswrapper[4827]: I0131 05:07:17.708912 4827 scope.go:117] "RemoveContainer" containerID="b42e19e8839f8c6eed889e7aa41e227379f950f6d8d29310a2158f9035f65c48" Jan 31 05:07:18 crc kubenswrapper[4827]: I0131 05:07:18.723458 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-jxh94" event={"ID":"e63dbb73-e1a2-4796-83c5-2a88e55566b5","Type":"ContainerStarted","Data":"c9ced493f5e84fdd44e10692ae1dd45481bb553a477cb0b738b8b94a7b5f3f82"} Jan 31 05:07:22 crc kubenswrapper[4827]: I0131 05:07:22.623870 4827 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_control-plane-machine-set-operator-78cbb6b69f-mrplz_7bd339a8-f5bb-4f7f-9d9d-e57deef990b8/control-plane-machine-set-operator/0.log" Jan 31 05:07:22 crc kubenswrapper[4827]: I0131 
05:07:22.808144 4827 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-h2fxz_899b03ec-0d91-4793-a5a2-d3aca48e5309/kube-rbac-proxy/0.log" Jan 31 05:07:22 crc kubenswrapper[4827]: I0131 05:07:22.810725 4827 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-h2fxz_899b03ec-0d91-4793-a5a2-d3aca48e5309/machine-api-operator/0.log" Jan 31 05:07:36 crc kubenswrapper[4827]: I0131 05:07:36.937917 4827 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-858654f9db-9hxtf_bef48f94-220d-4244-8412-0fbb3c3a08a6/cert-manager-controller/0.log" Jan 31 05:07:37 crc kubenswrapper[4827]: I0131 05:07:37.096533 4827 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-cainjector-cf98fcc89-bdz52_3f73fff6-a495-43d2-b063-ed9792fa2526/cert-manager-cainjector/0.log" Jan 31 05:07:37 crc kubenswrapper[4827]: I0131 05:07:37.144494 4827 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-webhook-687f57d79b-lg7rt_8f78df48-021e-4d81-afac-ae4dc1b7f932/cert-manager-webhook/0.log" Jan 31 05:07:52 crc kubenswrapper[4827]: I0131 05:07:52.679469 4827 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-49c59"] Jan 31 05:07:52 crc kubenswrapper[4827]: E0131 05:07:52.680168 4827 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="853f00cc-27e7-4c7f-9582-19e5e1783126" containerName="registry-server" Jan 31 05:07:52 crc kubenswrapper[4827]: I0131 05:07:52.680179 4827 state_mem.go:107] "Deleted CPUSet assignment" podUID="853f00cc-27e7-4c7f-9582-19e5e1783126" containerName="registry-server" Jan 31 05:07:52 crc kubenswrapper[4827]: E0131 05:07:52.680197 4827 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="853f00cc-27e7-4c7f-9582-19e5e1783126" containerName="extract-content" Jan 31 05:07:52 crc kubenswrapper[4827]: I0131 
05:07:52.680219 4827 state_mem.go:107] "Deleted CPUSet assignment" podUID="853f00cc-27e7-4c7f-9582-19e5e1783126" containerName="extract-content" Jan 31 05:07:52 crc kubenswrapper[4827]: E0131 05:07:52.680252 4827 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="853f00cc-27e7-4c7f-9582-19e5e1783126" containerName="extract-utilities" Jan 31 05:07:52 crc kubenswrapper[4827]: I0131 05:07:52.680258 4827 state_mem.go:107] "Deleted CPUSet assignment" podUID="853f00cc-27e7-4c7f-9582-19e5e1783126" containerName="extract-utilities" Jan 31 05:07:52 crc kubenswrapper[4827]: I0131 05:07:52.680415 4827 memory_manager.go:354] "RemoveStaleState removing state" podUID="853f00cc-27e7-4c7f-9582-19e5e1783126" containerName="registry-server" Jan 31 05:07:52 crc kubenswrapper[4827]: I0131 05:07:52.681620 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-49c59" Jan 31 05:07:52 crc kubenswrapper[4827]: I0131 05:07:52.709134 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-49c59"] Jan 31 05:07:52 crc kubenswrapper[4827]: I0131 05:07:52.809441 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/19fb7a9a-274c-43ca-8fe7-89946f3ae3c0-catalog-content\") pod \"redhat-marketplace-49c59\" (UID: \"19fb7a9a-274c-43ca-8fe7-89946f3ae3c0\") " pod="openshift-marketplace/redhat-marketplace-49c59" Jan 31 05:07:52 crc kubenswrapper[4827]: I0131 05:07:52.809527 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/19fb7a9a-274c-43ca-8fe7-89946f3ae3c0-utilities\") pod \"redhat-marketplace-49c59\" (UID: \"19fb7a9a-274c-43ca-8fe7-89946f3ae3c0\") " pod="openshift-marketplace/redhat-marketplace-49c59" Jan 31 05:07:52 crc kubenswrapper[4827]: I0131 05:07:52.809568 4827 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j22v5\" (UniqueName: \"kubernetes.io/projected/19fb7a9a-274c-43ca-8fe7-89946f3ae3c0-kube-api-access-j22v5\") pod \"redhat-marketplace-49c59\" (UID: \"19fb7a9a-274c-43ca-8fe7-89946f3ae3c0\") " pod="openshift-marketplace/redhat-marketplace-49c59" Jan 31 05:07:52 crc kubenswrapper[4827]: I0131 05:07:52.846219 4827 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-console-plugin-7754f76f8b-8c97f_fd972f7a-fbf4-449b-b1d2-59d0dbe4aa64/nmstate-console-plugin/0.log" Jan 31 05:07:52 crc kubenswrapper[4827]: I0131 05:07:52.911817 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/19fb7a9a-274c-43ca-8fe7-89946f3ae3c0-catalog-content\") pod \"redhat-marketplace-49c59\" (UID: \"19fb7a9a-274c-43ca-8fe7-89946f3ae3c0\") " pod="openshift-marketplace/redhat-marketplace-49c59" Jan 31 05:07:52 crc kubenswrapper[4827]: I0131 05:07:52.911944 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/19fb7a9a-274c-43ca-8fe7-89946f3ae3c0-utilities\") pod \"redhat-marketplace-49c59\" (UID: \"19fb7a9a-274c-43ca-8fe7-89946f3ae3c0\") " pod="openshift-marketplace/redhat-marketplace-49c59" Jan 31 05:07:52 crc kubenswrapper[4827]: I0131 05:07:52.911991 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j22v5\" (UniqueName: \"kubernetes.io/projected/19fb7a9a-274c-43ca-8fe7-89946f3ae3c0-kube-api-access-j22v5\") pod \"redhat-marketplace-49c59\" (UID: \"19fb7a9a-274c-43ca-8fe7-89946f3ae3c0\") " pod="openshift-marketplace/redhat-marketplace-49c59" Jan 31 05:07:52 crc kubenswrapper[4827]: I0131 05:07:52.912448 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/19fb7a9a-274c-43ca-8fe7-89946f3ae3c0-catalog-content\") pod \"redhat-marketplace-49c59\" (UID: \"19fb7a9a-274c-43ca-8fe7-89946f3ae3c0\") " pod="openshift-marketplace/redhat-marketplace-49c59" Jan 31 05:07:52 crc kubenswrapper[4827]: I0131 05:07:52.912479 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/19fb7a9a-274c-43ca-8fe7-89946f3ae3c0-utilities\") pod \"redhat-marketplace-49c59\" (UID: \"19fb7a9a-274c-43ca-8fe7-89946f3ae3c0\") " pod="openshift-marketplace/redhat-marketplace-49c59" Jan 31 05:07:52 crc kubenswrapper[4827]: I0131 05:07:52.988518 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j22v5\" (UniqueName: \"kubernetes.io/projected/19fb7a9a-274c-43ca-8fe7-89946f3ae3c0-kube-api-access-j22v5\") pod \"redhat-marketplace-49c59\" (UID: \"19fb7a9a-274c-43ca-8fe7-89946f3ae3c0\") " pod="openshift-marketplace/redhat-marketplace-49c59" Jan 31 05:07:53 crc kubenswrapper[4827]: I0131 05:07:53.012618 4827 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-49c59" Jan 31 05:07:53 crc kubenswrapper[4827]: I0131 05:07:53.202064 4827 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-handler-b7ttc_d7ef57c6-3ae9-4573-8ba2-ec11f418a0b6/nmstate-handler/0.log" Jan 31 05:07:53 crc kubenswrapper[4827]: I0131 05:07:53.281451 4827 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-54757c584b-vxb8z_259273b1-36c1-4c94-846c-dd21b325059d/nmstate-metrics/0.log" Jan 31 05:07:53 crc kubenswrapper[4827]: I0131 05:07:53.307393 4827 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-54757c584b-vxb8z_259273b1-36c1-4c94-846c-dd21b325059d/kube-rbac-proxy/0.log" Jan 31 05:07:53 crc kubenswrapper[4827]: I0131 05:07:53.501089 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-49c59"] Jan 31 05:07:53 crc kubenswrapper[4827]: I0131 05:07:53.575402 4827 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-operator-646758c888-g5wcr_5899df86-4812-4477-92fb-bcd326c34f2a/nmstate-operator/0.log" Jan 31 05:07:53 crc kubenswrapper[4827]: I0131 05:07:53.727540 4827 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-webhook-8474b5b9d8-blkw2_0abf6fbb-878e-4f5f-99ef-969e12458804/nmstate-webhook/0.log" Jan 31 05:07:54 crc kubenswrapper[4827]: I0131 05:07:54.071628 4827 generic.go:334] "Generic (PLEG): container finished" podID="19fb7a9a-274c-43ca-8fe7-89946f3ae3c0" containerID="b536358c5fed436854b530d1fc89f1f6431c7821a020a90a88a83a10372299e9" exitCode=0 Jan 31 05:07:54 crc kubenswrapper[4827]: I0131 05:07:54.071685 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-49c59" 
event={"ID":"19fb7a9a-274c-43ca-8fe7-89946f3ae3c0","Type":"ContainerDied","Data":"b536358c5fed436854b530d1fc89f1f6431c7821a020a90a88a83a10372299e9"} Jan 31 05:07:54 crc kubenswrapper[4827]: I0131 05:07:54.071730 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-49c59" event={"ID":"19fb7a9a-274c-43ca-8fe7-89946f3ae3c0","Type":"ContainerStarted","Data":"77fdda418f08ff2a9f09bde391a61afb942c4fbb39a6decc821227928f089d0f"} Jan 31 05:07:54 crc kubenswrapper[4827]: I0131 05:07:54.075002 4827 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 31 05:07:55 crc kubenswrapper[4827]: I0131 05:07:55.081423 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-49c59" event={"ID":"19fb7a9a-274c-43ca-8fe7-89946f3ae3c0","Type":"ContainerStarted","Data":"363ac7d65c6ad00c5810fb221d0df54581505994dfdf250e47bf54f3dd547c4a"} Jan 31 05:07:57 crc kubenswrapper[4827]: I0131 05:07:57.099989 4827 generic.go:334] "Generic (PLEG): container finished" podID="19fb7a9a-274c-43ca-8fe7-89946f3ae3c0" containerID="363ac7d65c6ad00c5810fb221d0df54581505994dfdf250e47bf54f3dd547c4a" exitCode=0 Jan 31 05:07:57 crc kubenswrapper[4827]: I0131 05:07:57.100057 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-49c59" event={"ID":"19fb7a9a-274c-43ca-8fe7-89946f3ae3c0","Type":"ContainerDied","Data":"363ac7d65c6ad00c5810fb221d0df54581505994dfdf250e47bf54f3dd547c4a"} Jan 31 05:07:58 crc kubenswrapper[4827]: I0131 05:07:58.120389 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-49c59" event={"ID":"19fb7a9a-274c-43ca-8fe7-89946f3ae3c0","Type":"ContainerStarted","Data":"d00e5175381b2f2cc4f5f1ad8c75c8e9d0eb28ba28c0c8469c81975d9c3458d7"} Jan 31 05:07:58 crc kubenswrapper[4827]: I0131 05:07:58.149056 4827 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openshift-marketplace/redhat-marketplace-49c59" podStartSLOduration=2.376134951 podStartE2EDuration="6.149035194s" podCreationTimestamp="2026-01-31 05:07:52 +0000 UTC" firstStartedPulling="2026-01-31 05:07:54.074676284 +0000 UTC m=+4866.761756743" lastFinishedPulling="2026-01-31 05:07:57.847576537 +0000 UTC m=+4870.534656986" observedRunningTime="2026-01-31 05:07:58.14209808 +0000 UTC m=+4870.829178529" watchObservedRunningTime="2026-01-31 05:07:58.149035194 +0000 UTC m=+4870.836115643" Jan 31 05:08:03 crc kubenswrapper[4827]: I0131 05:08:03.014133 4827 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-49c59" Jan 31 05:08:03 crc kubenswrapper[4827]: I0131 05:08:03.014766 4827 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-49c59" Jan 31 05:08:03 crc kubenswrapper[4827]: I0131 05:08:03.069340 4827 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-49c59" Jan 31 05:08:03 crc kubenswrapper[4827]: I0131 05:08:03.202456 4827 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-49c59" Jan 31 05:08:03 crc kubenswrapper[4827]: I0131 05:08:03.310171 4827 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-49c59"] Jan 31 05:08:05 crc kubenswrapper[4827]: I0131 05:08:05.171706 4827 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-49c59" podUID="19fb7a9a-274c-43ca-8fe7-89946f3ae3c0" containerName="registry-server" containerID="cri-o://d00e5175381b2f2cc4f5f1ad8c75c8e9d0eb28ba28c0c8469c81975d9c3458d7" gracePeriod=2 Jan 31 05:08:05 crc kubenswrapper[4827]: I0131 05:08:05.664390 4827 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-49c59" Jan 31 05:08:05 crc kubenswrapper[4827]: I0131 05:08:05.752223 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/19fb7a9a-274c-43ca-8fe7-89946f3ae3c0-utilities\") pod \"19fb7a9a-274c-43ca-8fe7-89946f3ae3c0\" (UID: \"19fb7a9a-274c-43ca-8fe7-89946f3ae3c0\") " Jan 31 05:08:05 crc kubenswrapper[4827]: I0131 05:08:05.752651 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/19fb7a9a-274c-43ca-8fe7-89946f3ae3c0-catalog-content\") pod \"19fb7a9a-274c-43ca-8fe7-89946f3ae3c0\" (UID: \"19fb7a9a-274c-43ca-8fe7-89946f3ae3c0\") " Jan 31 05:08:05 crc kubenswrapper[4827]: I0131 05:08:05.752748 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-j22v5\" (UniqueName: \"kubernetes.io/projected/19fb7a9a-274c-43ca-8fe7-89946f3ae3c0-kube-api-access-j22v5\") pod \"19fb7a9a-274c-43ca-8fe7-89946f3ae3c0\" (UID: \"19fb7a9a-274c-43ca-8fe7-89946f3ae3c0\") " Jan 31 05:08:05 crc kubenswrapper[4827]: I0131 05:08:05.753244 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/19fb7a9a-274c-43ca-8fe7-89946f3ae3c0-utilities" (OuterVolumeSpecName: "utilities") pod "19fb7a9a-274c-43ca-8fe7-89946f3ae3c0" (UID: "19fb7a9a-274c-43ca-8fe7-89946f3ae3c0"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 05:08:05 crc kubenswrapper[4827]: I0131 05:08:05.760286 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/19fb7a9a-274c-43ca-8fe7-89946f3ae3c0-kube-api-access-j22v5" (OuterVolumeSpecName: "kube-api-access-j22v5") pod "19fb7a9a-274c-43ca-8fe7-89946f3ae3c0" (UID: "19fb7a9a-274c-43ca-8fe7-89946f3ae3c0"). InnerVolumeSpecName "kube-api-access-j22v5". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 05:08:05 crc kubenswrapper[4827]: I0131 05:08:05.787616 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/19fb7a9a-274c-43ca-8fe7-89946f3ae3c0-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "19fb7a9a-274c-43ca-8fe7-89946f3ae3c0" (UID: "19fb7a9a-274c-43ca-8fe7-89946f3ae3c0"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 05:08:05 crc kubenswrapper[4827]: I0131 05:08:05.855587 4827 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-j22v5\" (UniqueName: \"kubernetes.io/projected/19fb7a9a-274c-43ca-8fe7-89946f3ae3c0-kube-api-access-j22v5\") on node \"crc\" DevicePath \"\"" Jan 31 05:08:05 crc kubenswrapper[4827]: I0131 05:08:05.855632 4827 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/19fb7a9a-274c-43ca-8fe7-89946f3ae3c0-utilities\") on node \"crc\" DevicePath \"\"" Jan 31 05:08:05 crc kubenswrapper[4827]: I0131 05:08:05.855651 4827 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/19fb7a9a-274c-43ca-8fe7-89946f3ae3c0-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 31 05:08:06 crc kubenswrapper[4827]: I0131 05:08:06.180734 4827 generic.go:334] "Generic (PLEG): container finished" podID="19fb7a9a-274c-43ca-8fe7-89946f3ae3c0" containerID="d00e5175381b2f2cc4f5f1ad8c75c8e9d0eb28ba28c0c8469c81975d9c3458d7" exitCode=0 Jan 31 05:08:06 crc kubenswrapper[4827]: I0131 05:08:06.180795 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-49c59" event={"ID":"19fb7a9a-274c-43ca-8fe7-89946f3ae3c0","Type":"ContainerDied","Data":"d00e5175381b2f2cc4f5f1ad8c75c8e9d0eb28ba28c0c8469c81975d9c3458d7"} Jan 31 05:08:06 crc kubenswrapper[4827]: I0131 05:08:06.181139 4827 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openshift-marketplace/redhat-marketplace-49c59" event={"ID":"19fb7a9a-274c-43ca-8fe7-89946f3ae3c0","Type":"ContainerDied","Data":"77fdda418f08ff2a9f09bde391a61afb942c4fbb39a6decc821227928f089d0f"} Jan 31 05:08:06 crc kubenswrapper[4827]: I0131 05:08:06.181163 4827 scope.go:117] "RemoveContainer" containerID="d00e5175381b2f2cc4f5f1ad8c75c8e9d0eb28ba28c0c8469c81975d9c3458d7" Jan 31 05:08:06 crc kubenswrapper[4827]: I0131 05:08:06.180808 4827 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-49c59" Jan 31 05:08:06 crc kubenswrapper[4827]: I0131 05:08:06.201884 4827 scope.go:117] "RemoveContainer" containerID="363ac7d65c6ad00c5810fb221d0df54581505994dfdf250e47bf54f3dd547c4a" Jan 31 05:08:06 crc kubenswrapper[4827]: I0131 05:08:06.228021 4827 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-49c59"] Jan 31 05:08:06 crc kubenswrapper[4827]: I0131 05:08:06.228253 4827 scope.go:117] "RemoveContainer" containerID="b536358c5fed436854b530d1fc89f1f6431c7821a020a90a88a83a10372299e9" Jan 31 05:08:06 crc kubenswrapper[4827]: I0131 05:08:06.235518 4827 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-49c59"] Jan 31 05:08:06 crc kubenswrapper[4827]: I0131 05:08:06.294245 4827 scope.go:117] "RemoveContainer" containerID="d00e5175381b2f2cc4f5f1ad8c75c8e9d0eb28ba28c0c8469c81975d9c3458d7" Jan 31 05:08:06 crc kubenswrapper[4827]: E0131 05:08:06.294986 4827 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d00e5175381b2f2cc4f5f1ad8c75c8e9d0eb28ba28c0c8469c81975d9c3458d7\": container with ID starting with d00e5175381b2f2cc4f5f1ad8c75c8e9d0eb28ba28c0c8469c81975d9c3458d7 not found: ID does not exist" containerID="d00e5175381b2f2cc4f5f1ad8c75c8e9d0eb28ba28c0c8469c81975d9c3458d7" Jan 31 05:08:06 crc kubenswrapper[4827]: I0131 05:08:06.295032 4827 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d00e5175381b2f2cc4f5f1ad8c75c8e9d0eb28ba28c0c8469c81975d9c3458d7"} err="failed to get container status \"d00e5175381b2f2cc4f5f1ad8c75c8e9d0eb28ba28c0c8469c81975d9c3458d7\": rpc error: code = NotFound desc = could not find container \"d00e5175381b2f2cc4f5f1ad8c75c8e9d0eb28ba28c0c8469c81975d9c3458d7\": container with ID starting with d00e5175381b2f2cc4f5f1ad8c75c8e9d0eb28ba28c0c8469c81975d9c3458d7 not found: ID does not exist" Jan 31 05:08:06 crc kubenswrapper[4827]: I0131 05:08:06.295082 4827 scope.go:117] "RemoveContainer" containerID="363ac7d65c6ad00c5810fb221d0df54581505994dfdf250e47bf54f3dd547c4a" Jan 31 05:08:06 crc kubenswrapper[4827]: E0131 05:08:06.295433 4827 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"363ac7d65c6ad00c5810fb221d0df54581505994dfdf250e47bf54f3dd547c4a\": container with ID starting with 363ac7d65c6ad00c5810fb221d0df54581505994dfdf250e47bf54f3dd547c4a not found: ID does not exist" containerID="363ac7d65c6ad00c5810fb221d0df54581505994dfdf250e47bf54f3dd547c4a" Jan 31 05:08:06 crc kubenswrapper[4827]: I0131 05:08:06.295461 4827 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"363ac7d65c6ad00c5810fb221d0df54581505994dfdf250e47bf54f3dd547c4a"} err="failed to get container status \"363ac7d65c6ad00c5810fb221d0df54581505994dfdf250e47bf54f3dd547c4a\": rpc error: code = NotFound desc = could not find container \"363ac7d65c6ad00c5810fb221d0df54581505994dfdf250e47bf54f3dd547c4a\": container with ID starting with 363ac7d65c6ad00c5810fb221d0df54581505994dfdf250e47bf54f3dd547c4a not found: ID does not exist" Jan 31 05:08:06 crc kubenswrapper[4827]: I0131 05:08:06.295478 4827 scope.go:117] "RemoveContainer" containerID="b536358c5fed436854b530d1fc89f1f6431c7821a020a90a88a83a10372299e9" Jan 31 05:08:06 crc kubenswrapper[4827]: E0131 
05:08:06.295823 4827 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b536358c5fed436854b530d1fc89f1f6431c7821a020a90a88a83a10372299e9\": container with ID starting with b536358c5fed436854b530d1fc89f1f6431c7821a020a90a88a83a10372299e9 not found: ID does not exist" containerID="b536358c5fed436854b530d1fc89f1f6431c7821a020a90a88a83a10372299e9" Jan 31 05:08:06 crc kubenswrapper[4827]: I0131 05:08:06.295843 4827 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b536358c5fed436854b530d1fc89f1f6431c7821a020a90a88a83a10372299e9"} err="failed to get container status \"b536358c5fed436854b530d1fc89f1f6431c7821a020a90a88a83a10372299e9\": rpc error: code = NotFound desc = could not find container \"b536358c5fed436854b530d1fc89f1f6431c7821a020a90a88a83a10372299e9\": container with ID starting with b536358c5fed436854b530d1fc89f1f6431c7821a020a90a88a83a10372299e9 not found: ID does not exist" Jan 31 05:08:08 crc kubenswrapper[4827]: I0131 05:08:08.120560 4827 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="19fb7a9a-274c-43ca-8fe7-89946f3ae3c0" path="/var/lib/kubelet/pods/19fb7a9a-274c-43ca-8fe7-89946f3ae3c0/volumes" Jan 31 05:08:23 crc kubenswrapper[4827]: I0131 05:08:23.188932 4827 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-6968d8fdc4-lxksh_3aa38fee-8a56-42e4-9921-52dfdc3550c0/kube-rbac-proxy/0.log" Jan 31 05:08:23 crc kubenswrapper[4827]: I0131 05:08:23.332905 4827 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-6968d8fdc4-lxksh_3aa38fee-8a56-42e4-9921-52dfdc3550c0/controller/0.log" Jan 31 05:08:23 crc kubenswrapper[4827]: I0131 05:08:23.383105 4827 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-nd7t7_ac6685a1-0994-4fb9-afe1-3454c8525094/cp-frr-files/0.log" Jan 31 05:08:23 crc kubenswrapper[4827]: I0131 05:08:23.525809 4827 
log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-nd7t7_ac6685a1-0994-4fb9-afe1-3454c8525094/cp-frr-files/0.log" Jan 31 05:08:23 crc kubenswrapper[4827]: I0131 05:08:23.583629 4827 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-nd7t7_ac6685a1-0994-4fb9-afe1-3454c8525094/cp-metrics/0.log" Jan 31 05:08:23 crc kubenswrapper[4827]: I0131 05:08:23.584953 4827 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-nd7t7_ac6685a1-0994-4fb9-afe1-3454c8525094/cp-reloader/0.log" Jan 31 05:08:23 crc kubenswrapper[4827]: I0131 05:08:23.604214 4827 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-nd7t7_ac6685a1-0994-4fb9-afe1-3454c8525094/cp-reloader/0.log" Jan 31 05:08:23 crc kubenswrapper[4827]: I0131 05:08:23.737196 4827 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-nd7t7_ac6685a1-0994-4fb9-afe1-3454c8525094/cp-metrics/0.log" Jan 31 05:08:23 crc kubenswrapper[4827]: I0131 05:08:23.739750 4827 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-nd7t7_ac6685a1-0994-4fb9-afe1-3454c8525094/cp-reloader/0.log" Jan 31 05:08:23 crc kubenswrapper[4827]: I0131 05:08:23.756261 4827 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-nd7t7_ac6685a1-0994-4fb9-afe1-3454c8525094/cp-frr-files/0.log" Jan 31 05:08:23 crc kubenswrapper[4827]: I0131 05:08:23.767596 4827 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-nd7t7_ac6685a1-0994-4fb9-afe1-3454c8525094/cp-metrics/0.log" Jan 31 05:08:23 crc kubenswrapper[4827]: I0131 05:08:23.919905 4827 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-nd7t7_ac6685a1-0994-4fb9-afe1-3454c8525094/cp-frr-files/0.log" Jan 31 05:08:23 crc kubenswrapper[4827]: I0131 05:08:23.930639 4827 log.go:25] "Finished parsing log file" 
path="/var/log/pods/metallb-system_frr-k8s-nd7t7_ac6685a1-0994-4fb9-afe1-3454c8525094/cp-reloader/0.log" Jan 31 05:08:23 crc kubenswrapper[4827]: I0131 05:08:23.930693 4827 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-nd7t7_ac6685a1-0994-4fb9-afe1-3454c8525094/cp-metrics/0.log" Jan 31 05:08:23 crc kubenswrapper[4827]: I0131 05:08:23.982965 4827 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-nd7t7_ac6685a1-0994-4fb9-afe1-3454c8525094/controller/0.log" Jan 31 05:08:24 crc kubenswrapper[4827]: I0131 05:08:24.121445 4827 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-nd7t7_ac6685a1-0994-4fb9-afe1-3454c8525094/frr-metrics/0.log" Jan 31 05:08:24 crc kubenswrapper[4827]: I0131 05:08:24.121863 4827 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-nd7t7_ac6685a1-0994-4fb9-afe1-3454c8525094/kube-rbac-proxy/0.log" Jan 31 05:08:24 crc kubenswrapper[4827]: I0131 05:08:24.167515 4827 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-nd7t7_ac6685a1-0994-4fb9-afe1-3454c8525094/kube-rbac-proxy-frr/0.log" Jan 31 05:08:24 crc kubenswrapper[4827]: I0131 05:08:24.331868 4827 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-nd7t7_ac6685a1-0994-4fb9-afe1-3454c8525094/reloader/0.log" Jan 31 05:08:24 crc kubenswrapper[4827]: I0131 05:08:24.400029 4827 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-webhook-server-7df86c4f6c-4q2p4_aa13a755-e11c-471f-9318-7f0b54e8889e/frr-k8s-webhook-server/0.log" Jan 31 05:08:24 crc kubenswrapper[4827]: I0131 05:08:24.638485 4827 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-controller-manager-5dfffc88b-rknwp_ab6e6231-c7d2-4c65-89d2-bd6771c99585/manager/0.log" Jan 31 05:08:24 crc kubenswrapper[4827]: I0131 05:08:24.803970 4827 log.go:25] "Finished parsing log file" 
path="/var/log/pods/metallb-system_metallb-operator-webhook-server-6997fd6b6c-rxw9p_cac01594-063e-4099-b7fc-11e5d034cd2c/webhook-server/0.log" Jan 31 05:08:24 crc kubenswrapper[4827]: I0131 05:08:24.880215 4827 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-tp4jl_44cea0f8-c757-4c9e-bd44-210bed605301/kube-rbac-proxy/0.log" Jan 31 05:08:25 crc kubenswrapper[4827]: I0131 05:08:25.443314 4827 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-tp4jl_44cea0f8-c757-4c9e-bd44-210bed605301/speaker/0.log" Jan 31 05:08:25 crc kubenswrapper[4827]: I0131 05:08:25.698329 4827 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-nd7t7_ac6685a1-0994-4fb9-afe1-3454c8525094/frr/0.log" Jan 31 05:08:38 crc kubenswrapper[4827]: I0131 05:08:38.294427 4827 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcgtsxs_4f7eae5f-3ee4-478f-928c-ee25fab2d488/util/0.log" Jan 31 05:08:38 crc kubenswrapper[4827]: I0131 05:08:38.402211 4827 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcgtsxs_4f7eae5f-3ee4-478f-928c-ee25fab2d488/util/0.log" Jan 31 05:08:38 crc kubenswrapper[4827]: I0131 05:08:38.509719 4827 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcgtsxs_4f7eae5f-3ee4-478f-928c-ee25fab2d488/pull/0.log" Jan 31 05:08:38 crc kubenswrapper[4827]: I0131 05:08:38.515334 4827 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcgtsxs_4f7eae5f-3ee4-478f-928c-ee25fab2d488/pull/0.log" Jan 31 05:08:38 crc kubenswrapper[4827]: I0131 05:08:38.614656 4827 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcgtsxs_4f7eae5f-3ee4-478f-928c-ee25fab2d488/pull/0.log" Jan 31 05:08:38 crc kubenswrapper[4827]: I0131 05:08:38.614788 4827 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcgtsxs_4f7eae5f-3ee4-478f-928c-ee25fab2d488/util/0.log" Jan 31 05:08:38 crc kubenswrapper[4827]: I0131 05:08:38.683225 4827 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcgtsxs_4f7eae5f-3ee4-478f-928c-ee25fab2d488/extract/0.log" Jan 31 05:08:38 crc kubenswrapper[4827]: I0131 05:08:38.795407 4827 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713dbfg4_3bd71c58-cce1-40f3-b951-8b414eec7cd6/util/0.log" Jan 31 05:08:38 crc kubenswrapper[4827]: I0131 05:08:38.960563 4827 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713dbfg4_3bd71c58-cce1-40f3-b951-8b414eec7cd6/util/0.log" Jan 31 05:08:38 crc kubenswrapper[4827]: I0131 05:08:38.963067 4827 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713dbfg4_3bd71c58-cce1-40f3-b951-8b414eec7cd6/pull/0.log" Jan 31 05:08:38 crc kubenswrapper[4827]: I0131 05:08:38.982023 4827 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713dbfg4_3bd71c58-cce1-40f3-b951-8b414eec7cd6/pull/0.log" Jan 31 05:08:39 crc kubenswrapper[4827]: I0131 05:08:39.159953 4827 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713dbfg4_3bd71c58-cce1-40f3-b951-8b414eec7cd6/pull/0.log" Jan 31 
05:08:39 crc kubenswrapper[4827]: I0131 05:08:39.177634 4827 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713dbfg4_3bd71c58-cce1-40f3-b951-8b414eec7cd6/util/0.log" Jan 31 05:08:39 crc kubenswrapper[4827]: I0131 05:08:39.203557 4827 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713dbfg4_3bd71c58-cce1-40f3-b951-8b414eec7cd6/extract/0.log" Jan 31 05:08:39 crc kubenswrapper[4827]: I0131 05:08:39.324922 4827 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-p9mnx_a985eee9-b75b-499b-bbdb-fb1f3437ff77/extract-utilities/0.log" Jan 31 05:08:39 crc kubenswrapper[4827]: I0131 05:08:39.494807 4827 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-p9mnx_a985eee9-b75b-499b-bbdb-fb1f3437ff77/extract-utilities/0.log" Jan 31 05:08:39 crc kubenswrapper[4827]: I0131 05:08:39.506773 4827 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-p9mnx_a985eee9-b75b-499b-bbdb-fb1f3437ff77/extract-content/0.log" Jan 31 05:08:39 crc kubenswrapper[4827]: I0131 05:08:39.528039 4827 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-p9mnx_a985eee9-b75b-499b-bbdb-fb1f3437ff77/extract-content/0.log" Jan 31 05:08:39 crc kubenswrapper[4827]: I0131 05:08:39.713622 4827 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-p9mnx_a985eee9-b75b-499b-bbdb-fb1f3437ff77/extract-utilities/0.log" Jan 31 05:08:39 crc kubenswrapper[4827]: I0131 05:08:39.730587 4827 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-p9mnx_a985eee9-b75b-499b-bbdb-fb1f3437ff77/extract-content/0.log" Jan 31 05:08:39 crc kubenswrapper[4827]: I0131 05:08:39.917977 
4827 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-ppx2z_4cf906b5-5bd6-43ba-82b4-008d0b9f7b35/extract-utilities/0.log" Jan 31 05:08:40 crc kubenswrapper[4827]: I0131 05:08:40.133972 4827 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-ppx2z_4cf906b5-5bd6-43ba-82b4-008d0b9f7b35/extract-content/0.log" Jan 31 05:08:40 crc kubenswrapper[4827]: I0131 05:08:40.140526 4827 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-ppx2z_4cf906b5-5bd6-43ba-82b4-008d0b9f7b35/extract-utilities/0.log" Jan 31 05:08:40 crc kubenswrapper[4827]: I0131 05:08:40.157912 4827 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-ppx2z_4cf906b5-5bd6-43ba-82b4-008d0b9f7b35/extract-content/0.log" Jan 31 05:08:40 crc kubenswrapper[4827]: I0131 05:08:40.168215 4827 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-p9mnx_a985eee9-b75b-499b-bbdb-fb1f3437ff77/registry-server/0.log" Jan 31 05:08:40 crc kubenswrapper[4827]: I0131 05:08:40.339144 4827 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-ppx2z_4cf906b5-5bd6-43ba-82b4-008d0b9f7b35/extract-content/0.log" Jan 31 05:08:40 crc kubenswrapper[4827]: I0131 05:08:40.363855 4827 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-ppx2z_4cf906b5-5bd6-43ba-82b4-008d0b9f7b35/extract-utilities/0.log" Jan 31 05:08:40 crc kubenswrapper[4827]: I0131 05:08:40.587466 4827 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_marketplace-operator-79b997595-qg47f_14a103c0-b784-4634-9d0e-07cccc0795ef/marketplace-operator/0.log" Jan 31 05:08:40 crc kubenswrapper[4827]: I0131 05:08:40.742118 4827 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_redhat-marketplace-sgpz2_9405c6d0-837d-47f0-be6c-79518c22405d/extract-utilities/0.log" Jan 31 05:08:40 crc kubenswrapper[4827]: I0131 05:08:40.960296 4827 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-sgpz2_9405c6d0-837d-47f0-be6c-79518c22405d/extract-utilities/0.log" Jan 31 05:08:40 crc kubenswrapper[4827]: I0131 05:08:40.976986 4827 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-ppx2z_4cf906b5-5bd6-43ba-82b4-008d0b9f7b35/registry-server/0.log" Jan 31 05:08:40 crc kubenswrapper[4827]: I0131 05:08:40.978058 4827 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-sgpz2_9405c6d0-837d-47f0-be6c-79518c22405d/extract-content/0.log" Jan 31 05:08:40 crc kubenswrapper[4827]: I0131 05:08:40.980748 4827 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-sgpz2_9405c6d0-837d-47f0-be6c-79518c22405d/extract-content/0.log" Jan 31 05:08:41 crc kubenswrapper[4827]: I0131 05:08:41.133497 4827 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-sgpz2_9405c6d0-837d-47f0-be6c-79518c22405d/extract-utilities/0.log" Jan 31 05:08:41 crc kubenswrapper[4827]: I0131 05:08:41.170595 4827 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-sgpz2_9405c6d0-837d-47f0-be6c-79518c22405d/extract-content/0.log" Jan 31 05:08:41 crc kubenswrapper[4827]: I0131 05:08:41.303478 4827 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-sgpz2_9405c6d0-837d-47f0-be6c-79518c22405d/registry-server/0.log" Jan 31 05:08:41 crc kubenswrapper[4827]: I0131 05:08:41.328299 4827 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_redhat-operators-pnr9t_77ae4727-de92-4d11-b951-9b2a734acc65/extract-utilities/0.log" Jan 31 05:08:41 crc kubenswrapper[4827]: I0131 05:08:41.461040 4827 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-pnr9t_77ae4727-de92-4d11-b951-9b2a734acc65/extract-content/0.log" Jan 31 05:08:41 crc kubenswrapper[4827]: I0131 05:08:41.471415 4827 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-pnr9t_77ae4727-de92-4d11-b951-9b2a734acc65/extract-utilities/0.log" Jan 31 05:08:41 crc kubenswrapper[4827]: I0131 05:08:41.471498 4827 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-pnr9t_77ae4727-de92-4d11-b951-9b2a734acc65/extract-content/0.log" Jan 31 05:08:41 crc kubenswrapper[4827]: I0131 05:08:41.668361 4827 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-pnr9t_77ae4727-de92-4d11-b951-9b2a734acc65/extract-content/0.log" Jan 31 05:08:41 crc kubenswrapper[4827]: I0131 05:08:41.700540 4827 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-pnr9t_77ae4727-de92-4d11-b951-9b2a734acc65/extract-utilities/0.log" Jan 31 05:08:42 crc kubenswrapper[4827]: I0131 05:08:42.427182 4827 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-pnr9t_77ae4727-de92-4d11-b951-9b2a734acc65/registry-server/0.log" Jan 31 05:09:03 crc kubenswrapper[4827]: I0131 05:09:03.879244 4827 scope.go:117] "RemoveContainer" containerID="ccddde922311e8f49ed56e498fb634d8c80337fa812802fee33dee46aeaa8277" Jan 31 05:09:03 crc kubenswrapper[4827]: I0131 05:09:03.925743 4827 scope.go:117] "RemoveContainer" containerID="d4b9bec8d05d27faebe2a86b9da0bfd60ee9ccc37f7166e43c484bf1cf8a38b9" Jan 31 05:09:03 crc kubenswrapper[4827]: I0131 05:09:03.953200 4827 scope.go:117] "RemoveContainer" 
containerID="5461085b605ac0eed508517472e20c79190cb9596f548ea2cfa074c7e2ec6ed5" Jan 31 05:09:17 crc kubenswrapper[4827]: I0131 05:09:17.370988 4827 patch_prober.go:28] interesting pod/machine-config-daemon-jxh94 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 31 05:09:17 crc kubenswrapper[4827]: I0131 05:09:17.371636 4827 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-jxh94" podUID="e63dbb73-e1a2-4796-83c5-2a88e55566b5" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 31 05:09:47 crc kubenswrapper[4827]: I0131 05:09:47.371508 4827 patch_prober.go:28] interesting pod/machine-config-daemon-jxh94 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 31 05:09:47 crc kubenswrapper[4827]: I0131 05:09:47.372116 4827 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-jxh94" podUID="e63dbb73-e1a2-4796-83c5-2a88e55566b5" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 31 05:10:17 crc kubenswrapper[4827]: I0131 05:10:17.371418 4827 patch_prober.go:28] interesting pod/machine-config-daemon-jxh94 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 31 05:10:17 crc kubenswrapper[4827]: I0131 05:10:17.372081 4827 
prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-jxh94" podUID="e63dbb73-e1a2-4796-83c5-2a88e55566b5" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 31 05:10:17 crc kubenswrapper[4827]: I0131 05:10:17.372144 4827 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-jxh94" Jan 31 05:10:17 crc kubenswrapper[4827]: I0131 05:10:17.373211 4827 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"c9ced493f5e84fdd44e10692ae1dd45481bb553a477cb0b738b8b94a7b5f3f82"} pod="openshift-machine-config-operator/machine-config-daemon-jxh94" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 31 05:10:17 crc kubenswrapper[4827]: I0131 05:10:17.373303 4827 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-jxh94" podUID="e63dbb73-e1a2-4796-83c5-2a88e55566b5" containerName="machine-config-daemon" containerID="cri-o://c9ced493f5e84fdd44e10692ae1dd45481bb553a477cb0b738b8b94a7b5f3f82" gracePeriod=600 Jan 31 05:10:18 crc kubenswrapper[4827]: E0131 05:10:18.238062 4827 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jxh94_openshift-machine-config-operator(e63dbb73-e1a2-4796-83c5-2a88e55566b5)\"" pod="openshift-machine-config-operator/machine-config-daemon-jxh94" podUID="e63dbb73-e1a2-4796-83c5-2a88e55566b5" Jan 31 05:10:18 crc kubenswrapper[4827]: I0131 05:10:18.487689 4827 generic.go:334] "Generic (PLEG): container finished" 
podID="e63dbb73-e1a2-4796-83c5-2a88e55566b5" containerID="c9ced493f5e84fdd44e10692ae1dd45481bb553a477cb0b738b8b94a7b5f3f82" exitCode=0 Jan 31 05:10:18 crc kubenswrapper[4827]: I0131 05:10:18.487960 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-jxh94" event={"ID":"e63dbb73-e1a2-4796-83c5-2a88e55566b5","Type":"ContainerDied","Data":"c9ced493f5e84fdd44e10692ae1dd45481bb553a477cb0b738b8b94a7b5f3f82"} Jan 31 05:10:18 crc kubenswrapper[4827]: I0131 05:10:18.488101 4827 scope.go:117] "RemoveContainer" containerID="8b862b8b1b774ed91171674cc9d08491ecba45701fdc6d37af4df3acdb0e0dc6" Jan 31 05:10:18 crc kubenswrapper[4827]: I0131 05:10:18.489124 4827 scope.go:117] "RemoveContainer" containerID="c9ced493f5e84fdd44e10692ae1dd45481bb553a477cb0b738b8b94a7b5f3f82" Jan 31 05:10:18 crc kubenswrapper[4827]: E0131 05:10:18.489646 4827 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jxh94_openshift-machine-config-operator(e63dbb73-e1a2-4796-83c5-2a88e55566b5)\"" pod="openshift-machine-config-operator/machine-config-daemon-jxh94" podUID="e63dbb73-e1a2-4796-83c5-2a88e55566b5" Jan 31 05:10:32 crc kubenswrapper[4827]: I0131 05:10:32.110583 4827 scope.go:117] "RemoveContainer" containerID="c9ced493f5e84fdd44e10692ae1dd45481bb553a477cb0b738b8b94a7b5f3f82" Jan 31 05:10:32 crc kubenswrapper[4827]: E0131 05:10:32.111498 4827 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jxh94_openshift-machine-config-operator(e63dbb73-e1a2-4796-83c5-2a88e55566b5)\"" pod="openshift-machine-config-operator/machine-config-daemon-jxh94" podUID="e63dbb73-e1a2-4796-83c5-2a88e55566b5" Jan 31 
05:10:46 crc kubenswrapper[4827]: I0131 05:10:46.109708 4827 scope.go:117] "RemoveContainer" containerID="c9ced493f5e84fdd44e10692ae1dd45481bb553a477cb0b738b8b94a7b5f3f82" Jan 31 05:10:46 crc kubenswrapper[4827]: E0131 05:10:46.110534 4827 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jxh94_openshift-machine-config-operator(e63dbb73-e1a2-4796-83c5-2a88e55566b5)\"" pod="openshift-machine-config-operator/machine-config-daemon-jxh94" podUID="e63dbb73-e1a2-4796-83c5-2a88e55566b5" Jan 31 05:10:46 crc kubenswrapper[4827]: I0131 05:10:46.773933 4827 generic.go:334] "Generic (PLEG): container finished" podID="c05540c0-c586-4204-988e-1d8e2a23bae6" containerID="4bc76f2204eeef43d15897ab141f767eb8e5948f318da858f970be14e6a08e21" exitCode=0 Jan 31 05:10:46 crc kubenswrapper[4827]: I0131 05:10:46.773977 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-tfhkk/must-gather-vnjmk" event={"ID":"c05540c0-c586-4204-988e-1d8e2a23bae6","Type":"ContainerDied","Data":"4bc76f2204eeef43d15897ab141f767eb8e5948f318da858f970be14e6a08e21"} Jan 31 05:10:46 crc kubenswrapper[4827]: I0131 05:10:46.775490 4827 scope.go:117] "RemoveContainer" containerID="4bc76f2204eeef43d15897ab141f767eb8e5948f318da858f970be14e6a08e21" Jan 31 05:10:46 crc kubenswrapper[4827]: I0131 05:10:46.842766 4827 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-tfhkk_must-gather-vnjmk_c05540c0-c586-4204-988e-1d8e2a23bae6/gather/0.log" Jan 31 05:10:54 crc kubenswrapper[4827]: I0131 05:10:54.718466 4827 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-tfhkk/must-gather-vnjmk"] Jan 31 05:10:54 crc kubenswrapper[4827]: I0131 05:10:54.719222 4827 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openshift-must-gather-tfhkk/must-gather-vnjmk" podUID="c05540c0-c586-4204-988e-1d8e2a23bae6" containerName="copy" containerID="cri-o://7ce5705fbdd2a4c3543efe9d4a26a6a28159307dd987f002e3f9cbfacf0626eb" gracePeriod=2 Jan 31 05:10:54 crc kubenswrapper[4827]: I0131 05:10:54.728716 4827 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-tfhkk/must-gather-vnjmk"] Jan 31 05:10:54 crc kubenswrapper[4827]: I0131 05:10:54.857016 4827 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-tfhkk_must-gather-vnjmk_c05540c0-c586-4204-988e-1d8e2a23bae6/copy/0.log" Jan 31 05:10:54 crc kubenswrapper[4827]: I0131 05:10:54.857525 4827 generic.go:334] "Generic (PLEG): container finished" podID="c05540c0-c586-4204-988e-1d8e2a23bae6" containerID="7ce5705fbdd2a4c3543efe9d4a26a6a28159307dd987f002e3f9cbfacf0626eb" exitCode=143 Jan 31 05:10:55 crc kubenswrapper[4827]: I0131 05:10:55.139441 4827 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-tfhkk_must-gather-vnjmk_c05540c0-c586-4204-988e-1d8e2a23bae6/copy/0.log" Jan 31 05:10:55 crc kubenswrapper[4827]: I0131 05:10:55.140122 4827 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-tfhkk/must-gather-vnjmk" Jan 31 05:10:55 crc kubenswrapper[4827]: I0131 05:10:55.230128 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kqfwd\" (UniqueName: \"kubernetes.io/projected/c05540c0-c586-4204-988e-1d8e2a23bae6-kube-api-access-kqfwd\") pod \"c05540c0-c586-4204-988e-1d8e2a23bae6\" (UID: \"c05540c0-c586-4204-988e-1d8e2a23bae6\") " Jan 31 05:10:55 crc kubenswrapper[4827]: I0131 05:10:55.230276 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/c05540c0-c586-4204-988e-1d8e2a23bae6-must-gather-output\") pod \"c05540c0-c586-4204-988e-1d8e2a23bae6\" (UID: \"c05540c0-c586-4204-988e-1d8e2a23bae6\") " Jan 31 05:10:55 crc kubenswrapper[4827]: I0131 05:10:55.252006 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c05540c0-c586-4204-988e-1d8e2a23bae6-kube-api-access-kqfwd" (OuterVolumeSpecName: "kube-api-access-kqfwd") pod "c05540c0-c586-4204-988e-1d8e2a23bae6" (UID: "c05540c0-c586-4204-988e-1d8e2a23bae6"). InnerVolumeSpecName "kube-api-access-kqfwd". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 05:10:55 crc kubenswrapper[4827]: I0131 05:10:55.340387 4827 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kqfwd\" (UniqueName: \"kubernetes.io/projected/c05540c0-c586-4204-988e-1d8e2a23bae6-kube-api-access-kqfwd\") on node \"crc\" DevicePath \"\"" Jan 31 05:10:55 crc kubenswrapper[4827]: I0131 05:10:55.414756 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c05540c0-c586-4204-988e-1d8e2a23bae6-must-gather-output" (OuterVolumeSpecName: "must-gather-output") pod "c05540c0-c586-4204-988e-1d8e2a23bae6" (UID: "c05540c0-c586-4204-988e-1d8e2a23bae6"). InnerVolumeSpecName "must-gather-output". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 05:10:55 crc kubenswrapper[4827]: I0131 05:10:55.442440 4827 reconciler_common.go:293] "Volume detached for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/c05540c0-c586-4204-988e-1d8e2a23bae6-must-gather-output\") on node \"crc\" DevicePath \"\"" Jan 31 05:10:55 crc kubenswrapper[4827]: I0131 05:10:55.866337 4827 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-tfhkk_must-gather-vnjmk_c05540c0-c586-4204-988e-1d8e2a23bae6/copy/0.log" Jan 31 05:10:55 crc kubenswrapper[4827]: I0131 05:10:55.866698 4827 scope.go:117] "RemoveContainer" containerID="7ce5705fbdd2a4c3543efe9d4a26a6a28159307dd987f002e3f9cbfacf0626eb" Jan 31 05:10:55 crc kubenswrapper[4827]: I0131 05:10:55.866842 4827 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-tfhkk/must-gather-vnjmk" Jan 31 05:10:55 crc kubenswrapper[4827]: I0131 05:10:55.892575 4827 scope.go:117] "RemoveContainer" containerID="4bc76f2204eeef43d15897ab141f767eb8e5948f318da858f970be14e6a08e21" Jan 31 05:10:56 crc kubenswrapper[4827]: I0131 05:10:56.122305 4827 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c05540c0-c586-4204-988e-1d8e2a23bae6" path="/var/lib/kubelet/pods/c05540c0-c586-4204-988e-1d8e2a23bae6/volumes" Jan 31 05:10:57 crc kubenswrapper[4827]: I0131 05:10:57.110280 4827 scope.go:117] "RemoveContainer" containerID="c9ced493f5e84fdd44e10692ae1dd45481bb553a477cb0b738b8b94a7b5f3f82" Jan 31 05:10:57 crc kubenswrapper[4827]: E0131 05:10:57.110707 4827 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jxh94_openshift-machine-config-operator(e63dbb73-e1a2-4796-83c5-2a88e55566b5)\"" pod="openshift-machine-config-operator/machine-config-daemon-jxh94" 
podUID="e63dbb73-e1a2-4796-83c5-2a88e55566b5" Jan 31 05:11:12 crc kubenswrapper[4827]: I0131 05:11:12.111754 4827 scope.go:117] "RemoveContainer" containerID="c9ced493f5e84fdd44e10692ae1dd45481bb553a477cb0b738b8b94a7b5f3f82" Jan 31 05:11:12 crc kubenswrapper[4827]: E0131 05:11:12.112870 4827 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jxh94_openshift-machine-config-operator(e63dbb73-e1a2-4796-83c5-2a88e55566b5)\"" pod="openshift-machine-config-operator/machine-config-daemon-jxh94" podUID="e63dbb73-e1a2-4796-83c5-2a88e55566b5" Jan 31 05:11:23 crc kubenswrapper[4827]: I0131 05:11:23.110511 4827 scope.go:117] "RemoveContainer" containerID="c9ced493f5e84fdd44e10692ae1dd45481bb553a477cb0b738b8b94a7b5f3f82" Jan 31 05:11:23 crc kubenswrapper[4827]: E0131 05:11:23.112047 4827 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jxh94_openshift-machine-config-operator(e63dbb73-e1a2-4796-83c5-2a88e55566b5)\"" pod="openshift-machine-config-operator/machine-config-daemon-jxh94" podUID="e63dbb73-e1a2-4796-83c5-2a88e55566b5" Jan 31 05:11:38 crc kubenswrapper[4827]: I0131 05:11:38.110856 4827 scope.go:117] "RemoveContainer" containerID="c9ced493f5e84fdd44e10692ae1dd45481bb553a477cb0b738b8b94a7b5f3f82" Jan 31 05:11:38 crc kubenswrapper[4827]: E0131 05:11:38.111954 4827 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jxh94_openshift-machine-config-operator(e63dbb73-e1a2-4796-83c5-2a88e55566b5)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-jxh94" podUID="e63dbb73-e1a2-4796-83c5-2a88e55566b5" Jan 31 05:11:49 crc kubenswrapper[4827]: I0131 05:11:49.111149 4827 scope.go:117] "RemoveContainer" containerID="c9ced493f5e84fdd44e10692ae1dd45481bb553a477cb0b738b8b94a7b5f3f82" Jan 31 05:11:49 crc kubenswrapper[4827]: E0131 05:11:49.112303 4827 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jxh94_openshift-machine-config-operator(e63dbb73-e1a2-4796-83c5-2a88e55566b5)\"" pod="openshift-machine-config-operator/machine-config-daemon-jxh94" podUID="e63dbb73-e1a2-4796-83c5-2a88e55566b5" Jan 31 05:12:01 crc kubenswrapper[4827]: I0131 05:12:01.110089 4827 scope.go:117] "RemoveContainer" containerID="c9ced493f5e84fdd44e10692ae1dd45481bb553a477cb0b738b8b94a7b5f3f82" Jan 31 05:12:01 crc kubenswrapper[4827]: E0131 05:12:01.111202 4827 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jxh94_openshift-machine-config-operator(e63dbb73-e1a2-4796-83c5-2a88e55566b5)\"" pod="openshift-machine-config-operator/machine-config-daemon-jxh94" podUID="e63dbb73-e1a2-4796-83c5-2a88e55566b5" Jan 31 05:12:12 crc kubenswrapper[4827]: I0131 05:12:12.111341 4827 scope.go:117] "RemoveContainer" containerID="c9ced493f5e84fdd44e10692ae1dd45481bb553a477cb0b738b8b94a7b5f3f82" Jan 31 05:12:12 crc kubenswrapper[4827]: E0131 05:12:12.112372 4827 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-jxh94_openshift-machine-config-operator(e63dbb73-e1a2-4796-83c5-2a88e55566b5)\"" pod="openshift-machine-config-operator/machine-config-daemon-jxh94" podUID="e63dbb73-e1a2-4796-83c5-2a88e55566b5" Jan 31 05:12:22 crc kubenswrapper[4827]: I0131 05:12:22.461134 4827 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-r8l7v"] Jan 31 05:12:22 crc kubenswrapper[4827]: E0131 05:12:22.462040 4827 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="19fb7a9a-274c-43ca-8fe7-89946f3ae3c0" containerName="registry-server" Jan 31 05:12:22 crc kubenswrapper[4827]: I0131 05:12:22.462054 4827 state_mem.go:107] "Deleted CPUSet assignment" podUID="19fb7a9a-274c-43ca-8fe7-89946f3ae3c0" containerName="registry-server" Jan 31 05:12:22 crc kubenswrapper[4827]: E0131 05:12:22.462075 4827 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c05540c0-c586-4204-988e-1d8e2a23bae6" containerName="gather" Jan 31 05:12:22 crc kubenswrapper[4827]: I0131 05:12:22.462081 4827 state_mem.go:107] "Deleted CPUSet assignment" podUID="c05540c0-c586-4204-988e-1d8e2a23bae6" containerName="gather" Jan 31 05:12:22 crc kubenswrapper[4827]: E0131 05:12:22.462098 4827 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c05540c0-c586-4204-988e-1d8e2a23bae6" containerName="copy" Jan 31 05:12:22 crc kubenswrapper[4827]: I0131 05:12:22.462104 4827 state_mem.go:107] "Deleted CPUSet assignment" podUID="c05540c0-c586-4204-988e-1d8e2a23bae6" containerName="copy" Jan 31 05:12:22 crc kubenswrapper[4827]: E0131 05:12:22.462113 4827 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="19fb7a9a-274c-43ca-8fe7-89946f3ae3c0" containerName="extract-utilities" Jan 31 05:12:22 crc kubenswrapper[4827]: I0131 05:12:22.462120 4827 state_mem.go:107] "Deleted CPUSet assignment" podUID="19fb7a9a-274c-43ca-8fe7-89946f3ae3c0" containerName="extract-utilities" Jan 31 05:12:22 crc kubenswrapper[4827]: E0131 
05:12:22.462131 4827 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="19fb7a9a-274c-43ca-8fe7-89946f3ae3c0" containerName="extract-content" Jan 31 05:12:22 crc kubenswrapper[4827]: I0131 05:12:22.462137 4827 state_mem.go:107] "Deleted CPUSet assignment" podUID="19fb7a9a-274c-43ca-8fe7-89946f3ae3c0" containerName="extract-content" Jan 31 05:12:22 crc kubenswrapper[4827]: I0131 05:12:22.462308 4827 memory_manager.go:354] "RemoveStaleState removing state" podUID="19fb7a9a-274c-43ca-8fe7-89946f3ae3c0" containerName="registry-server" Jan 31 05:12:22 crc kubenswrapper[4827]: I0131 05:12:22.462319 4827 memory_manager.go:354] "RemoveStaleState removing state" podUID="c05540c0-c586-4204-988e-1d8e2a23bae6" containerName="gather" Jan 31 05:12:22 crc kubenswrapper[4827]: I0131 05:12:22.462329 4827 memory_manager.go:354] "RemoveStaleState removing state" podUID="c05540c0-c586-4204-988e-1d8e2a23bae6" containerName="copy" Jan 31 05:12:22 crc kubenswrapper[4827]: I0131 05:12:22.463645 4827 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-r8l7v" Jan 31 05:12:22 crc kubenswrapper[4827]: I0131 05:12:22.485167 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c24a355e-3da5-4ea8-afd2-c892e3fd0b1c-catalog-content\") pod \"community-operators-r8l7v\" (UID: \"c24a355e-3da5-4ea8-afd2-c892e3fd0b1c\") " pod="openshift-marketplace/community-operators-r8l7v" Jan 31 05:12:22 crc kubenswrapper[4827]: I0131 05:12:22.485448 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zr92q\" (UniqueName: \"kubernetes.io/projected/c24a355e-3da5-4ea8-afd2-c892e3fd0b1c-kube-api-access-zr92q\") pod \"community-operators-r8l7v\" (UID: \"c24a355e-3da5-4ea8-afd2-c892e3fd0b1c\") " pod="openshift-marketplace/community-operators-r8l7v" Jan 31 05:12:22 crc kubenswrapper[4827]: I0131 05:12:22.485856 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c24a355e-3da5-4ea8-afd2-c892e3fd0b1c-utilities\") pod \"community-operators-r8l7v\" (UID: \"c24a355e-3da5-4ea8-afd2-c892e3fd0b1c\") " pod="openshift-marketplace/community-operators-r8l7v" Jan 31 05:12:22 crc kubenswrapper[4827]: I0131 05:12:22.490849 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-r8l7v"] Jan 31 05:12:22 crc kubenswrapper[4827]: I0131 05:12:22.586714 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c24a355e-3da5-4ea8-afd2-c892e3fd0b1c-utilities\") pod \"community-operators-r8l7v\" (UID: \"c24a355e-3da5-4ea8-afd2-c892e3fd0b1c\") " pod="openshift-marketplace/community-operators-r8l7v" Jan 31 05:12:22 crc kubenswrapper[4827]: I0131 05:12:22.586806 4827 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c24a355e-3da5-4ea8-afd2-c892e3fd0b1c-catalog-content\") pod \"community-operators-r8l7v\" (UID: \"c24a355e-3da5-4ea8-afd2-c892e3fd0b1c\") " pod="openshift-marketplace/community-operators-r8l7v" Jan 31 05:12:22 crc kubenswrapper[4827]: I0131 05:12:22.586860 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zr92q\" (UniqueName: \"kubernetes.io/projected/c24a355e-3da5-4ea8-afd2-c892e3fd0b1c-kube-api-access-zr92q\") pod \"community-operators-r8l7v\" (UID: \"c24a355e-3da5-4ea8-afd2-c892e3fd0b1c\") " pod="openshift-marketplace/community-operators-r8l7v" Jan 31 05:12:22 crc kubenswrapper[4827]: I0131 05:12:22.587553 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c24a355e-3da5-4ea8-afd2-c892e3fd0b1c-utilities\") pod \"community-operators-r8l7v\" (UID: \"c24a355e-3da5-4ea8-afd2-c892e3fd0b1c\") " pod="openshift-marketplace/community-operators-r8l7v" Jan 31 05:12:22 crc kubenswrapper[4827]: I0131 05:12:22.587776 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c24a355e-3da5-4ea8-afd2-c892e3fd0b1c-catalog-content\") pod \"community-operators-r8l7v\" (UID: \"c24a355e-3da5-4ea8-afd2-c892e3fd0b1c\") " pod="openshift-marketplace/community-operators-r8l7v" Jan 31 05:12:22 crc kubenswrapper[4827]: I0131 05:12:22.989178 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zr92q\" (UniqueName: \"kubernetes.io/projected/c24a355e-3da5-4ea8-afd2-c892e3fd0b1c-kube-api-access-zr92q\") pod \"community-operators-r8l7v\" (UID: \"c24a355e-3da5-4ea8-afd2-c892e3fd0b1c\") " pod="openshift-marketplace/community-operators-r8l7v" Jan 31 05:12:23 crc kubenswrapper[4827]: I0131 05:12:23.089370 4827 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-r8l7v" Jan 31 05:12:23 crc kubenswrapper[4827]: I0131 05:12:23.606952 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-r8l7v"] Jan 31 05:12:23 crc kubenswrapper[4827]: I0131 05:12:23.809084 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-r8l7v" event={"ID":"c24a355e-3da5-4ea8-afd2-c892e3fd0b1c","Type":"ContainerStarted","Data":"e21ddae62459f64b3fc5326864f36172b53fc3d647573f7bd0056256576deb5b"} Jan 31 05:12:24 crc kubenswrapper[4827]: I0131 05:12:24.109808 4827 scope.go:117] "RemoveContainer" containerID="c9ced493f5e84fdd44e10692ae1dd45481bb553a477cb0b738b8b94a7b5f3f82" Jan 31 05:12:24 crc kubenswrapper[4827]: E0131 05:12:24.110145 4827 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jxh94_openshift-machine-config-operator(e63dbb73-e1a2-4796-83c5-2a88e55566b5)\"" pod="openshift-machine-config-operator/machine-config-daemon-jxh94" podUID="e63dbb73-e1a2-4796-83c5-2a88e55566b5" Jan 31 05:12:24 crc kubenswrapper[4827]: I0131 05:12:24.825653 4827 generic.go:334] "Generic (PLEG): container finished" podID="c24a355e-3da5-4ea8-afd2-c892e3fd0b1c" containerID="4ac3574413be31737a28eadc40a4900273cc0939c55589ab1eccd69127b5bcb5" exitCode=0 Jan 31 05:12:24 crc kubenswrapper[4827]: I0131 05:12:24.825706 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-r8l7v" event={"ID":"c24a355e-3da5-4ea8-afd2-c892e3fd0b1c","Type":"ContainerDied","Data":"4ac3574413be31737a28eadc40a4900273cc0939c55589ab1eccd69127b5bcb5"} Jan 31 05:12:25 crc kubenswrapper[4827]: I0131 05:12:25.836955 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-r8l7v" 
event={"ID":"c24a355e-3da5-4ea8-afd2-c892e3fd0b1c","Type":"ContainerStarted","Data":"ffad3767c4645c51666832d6884c5f29af22021b548160d5b0b1dd3a26620666"} Jan 31 05:12:26 crc kubenswrapper[4827]: I0131 05:12:26.860952 4827 generic.go:334] "Generic (PLEG): container finished" podID="c24a355e-3da5-4ea8-afd2-c892e3fd0b1c" containerID="ffad3767c4645c51666832d6884c5f29af22021b548160d5b0b1dd3a26620666" exitCode=0 Jan 31 05:12:26 crc kubenswrapper[4827]: I0131 05:12:26.861011 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-r8l7v" event={"ID":"c24a355e-3da5-4ea8-afd2-c892e3fd0b1c","Type":"ContainerDied","Data":"ffad3767c4645c51666832d6884c5f29af22021b548160d5b0b1dd3a26620666"} Jan 31 05:12:27 crc kubenswrapper[4827]: I0131 05:12:27.872047 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-r8l7v" event={"ID":"c24a355e-3da5-4ea8-afd2-c892e3fd0b1c","Type":"ContainerStarted","Data":"e4c9f5174a9f6083d62208f1ec923d835bc8e05e215427db6f1a7386c6899059"} Jan 31 05:12:27 crc kubenswrapper[4827]: I0131 05:12:27.899478 4827 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-r8l7v" podStartSLOduration=3.309467277 podStartE2EDuration="5.899455841s" podCreationTimestamp="2026-01-31 05:12:22 +0000 UTC" firstStartedPulling="2026-01-31 05:12:24.828697441 +0000 UTC m=+5137.515777900" lastFinishedPulling="2026-01-31 05:12:27.418686005 +0000 UTC m=+5140.105766464" observedRunningTime="2026-01-31 05:12:27.888671156 +0000 UTC m=+5140.575751635" watchObservedRunningTime="2026-01-31 05:12:27.899455841 +0000 UTC m=+5140.586536290" Jan 31 05:12:33 crc kubenswrapper[4827]: I0131 05:12:33.089537 4827 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-r8l7v" Jan 31 05:12:33 crc kubenswrapper[4827]: I0131 05:12:33.090295 4827 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="" pod="openshift-marketplace/community-operators-r8l7v" Jan 31 05:12:33 crc kubenswrapper[4827]: I0131 05:12:33.159978 4827 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-r8l7v" Jan 31 05:12:34 crc kubenswrapper[4827]: I0131 05:12:34.642229 4827 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-r8l7v" Jan 31 05:12:34 crc kubenswrapper[4827]: I0131 05:12:34.699311 4827 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-r8l7v"] Jan 31 05:12:35 crc kubenswrapper[4827]: I0131 05:12:35.110026 4827 scope.go:117] "RemoveContainer" containerID="c9ced493f5e84fdd44e10692ae1dd45481bb553a477cb0b738b8b94a7b5f3f82" Jan 31 05:12:35 crc kubenswrapper[4827]: E0131 05:12:35.110489 4827 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jxh94_openshift-machine-config-operator(e63dbb73-e1a2-4796-83c5-2a88e55566b5)\"" pod="openshift-machine-config-operator/machine-config-daemon-jxh94" podUID="e63dbb73-e1a2-4796-83c5-2a88e55566b5" Jan 31 05:12:35 crc kubenswrapper[4827]: I0131 05:12:35.950029 4827 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-r8l7v" podUID="c24a355e-3da5-4ea8-afd2-c892e3fd0b1c" containerName="registry-server" containerID="cri-o://e4c9f5174a9f6083d62208f1ec923d835bc8e05e215427db6f1a7386c6899059" gracePeriod=2 Jan 31 05:12:36 crc kubenswrapper[4827]: I0131 05:12:36.963783 4827 generic.go:334] "Generic (PLEG): container finished" podID="c24a355e-3da5-4ea8-afd2-c892e3fd0b1c" containerID="e4c9f5174a9f6083d62208f1ec923d835bc8e05e215427db6f1a7386c6899059" exitCode=0 Jan 31 05:12:36 crc kubenswrapper[4827]: I0131 05:12:36.963873 
4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-r8l7v" event={"ID":"c24a355e-3da5-4ea8-afd2-c892e3fd0b1c","Type":"ContainerDied","Data":"e4c9f5174a9f6083d62208f1ec923d835bc8e05e215427db6f1a7386c6899059"} Jan 31 05:12:37 crc kubenswrapper[4827]: I0131 05:12:37.181432 4827 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-r8l7v" Jan 31 05:12:37 crc kubenswrapper[4827]: I0131 05:12:37.281410 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zr92q\" (UniqueName: \"kubernetes.io/projected/c24a355e-3da5-4ea8-afd2-c892e3fd0b1c-kube-api-access-zr92q\") pod \"c24a355e-3da5-4ea8-afd2-c892e3fd0b1c\" (UID: \"c24a355e-3da5-4ea8-afd2-c892e3fd0b1c\") " Jan 31 05:12:37 crc kubenswrapper[4827]: I0131 05:12:37.281612 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c24a355e-3da5-4ea8-afd2-c892e3fd0b1c-catalog-content\") pod \"c24a355e-3da5-4ea8-afd2-c892e3fd0b1c\" (UID: \"c24a355e-3da5-4ea8-afd2-c892e3fd0b1c\") " Jan 31 05:12:37 crc kubenswrapper[4827]: I0131 05:12:37.281722 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c24a355e-3da5-4ea8-afd2-c892e3fd0b1c-utilities\") pod \"c24a355e-3da5-4ea8-afd2-c892e3fd0b1c\" (UID: \"c24a355e-3da5-4ea8-afd2-c892e3fd0b1c\") " Jan 31 05:12:37 crc kubenswrapper[4827]: I0131 05:12:37.282997 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c24a355e-3da5-4ea8-afd2-c892e3fd0b1c-utilities" (OuterVolumeSpecName: "utilities") pod "c24a355e-3da5-4ea8-afd2-c892e3fd0b1c" (UID: "c24a355e-3da5-4ea8-afd2-c892e3fd0b1c"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 05:12:37 crc kubenswrapper[4827]: I0131 05:12:37.298687 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c24a355e-3da5-4ea8-afd2-c892e3fd0b1c-kube-api-access-zr92q" (OuterVolumeSpecName: "kube-api-access-zr92q") pod "c24a355e-3da5-4ea8-afd2-c892e3fd0b1c" (UID: "c24a355e-3da5-4ea8-afd2-c892e3fd0b1c"). InnerVolumeSpecName "kube-api-access-zr92q". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 05:12:37 crc kubenswrapper[4827]: I0131 05:12:37.334746 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c24a355e-3da5-4ea8-afd2-c892e3fd0b1c-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "c24a355e-3da5-4ea8-afd2-c892e3fd0b1c" (UID: "c24a355e-3da5-4ea8-afd2-c892e3fd0b1c"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 05:12:37 crc kubenswrapper[4827]: I0131 05:12:37.384691 4827 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c24a355e-3da5-4ea8-afd2-c892e3fd0b1c-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 31 05:12:37 crc kubenswrapper[4827]: I0131 05:12:37.384727 4827 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c24a355e-3da5-4ea8-afd2-c892e3fd0b1c-utilities\") on node \"crc\" DevicePath \"\"" Jan 31 05:12:37 crc kubenswrapper[4827]: I0131 05:12:37.384736 4827 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zr92q\" (UniqueName: \"kubernetes.io/projected/c24a355e-3da5-4ea8-afd2-c892e3fd0b1c-kube-api-access-zr92q\") on node \"crc\" DevicePath \"\"" Jan 31 05:12:37 crc kubenswrapper[4827]: I0131 05:12:37.979161 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-r8l7v" 
event={"ID":"c24a355e-3da5-4ea8-afd2-c892e3fd0b1c","Type":"ContainerDied","Data":"e21ddae62459f64b3fc5326864f36172b53fc3d647573f7bd0056256576deb5b"} Jan 31 05:12:37 crc kubenswrapper[4827]: I0131 05:12:37.979479 4827 scope.go:117] "RemoveContainer" containerID="e4c9f5174a9f6083d62208f1ec923d835bc8e05e215427db6f1a7386c6899059" Jan 31 05:12:37 crc kubenswrapper[4827]: I0131 05:12:37.979230 4827 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-r8l7v" Jan 31 05:12:38 crc kubenswrapper[4827]: I0131 05:12:38.020787 4827 scope.go:117] "RemoveContainer" containerID="ffad3767c4645c51666832d6884c5f29af22021b548160d5b0b1dd3a26620666" Jan 31 05:12:38 crc kubenswrapper[4827]: I0131 05:12:38.035326 4827 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-r8l7v"] Jan 31 05:12:38 crc kubenswrapper[4827]: I0131 05:12:38.044421 4827 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-r8l7v"] Jan 31 05:12:38 crc kubenswrapper[4827]: I0131 05:12:38.046987 4827 scope.go:117] "RemoveContainer" containerID="4ac3574413be31737a28eadc40a4900273cc0939c55589ab1eccd69127b5bcb5" Jan 31 05:12:38 crc kubenswrapper[4827]: I0131 05:12:38.120483 4827 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c24a355e-3da5-4ea8-afd2-c892e3fd0b1c" path="/var/lib/kubelet/pods/c24a355e-3da5-4ea8-afd2-c892e3fd0b1c/volumes" Jan 31 05:12:49 crc kubenswrapper[4827]: I0131 05:12:49.110384 4827 scope.go:117] "RemoveContainer" containerID="c9ced493f5e84fdd44e10692ae1dd45481bb553a477cb0b738b8b94a7b5f3f82" Jan 31 05:12:49 crc kubenswrapper[4827]: E0131 05:12:49.111190 4827 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-jxh94_openshift-machine-config-operator(e63dbb73-e1a2-4796-83c5-2a88e55566b5)\"" pod="openshift-machine-config-operator/machine-config-daemon-jxh94" podUID="e63dbb73-e1a2-4796-83c5-2a88e55566b5" Jan 31 05:13:02 crc kubenswrapper[4827]: I0131 05:13:02.110328 4827 scope.go:117] "RemoveContainer" containerID="c9ced493f5e84fdd44e10692ae1dd45481bb553a477cb0b738b8b94a7b5f3f82" Jan 31 05:13:02 crc kubenswrapper[4827]: E0131 05:13:02.113345 4827 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jxh94_openshift-machine-config-operator(e63dbb73-e1a2-4796-83c5-2a88e55566b5)\"" pod="openshift-machine-config-operator/machine-config-daemon-jxh94" podUID="e63dbb73-e1a2-4796-83c5-2a88e55566b5" Jan 31 05:13:13 crc kubenswrapper[4827]: I0131 05:13:13.110753 4827 scope.go:117] "RemoveContainer" containerID="c9ced493f5e84fdd44e10692ae1dd45481bb553a477cb0b738b8b94a7b5f3f82" Jan 31 05:13:13 crc kubenswrapper[4827]: E0131 05:13:13.112121 4827 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jxh94_openshift-machine-config-operator(e63dbb73-e1a2-4796-83c5-2a88e55566b5)\"" pod="openshift-machine-config-operator/machine-config-daemon-jxh94" podUID="e63dbb73-e1a2-4796-83c5-2a88e55566b5" Jan 31 05:13:26 crc kubenswrapper[4827]: I0131 05:13:26.110963 4827 scope.go:117] "RemoveContainer" containerID="c9ced493f5e84fdd44e10692ae1dd45481bb553a477cb0b738b8b94a7b5f3f82" Jan 31 05:13:26 crc kubenswrapper[4827]: E0131 05:13:26.111999 4827 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed 
container=machine-config-daemon pod=machine-config-daemon-jxh94_openshift-machine-config-operator(e63dbb73-e1a2-4796-83c5-2a88e55566b5)\"" pod="openshift-machine-config-operator/machine-config-daemon-jxh94" podUID="e63dbb73-e1a2-4796-83c5-2a88e55566b5" Jan 31 05:13:38 crc kubenswrapper[4827]: I0131 05:13:38.125781 4827 scope.go:117] "RemoveContainer" containerID="c9ced493f5e84fdd44e10692ae1dd45481bb553a477cb0b738b8b94a7b5f3f82" Jan 31 05:13:38 crc kubenswrapper[4827]: E0131 05:13:38.126815 4827 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jxh94_openshift-machine-config-operator(e63dbb73-e1a2-4796-83c5-2a88e55566b5)\"" pod="openshift-machine-config-operator/machine-config-daemon-jxh94" podUID="e63dbb73-e1a2-4796-83c5-2a88e55566b5" Jan 31 05:13:51 crc kubenswrapper[4827]: I0131 05:13:51.109983 4827 scope.go:117] "RemoveContainer" containerID="c9ced493f5e84fdd44e10692ae1dd45481bb553a477cb0b738b8b94a7b5f3f82" Jan 31 05:13:51 crc kubenswrapper[4827]: E0131 05:13:51.111243 4827 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jxh94_openshift-machine-config-operator(e63dbb73-e1a2-4796-83c5-2a88e55566b5)\"" pod="openshift-machine-config-operator/machine-config-daemon-jxh94" podUID="e63dbb73-e1a2-4796-83c5-2a88e55566b5" Jan 31 05:13:58 crc kubenswrapper[4827]: I0131 05:13:58.048759 4827 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-gr99w/must-gather-vf58c"] Jan 31 05:13:58 crc kubenswrapper[4827]: E0131 05:13:58.049788 4827 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c24a355e-3da5-4ea8-afd2-c892e3fd0b1c" containerName="extract-utilities" Jan 31 05:13:58 crc 
kubenswrapper[4827]: I0131 05:13:58.049807 4827 state_mem.go:107] "Deleted CPUSet assignment" podUID="c24a355e-3da5-4ea8-afd2-c892e3fd0b1c" containerName="extract-utilities" Jan 31 05:13:58 crc kubenswrapper[4827]: E0131 05:13:58.049817 4827 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c24a355e-3da5-4ea8-afd2-c892e3fd0b1c" containerName="registry-server" Jan 31 05:13:58 crc kubenswrapper[4827]: I0131 05:13:58.049825 4827 state_mem.go:107] "Deleted CPUSet assignment" podUID="c24a355e-3da5-4ea8-afd2-c892e3fd0b1c" containerName="registry-server" Jan 31 05:13:58 crc kubenswrapper[4827]: E0131 05:13:58.049869 4827 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c24a355e-3da5-4ea8-afd2-c892e3fd0b1c" containerName="extract-content" Jan 31 05:13:58 crc kubenswrapper[4827]: I0131 05:13:58.049880 4827 state_mem.go:107] "Deleted CPUSet assignment" podUID="c24a355e-3da5-4ea8-afd2-c892e3fd0b1c" containerName="extract-content" Jan 31 05:13:58 crc kubenswrapper[4827]: I0131 05:13:58.050110 4827 memory_manager.go:354] "RemoveStaleState removing state" podUID="c24a355e-3da5-4ea8-afd2-c892e3fd0b1c" containerName="registry-server" Jan 31 05:13:58 crc kubenswrapper[4827]: I0131 05:13:58.054198 4827 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-gr99w/must-gather-vf58c" Jan 31 05:13:58 crc kubenswrapper[4827]: I0131 05:13:58.056686 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-must-gather-gr99w"/"default-dockercfg-ppb24" Jan 31 05:13:58 crc kubenswrapper[4827]: I0131 05:13:58.059747 4827 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-gr99w"/"kube-root-ca.crt" Jan 31 05:13:58 crc kubenswrapper[4827]: I0131 05:13:58.060113 4827 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-gr99w"/"openshift-service-ca.crt" Jan 31 05:13:58 crc kubenswrapper[4827]: I0131 05:13:58.075052 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-gr99w/must-gather-vf58c"] Jan 31 05:13:58 crc kubenswrapper[4827]: I0131 05:13:58.129122 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/95a35264-a5f4-4eca-930e-5d5504ce5b2a-must-gather-output\") pod \"must-gather-vf58c\" (UID: \"95a35264-a5f4-4eca-930e-5d5504ce5b2a\") " pod="openshift-must-gather-gr99w/must-gather-vf58c" Jan 31 05:13:58 crc kubenswrapper[4827]: I0131 05:13:58.129314 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z2p9k\" (UniqueName: \"kubernetes.io/projected/95a35264-a5f4-4eca-930e-5d5504ce5b2a-kube-api-access-z2p9k\") pod \"must-gather-vf58c\" (UID: \"95a35264-a5f4-4eca-930e-5d5504ce5b2a\") " pod="openshift-must-gather-gr99w/must-gather-vf58c" Jan 31 05:13:58 crc kubenswrapper[4827]: I0131 05:13:58.230903 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z2p9k\" (UniqueName: \"kubernetes.io/projected/95a35264-a5f4-4eca-930e-5d5504ce5b2a-kube-api-access-z2p9k\") pod \"must-gather-vf58c\" (UID: \"95a35264-a5f4-4eca-930e-5d5504ce5b2a\") " 
pod="openshift-must-gather-gr99w/must-gather-vf58c" Jan 31 05:13:58 crc kubenswrapper[4827]: I0131 05:13:58.231082 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/95a35264-a5f4-4eca-930e-5d5504ce5b2a-must-gather-output\") pod \"must-gather-vf58c\" (UID: \"95a35264-a5f4-4eca-930e-5d5504ce5b2a\") " pod="openshift-must-gather-gr99w/must-gather-vf58c" Jan 31 05:13:58 crc kubenswrapper[4827]: I0131 05:13:58.231580 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/95a35264-a5f4-4eca-930e-5d5504ce5b2a-must-gather-output\") pod \"must-gather-vf58c\" (UID: \"95a35264-a5f4-4eca-930e-5d5504ce5b2a\") " pod="openshift-must-gather-gr99w/must-gather-vf58c" Jan 31 05:13:58 crc kubenswrapper[4827]: I0131 05:13:58.258004 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z2p9k\" (UniqueName: \"kubernetes.io/projected/95a35264-a5f4-4eca-930e-5d5504ce5b2a-kube-api-access-z2p9k\") pod \"must-gather-vf58c\" (UID: \"95a35264-a5f4-4eca-930e-5d5504ce5b2a\") " pod="openshift-must-gather-gr99w/must-gather-vf58c" Jan 31 05:13:58 crc kubenswrapper[4827]: I0131 05:13:58.384969 4827 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-gr99w/must-gather-vf58c" Jan 31 05:13:58 crc kubenswrapper[4827]: I0131 05:13:58.836627 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-gr99w/must-gather-vf58c"] Jan 31 05:13:58 crc kubenswrapper[4827]: I0131 05:13:58.866887 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-gr99w/must-gather-vf58c" event={"ID":"95a35264-a5f4-4eca-930e-5d5504ce5b2a","Type":"ContainerStarted","Data":"0e01e01dc9447113910c26482bae0aec254937ab7f26e7a6fc0fa2bf49b409c1"} Jan 31 05:13:59 crc kubenswrapper[4827]: I0131 05:13:59.876604 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-gr99w/must-gather-vf58c" event={"ID":"95a35264-a5f4-4eca-930e-5d5504ce5b2a","Type":"ContainerStarted","Data":"5fd8865039c619c8e7c340ea6f1d025c299eff428cb52570427ca1244fe75593"} Jan 31 05:13:59 crc kubenswrapper[4827]: I0131 05:13:59.877066 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-gr99w/must-gather-vf58c" event={"ID":"95a35264-a5f4-4eca-930e-5d5504ce5b2a","Type":"ContainerStarted","Data":"4d1f820821e4999d48e6946495ec2fb63be2456e662e61367a4003c165fdbe02"} Jan 31 05:13:59 crc kubenswrapper[4827]: I0131 05:13:59.892584 4827 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-gr99w/must-gather-vf58c" podStartSLOduration=1.892564149 podStartE2EDuration="1.892564149s" podCreationTimestamp="2026-01-31 05:13:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 05:13:59.889651518 +0000 UTC m=+5232.576732067" watchObservedRunningTime="2026-01-31 05:13:59.892564149 +0000 UTC m=+5232.579644588" Jan 31 05:14:02 crc kubenswrapper[4827]: I0131 05:14:02.793516 4827 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-gr99w/crc-debug-mp47g"] Jan 31 05:14:02 crc kubenswrapper[4827]: 
I0131 05:14:02.796324 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-gr99w/crc-debug-mp47g" Jan 31 05:14:02 crc kubenswrapper[4827]: I0131 05:14:02.929026 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/70b030f8-5b1a-4281-9590-53a7dce659f1-host\") pod \"crc-debug-mp47g\" (UID: \"70b030f8-5b1a-4281-9590-53a7dce659f1\") " pod="openshift-must-gather-gr99w/crc-debug-mp47g" Jan 31 05:14:02 crc kubenswrapper[4827]: I0131 05:14:02.929131 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pltnr\" (UniqueName: \"kubernetes.io/projected/70b030f8-5b1a-4281-9590-53a7dce659f1-kube-api-access-pltnr\") pod \"crc-debug-mp47g\" (UID: \"70b030f8-5b1a-4281-9590-53a7dce659f1\") " pod="openshift-must-gather-gr99w/crc-debug-mp47g" Jan 31 05:14:03 crc kubenswrapper[4827]: I0131 05:14:03.031336 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/70b030f8-5b1a-4281-9590-53a7dce659f1-host\") pod \"crc-debug-mp47g\" (UID: \"70b030f8-5b1a-4281-9590-53a7dce659f1\") " pod="openshift-must-gather-gr99w/crc-debug-mp47g" Jan 31 05:14:03 crc kubenswrapper[4827]: I0131 05:14:03.031408 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pltnr\" (UniqueName: \"kubernetes.io/projected/70b030f8-5b1a-4281-9590-53a7dce659f1-kube-api-access-pltnr\") pod \"crc-debug-mp47g\" (UID: \"70b030f8-5b1a-4281-9590-53a7dce659f1\") " pod="openshift-must-gather-gr99w/crc-debug-mp47g" Jan 31 05:14:03 crc kubenswrapper[4827]: I0131 05:14:03.031511 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/70b030f8-5b1a-4281-9590-53a7dce659f1-host\") pod \"crc-debug-mp47g\" (UID: \"70b030f8-5b1a-4281-9590-53a7dce659f1\") 
" pod="openshift-must-gather-gr99w/crc-debug-mp47g" Jan 31 05:14:03 crc kubenswrapper[4827]: I0131 05:14:03.061329 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pltnr\" (UniqueName: \"kubernetes.io/projected/70b030f8-5b1a-4281-9590-53a7dce659f1-kube-api-access-pltnr\") pod \"crc-debug-mp47g\" (UID: \"70b030f8-5b1a-4281-9590-53a7dce659f1\") " pod="openshift-must-gather-gr99w/crc-debug-mp47g" Jan 31 05:14:03 crc kubenswrapper[4827]: I0131 05:14:03.112528 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-gr99w/crc-debug-mp47g" Jan 31 05:14:03 crc kubenswrapper[4827]: W0131 05:14:03.164376 4827 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod70b030f8_5b1a_4281_9590_53a7dce659f1.slice/crio-84c9cccc02dc27a4a1eacfb46aa098bbdc53f32be7b56f1ee85bbb05e0116869 WatchSource:0}: Error finding container 84c9cccc02dc27a4a1eacfb46aa098bbdc53f32be7b56f1ee85bbb05e0116869: Status 404 returned error can't find the container with id 84c9cccc02dc27a4a1eacfb46aa098bbdc53f32be7b56f1ee85bbb05e0116869 Jan 31 05:14:03 crc kubenswrapper[4827]: I0131 05:14:03.910567 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-gr99w/crc-debug-mp47g" event={"ID":"70b030f8-5b1a-4281-9590-53a7dce659f1","Type":"ContainerStarted","Data":"9d65a7830a85c70db137c3f7fedf84bf78d8528da245eb75f78b9ab28e8ebb45"} Jan 31 05:14:03 crc kubenswrapper[4827]: I0131 05:14:03.911147 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-gr99w/crc-debug-mp47g" event={"ID":"70b030f8-5b1a-4281-9590-53a7dce659f1","Type":"ContainerStarted","Data":"84c9cccc02dc27a4a1eacfb46aa098bbdc53f32be7b56f1ee85bbb05e0116869"} Jan 31 05:14:05 crc kubenswrapper[4827]: I0131 05:14:05.110377 4827 scope.go:117] "RemoveContainer" containerID="c9ced493f5e84fdd44e10692ae1dd45481bb553a477cb0b738b8b94a7b5f3f82" Jan 31 
05:14:05 crc kubenswrapper[4827]: E0131 05:14:05.110786 4827 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jxh94_openshift-machine-config-operator(e63dbb73-e1a2-4796-83c5-2a88e55566b5)\"" pod="openshift-machine-config-operator/machine-config-daemon-jxh94" podUID="e63dbb73-e1a2-4796-83c5-2a88e55566b5" Jan 31 05:14:19 crc kubenswrapper[4827]: I0131 05:14:19.110352 4827 scope.go:117] "RemoveContainer" containerID="c9ced493f5e84fdd44e10692ae1dd45481bb553a477cb0b738b8b94a7b5f3f82" Jan 31 05:14:19 crc kubenswrapper[4827]: E0131 05:14:19.112973 4827 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jxh94_openshift-machine-config-operator(e63dbb73-e1a2-4796-83c5-2a88e55566b5)\"" pod="openshift-machine-config-operator/machine-config-daemon-jxh94" podUID="e63dbb73-e1a2-4796-83c5-2a88e55566b5" Jan 31 05:14:34 crc kubenswrapper[4827]: I0131 05:14:34.110056 4827 scope.go:117] "RemoveContainer" containerID="c9ced493f5e84fdd44e10692ae1dd45481bb553a477cb0b738b8b94a7b5f3f82" Jan 31 05:14:34 crc kubenswrapper[4827]: E0131 05:14:34.110897 4827 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jxh94_openshift-machine-config-operator(e63dbb73-e1a2-4796-83c5-2a88e55566b5)\"" pod="openshift-machine-config-operator/machine-config-daemon-jxh94" podUID="e63dbb73-e1a2-4796-83c5-2a88e55566b5" Jan 31 05:14:40 crc kubenswrapper[4827]: I0131 05:14:40.222300 4827 generic.go:334] "Generic (PLEG): container finished" podID="70b030f8-5b1a-4281-9590-53a7dce659f1" 
containerID="9d65a7830a85c70db137c3f7fedf84bf78d8528da245eb75f78b9ab28e8ebb45" exitCode=0 Jan 31 05:14:40 crc kubenswrapper[4827]: I0131 05:14:40.222398 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-gr99w/crc-debug-mp47g" event={"ID":"70b030f8-5b1a-4281-9590-53a7dce659f1","Type":"ContainerDied","Data":"9d65a7830a85c70db137c3f7fedf84bf78d8528da245eb75f78b9ab28e8ebb45"} Jan 31 05:14:41 crc kubenswrapper[4827]: I0131 05:14:41.347515 4827 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-gr99w/crc-debug-mp47g" Jan 31 05:14:41 crc kubenswrapper[4827]: I0131 05:14:41.388005 4827 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-gr99w/crc-debug-mp47g"] Jan 31 05:14:41 crc kubenswrapper[4827]: I0131 05:14:41.398634 4827 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-gr99w/crc-debug-mp47g"] Jan 31 05:14:41 crc kubenswrapper[4827]: I0131 05:14:41.431301 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pltnr\" (UniqueName: \"kubernetes.io/projected/70b030f8-5b1a-4281-9590-53a7dce659f1-kube-api-access-pltnr\") pod \"70b030f8-5b1a-4281-9590-53a7dce659f1\" (UID: \"70b030f8-5b1a-4281-9590-53a7dce659f1\") " Jan 31 05:14:41 crc kubenswrapper[4827]: I0131 05:14:41.431348 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/70b030f8-5b1a-4281-9590-53a7dce659f1-host\") pod \"70b030f8-5b1a-4281-9590-53a7dce659f1\" (UID: \"70b030f8-5b1a-4281-9590-53a7dce659f1\") " Jan 31 05:14:41 crc kubenswrapper[4827]: I0131 05:14:41.431579 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/70b030f8-5b1a-4281-9590-53a7dce659f1-host" (OuterVolumeSpecName: "host") pod "70b030f8-5b1a-4281-9590-53a7dce659f1" (UID: "70b030f8-5b1a-4281-9590-53a7dce659f1"). InnerVolumeSpecName "host". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 31 05:14:41 crc kubenswrapper[4827]: I0131 05:14:41.431854 4827 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/70b030f8-5b1a-4281-9590-53a7dce659f1-host\") on node \"crc\" DevicePath \"\"" Jan 31 05:14:41 crc kubenswrapper[4827]: I0131 05:14:41.445682 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/70b030f8-5b1a-4281-9590-53a7dce659f1-kube-api-access-pltnr" (OuterVolumeSpecName: "kube-api-access-pltnr") pod "70b030f8-5b1a-4281-9590-53a7dce659f1" (UID: "70b030f8-5b1a-4281-9590-53a7dce659f1"). InnerVolumeSpecName "kube-api-access-pltnr". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 05:14:41 crc kubenswrapper[4827]: I0131 05:14:41.535979 4827 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pltnr\" (UniqueName: \"kubernetes.io/projected/70b030f8-5b1a-4281-9590-53a7dce659f1-kube-api-access-pltnr\") on node \"crc\" DevicePath \"\"" Jan 31 05:14:42 crc kubenswrapper[4827]: I0131 05:14:42.123302 4827 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="70b030f8-5b1a-4281-9590-53a7dce659f1" path="/var/lib/kubelet/pods/70b030f8-5b1a-4281-9590-53a7dce659f1/volumes" Jan 31 05:14:42 crc kubenswrapper[4827]: I0131 05:14:42.240960 4827 scope.go:117] "RemoveContainer" containerID="9d65a7830a85c70db137c3f7fedf84bf78d8528da245eb75f78b9ab28e8ebb45" Jan 31 05:14:42 crc kubenswrapper[4827]: I0131 05:14:42.241049 4827 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-gr99w/crc-debug-mp47g" Jan 31 05:14:42 crc kubenswrapper[4827]: I0131 05:14:42.620136 4827 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-gr99w/crc-debug-lklsc"] Jan 31 05:14:42 crc kubenswrapper[4827]: E0131 05:14:42.620834 4827 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="70b030f8-5b1a-4281-9590-53a7dce659f1" containerName="container-00" Jan 31 05:14:42 crc kubenswrapper[4827]: I0131 05:14:42.620846 4827 state_mem.go:107] "Deleted CPUSet assignment" podUID="70b030f8-5b1a-4281-9590-53a7dce659f1" containerName="container-00" Jan 31 05:14:42 crc kubenswrapper[4827]: I0131 05:14:42.621034 4827 memory_manager.go:354] "RemoveStaleState removing state" podUID="70b030f8-5b1a-4281-9590-53a7dce659f1" containerName="container-00" Jan 31 05:14:42 crc kubenswrapper[4827]: I0131 05:14:42.621601 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-gr99w/crc-debug-lklsc" Jan 31 05:14:42 crc kubenswrapper[4827]: I0131 05:14:42.661388 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bxk4g\" (UniqueName: \"kubernetes.io/projected/01f240df-0544-46d2-9308-00479507adc0-kube-api-access-bxk4g\") pod \"crc-debug-lklsc\" (UID: \"01f240df-0544-46d2-9308-00479507adc0\") " pod="openshift-must-gather-gr99w/crc-debug-lklsc" Jan 31 05:14:42 crc kubenswrapper[4827]: I0131 05:14:42.661867 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/01f240df-0544-46d2-9308-00479507adc0-host\") pod \"crc-debug-lklsc\" (UID: \"01f240df-0544-46d2-9308-00479507adc0\") " pod="openshift-must-gather-gr99w/crc-debug-lklsc" Jan 31 05:14:42 crc kubenswrapper[4827]: I0131 05:14:42.763324 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bxk4g\" (UniqueName: 
\"kubernetes.io/projected/01f240df-0544-46d2-9308-00479507adc0-kube-api-access-bxk4g\") pod \"crc-debug-lklsc\" (UID: \"01f240df-0544-46d2-9308-00479507adc0\") " pod="openshift-must-gather-gr99w/crc-debug-lklsc" Jan 31 05:14:42 crc kubenswrapper[4827]: I0131 05:14:42.763796 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/01f240df-0544-46d2-9308-00479507adc0-host\") pod \"crc-debug-lklsc\" (UID: \"01f240df-0544-46d2-9308-00479507adc0\") " pod="openshift-must-gather-gr99w/crc-debug-lklsc" Jan 31 05:14:42 crc kubenswrapper[4827]: I0131 05:14:42.763971 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/01f240df-0544-46d2-9308-00479507adc0-host\") pod \"crc-debug-lklsc\" (UID: \"01f240df-0544-46d2-9308-00479507adc0\") " pod="openshift-must-gather-gr99w/crc-debug-lklsc" Jan 31 05:14:42 crc kubenswrapper[4827]: I0131 05:14:42.798760 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bxk4g\" (UniqueName: \"kubernetes.io/projected/01f240df-0544-46d2-9308-00479507adc0-kube-api-access-bxk4g\") pod \"crc-debug-lklsc\" (UID: \"01f240df-0544-46d2-9308-00479507adc0\") " pod="openshift-must-gather-gr99w/crc-debug-lklsc" Jan 31 05:14:42 crc kubenswrapper[4827]: I0131 05:14:42.937052 4827 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-gr99w/crc-debug-lklsc" Jan 31 05:14:43 crc kubenswrapper[4827]: I0131 05:14:43.256213 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-gr99w/crc-debug-lklsc" event={"ID":"01f240df-0544-46d2-9308-00479507adc0","Type":"ContainerStarted","Data":"93f41253923621f78db3ec0100b4e9de5eb5a3c6a09f53e0f13b961f35900791"} Jan 31 05:14:44 crc kubenswrapper[4827]: I0131 05:14:44.266998 4827 generic.go:334] "Generic (PLEG): container finished" podID="01f240df-0544-46d2-9308-00479507adc0" containerID="fb6de427c7a29e92fe440b6dfa0e57a186b2dbcd19a430b650c51849dcc4ff14" exitCode=0 Jan 31 05:14:44 crc kubenswrapper[4827]: I0131 05:14:44.267106 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-gr99w/crc-debug-lklsc" event={"ID":"01f240df-0544-46d2-9308-00479507adc0","Type":"ContainerDied","Data":"fb6de427c7a29e92fe440b6dfa0e57a186b2dbcd19a430b650c51849dcc4ff14"} Jan 31 05:14:45 crc kubenswrapper[4827]: I0131 05:14:45.616410 4827 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-gr99w/crc-debug-lklsc" Jan 31 05:14:45 crc kubenswrapper[4827]: I0131 05:14:45.721611 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bxk4g\" (UniqueName: \"kubernetes.io/projected/01f240df-0544-46d2-9308-00479507adc0-kube-api-access-bxk4g\") pod \"01f240df-0544-46d2-9308-00479507adc0\" (UID: \"01f240df-0544-46d2-9308-00479507adc0\") " Jan 31 05:14:45 crc kubenswrapper[4827]: I0131 05:14:45.721774 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/01f240df-0544-46d2-9308-00479507adc0-host\") pod \"01f240df-0544-46d2-9308-00479507adc0\" (UID: \"01f240df-0544-46d2-9308-00479507adc0\") " Jan 31 05:14:45 crc kubenswrapper[4827]: I0131 05:14:45.722442 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/01f240df-0544-46d2-9308-00479507adc0-host" (OuterVolumeSpecName: "host") pod "01f240df-0544-46d2-9308-00479507adc0" (UID: "01f240df-0544-46d2-9308-00479507adc0"). InnerVolumeSpecName "host". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 31 05:14:45 crc kubenswrapper[4827]: I0131 05:14:45.730450 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/01f240df-0544-46d2-9308-00479507adc0-kube-api-access-bxk4g" (OuterVolumeSpecName: "kube-api-access-bxk4g") pod "01f240df-0544-46d2-9308-00479507adc0" (UID: "01f240df-0544-46d2-9308-00479507adc0"). InnerVolumeSpecName "kube-api-access-bxk4g". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 05:14:45 crc kubenswrapper[4827]: I0131 05:14:45.823705 4827 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bxk4g\" (UniqueName: \"kubernetes.io/projected/01f240df-0544-46d2-9308-00479507adc0-kube-api-access-bxk4g\") on node \"crc\" DevicePath \"\"" Jan 31 05:14:45 crc kubenswrapper[4827]: I0131 05:14:45.823734 4827 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/01f240df-0544-46d2-9308-00479507adc0-host\") on node \"crc\" DevicePath \"\"" Jan 31 05:14:46 crc kubenswrapper[4827]: I0131 05:14:46.111869 4827 scope.go:117] "RemoveContainer" containerID="c9ced493f5e84fdd44e10692ae1dd45481bb553a477cb0b738b8b94a7b5f3f82" Jan 31 05:14:46 crc kubenswrapper[4827]: E0131 05:14:46.112185 4827 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jxh94_openshift-machine-config-operator(e63dbb73-e1a2-4796-83c5-2a88e55566b5)\"" pod="openshift-machine-config-operator/machine-config-daemon-jxh94" podUID="e63dbb73-e1a2-4796-83c5-2a88e55566b5" Jan 31 05:14:46 crc kubenswrapper[4827]: I0131 05:14:46.290780 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-gr99w/crc-debug-lklsc" event={"ID":"01f240df-0544-46d2-9308-00479507adc0","Type":"ContainerDied","Data":"93f41253923621f78db3ec0100b4e9de5eb5a3c6a09f53e0f13b961f35900791"} Jan 31 05:14:46 crc kubenswrapper[4827]: I0131 05:14:46.291208 4827 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="93f41253923621f78db3ec0100b4e9de5eb5a3c6a09f53e0f13b961f35900791" Jan 31 05:14:46 crc kubenswrapper[4827]: I0131 05:14:46.290956 4827 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-gr99w/crc-debug-lklsc" Jan 31 05:14:46 crc kubenswrapper[4827]: I0131 05:14:46.379129 4827 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-gr99w/crc-debug-lklsc"] Jan 31 05:14:46 crc kubenswrapper[4827]: I0131 05:14:46.391699 4827 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-gr99w/crc-debug-lklsc"] Jan 31 05:14:47 crc kubenswrapper[4827]: I0131 05:14:47.645972 4827 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-gr99w/crc-debug-5qb5p"] Jan 31 05:14:47 crc kubenswrapper[4827]: E0131 05:14:47.646350 4827 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="01f240df-0544-46d2-9308-00479507adc0" containerName="container-00" Jan 31 05:14:47 crc kubenswrapper[4827]: I0131 05:14:47.646362 4827 state_mem.go:107] "Deleted CPUSet assignment" podUID="01f240df-0544-46d2-9308-00479507adc0" containerName="container-00" Jan 31 05:14:47 crc kubenswrapper[4827]: I0131 05:14:47.646541 4827 memory_manager.go:354] "RemoveStaleState removing state" podUID="01f240df-0544-46d2-9308-00479507adc0" containerName="container-00" Jan 31 05:14:47 crc kubenswrapper[4827]: I0131 05:14:47.647178 4827 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-gr99w/crc-debug-5qb5p" Jan 31 05:14:47 crc kubenswrapper[4827]: I0131 05:14:47.765870 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-46qfw\" (UniqueName: \"kubernetes.io/projected/c4db3c3c-4e86-4020-8a8f-275f8ac562bb-kube-api-access-46qfw\") pod \"crc-debug-5qb5p\" (UID: \"c4db3c3c-4e86-4020-8a8f-275f8ac562bb\") " pod="openshift-must-gather-gr99w/crc-debug-5qb5p" Jan 31 05:14:47 crc kubenswrapper[4827]: I0131 05:14:47.766066 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/c4db3c3c-4e86-4020-8a8f-275f8ac562bb-host\") pod \"crc-debug-5qb5p\" (UID: \"c4db3c3c-4e86-4020-8a8f-275f8ac562bb\") " pod="openshift-must-gather-gr99w/crc-debug-5qb5p" Jan 31 05:14:47 crc kubenswrapper[4827]: I0131 05:14:47.868236 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/c4db3c3c-4e86-4020-8a8f-275f8ac562bb-host\") pod \"crc-debug-5qb5p\" (UID: \"c4db3c3c-4e86-4020-8a8f-275f8ac562bb\") " pod="openshift-must-gather-gr99w/crc-debug-5qb5p" Jan 31 05:14:47 crc kubenswrapper[4827]: I0131 05:14:47.868366 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/c4db3c3c-4e86-4020-8a8f-275f8ac562bb-host\") pod \"crc-debug-5qb5p\" (UID: \"c4db3c3c-4e86-4020-8a8f-275f8ac562bb\") " pod="openshift-must-gather-gr99w/crc-debug-5qb5p" Jan 31 05:14:47 crc kubenswrapper[4827]: I0131 05:14:47.868697 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-46qfw\" (UniqueName: \"kubernetes.io/projected/c4db3c3c-4e86-4020-8a8f-275f8ac562bb-kube-api-access-46qfw\") pod \"crc-debug-5qb5p\" (UID: \"c4db3c3c-4e86-4020-8a8f-275f8ac562bb\") " pod="openshift-must-gather-gr99w/crc-debug-5qb5p" Jan 31 05:14:47 crc 
kubenswrapper[4827]: I0131 05:14:47.887352 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-46qfw\" (UniqueName: \"kubernetes.io/projected/c4db3c3c-4e86-4020-8a8f-275f8ac562bb-kube-api-access-46qfw\") pod \"crc-debug-5qb5p\" (UID: \"c4db3c3c-4e86-4020-8a8f-275f8ac562bb\") " pod="openshift-must-gather-gr99w/crc-debug-5qb5p" Jan 31 05:14:47 crc kubenswrapper[4827]: I0131 05:14:47.967111 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-gr99w/crc-debug-5qb5p" Jan 31 05:14:47 crc kubenswrapper[4827]: W0131 05:14:47.992368 4827 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc4db3c3c_4e86_4020_8a8f_275f8ac562bb.slice/crio-c77524c9fc82e6c103ed4aa1b8b65fd5ff34468597104f0501a249440f9a860b WatchSource:0}: Error finding container c77524c9fc82e6c103ed4aa1b8b65fd5ff34468597104f0501a249440f9a860b: Status 404 returned error can't find the container with id c77524c9fc82e6c103ed4aa1b8b65fd5ff34468597104f0501a249440f9a860b Jan 31 05:14:48 crc kubenswrapper[4827]: I0131 05:14:48.122405 4827 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="01f240df-0544-46d2-9308-00479507adc0" path="/var/lib/kubelet/pods/01f240df-0544-46d2-9308-00479507adc0/volumes" Jan 31 05:14:48 crc kubenswrapper[4827]: I0131 05:14:48.311751 4827 generic.go:334] "Generic (PLEG): container finished" podID="c4db3c3c-4e86-4020-8a8f-275f8ac562bb" containerID="74953e463fc7d7c831c66ed5c13e6acb571009aa190591b71012707b3a210076" exitCode=0 Jan 31 05:14:48 crc kubenswrapper[4827]: I0131 05:14:48.311819 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-gr99w/crc-debug-5qb5p" event={"ID":"c4db3c3c-4e86-4020-8a8f-275f8ac562bb","Type":"ContainerDied","Data":"74953e463fc7d7c831c66ed5c13e6acb571009aa190591b71012707b3a210076"} Jan 31 05:14:48 crc kubenswrapper[4827]: I0131 05:14:48.311901 4827 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-gr99w/crc-debug-5qb5p" event={"ID":"c4db3c3c-4e86-4020-8a8f-275f8ac562bb","Type":"ContainerStarted","Data":"c77524c9fc82e6c103ed4aa1b8b65fd5ff34468597104f0501a249440f9a860b"} Jan 31 05:14:48 crc kubenswrapper[4827]: I0131 05:14:48.355424 4827 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-gr99w/crc-debug-5qb5p"] Jan 31 05:14:48 crc kubenswrapper[4827]: I0131 05:14:48.362793 4827 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-gr99w/crc-debug-5qb5p"] Jan 31 05:14:49 crc kubenswrapper[4827]: I0131 05:14:49.407902 4827 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-gr99w/crc-debug-5qb5p" Jan 31 05:14:49 crc kubenswrapper[4827]: I0131 05:14:49.495602 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/c4db3c3c-4e86-4020-8a8f-275f8ac562bb-host\") pod \"c4db3c3c-4e86-4020-8a8f-275f8ac562bb\" (UID: \"c4db3c3c-4e86-4020-8a8f-275f8ac562bb\") " Jan 31 05:14:49 crc kubenswrapper[4827]: I0131 05:14:49.495694 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-46qfw\" (UniqueName: \"kubernetes.io/projected/c4db3c3c-4e86-4020-8a8f-275f8ac562bb-kube-api-access-46qfw\") pod \"c4db3c3c-4e86-4020-8a8f-275f8ac562bb\" (UID: \"c4db3c3c-4e86-4020-8a8f-275f8ac562bb\") " Jan 31 05:14:49 crc kubenswrapper[4827]: I0131 05:14:49.495739 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c4db3c3c-4e86-4020-8a8f-275f8ac562bb-host" (OuterVolumeSpecName: "host") pod "c4db3c3c-4e86-4020-8a8f-275f8ac562bb" (UID: "c4db3c3c-4e86-4020-8a8f-275f8ac562bb"). InnerVolumeSpecName "host". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 31 05:14:49 crc kubenswrapper[4827]: I0131 05:14:49.496297 4827 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/c4db3c3c-4e86-4020-8a8f-275f8ac562bb-host\") on node \"crc\" DevicePath \"\"" Jan 31 05:14:49 crc kubenswrapper[4827]: I0131 05:14:49.501951 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c4db3c3c-4e86-4020-8a8f-275f8ac562bb-kube-api-access-46qfw" (OuterVolumeSpecName: "kube-api-access-46qfw") pod "c4db3c3c-4e86-4020-8a8f-275f8ac562bb" (UID: "c4db3c3c-4e86-4020-8a8f-275f8ac562bb"). InnerVolumeSpecName "kube-api-access-46qfw". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 05:14:49 crc kubenswrapper[4827]: I0131 05:14:49.598576 4827 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-46qfw\" (UniqueName: \"kubernetes.io/projected/c4db3c3c-4e86-4020-8a8f-275f8ac562bb-kube-api-access-46qfw\") on node \"crc\" DevicePath \"\"" Jan 31 05:14:50 crc kubenswrapper[4827]: I0131 05:14:50.129205 4827 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c4db3c3c-4e86-4020-8a8f-275f8ac562bb" path="/var/lib/kubelet/pods/c4db3c3c-4e86-4020-8a8f-275f8ac562bb/volumes" Jan 31 05:14:50 crc kubenswrapper[4827]: I0131 05:14:50.331746 4827 scope.go:117] "RemoveContainer" containerID="74953e463fc7d7c831c66ed5c13e6acb571009aa190591b71012707b3a210076" Jan 31 05:14:50 crc kubenswrapper[4827]: I0131 05:14:50.331890 4827 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-gr99w/crc-debug-5qb5p" Jan 31 05:14:58 crc kubenswrapper[4827]: I0131 05:14:58.115230 4827 scope.go:117] "RemoveContainer" containerID="c9ced493f5e84fdd44e10692ae1dd45481bb553a477cb0b738b8b94a7b5f3f82" Jan 31 05:14:58 crc kubenswrapper[4827]: E0131 05:14:58.116137 4827 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jxh94_openshift-machine-config-operator(e63dbb73-e1a2-4796-83c5-2a88e55566b5)\"" pod="openshift-machine-config-operator/machine-config-daemon-jxh94" podUID="e63dbb73-e1a2-4796-83c5-2a88e55566b5" Jan 31 05:15:00 crc kubenswrapper[4827]: I0131 05:15:00.156458 4827 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29497275-6vlql"] Jan 31 05:15:00 crc kubenswrapper[4827]: E0131 05:15:00.157550 4827 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c4db3c3c-4e86-4020-8a8f-275f8ac562bb" containerName="container-00" Jan 31 05:15:00 crc kubenswrapper[4827]: I0131 05:15:00.157568 4827 state_mem.go:107] "Deleted CPUSet assignment" podUID="c4db3c3c-4e86-4020-8a8f-275f8ac562bb" containerName="container-00" Jan 31 05:15:00 crc kubenswrapper[4827]: I0131 05:15:00.157818 4827 memory_manager.go:354] "RemoveStaleState removing state" podUID="c4db3c3c-4e86-4020-8a8f-275f8ac562bb" containerName="container-00" Jan 31 05:15:00 crc kubenswrapper[4827]: I0131 05:15:00.158635 4827 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29497275-6vlql" Jan 31 05:15:00 crc kubenswrapper[4827]: I0131 05:15:00.161827 4827 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Jan 31 05:15:00 crc kubenswrapper[4827]: I0131 05:15:00.162769 4827 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Jan 31 05:15:00 crc kubenswrapper[4827]: I0131 05:15:00.180647 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29497275-6vlql"] Jan 31 05:15:00 crc kubenswrapper[4827]: I0131 05:15:00.205822 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hzxb8\" (UniqueName: \"kubernetes.io/projected/ab8f2153-531f-4737-ae78-82ca24e22b28-kube-api-access-hzxb8\") pod \"collect-profiles-29497275-6vlql\" (UID: \"ab8f2153-531f-4737-ae78-82ca24e22b28\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29497275-6vlql" Jan 31 05:15:00 crc kubenswrapper[4827]: I0131 05:15:00.206318 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/ab8f2153-531f-4737-ae78-82ca24e22b28-secret-volume\") pod \"collect-profiles-29497275-6vlql\" (UID: \"ab8f2153-531f-4737-ae78-82ca24e22b28\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29497275-6vlql" Jan 31 05:15:00 crc kubenswrapper[4827]: I0131 05:15:00.206457 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ab8f2153-531f-4737-ae78-82ca24e22b28-config-volume\") pod \"collect-profiles-29497275-6vlql\" (UID: \"ab8f2153-531f-4737-ae78-82ca24e22b28\") " 
pod="openshift-operator-lifecycle-manager/collect-profiles-29497275-6vlql" Jan 31 05:15:00 crc kubenswrapper[4827]: I0131 05:15:00.308684 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/ab8f2153-531f-4737-ae78-82ca24e22b28-secret-volume\") pod \"collect-profiles-29497275-6vlql\" (UID: \"ab8f2153-531f-4737-ae78-82ca24e22b28\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29497275-6vlql" Jan 31 05:15:00 crc kubenswrapper[4827]: I0131 05:15:00.308750 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ab8f2153-531f-4737-ae78-82ca24e22b28-config-volume\") pod \"collect-profiles-29497275-6vlql\" (UID: \"ab8f2153-531f-4737-ae78-82ca24e22b28\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29497275-6vlql" Jan 31 05:15:00 crc kubenswrapper[4827]: I0131 05:15:00.308868 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hzxb8\" (UniqueName: \"kubernetes.io/projected/ab8f2153-531f-4737-ae78-82ca24e22b28-kube-api-access-hzxb8\") pod \"collect-profiles-29497275-6vlql\" (UID: \"ab8f2153-531f-4737-ae78-82ca24e22b28\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29497275-6vlql" Jan 31 05:15:00 crc kubenswrapper[4827]: I0131 05:15:00.310017 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ab8f2153-531f-4737-ae78-82ca24e22b28-config-volume\") pod \"collect-profiles-29497275-6vlql\" (UID: \"ab8f2153-531f-4737-ae78-82ca24e22b28\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29497275-6vlql" Jan 31 05:15:00 crc kubenswrapper[4827]: I0131 05:15:00.327727 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: 
\"kubernetes.io/secret/ab8f2153-531f-4737-ae78-82ca24e22b28-secret-volume\") pod \"collect-profiles-29497275-6vlql\" (UID: \"ab8f2153-531f-4737-ae78-82ca24e22b28\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29497275-6vlql" Jan 31 05:15:00 crc kubenswrapper[4827]: I0131 05:15:00.329929 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hzxb8\" (UniqueName: \"kubernetes.io/projected/ab8f2153-531f-4737-ae78-82ca24e22b28-kube-api-access-hzxb8\") pod \"collect-profiles-29497275-6vlql\" (UID: \"ab8f2153-531f-4737-ae78-82ca24e22b28\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29497275-6vlql" Jan 31 05:15:00 crc kubenswrapper[4827]: I0131 05:15:00.479340 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29497275-6vlql" Jan 31 05:15:00 crc kubenswrapper[4827]: I0131 05:15:00.927181 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29497275-6vlql"] Jan 31 05:15:01 crc kubenswrapper[4827]: I0131 05:15:01.430229 4827 generic.go:334] "Generic (PLEG): container finished" podID="ab8f2153-531f-4737-ae78-82ca24e22b28" containerID="bb7d6e6a9a22317b236da1ed727456794437d02ee39485d1f701123237291a15" exitCode=0 Jan 31 05:15:01 crc kubenswrapper[4827]: I0131 05:15:01.430272 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29497275-6vlql" event={"ID":"ab8f2153-531f-4737-ae78-82ca24e22b28","Type":"ContainerDied","Data":"bb7d6e6a9a22317b236da1ed727456794437d02ee39485d1f701123237291a15"} Jan 31 05:15:01 crc kubenswrapper[4827]: I0131 05:15:01.430297 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29497275-6vlql" 
event={"ID":"ab8f2153-531f-4737-ae78-82ca24e22b28","Type":"ContainerStarted","Data":"f8e54651c46f83e533038e291f898f2c55f00de0c0544931d95e82f9af9ece98"} Jan 31 05:15:02 crc kubenswrapper[4827]: I0131 05:15:02.810317 4827 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29497275-6vlql" Jan 31 05:15:02 crc kubenswrapper[4827]: I0131 05:15:02.856184 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ab8f2153-531f-4737-ae78-82ca24e22b28-config-volume\") pod \"ab8f2153-531f-4737-ae78-82ca24e22b28\" (UID: \"ab8f2153-531f-4737-ae78-82ca24e22b28\") " Jan 31 05:15:02 crc kubenswrapper[4827]: I0131 05:15:02.856421 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/ab8f2153-531f-4737-ae78-82ca24e22b28-secret-volume\") pod \"ab8f2153-531f-4737-ae78-82ca24e22b28\" (UID: \"ab8f2153-531f-4737-ae78-82ca24e22b28\") " Jan 31 05:15:02 crc kubenswrapper[4827]: I0131 05:15:02.856465 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hzxb8\" (UniqueName: \"kubernetes.io/projected/ab8f2153-531f-4737-ae78-82ca24e22b28-kube-api-access-hzxb8\") pod \"ab8f2153-531f-4737-ae78-82ca24e22b28\" (UID: \"ab8f2153-531f-4737-ae78-82ca24e22b28\") " Jan 31 05:15:02 crc kubenswrapper[4827]: I0131 05:15:02.857073 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ab8f2153-531f-4737-ae78-82ca24e22b28-config-volume" (OuterVolumeSpecName: "config-volume") pod "ab8f2153-531f-4737-ae78-82ca24e22b28" (UID: "ab8f2153-531f-4737-ae78-82ca24e22b28"). InnerVolumeSpecName "config-volume". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 05:15:02 crc kubenswrapper[4827]: I0131 05:15:02.863014 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ab8f2153-531f-4737-ae78-82ca24e22b28-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "ab8f2153-531f-4737-ae78-82ca24e22b28" (UID: "ab8f2153-531f-4737-ae78-82ca24e22b28"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 05:15:02 crc kubenswrapper[4827]: I0131 05:15:02.865034 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ab8f2153-531f-4737-ae78-82ca24e22b28-kube-api-access-hzxb8" (OuterVolumeSpecName: "kube-api-access-hzxb8") pod "ab8f2153-531f-4737-ae78-82ca24e22b28" (UID: "ab8f2153-531f-4737-ae78-82ca24e22b28"). InnerVolumeSpecName "kube-api-access-hzxb8". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 05:15:02 crc kubenswrapper[4827]: I0131 05:15:02.958318 4827 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/ab8f2153-531f-4737-ae78-82ca24e22b28-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 31 05:15:02 crc kubenswrapper[4827]: I0131 05:15:02.958347 4827 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hzxb8\" (UniqueName: \"kubernetes.io/projected/ab8f2153-531f-4737-ae78-82ca24e22b28-kube-api-access-hzxb8\") on node \"crc\" DevicePath \"\"" Jan 31 05:15:02 crc kubenswrapper[4827]: I0131 05:15:02.958356 4827 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ab8f2153-531f-4737-ae78-82ca24e22b28-config-volume\") on node \"crc\" DevicePath \"\"" Jan 31 05:15:03 crc kubenswrapper[4827]: I0131 05:15:03.447760 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29497275-6vlql" 
event={"ID":"ab8f2153-531f-4737-ae78-82ca24e22b28","Type":"ContainerDied","Data":"f8e54651c46f83e533038e291f898f2c55f00de0c0544931d95e82f9af9ece98"} Jan 31 05:15:03 crc kubenswrapper[4827]: I0131 05:15:03.448087 4827 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f8e54651c46f83e533038e291f898f2c55f00de0c0544931d95e82f9af9ece98" Jan 31 05:15:03 crc kubenswrapper[4827]: I0131 05:15:03.447798 4827 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29497275-6vlql" Jan 31 05:15:03 crc kubenswrapper[4827]: I0131 05:15:03.880695 4827 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29497230-4bmhm"] Jan 31 05:15:03 crc kubenswrapper[4827]: I0131 05:15:03.889053 4827 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29497230-4bmhm"] Jan 31 05:15:04 crc kubenswrapper[4827]: I0131 05:15:04.123211 4827 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ae9c3544-e5bd-47ab-8cd7-4aadd8e2e881" path="/var/lib/kubelet/pods/ae9c3544-e5bd-47ab-8cd7-4aadd8e2e881/volumes" Jan 31 05:15:04 crc kubenswrapper[4827]: I0131 05:15:04.215954 4827 scope.go:117] "RemoveContainer" containerID="ca399045f31bac40cc11169cad8734344a05eab29f6c62a4bfd12f4184be8a01" Jan 31 05:15:12 crc kubenswrapper[4827]: I0131 05:15:12.109773 4827 scope.go:117] "RemoveContainer" containerID="c9ced493f5e84fdd44e10692ae1dd45481bb553a477cb0b738b8b94a7b5f3f82" Jan 31 05:15:12 crc kubenswrapper[4827]: E0131 05:15:12.110515 4827 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jxh94_openshift-machine-config-operator(e63dbb73-e1a2-4796-83c5-2a88e55566b5)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-jxh94" podUID="e63dbb73-e1a2-4796-83c5-2a88e55566b5" Jan 31 05:15:26 crc kubenswrapper[4827]: I0131 05:15:26.110543 4827 scope.go:117] "RemoveContainer" containerID="c9ced493f5e84fdd44e10692ae1dd45481bb553a477cb0b738b8b94a7b5f3f82" Jan 31 05:15:26 crc kubenswrapper[4827]: I0131 05:15:26.707731 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-jxh94" event={"ID":"e63dbb73-e1a2-4796-83c5-2a88e55566b5","Type":"ContainerStarted","Data":"b941b80c4a232140971b6ffe079326d4736817375d3ac0d921c89321be0c73e2"} Jan 31 05:15:35 crc kubenswrapper[4827]: I0131 05:15:35.288869 4827 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-api-74bf46887d-nb5df_d5530571-a0ae-4835-809e-0dab61573e8c/barbican-api/0.log" Jan 31 05:15:35 crc kubenswrapper[4827]: I0131 05:15:35.508784 4827 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-api-74bf46887d-nb5df_d5530571-a0ae-4835-809e-0dab61573e8c/barbican-api-log/0.log" Jan 31 05:15:35 crc kubenswrapper[4827]: I0131 05:15:35.523612 4827 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-keystone-listener-649fbbf9d6-vkg84_ac1aadec-fcb7-428e-9020-e424c393f018/barbican-keystone-listener/0.log" Jan 31 05:15:35 crc kubenswrapper[4827]: I0131 05:15:35.706591 4827 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-keystone-listener-649fbbf9d6-vkg84_ac1aadec-fcb7-428e-9020-e424c393f018/barbican-keystone-listener-log/0.log" Jan 31 05:15:35 crc kubenswrapper[4827]: I0131 05:15:35.772041 4827 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-worker-7c79fbcb95-qrncz_e83ad5c9-edbb-4764-b932-52810f0f57ac/barbican-worker/0.log" Jan 31 05:15:35 crc kubenswrapper[4827]: I0131 05:15:35.779472 4827 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_barbican-worker-7c79fbcb95-qrncz_e83ad5c9-edbb-4764-b932-52810f0f57ac/barbican-worker-log/0.log" Jan 31 05:15:35 crc kubenswrapper[4827]: I0131 05:15:35.968781 4827 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_bootstrap-edpm-deployment-openstack-edpm-ipam-gr5pm_fde30814-9dd3-4c47-b7b2-cda3221d27e6/bootstrap-edpm-deployment-openstack-edpm-ipam/0.log" Jan 31 05:15:35 crc kubenswrapper[4827]: I0131 05:15:35.971906 4827 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_8c177172-a833-49fa-8448-419a0891c926/ceilometer-central-agent/0.log" Jan 31 05:15:36 crc kubenswrapper[4827]: I0131 05:15:36.122221 4827 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_8c177172-a833-49fa-8448-419a0891c926/ceilometer-notification-agent/0.log" Jan 31 05:15:36 crc kubenswrapper[4827]: I0131 05:15:36.161274 4827 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_8c177172-a833-49fa-8448-419a0891c926/sg-core/0.log" Jan 31 05:15:36 crc kubenswrapper[4827]: I0131 05:15:36.197174 4827 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_8c177172-a833-49fa-8448-419a0891c926/proxy-httpd/0.log" Jan 31 05:15:36 crc kubenswrapper[4827]: I0131 05:15:36.308428 4827 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceph-client-edpm-deployment-openstack-edpm-ipam-dr2nb_e8b7f56f-cdfd-483b-8759-e869bedfd461/ceph-client-edpm-deployment-openstack-edpm-ipam/0.log" Jan 31 05:15:36 crc kubenswrapper[4827]: I0131 05:15:36.400458 4827 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-vnjj9_d1104797-b1ab-4987-9d02-b19197f94eb5/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam/0.log" Jan 31 05:15:36 crc kubenswrapper[4827]: I0131 05:15:36.742113 4827 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_cinder-api-0_5dc884e0-8eda-432c-a19f-2f1f4202ed2f/cinder-api/0.log" Jan 31 05:15:36 crc kubenswrapper[4827]: I0131 05:15:36.972063 4827 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-backup-0_ceec568e-c3e2-4b44-b2f9-b90d9334667f/probe/0.log" Jan 31 05:15:37 crc kubenswrapper[4827]: I0131 05:15:37.264058 4827 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-scheduler-0_d3a82636-a800-49e2-b3f7-f253d069722c/cinder-scheduler/0.log" Jan 31 05:15:37 crc kubenswrapper[4827]: I0131 05:15:37.433284 4827 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-scheduler-0_d3a82636-a800-49e2-b3f7-f253d069722c/probe/0.log" Jan 31 05:15:37 crc kubenswrapper[4827]: I0131 05:15:37.473589 4827 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-api-0_5dc884e0-8eda-432c-a19f-2f1f4202ed2f/cinder-api-log/0.log" Jan 31 05:15:37 crc kubenswrapper[4827]: I0131 05:15:37.719860 4827 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-volume-volume1-0_7c359669-c94b-42d4-9b63-de6d4812e598/probe/0.log" Jan 31 05:15:37 crc kubenswrapper[4827]: I0131 05:15:37.955752 4827 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_configure-network-edpm-deployment-openstack-edpm-ipam-dn499_746484a7-e256-43ec-8a25-6d4ef96aa9e0/configure-network-edpm-deployment-openstack-edpm-ipam/0.log" Jan 31 05:15:38 crc kubenswrapper[4827]: I0131 05:15:38.105226 4827 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-backup-0_ceec568e-c3e2-4b44-b2f9-b90d9334667f/cinder-backup/0.log" Jan 31 05:15:38 crc kubenswrapper[4827]: I0131 05:15:38.192809 4827 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_configure-os-edpm-deployment-openstack-edpm-ipam-lnmq8_5acc6b04-9b48-4dfa-9fbb-9f5520fe7cea/configure-os-edpm-deployment-openstack-edpm-ipam/0.log" Jan 31 05:15:38 crc kubenswrapper[4827]: I0131 05:15:38.323788 
4827 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-76b5fdb995-b4n6f_8921f642-11e6-4efc-9441-9f3ee68ed074/init/0.log" Jan 31 05:15:38 crc kubenswrapper[4827]: I0131 05:15:38.582960 4827 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-76b5fdb995-b4n6f_8921f642-11e6-4efc-9441-9f3ee68ed074/init/0.log" Jan 31 05:15:38 crc kubenswrapper[4827]: I0131 05:15:38.690031 4827 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-76b5fdb995-b4n6f_8921f642-11e6-4efc-9441-9f3ee68ed074/dnsmasq-dns/0.log" Jan 31 05:15:38 crc kubenswrapper[4827]: I0131 05:15:38.818960 4827 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-external-api-0_c51b9a5d-c011-422c-8b29-39a9d4355659/glance-log/0.log" Jan 31 05:15:38 crc kubenswrapper[4827]: I0131 05:15:38.835001 4827 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-external-api-0_c51b9a5d-c011-422c-8b29-39a9d4355659/glance-httpd/0.log" Jan 31 05:15:38 crc kubenswrapper[4827]: I0131 05:15:38.967052 4827 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-internal-api-0_07de9274-858d-4f45-bb4e-f064e62260c8/glance-httpd/0.log" Jan 31 05:15:39 crc kubenswrapper[4827]: I0131 05:15:39.028145 4827 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-internal-api-0_07de9274-858d-4f45-bb4e-f064e62260c8/glance-log/0.log" Jan 31 05:15:39 crc kubenswrapper[4827]: I0131 05:15:39.319513 4827 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_horizon-5dd8c8746d-r25sr_19b64bcf-afad-4b01-8d57-1c1b56bb170f/horizon/0.log" Jan 31 05:15:39 crc kubenswrapper[4827]: I0131 05:15:39.418619 4827 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_install-certs-edpm-deployment-openstack-edpm-ipam-967vq_d8ff53f0-3c1e-4bdf-8c0b-e3aca738f080/install-certs-edpm-deployment-openstack-edpm-ipam/0.log" Jan 31 05:15:39 
crc kubenswrapper[4827]: I0131 05:15:39.596439 4827 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_horizon-5dd8c8746d-r25sr_19b64bcf-afad-4b01-8d57-1c1b56bb170f/horizon-log/0.log" Jan 31 05:15:39 crc kubenswrapper[4827]: I0131 05:15:39.638430 4827 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_install-os-edpm-deployment-openstack-edpm-ipam-x9cnd_83978973-9bf3-4c9a-9689-d47fd0a7aac4/install-os-edpm-deployment-openstack-edpm-ipam/0.log" Jan 31 05:15:39 crc kubenswrapper[4827]: I0131 05:15:39.879477 4827 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_keystone-cron-29497261-5x5xw_40c636c4-b73c-42b5-87f0-dd2d138bf0c1/keystone-cron/0.log" Jan 31 05:15:40 crc kubenswrapper[4827]: I0131 05:15:40.050705 4827 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_kube-state-metrics-0_41b5f4d3-c8f6-44ee-8edf-dc49c5b9698a/kube-state-metrics/0.log" Jan 31 05:15:40 crc kubenswrapper[4827]: I0131 05:15:40.233042 4827 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_libvirt-edpm-deployment-openstack-edpm-ipam-hmq9l_7ae3b57a-e575-49ac-b40f-276d244a1855/libvirt-edpm-deployment-openstack-edpm-ipam/0.log" Jan 31 05:15:40 crc kubenswrapper[4827]: I0131 05:15:40.469240 4827 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_manila-api-0_1a40e96a-328e-449e-b6a8-39c6f6ed0aa2/manila-api-log/0.log" Jan 31 05:15:40 crc kubenswrapper[4827]: I0131 05:15:40.672816 4827 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_manila-api-0_1a40e96a-328e-449e-b6a8-39c6f6ed0aa2/manila-api/0.log" Jan 31 05:15:40 crc kubenswrapper[4827]: I0131 05:15:40.690127 4827 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_keystone-b7768667c-2kxv5_325f82ae-928b-44ea-bef3-e567002d4814/keystone-api/0.log" Jan 31 05:15:40 crc kubenswrapper[4827]: I0131 05:15:40.738003 4827 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_manila-scheduler-0_5cdfe2e4-b566-4369-9354-42494e23eb46/probe/0.log" Jan 31 05:15:40 crc kubenswrapper[4827]: I0131 05:15:40.849390 4827 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_manila-scheduler-0_5cdfe2e4-b566-4369-9354-42494e23eb46/manila-scheduler/0.log" Jan 31 05:15:41 crc kubenswrapper[4827]: I0131 05:15:41.535462 4827 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_manila-share-share1-0_79a2680f-4176-4fe5-9952-c6e74f2c57d6/probe/0.log" Jan 31 05:15:41 crc kubenswrapper[4827]: I0131 05:15:41.574749 4827 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_manila-share-share1-0_79a2680f-4176-4fe5-9952-c6e74f2c57d6/manila-share/0.log" Jan 31 05:15:42 crc kubenswrapper[4827]: I0131 05:15:42.053460 4827 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-7969d585-whgv9_aae071c1-75f9-40e2-aa1a-69aa0afba58d/neutron-httpd/0.log" Jan 31 05:15:42 crc kubenswrapper[4827]: I0131 05:15:42.098362 4827 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-7969d585-whgv9_aae071c1-75f9-40e2-aa1a-69aa0afba58d/neutron-api/0.log" Jan 31 05:15:42 crc kubenswrapper[4827]: I0131 05:15:42.253315 4827 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-metadata-edpm-deployment-openstack-edpm-ipam-ttdms_4aeddc0e-5ddf-42a0-8c89-e840171e5c7b/neutron-metadata-edpm-deployment-openstack-edpm-ipam/0.log" Jan 31 05:15:43 crc kubenswrapper[4827]: I0131 05:15:43.693070 4827 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell0-conductor-0_1af38ab0-fbfd-463c-8349-39b3ca0d7f9e/nova-cell0-conductor-conductor/0.log" Jan 31 05:15:43 crc kubenswrapper[4827]: I0131 05:15:43.789417 4827 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-api-0_461e6e42-8412-4ab9-aa9a-02b27965961d/nova-api-api/0.log" Jan 31 05:15:43 crc kubenswrapper[4827]: I0131 05:15:43.800027 4827 log.go:25] "Finished 
parsing log file" path="/var/log/pods/openstack_nova-api-0_461e6e42-8412-4ab9-aa9a-02b27965961d/nova-api-log/0.log" Jan 31 05:15:44 crc kubenswrapper[4827]: I0131 05:15:44.210019 4827 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell1-conductor-0_70377532-a0c3-4b3f-abda-b712b33df5e5/nova-cell1-conductor-conductor/0.log" Jan 31 05:15:44 crc kubenswrapper[4827]: I0131 05:15:44.323802 4827 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell1-novncproxy-0_61a51a6c-ddc6-4da3-8fcf-4ddb8e50fc14/nova-cell1-novncproxy-novncproxy/0.log" Jan 31 05:15:44 crc kubenswrapper[4827]: I0131 05:15:44.523575 4827 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-rb9w9_84a5bf6e-ec6d-457f-a76f-5566a8f4f2f8/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam/0.log" Jan 31 05:15:44 crc kubenswrapper[4827]: I0131 05:15:44.689934 4827 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-metadata-0_2bfcb9f2-5385-4257-8277-45f3c3af8582/nova-metadata-log/0.log" Jan 31 05:15:45 crc kubenswrapper[4827]: I0131 05:15:45.197440 4827 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_a0d3d60f-16f2-469d-8314-9055bb91a9ce/mysql-bootstrap/0.log" Jan 31 05:15:45 crc kubenswrapper[4827]: I0131 05:15:45.211642 4827 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-scheduler-0_a1d91b03-3afc-4a12-a489-d1b97ec8d5fe/nova-scheduler-scheduler/0.log" Jan 31 05:15:45 crc kubenswrapper[4827]: I0131 05:15:45.388932 4827 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_a0d3d60f-16f2-469d-8314-9055bb91a9ce/mysql-bootstrap/0.log" Jan 31 05:15:45 crc kubenswrapper[4827]: I0131 05:15:45.452427 4827 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_a0d3d60f-16f2-469d-8314-9055bb91a9ce/galera/0.log" Jan 31 05:15:45 crc 
kubenswrapper[4827]: I0131 05:15:45.698438 4827 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_f66333b7-3406-4a69-85f5-0806b992a625/mysql-bootstrap/0.log" Jan 31 05:15:46 crc kubenswrapper[4827]: I0131 05:15:46.025801 4827 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_f66333b7-3406-4a69-85f5-0806b992a625/mysql-bootstrap/0.log" Jan 31 05:15:46 crc kubenswrapper[4827]: I0131 05:15:46.123914 4827 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_f66333b7-3406-4a69-85f5-0806b992a625/galera/0.log" Jan 31 05:15:46 crc kubenswrapper[4827]: I0131 05:15:46.230995 4827 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-volume-volume1-0_7c359669-c94b-42d4-9b63-de6d4812e598/cinder-volume/0.log" Jan 31 05:15:46 crc kubenswrapper[4827]: I0131 05:15:46.321256 4827 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstackclient_1e87305f-99c7-4ee4-9813-973bb0a259af/openstackclient/0.log" Jan 31 05:15:46 crc kubenswrapper[4827]: I0131 05:15:46.461169 4827 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-jrrb4_b0a1bcac-47e2-4089-ae1e-98a2dc41d270/ovn-controller/0.log" Jan 31 05:15:46 crc kubenswrapper[4827]: I0131 05:15:46.755869 4827 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-metrics-mkbw4_b818dd8b-a3fb-46fa-a8b2-784fb2d3169d/openstack-network-exporter/0.log" Jan 31 05:15:46 crc kubenswrapper[4827]: I0131 05:15:46.790937 4827 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-vhmf9_35c80c3a-29fe-4992-a421-f5ce7704ff53/ovsdb-server-init/0.log" Jan 31 05:15:46 crc kubenswrapper[4827]: I0131 05:15:46.981689 4827 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-vhmf9_35c80c3a-29fe-4992-a421-f5ce7704ff53/ovsdb-server-init/0.log" Jan 31 05:15:47 crc kubenswrapper[4827]: I0131 
05:15:47.017750 4827 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-vhmf9_35c80c3a-29fe-4992-a421-f5ce7704ff53/ovs-vswitchd/0.log" Jan 31 05:15:47 crc kubenswrapper[4827]: I0131 05:15:47.018147 4827 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-metadata-0_2bfcb9f2-5385-4257-8277-45f3c3af8582/nova-metadata-metadata/0.log" Jan 31 05:15:47 crc kubenswrapper[4827]: I0131 05:15:47.070647 4827 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-vhmf9_35c80c3a-29fe-4992-a421-f5ce7704ff53/ovsdb-server/0.log" Jan 31 05:15:47 crc kubenswrapper[4827]: I0131 05:15:47.234108 4827 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_e08df3ea-bbcb-4a8e-9de0-39b86fa6672d/openstack-network-exporter/0.log" Jan 31 05:15:47 crc kubenswrapper[4827]: I0131 05:15:47.307373 4827 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-edpm-deployment-openstack-edpm-ipam-52fdv_d2764df8-6296-4363-9a0a-bad8253a8942/ovn-edpm-deployment-openstack-edpm-ipam/0.log" Jan 31 05:15:47 crc kubenswrapper[4827]: I0131 05:15:47.402542 4827 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_e08df3ea-bbcb-4a8e-9de0-39b86fa6672d/ovn-northd/0.log" Jan 31 05:15:47 crc kubenswrapper[4827]: I0131 05:15:47.475679 4827 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_a5961815-808d-4f79-867c-763e2946d47f/openstack-network-exporter/0.log" Jan 31 05:15:47 crc kubenswrapper[4827]: I0131 05:15:47.516844 4827 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_a5961815-808d-4f79-867c-763e2946d47f/ovsdbserver-nb/0.log" Jan 31 05:15:47 crc kubenswrapper[4827]: I0131 05:15:47.719347 4827 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_1f6f952c-d09b-4584-b231-3fb87e5622fd/openstack-network-exporter/0.log" Jan 31 05:15:47 crc 
kubenswrapper[4827]: I0131 05:15:47.727962 4827 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_1f6f952c-d09b-4584-b231-3fb87e5622fd/ovsdbserver-sb/0.log" Jan 31 05:15:48 crc kubenswrapper[4827]: I0131 05:15:48.068585 4827 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_placement-77b487f776-cjb8n_c95d0b83-4630-47c3-ae7b-dae07d072e38/placement-api/0.log" Jan 31 05:15:48 crc kubenswrapper[4827]: I0131 05:15:48.088091 4827 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_placement-77b487f776-cjb8n_c95d0b83-4630-47c3-ae7b-dae07d072e38/placement-log/0.log" Jan 31 05:15:48 crc kubenswrapper[4827]: I0131 05:15:48.125417 4827 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_92323497-4fa1-43f6-98b0-08fa31c47d3a/setup-container/0.log" Jan 31 05:15:48 crc kubenswrapper[4827]: I0131 05:15:48.259956 4827 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_92323497-4fa1-43f6-98b0-08fa31c47d3a/setup-container/0.log" Jan 31 05:15:48 crc kubenswrapper[4827]: I0131 05:15:48.363384 4827 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_92323497-4fa1-43f6-98b0-08fa31c47d3a/rabbitmq/0.log" Jan 31 05:15:48 crc kubenswrapper[4827]: I0131 05:15:48.430577 4827 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_bd61984d-518c-44f2-8a18-8bda81bb6af3/setup-container/0.log" Jan 31 05:15:48 crc kubenswrapper[4827]: I0131 05:15:48.612613 4827 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_reboot-os-edpm-deployment-openstack-edpm-ipam-pn2hv_65c68493-a927-4bb7-b013-664e9ae73443/reboot-os-edpm-deployment-openstack-edpm-ipam/0.log" Jan 31 05:15:48 crc kubenswrapper[4827]: I0131 05:15:48.628557 4827 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_bd61984d-518c-44f2-8a18-8bda81bb6af3/rabbitmq/0.log" Jan 
31 05:15:48 crc kubenswrapper[4827]: I0131 05:15:48.670358 4827 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_bd61984d-518c-44f2-8a18-8bda81bb6af3/setup-container/0.log" Jan 31 05:15:48 crc kubenswrapper[4827]: I0131 05:15:48.878103 4827 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_repo-setup-edpm-deployment-openstack-edpm-ipam-scb89_4dec9a4b-08f9-45be-85aa-10bb2a48cdaf/repo-setup-edpm-deployment-openstack-edpm-ipam/0.log" Jan 31 05:15:49 crc kubenswrapper[4827]: I0131 05:15:49.016894 4827 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_run-os-edpm-deployment-openstack-edpm-ipam-9tz78_9c9c5b12-150d-4448-9381-55de889ae8c4/run-os-edpm-deployment-openstack-edpm-ipam/0.log" Jan 31 05:15:49 crc kubenswrapper[4827]: I0131 05:15:49.153191 4827 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ssh-known-hosts-edpm-deployment-42k96_0d5f4456-d112-4cf0-ac82-fc6f693b42ae/ssh-known-hosts-edpm-deployment/0.log" Jan 31 05:15:49 crc kubenswrapper[4827]: I0131 05:15:49.326159 4827 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_tempest-tests-tempest_9267ff6a-541b-4297-87e4-fb6095cece6e/tempest-tests-tempest-tests-runner/0.log" Jan 31 05:15:49 crc kubenswrapper[4827]: I0131 05:15:49.392493 4827 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_test-operator-logs-pod-tempest-tempest-tests-tempest_eb9cc63c-f48f-4ded-9ca7-b7167b27a5ad/test-operator-logs-container/0.log" Jan 31 05:15:49 crc kubenswrapper[4827]: I0131 05:15:49.559284 4827 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_validate-network-edpm-deployment-openstack-edpm-ipam-gzznq_60326eb4-1b0c-420c-a0f1-e41d58f386a7/validate-network-edpm-deployment-openstack-edpm-ipam/0.log" Jan 31 05:15:59 crc kubenswrapper[4827]: I0131 05:15:59.983902 4827 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-7n2dc"] Jan 31 05:15:59 crc 
kubenswrapper[4827]: E0131 05:15:59.984841 4827 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ab8f2153-531f-4737-ae78-82ca24e22b28" containerName="collect-profiles" Jan 31 05:15:59 crc kubenswrapper[4827]: I0131 05:15:59.984853 4827 state_mem.go:107] "Deleted CPUSet assignment" podUID="ab8f2153-531f-4737-ae78-82ca24e22b28" containerName="collect-profiles" Jan 31 05:15:59 crc kubenswrapper[4827]: I0131 05:15:59.985087 4827 memory_manager.go:354] "RemoveStaleState removing state" podUID="ab8f2153-531f-4737-ae78-82ca24e22b28" containerName="collect-profiles" Jan 31 05:15:59 crc kubenswrapper[4827]: I0131 05:15:59.986310 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-7n2dc" Jan 31 05:16:00 crc kubenswrapper[4827]: I0131 05:16:00.004448 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-7n2dc"] Jan 31 05:16:00 crc kubenswrapper[4827]: I0131 05:16:00.114677 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s5kg5\" (UniqueName: \"kubernetes.io/projected/c2c44da6-5c53-43ce-bd76-fc0622755f61-kube-api-access-s5kg5\") pod \"redhat-operators-7n2dc\" (UID: \"c2c44da6-5c53-43ce-bd76-fc0622755f61\") " pod="openshift-marketplace/redhat-operators-7n2dc" Jan 31 05:16:00 crc kubenswrapper[4827]: I0131 05:16:00.114727 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c2c44da6-5c53-43ce-bd76-fc0622755f61-utilities\") pod \"redhat-operators-7n2dc\" (UID: \"c2c44da6-5c53-43ce-bd76-fc0622755f61\") " pod="openshift-marketplace/redhat-operators-7n2dc" Jan 31 05:16:00 crc kubenswrapper[4827]: I0131 05:16:00.114774 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/c2c44da6-5c53-43ce-bd76-fc0622755f61-catalog-content\") pod \"redhat-operators-7n2dc\" (UID: \"c2c44da6-5c53-43ce-bd76-fc0622755f61\") " pod="openshift-marketplace/redhat-operators-7n2dc" Jan 31 05:16:00 crc kubenswrapper[4827]: I0131 05:16:00.194472 4827 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_memcached-0_220c4c53-ac13-4f85-88da-38fef6ce70b1/memcached/0.log" Jan 31 05:16:00 crc kubenswrapper[4827]: I0131 05:16:00.216424 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s5kg5\" (UniqueName: \"kubernetes.io/projected/c2c44da6-5c53-43ce-bd76-fc0622755f61-kube-api-access-s5kg5\") pod \"redhat-operators-7n2dc\" (UID: \"c2c44da6-5c53-43ce-bd76-fc0622755f61\") " pod="openshift-marketplace/redhat-operators-7n2dc" Jan 31 05:16:00 crc kubenswrapper[4827]: I0131 05:16:00.216495 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c2c44da6-5c53-43ce-bd76-fc0622755f61-utilities\") pod \"redhat-operators-7n2dc\" (UID: \"c2c44da6-5c53-43ce-bd76-fc0622755f61\") " pod="openshift-marketplace/redhat-operators-7n2dc" Jan 31 05:16:00 crc kubenswrapper[4827]: I0131 05:16:00.216567 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c2c44da6-5c53-43ce-bd76-fc0622755f61-catalog-content\") pod \"redhat-operators-7n2dc\" (UID: \"c2c44da6-5c53-43ce-bd76-fc0622755f61\") " pod="openshift-marketplace/redhat-operators-7n2dc" Jan 31 05:16:00 crc kubenswrapper[4827]: I0131 05:16:00.217487 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c2c44da6-5c53-43ce-bd76-fc0622755f61-catalog-content\") pod \"redhat-operators-7n2dc\" (UID: \"c2c44da6-5c53-43ce-bd76-fc0622755f61\") " pod="openshift-marketplace/redhat-operators-7n2dc" Jan 31 05:16:00 crc 
kubenswrapper[4827]: I0131 05:16:00.217987 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c2c44da6-5c53-43ce-bd76-fc0622755f61-utilities\") pod \"redhat-operators-7n2dc\" (UID: \"c2c44da6-5c53-43ce-bd76-fc0622755f61\") " pod="openshift-marketplace/redhat-operators-7n2dc" Jan 31 05:16:00 crc kubenswrapper[4827]: I0131 05:16:00.236442 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s5kg5\" (UniqueName: \"kubernetes.io/projected/c2c44da6-5c53-43ce-bd76-fc0622755f61-kube-api-access-s5kg5\") pod \"redhat-operators-7n2dc\" (UID: \"c2c44da6-5c53-43ce-bd76-fc0622755f61\") " pod="openshift-marketplace/redhat-operators-7n2dc" Jan 31 05:16:00 crc kubenswrapper[4827]: I0131 05:16:00.331235 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-7n2dc" Jan 31 05:16:00 crc kubenswrapper[4827]: I0131 05:16:00.859811 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-7n2dc"] Jan 31 05:16:01 crc kubenswrapper[4827]: I0131 05:16:01.052038 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-7n2dc" event={"ID":"c2c44da6-5c53-43ce-bd76-fc0622755f61","Type":"ContainerStarted","Data":"8f3689e7d90e5740c14ea5d1a6dc7cb31865ccb0f76c14522c520243f72e2a5d"} Jan 31 05:16:02 crc kubenswrapper[4827]: I0131 05:16:02.060647 4827 generic.go:334] "Generic (PLEG): container finished" podID="c2c44da6-5c53-43ce-bd76-fc0622755f61" containerID="1a926481e5c3946ae3439aec0fe382e058c5ce160f11b068f438b639b68fed84" exitCode=0 Jan 31 05:16:02 crc kubenswrapper[4827]: I0131 05:16:02.060754 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-7n2dc" 
event={"ID":"c2c44da6-5c53-43ce-bd76-fc0622755f61","Type":"ContainerDied","Data":"1a926481e5c3946ae3439aec0fe382e058c5ce160f11b068f438b639b68fed84"} Jan 31 05:16:02 crc kubenswrapper[4827]: I0131 05:16:02.064023 4827 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 31 05:16:03 crc kubenswrapper[4827]: I0131 05:16:03.069560 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-7n2dc" event={"ID":"c2c44da6-5c53-43ce-bd76-fc0622755f61","Type":"ContainerStarted","Data":"73ab04b01c71f99176cc05ed4906272e7a936f6a42c727e367107acc0b3a58c9"} Jan 31 05:16:06 crc kubenswrapper[4827]: I0131 05:16:06.106371 4827 generic.go:334] "Generic (PLEG): container finished" podID="c2c44da6-5c53-43ce-bd76-fc0622755f61" containerID="73ab04b01c71f99176cc05ed4906272e7a936f6a42c727e367107acc0b3a58c9" exitCode=0 Jan 31 05:16:06 crc kubenswrapper[4827]: I0131 05:16:06.106457 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-7n2dc" event={"ID":"c2c44da6-5c53-43ce-bd76-fc0622755f61","Type":"ContainerDied","Data":"73ab04b01c71f99176cc05ed4906272e7a936f6a42c727e367107acc0b3a58c9"} Jan 31 05:16:07 crc kubenswrapper[4827]: I0131 05:16:07.118337 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-7n2dc" event={"ID":"c2c44da6-5c53-43ce-bd76-fc0622755f61","Type":"ContainerStarted","Data":"82c0860a8a28fb38b615f5f0dca820534aa8c8ba12c93bb6702f4f2055582730"} Jan 31 05:16:07 crc kubenswrapper[4827]: I0131 05:16:07.145142 4827 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-7n2dc" podStartSLOduration=3.641247505 podStartE2EDuration="8.145125159s" podCreationTimestamp="2026-01-31 05:15:59 +0000 UTC" firstStartedPulling="2026-01-31 05:16:02.063715727 +0000 UTC m=+5354.750796176" lastFinishedPulling="2026-01-31 05:16:06.567593381 +0000 UTC m=+5359.254673830" 
observedRunningTime="2026-01-31 05:16:07.143036763 +0000 UTC m=+5359.830117272" watchObservedRunningTime="2026-01-31 05:16:07.145125159 +0000 UTC m=+5359.832205598" Jan 31 05:16:10 crc kubenswrapper[4827]: I0131 05:16:10.331962 4827 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-7n2dc" Jan 31 05:16:10 crc kubenswrapper[4827]: I0131 05:16:10.332403 4827 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-7n2dc" Jan 31 05:16:11 crc kubenswrapper[4827]: I0131 05:16:11.377586 4827 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-7n2dc" podUID="c2c44da6-5c53-43ce-bd76-fc0622755f61" containerName="registry-server" probeResult="failure" output=< Jan 31 05:16:11 crc kubenswrapper[4827]: timeout: failed to connect service ":50051" within 1s Jan 31 05:16:11 crc kubenswrapper[4827]: > Jan 31 05:16:16 crc kubenswrapper[4827]: I0131 05:16:16.952616 4827 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_99d2a1aa85d4e185bb5329cab20b126e8449fde1ef95df30fb4ec402714cts9_00650213-91a7-4da4-956e-500845f8ec0d/util/0.log" Jan 31 05:16:17 crc kubenswrapper[4827]: I0131 05:16:17.153253 4827 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_99d2a1aa85d4e185bb5329cab20b126e8449fde1ef95df30fb4ec402714cts9_00650213-91a7-4da4-956e-500845f8ec0d/util/0.log" Jan 31 05:16:17 crc kubenswrapper[4827]: I0131 05:16:17.189266 4827 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_99d2a1aa85d4e185bb5329cab20b126e8449fde1ef95df30fb4ec402714cts9_00650213-91a7-4da4-956e-500845f8ec0d/pull/0.log" Jan 31 05:16:17 crc kubenswrapper[4827]: I0131 05:16:17.218277 4827 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_99d2a1aa85d4e185bb5329cab20b126e8449fde1ef95df30fb4ec402714cts9_00650213-91a7-4da4-956e-500845f8ec0d/pull/0.log" Jan 
31 05:16:17 crc kubenswrapper[4827]: I0131 05:16:17.349727 4827 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_99d2a1aa85d4e185bb5329cab20b126e8449fde1ef95df30fb4ec402714cts9_00650213-91a7-4da4-956e-500845f8ec0d/util/0.log" Jan 31 05:16:17 crc kubenswrapper[4827]: I0131 05:16:17.385961 4827 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_99d2a1aa85d4e185bb5329cab20b126e8449fde1ef95df30fb4ec402714cts9_00650213-91a7-4da4-956e-500845f8ec0d/extract/0.log" Jan 31 05:16:17 crc kubenswrapper[4827]: I0131 05:16:17.389557 4827 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_99d2a1aa85d4e185bb5329cab20b126e8449fde1ef95df30fb4ec402714cts9_00650213-91a7-4da4-956e-500845f8ec0d/pull/0.log" Jan 31 05:16:17 crc kubenswrapper[4827]: I0131 05:16:17.604773 4827 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_barbican-operator-controller-manager-7b6c4d8c5f-k469j_1ee58492-27e7-446f-84c8-c3b0b74884fa/manager/0.log" Jan 31 05:16:17 crc kubenswrapper[4827]: I0131 05:16:17.655647 4827 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_cinder-operator-controller-manager-7489d7c99b-75s7f_74e68a52-8f24-4ff0-a160-8a1ad61238c9/manager/0.log" Jan 31 05:16:17 crc kubenswrapper[4827]: I0131 05:16:17.812876 4827 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_designate-operator-controller-manager-6d9697b7f4-9k4dq_60792734-916b-4bb7-a17f-45a03be036c8/manager/0.log" Jan 31 05:16:17 crc kubenswrapper[4827]: I0131 05:16:17.946677 4827 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_glance-operator-controller-manager-8886f4c47-zdtlh_c0c17a5a-5f0d-421e-b29c-56c4f2626a7b/manager/0.log" Jan 31 05:16:18 crc kubenswrapper[4827]: I0131 05:16:18.061665 4827 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack-operators_heat-operator-controller-manager-69d6db494d-hprpc_fe50fb01-1097-4ac9-81ae-fdfc96842f68/manager/0.log" Jan 31 05:16:18 crc kubenswrapper[4827]: I0131 05:16:18.122241 4827 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_horizon-operator-controller-manager-5fb775575f-wwvbx_bbf882c7-842b-46eb-a459-bb628db2598f/manager/0.log" Jan 31 05:16:18 crc kubenswrapper[4827]: I0131 05:16:18.405637 4827 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ironic-operator-controller-manager-5f4b8bd54d-r2ljw_adfd32af-9db4-468a-bac1-d33f11930922/manager/0.log" Jan 31 05:16:18 crc kubenswrapper[4827]: I0131 05:16:18.555352 4827 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_infra-operator-controller-manager-79955696d6-gcs7k_00f00c32-1e04-42e4-95b4-923c6b57386e/manager/0.log" Jan 31 05:16:18 crc kubenswrapper[4827]: I0131 05:16:18.650568 4827 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_keystone-operator-controller-manager-84f48565d4-8hvrl_efcd65a1-b55c-4cf6-bfe7-5e888e2bc7f0/manager/0.log" Jan 31 05:16:18 crc kubenswrapper[4827]: I0131 05:16:18.732120 4827 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_manila-operator-controller-manager-7dd968899f-2z575_fe5adffe-e198-4d4f-815d-02333b3a1853/manager/0.log" Jan 31 05:16:18 crc kubenswrapper[4827]: I0131 05:16:18.898189 4827 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_mariadb-operator-controller-manager-67bf948998-dvj6j_7f0021a0-f8df-42fa-8ef0-34653130a6e9/manager/0.log" Jan 31 05:16:18 crc kubenswrapper[4827]: I0131 05:16:18.994585 4827 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_neutron-operator-controller-manager-585dbc889-wdrl7_ea6ee14b-2acc-4894-8d63-57ad4a6a170a/manager/0.log" Jan 31 05:16:19 crc kubenswrapper[4827]: I0131 05:16:19.189112 4827 log.go:25] "Finished parsing 
log file" path="/var/log/pods/openstack-operators_nova-operator-controller-manager-55bff696bd-4snkb_8d904b59-3b07-422e-a83b-a02ac443d6eb/manager/0.log" Jan 31 05:16:19 crc kubenswrapper[4827]: I0131 05:16:19.198166 4827 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_octavia-operator-controller-manager-6687f8d877-k7f4f_b3c58b9c-4561-49ae-a23c-a77a34b8cfb5/manager/0.log" Jan 31 05:16:19 crc kubenswrapper[4827]: I0131 05:16:19.324780 4827 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-baremetal-operator-controller-manager-59c4b45c4dg9krf_ff81629a-d048-4c5d-b3a4-b892310ceff7/manager/0.log" Jan 31 05:16:19 crc kubenswrapper[4827]: I0131 05:16:19.619757 4827 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-init-68ffdbb6cf-tmt7z_062d81b0-3054-4387-9b68-716c6b57c850/operator/0.log" Jan 31 05:16:19 crc kubenswrapper[4827]: I0131 05:16:19.929280 4827 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-index-9l7pz_2ddccb17-c139-46fa-a62e-efdc15bbab1b/registry-server/0.log" Jan 31 05:16:20 crc kubenswrapper[4827]: I0131 05:16:20.033125 4827 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ovn-operator-controller-manager-788c46999f-782zz_fb454f09-c6b8-41f4-b69f-3125e8d4d79f/manager/0.log" Jan 31 05:16:20 crc kubenswrapper[4827]: I0131 05:16:20.195554 4827 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_placement-operator-controller-manager-5b964cf4cd-9gs2r_0af88c77-1c9c-4072-b0da-707bca0f4f12/manager/0.log" Jan 31 05:16:20 crc kubenswrapper[4827]: I0131 05:16:20.385343 4827 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-7n2dc" Jan 31 05:16:20 crc kubenswrapper[4827]: I0131 05:16:20.427911 4827 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack-operators_rabbitmq-cluster-operator-manager-668c99d594-zdjjp_0d85c53f-5192-4621-86cc-d9403773713b/operator/0.log" Jan 31 05:16:20 crc kubenswrapper[4827]: I0131 05:16:20.435533 4827 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-7n2dc" Jan 31 05:16:20 crc kubenswrapper[4827]: I0131 05:16:20.533514 4827 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_swift-operator-controller-manager-68fc8c869-6jhd8_ddb4ccbd-d7ed-4c26-97c4-22ce6c38b431/manager/0.log" Jan 31 05:16:20 crc kubenswrapper[4827]: I0131 05:16:20.700087 4827 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-manager-794bbdbc56-fvlbd_a7d7d7a5-296a-43d3-8c15-906a257549c2/manager/0.log" Jan 31 05:16:20 crc kubenswrapper[4827]: I0131 05:16:20.840789 4827 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_telemetry-operator-controller-manager-64b5b76f97-plj6q_4d581cf6-c77f-4757-9091-cb1e23bfbcda/manager/0.log" Jan 31 05:16:20 crc kubenswrapper[4827]: I0131 05:16:20.875867 4827 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_test-operator-controller-manager-56f8bfcd9f-fr6qf_0d53929a-c249-47fa-9d02-98021a8bcf2a/manager/0.log" Jan 31 05:16:21 crc kubenswrapper[4827]: I0131 05:16:21.000311 4827 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_watcher-operator-controller-manager-564965969-m97nw_5666901d-66a6-4282-b44c-c39a0721faa2/manager/0.log" Jan 31 05:16:21 crc kubenswrapper[4827]: I0131 05:16:21.212504 4827 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-7n2dc"] Jan 31 05:16:22 crc kubenswrapper[4827]: I0131 05:16:22.254766 4827 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-7n2dc" podUID="c2c44da6-5c53-43ce-bd76-fc0622755f61" 
containerName="registry-server" containerID="cri-o://82c0860a8a28fb38b615f5f0dca820534aa8c8ba12c93bb6702f4f2055582730" gracePeriod=2 Jan 31 05:16:22 crc kubenswrapper[4827]: I0131 05:16:22.744137 4827 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-7n2dc" Jan 31 05:16:22 crc kubenswrapper[4827]: I0131 05:16:22.859575 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s5kg5\" (UniqueName: \"kubernetes.io/projected/c2c44da6-5c53-43ce-bd76-fc0622755f61-kube-api-access-s5kg5\") pod \"c2c44da6-5c53-43ce-bd76-fc0622755f61\" (UID: \"c2c44da6-5c53-43ce-bd76-fc0622755f61\") " Jan 31 05:16:22 crc kubenswrapper[4827]: I0131 05:16:22.859715 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c2c44da6-5c53-43ce-bd76-fc0622755f61-utilities\") pod \"c2c44da6-5c53-43ce-bd76-fc0622755f61\" (UID: \"c2c44da6-5c53-43ce-bd76-fc0622755f61\") " Jan 31 05:16:22 crc kubenswrapper[4827]: I0131 05:16:22.859751 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c2c44da6-5c53-43ce-bd76-fc0622755f61-catalog-content\") pod \"c2c44da6-5c53-43ce-bd76-fc0622755f61\" (UID: \"c2c44da6-5c53-43ce-bd76-fc0622755f61\") " Jan 31 05:16:22 crc kubenswrapper[4827]: I0131 05:16:22.860277 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c2c44da6-5c53-43ce-bd76-fc0622755f61-utilities" (OuterVolumeSpecName: "utilities") pod "c2c44da6-5c53-43ce-bd76-fc0622755f61" (UID: "c2c44da6-5c53-43ce-bd76-fc0622755f61"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 05:16:22 crc kubenswrapper[4827]: I0131 05:16:22.860926 4827 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c2c44da6-5c53-43ce-bd76-fc0622755f61-utilities\") on node \"crc\" DevicePath \"\"" Jan 31 05:16:22 crc kubenswrapper[4827]: I0131 05:16:22.865543 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c2c44da6-5c53-43ce-bd76-fc0622755f61-kube-api-access-s5kg5" (OuterVolumeSpecName: "kube-api-access-s5kg5") pod "c2c44da6-5c53-43ce-bd76-fc0622755f61" (UID: "c2c44da6-5c53-43ce-bd76-fc0622755f61"). InnerVolumeSpecName "kube-api-access-s5kg5". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 05:16:22 crc kubenswrapper[4827]: I0131 05:16:22.960670 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c2c44da6-5c53-43ce-bd76-fc0622755f61-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "c2c44da6-5c53-43ce-bd76-fc0622755f61" (UID: "c2c44da6-5c53-43ce-bd76-fc0622755f61"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 05:16:22 crc kubenswrapper[4827]: I0131 05:16:22.962569 4827 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s5kg5\" (UniqueName: \"kubernetes.io/projected/c2c44da6-5c53-43ce-bd76-fc0622755f61-kube-api-access-s5kg5\") on node \"crc\" DevicePath \"\"" Jan 31 05:16:22 crc kubenswrapper[4827]: I0131 05:16:22.962594 4827 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c2c44da6-5c53-43ce-bd76-fc0622755f61-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 31 05:16:23 crc kubenswrapper[4827]: I0131 05:16:23.266798 4827 generic.go:334] "Generic (PLEG): container finished" podID="c2c44da6-5c53-43ce-bd76-fc0622755f61" containerID="82c0860a8a28fb38b615f5f0dca820534aa8c8ba12c93bb6702f4f2055582730" exitCode=0 Jan 31 05:16:23 crc kubenswrapper[4827]: I0131 05:16:23.266845 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-7n2dc" event={"ID":"c2c44da6-5c53-43ce-bd76-fc0622755f61","Type":"ContainerDied","Data":"82c0860a8a28fb38b615f5f0dca820534aa8c8ba12c93bb6702f4f2055582730"} Jan 31 05:16:23 crc kubenswrapper[4827]: I0131 05:16:23.266905 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-7n2dc" event={"ID":"c2c44da6-5c53-43ce-bd76-fc0622755f61","Type":"ContainerDied","Data":"8f3689e7d90e5740c14ea5d1a6dc7cb31865ccb0f76c14522c520243f72e2a5d"} Jan 31 05:16:23 crc kubenswrapper[4827]: I0131 05:16:23.266925 4827 scope.go:117] "RemoveContainer" containerID="82c0860a8a28fb38b615f5f0dca820534aa8c8ba12c93bb6702f4f2055582730" Jan 31 05:16:23 crc kubenswrapper[4827]: I0131 05:16:23.267258 4827 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-7n2dc" Jan 31 05:16:23 crc kubenswrapper[4827]: I0131 05:16:23.294478 4827 scope.go:117] "RemoveContainer" containerID="73ab04b01c71f99176cc05ed4906272e7a936f6a42c727e367107acc0b3a58c9" Jan 31 05:16:23 crc kubenswrapper[4827]: I0131 05:16:23.345148 4827 scope.go:117] "RemoveContainer" containerID="1a926481e5c3946ae3439aec0fe382e058c5ce160f11b068f438b639b68fed84" Jan 31 05:16:23 crc kubenswrapper[4827]: I0131 05:16:23.345278 4827 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-7n2dc"] Jan 31 05:16:23 crc kubenswrapper[4827]: I0131 05:16:23.358869 4827 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-7n2dc"] Jan 31 05:16:23 crc kubenswrapper[4827]: I0131 05:16:23.376133 4827 scope.go:117] "RemoveContainer" containerID="82c0860a8a28fb38b615f5f0dca820534aa8c8ba12c93bb6702f4f2055582730" Jan 31 05:16:23 crc kubenswrapper[4827]: E0131 05:16:23.377196 4827 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"82c0860a8a28fb38b615f5f0dca820534aa8c8ba12c93bb6702f4f2055582730\": container with ID starting with 82c0860a8a28fb38b615f5f0dca820534aa8c8ba12c93bb6702f4f2055582730 not found: ID does not exist" containerID="82c0860a8a28fb38b615f5f0dca820534aa8c8ba12c93bb6702f4f2055582730" Jan 31 05:16:23 crc kubenswrapper[4827]: I0131 05:16:23.377302 4827 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"82c0860a8a28fb38b615f5f0dca820534aa8c8ba12c93bb6702f4f2055582730"} err="failed to get container status \"82c0860a8a28fb38b615f5f0dca820534aa8c8ba12c93bb6702f4f2055582730\": rpc error: code = NotFound desc = could not find container \"82c0860a8a28fb38b615f5f0dca820534aa8c8ba12c93bb6702f4f2055582730\": container with ID starting with 82c0860a8a28fb38b615f5f0dca820534aa8c8ba12c93bb6702f4f2055582730 not found: ID does 
not exist" Jan 31 05:16:23 crc kubenswrapper[4827]: I0131 05:16:23.377397 4827 scope.go:117] "RemoveContainer" containerID="73ab04b01c71f99176cc05ed4906272e7a936f6a42c727e367107acc0b3a58c9" Jan 31 05:16:23 crc kubenswrapper[4827]: E0131 05:16:23.380250 4827 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"73ab04b01c71f99176cc05ed4906272e7a936f6a42c727e367107acc0b3a58c9\": container with ID starting with 73ab04b01c71f99176cc05ed4906272e7a936f6a42c727e367107acc0b3a58c9 not found: ID does not exist" containerID="73ab04b01c71f99176cc05ed4906272e7a936f6a42c727e367107acc0b3a58c9" Jan 31 05:16:23 crc kubenswrapper[4827]: I0131 05:16:23.380351 4827 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"73ab04b01c71f99176cc05ed4906272e7a936f6a42c727e367107acc0b3a58c9"} err="failed to get container status \"73ab04b01c71f99176cc05ed4906272e7a936f6a42c727e367107acc0b3a58c9\": rpc error: code = NotFound desc = could not find container \"73ab04b01c71f99176cc05ed4906272e7a936f6a42c727e367107acc0b3a58c9\": container with ID starting with 73ab04b01c71f99176cc05ed4906272e7a936f6a42c727e367107acc0b3a58c9 not found: ID does not exist" Jan 31 05:16:23 crc kubenswrapper[4827]: I0131 05:16:23.380415 4827 scope.go:117] "RemoveContainer" containerID="1a926481e5c3946ae3439aec0fe382e058c5ce160f11b068f438b639b68fed84" Jan 31 05:16:23 crc kubenswrapper[4827]: E0131 05:16:23.380630 4827 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1a926481e5c3946ae3439aec0fe382e058c5ce160f11b068f438b639b68fed84\": container with ID starting with 1a926481e5c3946ae3439aec0fe382e058c5ce160f11b068f438b639b68fed84 not found: ID does not exist" containerID="1a926481e5c3946ae3439aec0fe382e058c5ce160f11b068f438b639b68fed84" Jan 31 05:16:23 crc kubenswrapper[4827]: I0131 05:16:23.380710 4827 pod_container_deletor.go:53] 
"DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1a926481e5c3946ae3439aec0fe382e058c5ce160f11b068f438b639b68fed84"} err="failed to get container status \"1a926481e5c3946ae3439aec0fe382e058c5ce160f11b068f438b639b68fed84\": rpc error: code = NotFound desc = could not find container \"1a926481e5c3946ae3439aec0fe382e058c5ce160f11b068f438b639b68fed84\": container with ID starting with 1a926481e5c3946ae3439aec0fe382e058c5ce160f11b068f438b639b68fed84 not found: ID does not exist" Jan 31 05:16:24 crc kubenswrapper[4827]: I0131 05:16:24.121069 4827 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c2c44da6-5c53-43ce-bd76-fc0622755f61" path="/var/lib/kubelet/pods/c2c44da6-5c53-43ce-bd76-fc0622755f61/volumes" Jan 31 05:16:39 crc kubenswrapper[4827]: I0131 05:16:39.689270 4827 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_control-plane-machine-set-operator-78cbb6b69f-mrplz_7bd339a8-f5bb-4f7f-9d9d-e57deef990b8/control-plane-machine-set-operator/0.log" Jan 31 05:16:39 crc kubenswrapper[4827]: I0131 05:16:39.916454 4827 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-h2fxz_899b03ec-0d91-4793-a5a2-d3aca48e5309/kube-rbac-proxy/0.log" Jan 31 05:16:39 crc kubenswrapper[4827]: I0131 05:16:39.926396 4827 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-h2fxz_899b03ec-0d91-4793-a5a2-d3aca48e5309/machine-api-operator/0.log" Jan 31 05:16:41 crc kubenswrapper[4827]: I0131 05:16:41.721894 4827 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-k76dn"] Jan 31 05:16:41 crc kubenswrapper[4827]: E0131 05:16:41.723764 4827 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c2c44da6-5c53-43ce-bd76-fc0622755f61" containerName="extract-utilities" Jan 31 05:16:41 crc kubenswrapper[4827]: I0131 05:16:41.723853 4827 state_mem.go:107] "Deleted 
CPUSet assignment" podUID="c2c44da6-5c53-43ce-bd76-fc0622755f61" containerName="extract-utilities" Jan 31 05:16:41 crc kubenswrapper[4827]: E0131 05:16:41.723953 4827 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c2c44da6-5c53-43ce-bd76-fc0622755f61" containerName="extract-content" Jan 31 05:16:41 crc kubenswrapper[4827]: I0131 05:16:41.724039 4827 state_mem.go:107] "Deleted CPUSet assignment" podUID="c2c44da6-5c53-43ce-bd76-fc0622755f61" containerName="extract-content" Jan 31 05:16:41 crc kubenswrapper[4827]: E0131 05:16:41.724125 4827 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c2c44da6-5c53-43ce-bd76-fc0622755f61" containerName="registry-server" Jan 31 05:16:41 crc kubenswrapper[4827]: I0131 05:16:41.724182 4827 state_mem.go:107] "Deleted CPUSet assignment" podUID="c2c44da6-5c53-43ce-bd76-fc0622755f61" containerName="registry-server" Jan 31 05:16:41 crc kubenswrapper[4827]: I0131 05:16:41.724422 4827 memory_manager.go:354] "RemoveStaleState removing state" podUID="c2c44da6-5c53-43ce-bd76-fc0622755f61" containerName="registry-server" Jan 31 05:16:41 crc kubenswrapper[4827]: I0131 05:16:41.725722 4827 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-k76dn" Jan 31 05:16:41 crc kubenswrapper[4827]: I0131 05:16:41.761734 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-k76dn"] Jan 31 05:16:41 crc kubenswrapper[4827]: I0131 05:16:41.864285 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8267f298-a035-4131-bfde-192039bf02ab-utilities\") pod \"certified-operators-k76dn\" (UID: \"8267f298-a035-4131-bfde-192039bf02ab\") " pod="openshift-marketplace/certified-operators-k76dn" Jan 31 05:16:41 crc kubenswrapper[4827]: I0131 05:16:41.864951 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8267f298-a035-4131-bfde-192039bf02ab-catalog-content\") pod \"certified-operators-k76dn\" (UID: \"8267f298-a035-4131-bfde-192039bf02ab\") " pod="openshift-marketplace/certified-operators-k76dn" Jan 31 05:16:41 crc kubenswrapper[4827]: I0131 05:16:41.865242 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-plshm\" (UniqueName: \"kubernetes.io/projected/8267f298-a035-4131-bfde-192039bf02ab-kube-api-access-plshm\") pod \"certified-operators-k76dn\" (UID: \"8267f298-a035-4131-bfde-192039bf02ab\") " pod="openshift-marketplace/certified-operators-k76dn" Jan 31 05:16:41 crc kubenswrapper[4827]: I0131 05:16:41.968081 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-plshm\" (UniqueName: \"kubernetes.io/projected/8267f298-a035-4131-bfde-192039bf02ab-kube-api-access-plshm\") pod \"certified-operators-k76dn\" (UID: \"8267f298-a035-4131-bfde-192039bf02ab\") " pod="openshift-marketplace/certified-operators-k76dn" Jan 31 05:16:41 crc kubenswrapper[4827]: I0131 05:16:41.968244 4827 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8267f298-a035-4131-bfde-192039bf02ab-utilities\") pod \"certified-operators-k76dn\" (UID: \"8267f298-a035-4131-bfde-192039bf02ab\") " pod="openshift-marketplace/certified-operators-k76dn" Jan 31 05:16:41 crc kubenswrapper[4827]: I0131 05:16:41.968399 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8267f298-a035-4131-bfde-192039bf02ab-catalog-content\") pod \"certified-operators-k76dn\" (UID: \"8267f298-a035-4131-bfde-192039bf02ab\") " pod="openshift-marketplace/certified-operators-k76dn" Jan 31 05:16:41 crc kubenswrapper[4827]: I0131 05:16:41.968891 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8267f298-a035-4131-bfde-192039bf02ab-utilities\") pod \"certified-operators-k76dn\" (UID: \"8267f298-a035-4131-bfde-192039bf02ab\") " pod="openshift-marketplace/certified-operators-k76dn" Jan 31 05:16:41 crc kubenswrapper[4827]: I0131 05:16:41.968890 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8267f298-a035-4131-bfde-192039bf02ab-catalog-content\") pod \"certified-operators-k76dn\" (UID: \"8267f298-a035-4131-bfde-192039bf02ab\") " pod="openshift-marketplace/certified-operators-k76dn" Jan 31 05:16:42 crc kubenswrapper[4827]: I0131 05:16:42.005404 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-plshm\" (UniqueName: \"kubernetes.io/projected/8267f298-a035-4131-bfde-192039bf02ab-kube-api-access-plshm\") pod \"certified-operators-k76dn\" (UID: \"8267f298-a035-4131-bfde-192039bf02ab\") " pod="openshift-marketplace/certified-operators-k76dn" Jan 31 05:16:42 crc kubenswrapper[4827]: I0131 05:16:42.073303 4827 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-k76dn" Jan 31 05:16:42 crc kubenswrapper[4827]: I0131 05:16:42.551307 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-k76dn"] Jan 31 05:16:43 crc kubenswrapper[4827]: I0131 05:16:43.464008 4827 generic.go:334] "Generic (PLEG): container finished" podID="8267f298-a035-4131-bfde-192039bf02ab" containerID="ed8bcea48d922c17efc012914d34b7d08b758eea360fa6164984f8d3a9283250" exitCode=0 Jan 31 05:16:43 crc kubenswrapper[4827]: I0131 05:16:43.464073 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-k76dn" event={"ID":"8267f298-a035-4131-bfde-192039bf02ab","Type":"ContainerDied","Data":"ed8bcea48d922c17efc012914d34b7d08b758eea360fa6164984f8d3a9283250"} Jan 31 05:16:43 crc kubenswrapper[4827]: I0131 05:16:43.464514 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-k76dn" event={"ID":"8267f298-a035-4131-bfde-192039bf02ab","Type":"ContainerStarted","Data":"038fdf55fee0eeb5d38e85ff72dca2de61e8d117919304b3df4b543c50c7bcde"} Jan 31 05:16:44 crc kubenswrapper[4827]: I0131 05:16:44.478439 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-k76dn" event={"ID":"8267f298-a035-4131-bfde-192039bf02ab","Type":"ContainerStarted","Data":"a8552e2b3a4ee862f85044ac5637ae6d8ecdd9a49b9c99fb4bf056380f7ec64a"} Jan 31 05:16:46 crc kubenswrapper[4827]: I0131 05:16:46.505181 4827 generic.go:334] "Generic (PLEG): container finished" podID="8267f298-a035-4131-bfde-192039bf02ab" containerID="a8552e2b3a4ee862f85044ac5637ae6d8ecdd9a49b9c99fb4bf056380f7ec64a" exitCode=0 Jan 31 05:16:46 crc kubenswrapper[4827]: I0131 05:16:46.505415 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-k76dn" 
event={"ID":"8267f298-a035-4131-bfde-192039bf02ab","Type":"ContainerDied","Data":"a8552e2b3a4ee862f85044ac5637ae6d8ecdd9a49b9c99fb4bf056380f7ec64a"} Jan 31 05:16:47 crc kubenswrapper[4827]: I0131 05:16:47.521198 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-k76dn" event={"ID":"8267f298-a035-4131-bfde-192039bf02ab","Type":"ContainerStarted","Data":"8d129b94e8f9e223cbb94bf3b33423e144d26176480f969e64a42b843ec83d80"} Jan 31 05:16:47 crc kubenswrapper[4827]: I0131 05:16:47.543245 4827 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-k76dn" podStartSLOduration=3.062170138 podStartE2EDuration="6.543220613s" podCreationTimestamp="2026-01-31 05:16:41 +0000 UTC" firstStartedPulling="2026-01-31 05:16:43.466673465 +0000 UTC m=+5396.153753914" lastFinishedPulling="2026-01-31 05:16:46.94772394 +0000 UTC m=+5399.634804389" observedRunningTime="2026-01-31 05:16:47.542722958 +0000 UTC m=+5400.229803407" watchObservedRunningTime="2026-01-31 05:16:47.543220613 +0000 UTC m=+5400.230301072" Jan 31 05:16:52 crc kubenswrapper[4827]: I0131 05:16:52.074786 4827 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-k76dn" Jan 31 05:16:52 crc kubenswrapper[4827]: I0131 05:16:52.075132 4827 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-k76dn" Jan 31 05:16:52 crc kubenswrapper[4827]: I0131 05:16:52.127741 4827 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-k76dn" Jan 31 05:16:52 crc kubenswrapper[4827]: I0131 05:16:52.639107 4827 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-k76dn" Jan 31 05:16:53 crc kubenswrapper[4827]: I0131 05:16:53.313147 4827 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openshift-marketplace/certified-operators-k76dn"] Jan 31 05:16:53 crc kubenswrapper[4827]: I0131 05:16:53.669128 4827 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-858654f9db-9hxtf_bef48f94-220d-4244-8412-0fbb3c3a08a6/cert-manager-controller/0.log" Jan 31 05:16:54 crc kubenswrapper[4827]: I0131 05:16:54.265344 4827 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-cainjector-cf98fcc89-bdz52_3f73fff6-a495-43d2-b063-ed9792fa2526/cert-manager-cainjector/0.log" Jan 31 05:16:54 crc kubenswrapper[4827]: I0131 05:16:54.327703 4827 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-webhook-687f57d79b-lg7rt_8f78df48-021e-4d81-afac-ae4dc1b7f932/cert-manager-webhook/0.log" Jan 31 05:16:54 crc kubenswrapper[4827]: I0131 05:16:54.590169 4827 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-k76dn" podUID="8267f298-a035-4131-bfde-192039bf02ab" containerName="registry-server" containerID="cri-o://8d129b94e8f9e223cbb94bf3b33423e144d26176480f969e64a42b843ec83d80" gracePeriod=2 Jan 31 05:16:55 crc kubenswrapper[4827]: I0131 05:16:55.151413 4827 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-k76dn" Jan 31 05:16:55 crc kubenswrapper[4827]: I0131 05:16:55.178565 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8267f298-a035-4131-bfde-192039bf02ab-utilities\") pod \"8267f298-a035-4131-bfde-192039bf02ab\" (UID: \"8267f298-a035-4131-bfde-192039bf02ab\") " Jan 31 05:16:55 crc kubenswrapper[4827]: I0131 05:16:55.179087 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8267f298-a035-4131-bfde-192039bf02ab-catalog-content\") pod \"8267f298-a035-4131-bfde-192039bf02ab\" (UID: \"8267f298-a035-4131-bfde-192039bf02ab\") " Jan 31 05:16:55 crc kubenswrapper[4827]: I0131 05:16:55.182512 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-plshm\" (UniqueName: \"kubernetes.io/projected/8267f298-a035-4131-bfde-192039bf02ab-kube-api-access-plshm\") pod \"8267f298-a035-4131-bfde-192039bf02ab\" (UID: \"8267f298-a035-4131-bfde-192039bf02ab\") " Jan 31 05:16:55 crc kubenswrapper[4827]: I0131 05:16:55.179625 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8267f298-a035-4131-bfde-192039bf02ab-utilities" (OuterVolumeSpecName: "utilities") pod "8267f298-a035-4131-bfde-192039bf02ab" (UID: "8267f298-a035-4131-bfde-192039bf02ab"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 05:16:55 crc kubenswrapper[4827]: I0131 05:16:55.202127 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8267f298-a035-4131-bfde-192039bf02ab-kube-api-access-plshm" (OuterVolumeSpecName: "kube-api-access-plshm") pod "8267f298-a035-4131-bfde-192039bf02ab" (UID: "8267f298-a035-4131-bfde-192039bf02ab"). InnerVolumeSpecName "kube-api-access-plshm". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 05:16:55 crc kubenswrapper[4827]: I0131 05:16:55.269200 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8267f298-a035-4131-bfde-192039bf02ab-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "8267f298-a035-4131-bfde-192039bf02ab" (UID: "8267f298-a035-4131-bfde-192039bf02ab"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 05:16:55 crc kubenswrapper[4827]: I0131 05:16:55.286112 4827 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8267f298-a035-4131-bfde-192039bf02ab-utilities\") on node \"crc\" DevicePath \"\"" Jan 31 05:16:55 crc kubenswrapper[4827]: I0131 05:16:55.286158 4827 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8267f298-a035-4131-bfde-192039bf02ab-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 31 05:16:55 crc kubenswrapper[4827]: I0131 05:16:55.286170 4827 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-plshm\" (UniqueName: \"kubernetes.io/projected/8267f298-a035-4131-bfde-192039bf02ab-kube-api-access-plshm\") on node \"crc\" DevicePath \"\"" Jan 31 05:16:55 crc kubenswrapper[4827]: I0131 05:16:55.599779 4827 generic.go:334] "Generic (PLEG): container finished" podID="8267f298-a035-4131-bfde-192039bf02ab" containerID="8d129b94e8f9e223cbb94bf3b33423e144d26176480f969e64a42b843ec83d80" exitCode=0 Jan 31 05:16:55 crc kubenswrapper[4827]: I0131 05:16:55.599822 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-k76dn" event={"ID":"8267f298-a035-4131-bfde-192039bf02ab","Type":"ContainerDied","Data":"8d129b94e8f9e223cbb94bf3b33423e144d26176480f969e64a42b843ec83d80"} Jan 31 05:16:55 crc kubenswrapper[4827]: I0131 05:16:55.599856 4827 kubelet.go:2453] "SyncLoop (PLEG): event 
for pod" pod="openshift-marketplace/certified-operators-k76dn" event={"ID":"8267f298-a035-4131-bfde-192039bf02ab","Type":"ContainerDied","Data":"038fdf55fee0eeb5d38e85ff72dca2de61e8d117919304b3df4b543c50c7bcde"} Jan 31 05:16:55 crc kubenswrapper[4827]: I0131 05:16:55.599890 4827 scope.go:117] "RemoveContainer" containerID="8d129b94e8f9e223cbb94bf3b33423e144d26176480f969e64a42b843ec83d80" Jan 31 05:16:55 crc kubenswrapper[4827]: I0131 05:16:55.600044 4827 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-k76dn" Jan 31 05:16:55 crc kubenswrapper[4827]: I0131 05:16:55.622667 4827 scope.go:117] "RemoveContainer" containerID="a8552e2b3a4ee862f85044ac5637ae6d8ecdd9a49b9c99fb4bf056380f7ec64a" Jan 31 05:16:55 crc kubenswrapper[4827]: I0131 05:16:55.643626 4827 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-k76dn"] Jan 31 05:16:55 crc kubenswrapper[4827]: I0131 05:16:55.652734 4827 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-k76dn"] Jan 31 05:16:55 crc kubenswrapper[4827]: I0131 05:16:55.654420 4827 scope.go:117] "RemoveContainer" containerID="ed8bcea48d922c17efc012914d34b7d08b758eea360fa6164984f8d3a9283250" Jan 31 05:16:55 crc kubenswrapper[4827]: I0131 05:16:55.702065 4827 scope.go:117] "RemoveContainer" containerID="8d129b94e8f9e223cbb94bf3b33423e144d26176480f969e64a42b843ec83d80" Jan 31 05:16:55 crc kubenswrapper[4827]: E0131 05:16:55.702644 4827 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8d129b94e8f9e223cbb94bf3b33423e144d26176480f969e64a42b843ec83d80\": container with ID starting with 8d129b94e8f9e223cbb94bf3b33423e144d26176480f969e64a42b843ec83d80 not found: ID does not exist" containerID="8d129b94e8f9e223cbb94bf3b33423e144d26176480f969e64a42b843ec83d80" Jan 31 05:16:55 crc kubenswrapper[4827]: I0131 
05:16:55.702744 4827 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8d129b94e8f9e223cbb94bf3b33423e144d26176480f969e64a42b843ec83d80"} err="failed to get container status \"8d129b94e8f9e223cbb94bf3b33423e144d26176480f969e64a42b843ec83d80\": rpc error: code = NotFound desc = could not find container \"8d129b94e8f9e223cbb94bf3b33423e144d26176480f969e64a42b843ec83d80\": container with ID starting with 8d129b94e8f9e223cbb94bf3b33423e144d26176480f969e64a42b843ec83d80 not found: ID does not exist" Jan 31 05:16:55 crc kubenswrapper[4827]: I0131 05:16:55.702829 4827 scope.go:117] "RemoveContainer" containerID="a8552e2b3a4ee862f85044ac5637ae6d8ecdd9a49b9c99fb4bf056380f7ec64a" Jan 31 05:16:55 crc kubenswrapper[4827]: E0131 05:16:55.708830 4827 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a8552e2b3a4ee862f85044ac5637ae6d8ecdd9a49b9c99fb4bf056380f7ec64a\": container with ID starting with a8552e2b3a4ee862f85044ac5637ae6d8ecdd9a49b9c99fb4bf056380f7ec64a not found: ID does not exist" containerID="a8552e2b3a4ee862f85044ac5637ae6d8ecdd9a49b9c99fb4bf056380f7ec64a" Jan 31 05:16:55 crc kubenswrapper[4827]: I0131 05:16:55.708890 4827 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a8552e2b3a4ee862f85044ac5637ae6d8ecdd9a49b9c99fb4bf056380f7ec64a"} err="failed to get container status \"a8552e2b3a4ee862f85044ac5637ae6d8ecdd9a49b9c99fb4bf056380f7ec64a\": rpc error: code = NotFound desc = could not find container \"a8552e2b3a4ee862f85044ac5637ae6d8ecdd9a49b9c99fb4bf056380f7ec64a\": container with ID starting with a8552e2b3a4ee862f85044ac5637ae6d8ecdd9a49b9c99fb4bf056380f7ec64a not found: ID does not exist" Jan 31 05:16:55 crc kubenswrapper[4827]: I0131 05:16:55.708918 4827 scope.go:117] "RemoveContainer" containerID="ed8bcea48d922c17efc012914d34b7d08b758eea360fa6164984f8d3a9283250" Jan 31 05:16:55 crc 
kubenswrapper[4827]: E0131 05:16:55.709573 4827 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ed8bcea48d922c17efc012914d34b7d08b758eea360fa6164984f8d3a9283250\": container with ID starting with ed8bcea48d922c17efc012914d34b7d08b758eea360fa6164984f8d3a9283250 not found: ID does not exist" containerID="ed8bcea48d922c17efc012914d34b7d08b758eea360fa6164984f8d3a9283250" Jan 31 05:16:55 crc kubenswrapper[4827]: I0131 05:16:55.709630 4827 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ed8bcea48d922c17efc012914d34b7d08b758eea360fa6164984f8d3a9283250"} err="failed to get container status \"ed8bcea48d922c17efc012914d34b7d08b758eea360fa6164984f8d3a9283250\": rpc error: code = NotFound desc = could not find container \"ed8bcea48d922c17efc012914d34b7d08b758eea360fa6164984f8d3a9283250\": container with ID starting with ed8bcea48d922c17efc012914d34b7d08b758eea360fa6164984f8d3a9283250 not found: ID does not exist" Jan 31 05:16:55 crc kubenswrapper[4827]: E0131 05:16:55.752726 4827 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8267f298_a035_4131_bfde_192039bf02ab.slice/crio-038fdf55fee0eeb5d38e85ff72dca2de61e8d117919304b3df4b543c50c7bcde\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8267f298_a035_4131_bfde_192039bf02ab.slice\": RecentStats: unable to find data in memory cache]" Jan 31 05:16:56 crc kubenswrapper[4827]: I0131 05:16:56.123364 4827 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8267f298-a035-4131-bfde-192039bf02ab" path="/var/lib/kubelet/pods/8267f298-a035-4131-bfde-192039bf02ab/volumes" Jan 31 05:17:08 crc kubenswrapper[4827]: I0131 05:17:08.316768 4827 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-nmstate_nmstate-console-plugin-7754f76f8b-8c97f_fd972f7a-fbf4-449b-b1d2-59d0dbe4aa64/nmstate-console-plugin/0.log" Jan 31 05:17:08 crc kubenswrapper[4827]: I0131 05:17:08.505066 4827 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-handler-b7ttc_d7ef57c6-3ae9-4573-8ba2-ec11f418a0b6/nmstate-handler/0.log" Jan 31 05:17:08 crc kubenswrapper[4827]: I0131 05:17:08.571117 4827 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-54757c584b-vxb8z_259273b1-36c1-4c94-846c-dd21b325059d/kube-rbac-proxy/0.log" Jan 31 05:17:08 crc kubenswrapper[4827]: I0131 05:17:08.650119 4827 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-54757c584b-vxb8z_259273b1-36c1-4c94-846c-dd21b325059d/nmstate-metrics/0.log" Jan 31 05:17:08 crc kubenswrapper[4827]: I0131 05:17:08.658434 4827 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-operator-646758c888-g5wcr_5899df86-4812-4477-92fb-bcd326c34f2a/nmstate-operator/0.log" Jan 31 05:17:08 crc kubenswrapper[4827]: I0131 05:17:08.826556 4827 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-webhook-8474b5b9d8-blkw2_0abf6fbb-878e-4f5f-99ef-969e12458804/nmstate-webhook/0.log" Jan 31 05:17:36 crc kubenswrapper[4827]: I0131 05:17:36.160545 4827 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-6968d8fdc4-lxksh_3aa38fee-8a56-42e4-9921-52dfdc3550c0/kube-rbac-proxy/0.log" Jan 31 05:17:36 crc kubenswrapper[4827]: I0131 05:17:36.263094 4827 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-6968d8fdc4-lxksh_3aa38fee-8a56-42e4-9921-52dfdc3550c0/controller/0.log" Jan 31 05:17:36 crc kubenswrapper[4827]: I0131 05:17:36.395635 4827 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-nd7t7_ac6685a1-0994-4fb9-afe1-3454c8525094/cp-frr-files/0.log" Jan 
31 05:17:36 crc kubenswrapper[4827]: I0131 05:17:36.603230 4827 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-nd7t7_ac6685a1-0994-4fb9-afe1-3454c8525094/cp-metrics/0.log" Jan 31 05:17:36 crc kubenswrapper[4827]: I0131 05:17:36.603597 4827 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-nd7t7_ac6685a1-0994-4fb9-afe1-3454c8525094/cp-frr-files/0.log" Jan 31 05:17:36 crc kubenswrapper[4827]: I0131 05:17:36.603752 4827 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-nd7t7_ac6685a1-0994-4fb9-afe1-3454c8525094/cp-reloader/0.log" Jan 31 05:17:36 crc kubenswrapper[4827]: I0131 05:17:36.651294 4827 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-nd7t7_ac6685a1-0994-4fb9-afe1-3454c8525094/cp-reloader/0.log" Jan 31 05:17:36 crc kubenswrapper[4827]: I0131 05:17:36.776345 4827 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-nd7t7_ac6685a1-0994-4fb9-afe1-3454c8525094/cp-frr-files/0.log" Jan 31 05:17:36 crc kubenswrapper[4827]: I0131 05:17:36.806594 4827 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-nd7t7_ac6685a1-0994-4fb9-afe1-3454c8525094/cp-metrics/0.log" Jan 31 05:17:36 crc kubenswrapper[4827]: I0131 05:17:36.807321 4827 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-nd7t7_ac6685a1-0994-4fb9-afe1-3454c8525094/cp-reloader/0.log" Jan 31 05:17:36 crc kubenswrapper[4827]: I0131 05:17:36.898328 4827 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-nd7t7_ac6685a1-0994-4fb9-afe1-3454c8525094/cp-metrics/0.log" Jan 31 05:17:37 crc kubenswrapper[4827]: I0131 05:17:37.012081 4827 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-nd7t7_ac6685a1-0994-4fb9-afe1-3454c8525094/cp-frr-files/0.log" Jan 31 05:17:37 crc kubenswrapper[4827]: I0131 05:17:37.015071 4827 log.go:25] "Finished 
parsing log file" path="/var/log/pods/metallb-system_frr-k8s-nd7t7_ac6685a1-0994-4fb9-afe1-3454c8525094/cp-reloader/0.log" Jan 31 05:17:37 crc kubenswrapper[4827]: I0131 05:17:37.022178 4827 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-nd7t7_ac6685a1-0994-4fb9-afe1-3454c8525094/cp-metrics/0.log" Jan 31 05:17:37 crc kubenswrapper[4827]: I0131 05:17:37.091792 4827 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-nd7t7_ac6685a1-0994-4fb9-afe1-3454c8525094/controller/0.log" Jan 31 05:17:37 crc kubenswrapper[4827]: I0131 05:17:37.184239 4827 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-nd7t7_ac6685a1-0994-4fb9-afe1-3454c8525094/frr-metrics/0.log" Jan 31 05:17:37 crc kubenswrapper[4827]: I0131 05:17:37.211057 4827 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-nd7t7_ac6685a1-0994-4fb9-afe1-3454c8525094/kube-rbac-proxy/0.log" Jan 31 05:17:37 crc kubenswrapper[4827]: I0131 05:17:37.282146 4827 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-nd7t7_ac6685a1-0994-4fb9-afe1-3454c8525094/kube-rbac-proxy-frr/0.log" Jan 31 05:17:37 crc kubenswrapper[4827]: I0131 05:17:37.410206 4827 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-nd7t7_ac6685a1-0994-4fb9-afe1-3454c8525094/reloader/0.log" Jan 31 05:17:37 crc kubenswrapper[4827]: I0131 05:17:37.536511 4827 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-webhook-server-7df86c4f6c-4q2p4_aa13a755-e11c-471f-9318-7f0b54e8889e/frr-k8s-webhook-server/0.log" Jan 31 05:17:37 crc kubenswrapper[4827]: I0131 05:17:37.674142 4827 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-controller-manager-5dfffc88b-rknwp_ab6e6231-c7d2-4c65-89d2-bd6771c99585/manager/0.log" Jan 31 05:17:37 crc kubenswrapper[4827]: I0131 05:17:37.834034 4827 log.go:25] "Finished parsing log 
file" path="/var/log/pods/metallb-system_metallb-operator-webhook-server-6997fd6b6c-rxw9p_cac01594-063e-4099-b7fc-11e5d034cd2c/webhook-server/0.log" Jan 31 05:17:37 crc kubenswrapper[4827]: I0131 05:17:37.991777 4827 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-tp4jl_44cea0f8-c757-4c9e-bd44-210bed605301/kube-rbac-proxy/0.log" Jan 31 05:17:38 crc kubenswrapper[4827]: I0131 05:17:38.556985 4827 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-tp4jl_44cea0f8-c757-4c9e-bd44-210bed605301/speaker/0.log" Jan 31 05:17:38 crc kubenswrapper[4827]: I0131 05:17:38.926979 4827 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-nd7t7_ac6685a1-0994-4fb9-afe1-3454c8525094/frr/0.log" Jan 31 05:17:47 crc kubenswrapper[4827]: I0131 05:17:47.371562 4827 patch_prober.go:28] interesting pod/machine-config-daemon-jxh94 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 31 05:17:47 crc kubenswrapper[4827]: I0131 05:17:47.372368 4827 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-jxh94" podUID="e63dbb73-e1a2-4796-83c5-2a88e55566b5" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 31 05:17:52 crc kubenswrapper[4827]: I0131 05:17:52.792002 4827 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcgtsxs_4f7eae5f-3ee4-478f-928c-ee25fab2d488/util/0.log" Jan 31 05:17:53 crc kubenswrapper[4827]: I0131 05:17:53.005190 4827 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcgtsxs_4f7eae5f-3ee4-478f-928c-ee25fab2d488/util/0.log" Jan 31 05:17:53 crc kubenswrapper[4827]: I0131 05:17:53.013332 4827 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcgtsxs_4f7eae5f-3ee4-478f-928c-ee25fab2d488/pull/0.log" Jan 31 05:17:53 crc kubenswrapper[4827]: I0131 05:17:53.027953 4827 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcgtsxs_4f7eae5f-3ee4-478f-928c-ee25fab2d488/pull/0.log" Jan 31 05:17:53 crc kubenswrapper[4827]: I0131 05:17:53.166622 4827 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcgtsxs_4f7eae5f-3ee4-478f-928c-ee25fab2d488/util/0.log" Jan 31 05:17:53 crc kubenswrapper[4827]: I0131 05:17:53.203503 4827 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcgtsxs_4f7eae5f-3ee4-478f-928c-ee25fab2d488/pull/0.log" Jan 31 05:17:53 crc kubenswrapper[4827]: I0131 05:17:53.234285 4827 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcgtsxs_4f7eae5f-3ee4-478f-928c-ee25fab2d488/extract/0.log" Jan 31 05:17:53 crc kubenswrapper[4827]: I0131 05:17:53.363253 4827 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713dbfg4_3bd71c58-cce1-40f3-b951-8b414eec7cd6/util/0.log" Jan 31 05:17:53 crc kubenswrapper[4827]: I0131 05:17:53.505489 4827 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713dbfg4_3bd71c58-cce1-40f3-b951-8b414eec7cd6/util/0.log" Jan 31 
05:17:53 crc kubenswrapper[4827]: I0131 05:17:53.552112 4827 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713dbfg4_3bd71c58-cce1-40f3-b951-8b414eec7cd6/pull/0.log" Jan 31 05:17:53 crc kubenswrapper[4827]: I0131 05:17:53.608343 4827 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713dbfg4_3bd71c58-cce1-40f3-b951-8b414eec7cd6/pull/0.log" Jan 31 05:17:53 crc kubenswrapper[4827]: I0131 05:17:53.718993 4827 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713dbfg4_3bd71c58-cce1-40f3-b951-8b414eec7cd6/pull/0.log" Jan 31 05:17:53 crc kubenswrapper[4827]: I0131 05:17:53.755909 4827 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713dbfg4_3bd71c58-cce1-40f3-b951-8b414eec7cd6/util/0.log" Jan 31 05:17:53 crc kubenswrapper[4827]: I0131 05:17:53.772810 4827 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713dbfg4_3bd71c58-cce1-40f3-b951-8b414eec7cd6/extract/0.log" Jan 31 05:17:53 crc kubenswrapper[4827]: I0131 05:17:53.915636 4827 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-p9mnx_a985eee9-b75b-499b-bbdb-fb1f3437ff77/extract-utilities/0.log" Jan 31 05:17:54 crc kubenswrapper[4827]: I0131 05:17:54.094782 4827 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-p9mnx_a985eee9-b75b-499b-bbdb-fb1f3437ff77/extract-utilities/0.log" Jan 31 05:17:54 crc kubenswrapper[4827]: I0131 05:17:54.107739 4827 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_certified-operators-p9mnx_a985eee9-b75b-499b-bbdb-fb1f3437ff77/extract-content/0.log" Jan 31 05:17:54 crc kubenswrapper[4827]: I0131 05:17:54.148138 4827 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-p9mnx_a985eee9-b75b-499b-bbdb-fb1f3437ff77/extract-content/0.log" Jan 31 05:17:54 crc kubenswrapper[4827]: I0131 05:17:54.307817 4827 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-p9mnx_a985eee9-b75b-499b-bbdb-fb1f3437ff77/extract-utilities/0.log" Jan 31 05:17:54 crc kubenswrapper[4827]: I0131 05:17:54.314181 4827 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-p9mnx_a985eee9-b75b-499b-bbdb-fb1f3437ff77/extract-content/0.log" Jan 31 05:17:54 crc kubenswrapper[4827]: I0131 05:17:54.532698 4827 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-ppx2z_4cf906b5-5bd6-43ba-82b4-008d0b9f7b35/extract-utilities/0.log" Jan 31 05:17:54 crc kubenswrapper[4827]: I0131 05:17:54.730188 4827 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-ppx2z_4cf906b5-5bd6-43ba-82b4-008d0b9f7b35/extract-content/0.log" Jan 31 05:17:54 crc kubenswrapper[4827]: I0131 05:17:54.779802 4827 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-fb4dn"] Jan 31 05:17:54 crc kubenswrapper[4827]: E0131 05:17:54.780264 4827 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8267f298-a035-4131-bfde-192039bf02ab" containerName="registry-server" Jan 31 05:17:54 crc kubenswrapper[4827]: I0131 05:17:54.780282 4827 state_mem.go:107] "Deleted CPUSet assignment" podUID="8267f298-a035-4131-bfde-192039bf02ab" containerName="registry-server" Jan 31 05:17:54 crc kubenswrapper[4827]: E0131 05:17:54.780301 4827 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="8267f298-a035-4131-bfde-192039bf02ab" containerName="extract-utilities" Jan 31 05:17:54 crc kubenswrapper[4827]: I0131 05:17:54.780311 4827 state_mem.go:107] "Deleted CPUSet assignment" podUID="8267f298-a035-4131-bfde-192039bf02ab" containerName="extract-utilities" Jan 31 05:17:54 crc kubenswrapper[4827]: E0131 05:17:54.780318 4827 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8267f298-a035-4131-bfde-192039bf02ab" containerName="extract-content" Jan 31 05:17:54 crc kubenswrapper[4827]: I0131 05:17:54.780323 4827 state_mem.go:107] "Deleted CPUSet assignment" podUID="8267f298-a035-4131-bfde-192039bf02ab" containerName="extract-content" Jan 31 05:17:54 crc kubenswrapper[4827]: I0131 05:17:54.780502 4827 memory_manager.go:354] "RemoveStaleState removing state" podUID="8267f298-a035-4131-bfde-192039bf02ab" containerName="registry-server" Jan 31 05:17:54 crc kubenswrapper[4827]: I0131 05:17:54.781788 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-fb4dn" Jan 31 05:17:54 crc kubenswrapper[4827]: I0131 05:17:54.809574 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-fb4dn"] Jan 31 05:17:54 crc kubenswrapper[4827]: I0131 05:17:54.813057 4827 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-p9mnx_a985eee9-b75b-499b-bbdb-fb1f3437ff77/registry-server/0.log" Jan 31 05:17:54 crc kubenswrapper[4827]: I0131 05:17:54.847289 4827 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-ppx2z_4cf906b5-5bd6-43ba-82b4-008d0b9f7b35/extract-utilities/0.log" Jan 31 05:17:54 crc kubenswrapper[4827]: I0131 05:17:54.857172 4827 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-ppx2z_4cf906b5-5bd6-43ba-82b4-008d0b9f7b35/extract-content/0.log" Jan 31 05:17:54 crc kubenswrapper[4827]: I0131 
05:17:54.874015 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fce16dbc-70a1-43e4-b47e-a5746ac5dc68-catalog-content\") pod \"redhat-marketplace-fb4dn\" (UID: \"fce16dbc-70a1-43e4-b47e-a5746ac5dc68\") " pod="openshift-marketplace/redhat-marketplace-fb4dn" Jan 31 05:17:54 crc kubenswrapper[4827]: I0131 05:17:54.874342 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jp8xd\" (UniqueName: \"kubernetes.io/projected/fce16dbc-70a1-43e4-b47e-a5746ac5dc68-kube-api-access-jp8xd\") pod \"redhat-marketplace-fb4dn\" (UID: \"fce16dbc-70a1-43e4-b47e-a5746ac5dc68\") " pod="openshift-marketplace/redhat-marketplace-fb4dn" Jan 31 05:17:54 crc kubenswrapper[4827]: I0131 05:17:54.874611 4827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fce16dbc-70a1-43e4-b47e-a5746ac5dc68-utilities\") pod \"redhat-marketplace-fb4dn\" (UID: \"fce16dbc-70a1-43e4-b47e-a5746ac5dc68\") " pod="openshift-marketplace/redhat-marketplace-fb4dn" Jan 31 05:17:54 crc kubenswrapper[4827]: I0131 05:17:54.976432 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fce16dbc-70a1-43e4-b47e-a5746ac5dc68-utilities\") pod \"redhat-marketplace-fb4dn\" (UID: \"fce16dbc-70a1-43e4-b47e-a5746ac5dc68\") " pod="openshift-marketplace/redhat-marketplace-fb4dn" Jan 31 05:17:54 crc kubenswrapper[4827]: I0131 05:17:54.976563 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fce16dbc-70a1-43e4-b47e-a5746ac5dc68-catalog-content\") pod \"redhat-marketplace-fb4dn\" (UID: \"fce16dbc-70a1-43e4-b47e-a5746ac5dc68\") " pod="openshift-marketplace/redhat-marketplace-fb4dn" Jan 31 05:17:54 crc 
kubenswrapper[4827]: I0131 05:17:54.976627 4827 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jp8xd\" (UniqueName: \"kubernetes.io/projected/fce16dbc-70a1-43e4-b47e-a5746ac5dc68-kube-api-access-jp8xd\") pod \"redhat-marketplace-fb4dn\" (UID: \"fce16dbc-70a1-43e4-b47e-a5746ac5dc68\") " pod="openshift-marketplace/redhat-marketplace-fb4dn" Jan 31 05:17:54 crc kubenswrapper[4827]: I0131 05:17:54.976933 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fce16dbc-70a1-43e4-b47e-a5746ac5dc68-utilities\") pod \"redhat-marketplace-fb4dn\" (UID: \"fce16dbc-70a1-43e4-b47e-a5746ac5dc68\") " pod="openshift-marketplace/redhat-marketplace-fb4dn" Jan 31 05:17:54 crc kubenswrapper[4827]: I0131 05:17:54.977062 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fce16dbc-70a1-43e4-b47e-a5746ac5dc68-catalog-content\") pod \"redhat-marketplace-fb4dn\" (UID: \"fce16dbc-70a1-43e4-b47e-a5746ac5dc68\") " pod="openshift-marketplace/redhat-marketplace-fb4dn" Jan 31 05:17:55 crc kubenswrapper[4827]: I0131 05:17:55.001121 4827 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jp8xd\" (UniqueName: \"kubernetes.io/projected/fce16dbc-70a1-43e4-b47e-a5746ac5dc68-kube-api-access-jp8xd\") pod \"redhat-marketplace-fb4dn\" (UID: \"fce16dbc-70a1-43e4-b47e-a5746ac5dc68\") " pod="openshift-marketplace/redhat-marketplace-fb4dn" Jan 31 05:17:55 crc kubenswrapper[4827]: I0131 05:17:55.042169 4827 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-ppx2z_4cf906b5-5bd6-43ba-82b4-008d0b9f7b35/extract-utilities/0.log" Jan 31 05:17:55 crc kubenswrapper[4827]: I0131 05:17:55.087251 4827 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_community-operators-ppx2z_4cf906b5-5bd6-43ba-82b4-008d0b9f7b35/extract-content/0.log" Jan 31 05:17:55 crc kubenswrapper[4827]: I0131 05:17:55.110174 4827 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-fb4dn" Jan 31 05:17:55 crc kubenswrapper[4827]: I0131 05:17:55.477285 4827 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_marketplace-operator-79b997595-qg47f_14a103c0-b784-4634-9d0e-07cccc0795ef/marketplace-operator/0.log" Jan 31 05:17:55 crc kubenswrapper[4827]: I0131 05:17:55.516029 4827 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-sgpz2_9405c6d0-837d-47f0-be6c-79518c22405d/extract-utilities/0.log" Jan 31 05:17:55 crc kubenswrapper[4827]: I0131 05:17:55.655739 4827 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-fb4dn"] Jan 31 05:17:55 crc kubenswrapper[4827]: I0131 05:17:55.711023 4827 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-sgpz2_9405c6d0-837d-47f0-be6c-79518c22405d/extract-content/0.log" Jan 31 05:17:55 crc kubenswrapper[4827]: I0131 05:17:55.719280 4827 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-sgpz2_9405c6d0-837d-47f0-be6c-79518c22405d/extract-utilities/0.log" Jan 31 05:17:55 crc kubenswrapper[4827]: I0131 05:17:55.786199 4827 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-sgpz2_9405c6d0-837d-47f0-be6c-79518c22405d/extract-content/0.log" Jan 31 05:17:55 crc kubenswrapper[4827]: I0131 05:17:55.904600 4827 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-ppx2z_4cf906b5-5bd6-43ba-82b4-008d0b9f7b35/registry-server/0.log" Jan 31 05:17:56 crc kubenswrapper[4827]: I0131 05:17:56.014577 4827 log.go:25] 
"Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-sgpz2_9405c6d0-837d-47f0-be6c-79518c22405d/extract-content/0.log" Jan 31 05:17:56 crc kubenswrapper[4827]: I0131 05:17:56.059042 4827 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-sgpz2_9405c6d0-837d-47f0-be6c-79518c22405d/extract-utilities/0.log" Jan 31 05:17:56 crc kubenswrapper[4827]: I0131 05:17:56.192428 4827 generic.go:334] "Generic (PLEG): container finished" podID="fce16dbc-70a1-43e4-b47e-a5746ac5dc68" containerID="5b9114f8ab556da614eaef6a6535ee5023eeb6fe7ae605c3032fd71a46a6dc1b" exitCode=0 Jan 31 05:17:56 crc kubenswrapper[4827]: I0131 05:17:56.192547 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-fb4dn" event={"ID":"fce16dbc-70a1-43e4-b47e-a5746ac5dc68","Type":"ContainerDied","Data":"5b9114f8ab556da614eaef6a6535ee5023eeb6fe7ae605c3032fd71a46a6dc1b"} Jan 31 05:17:56 crc kubenswrapper[4827]: I0131 05:17:56.192754 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-fb4dn" event={"ID":"fce16dbc-70a1-43e4-b47e-a5746ac5dc68","Type":"ContainerStarted","Data":"24d142a290d201d1ddce8d62996745d4c859035fbd9a63087c4ef714eb73c1b5"} Jan 31 05:17:56 crc kubenswrapper[4827]: I0131 05:17:56.254778 4827 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-pnr9t_77ae4727-de92-4d11-b951-9b2a734acc65/extract-utilities/0.log" Jan 31 05:17:56 crc kubenswrapper[4827]: I0131 05:17:56.264177 4827 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-sgpz2_9405c6d0-837d-47f0-be6c-79518c22405d/registry-server/0.log" Jan 31 05:17:56 crc kubenswrapper[4827]: I0131 05:17:56.416034 4827 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-pnr9t_77ae4727-de92-4d11-b951-9b2a734acc65/extract-utilities/0.log" Jan 31 
05:17:56 crc kubenswrapper[4827]: I0131 05:17:56.488726 4827 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-pnr9t_77ae4727-de92-4d11-b951-9b2a734acc65/extract-content/0.log" Jan 31 05:17:56 crc kubenswrapper[4827]: I0131 05:17:56.493110 4827 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-pnr9t_77ae4727-de92-4d11-b951-9b2a734acc65/extract-content/0.log" Jan 31 05:17:56 crc kubenswrapper[4827]: I0131 05:17:56.658792 4827 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-pnr9t_77ae4727-de92-4d11-b951-9b2a734acc65/extract-content/0.log" Jan 31 05:17:56 crc kubenswrapper[4827]: I0131 05:17:56.660520 4827 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-pnr9t_77ae4727-de92-4d11-b951-9b2a734acc65/extract-utilities/0.log" Jan 31 05:17:57 crc kubenswrapper[4827]: I0131 05:17:57.713244 4827 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-pnr9t_77ae4727-de92-4d11-b951-9b2a734acc65/registry-server/0.log" Jan 31 05:17:58 crc kubenswrapper[4827]: I0131 05:17:58.223540 4827 generic.go:334] "Generic (PLEG): container finished" podID="fce16dbc-70a1-43e4-b47e-a5746ac5dc68" containerID="399cdf6624359ab6afcef1415b1b1511ff05854502e1dc73aac70988b31b28bd" exitCode=0 Jan 31 05:17:58 crc kubenswrapper[4827]: I0131 05:17:58.223959 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-fb4dn" event={"ID":"fce16dbc-70a1-43e4-b47e-a5746ac5dc68","Type":"ContainerDied","Data":"399cdf6624359ab6afcef1415b1b1511ff05854502e1dc73aac70988b31b28bd"} Jan 31 05:17:59 crc kubenswrapper[4827]: I0131 05:17:59.242929 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-fb4dn" 
event={"ID":"fce16dbc-70a1-43e4-b47e-a5746ac5dc68","Type":"ContainerStarted","Data":"eab50d808ba5d7e5f3393e490856b3235017e6bbca4bd6f3d050b9e0f0b5e01b"} Jan 31 05:17:59 crc kubenswrapper[4827]: I0131 05:17:59.278659 4827 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-fb4dn" podStartSLOduration=2.847441454 podStartE2EDuration="5.278598812s" podCreationTimestamp="2026-01-31 05:17:54 +0000 UTC" firstStartedPulling="2026-01-31 05:17:56.195445078 +0000 UTC m=+5468.882525527" lastFinishedPulling="2026-01-31 05:17:58.626602426 +0000 UTC m=+5471.313682885" observedRunningTime="2026-01-31 05:17:59.275517017 +0000 UTC m=+5471.962597466" watchObservedRunningTime="2026-01-31 05:17:59.278598812 +0000 UTC m=+5471.965679261" Jan 31 05:18:05 crc kubenswrapper[4827]: I0131 05:18:05.110648 4827 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-fb4dn" Jan 31 05:18:05 crc kubenswrapper[4827]: I0131 05:18:05.111273 4827 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-fb4dn" Jan 31 05:18:05 crc kubenswrapper[4827]: I0131 05:18:05.190472 4827 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-fb4dn" Jan 31 05:18:05 crc kubenswrapper[4827]: I0131 05:18:05.381203 4827 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-fb4dn" Jan 31 05:18:05 crc kubenswrapper[4827]: I0131 05:18:05.437954 4827 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-fb4dn"] Jan 31 05:18:07 crc kubenswrapper[4827]: I0131 05:18:07.328173 4827 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-fb4dn" podUID="fce16dbc-70a1-43e4-b47e-a5746ac5dc68" containerName="registry-server" 
containerID="cri-o://eab50d808ba5d7e5f3393e490856b3235017e6bbca4bd6f3d050b9e0f0b5e01b" gracePeriod=2 Jan 31 05:18:07 crc kubenswrapper[4827]: E0131 05:18:07.459775 4827 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podfce16dbc_70a1_43e4_b47e_a5746ac5dc68.slice/crio-eab50d808ba5d7e5f3393e490856b3235017e6bbca4bd6f3d050b9e0f0b5e01b.scope\": RecentStats: unable to find data in memory cache]" Jan 31 05:18:07 crc kubenswrapper[4827]: I0131 05:18:07.818630 4827 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-fb4dn" Jan 31 05:18:07 crc kubenswrapper[4827]: I0131 05:18:07.876564 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fce16dbc-70a1-43e4-b47e-a5746ac5dc68-utilities\") pod \"fce16dbc-70a1-43e4-b47e-a5746ac5dc68\" (UID: \"fce16dbc-70a1-43e4-b47e-a5746ac5dc68\") " Jan 31 05:18:07 crc kubenswrapper[4827]: I0131 05:18:07.876654 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fce16dbc-70a1-43e4-b47e-a5746ac5dc68-catalog-content\") pod \"fce16dbc-70a1-43e4-b47e-a5746ac5dc68\" (UID: \"fce16dbc-70a1-43e4-b47e-a5746ac5dc68\") " Jan 31 05:18:07 crc kubenswrapper[4827]: I0131 05:18:07.876835 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jp8xd\" (UniqueName: \"kubernetes.io/projected/fce16dbc-70a1-43e4-b47e-a5746ac5dc68-kube-api-access-jp8xd\") pod \"fce16dbc-70a1-43e4-b47e-a5746ac5dc68\" (UID: \"fce16dbc-70a1-43e4-b47e-a5746ac5dc68\") " Jan 31 05:18:07 crc kubenswrapper[4827]: I0131 05:18:07.879385 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fce16dbc-70a1-43e4-b47e-a5746ac5dc68-utilities" 
(OuterVolumeSpecName: "utilities") pod "fce16dbc-70a1-43e4-b47e-a5746ac5dc68" (UID: "fce16dbc-70a1-43e4-b47e-a5746ac5dc68"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 05:18:07 crc kubenswrapper[4827]: I0131 05:18:07.889653 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fce16dbc-70a1-43e4-b47e-a5746ac5dc68-kube-api-access-jp8xd" (OuterVolumeSpecName: "kube-api-access-jp8xd") pod "fce16dbc-70a1-43e4-b47e-a5746ac5dc68" (UID: "fce16dbc-70a1-43e4-b47e-a5746ac5dc68"). InnerVolumeSpecName "kube-api-access-jp8xd". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 05:18:07 crc kubenswrapper[4827]: I0131 05:18:07.912845 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fce16dbc-70a1-43e4-b47e-a5746ac5dc68-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "fce16dbc-70a1-43e4-b47e-a5746ac5dc68" (UID: "fce16dbc-70a1-43e4-b47e-a5746ac5dc68"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 05:18:07 crc kubenswrapper[4827]: I0131 05:18:07.979627 4827 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fce16dbc-70a1-43e4-b47e-a5746ac5dc68-utilities\") on node \"crc\" DevicePath \"\"" Jan 31 05:18:07 crc kubenswrapper[4827]: I0131 05:18:07.979689 4827 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fce16dbc-70a1-43e4-b47e-a5746ac5dc68-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 31 05:18:07 crc kubenswrapper[4827]: I0131 05:18:07.979710 4827 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jp8xd\" (UniqueName: \"kubernetes.io/projected/fce16dbc-70a1-43e4-b47e-a5746ac5dc68-kube-api-access-jp8xd\") on node \"crc\" DevicePath \"\"" Jan 31 05:18:08 crc kubenswrapper[4827]: I0131 05:18:08.338939 4827 generic.go:334] "Generic (PLEG): container finished" podID="fce16dbc-70a1-43e4-b47e-a5746ac5dc68" containerID="eab50d808ba5d7e5f3393e490856b3235017e6bbca4bd6f3d050b9e0f0b5e01b" exitCode=0 Jan 31 05:18:08 crc kubenswrapper[4827]: I0131 05:18:08.339050 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-fb4dn" event={"ID":"fce16dbc-70a1-43e4-b47e-a5746ac5dc68","Type":"ContainerDied","Data":"eab50d808ba5d7e5f3393e490856b3235017e6bbca4bd6f3d050b9e0f0b5e01b"} Jan 31 05:18:08 crc kubenswrapper[4827]: I0131 05:18:08.339137 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-fb4dn" event={"ID":"fce16dbc-70a1-43e4-b47e-a5746ac5dc68","Type":"ContainerDied","Data":"24d142a290d201d1ddce8d62996745d4c859035fbd9a63087c4ef714eb73c1b5"} Jan 31 05:18:08 crc kubenswrapper[4827]: I0131 05:18:08.339216 4827 scope.go:117] "RemoveContainer" containerID="eab50d808ba5d7e5f3393e490856b3235017e6bbca4bd6f3d050b9e0f0b5e01b" Jan 31 05:18:08 crc kubenswrapper[4827]: I0131 
05:18:08.339631 4827 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-fb4dn" Jan 31 05:18:08 crc kubenswrapper[4827]: I0131 05:18:08.375289 4827 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-fb4dn"] Jan 31 05:18:08 crc kubenswrapper[4827]: I0131 05:18:08.386569 4827 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-fb4dn"] Jan 31 05:18:08 crc kubenswrapper[4827]: I0131 05:18:08.392421 4827 scope.go:117] "RemoveContainer" containerID="399cdf6624359ab6afcef1415b1b1511ff05854502e1dc73aac70988b31b28bd" Jan 31 05:18:08 crc kubenswrapper[4827]: I0131 05:18:08.424768 4827 scope.go:117] "RemoveContainer" containerID="5b9114f8ab556da614eaef6a6535ee5023eeb6fe7ae605c3032fd71a46a6dc1b" Jan 31 05:18:08 crc kubenswrapper[4827]: I0131 05:18:08.466471 4827 scope.go:117] "RemoveContainer" containerID="eab50d808ba5d7e5f3393e490856b3235017e6bbca4bd6f3d050b9e0f0b5e01b" Jan 31 05:18:08 crc kubenswrapper[4827]: E0131 05:18:08.467242 4827 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"eab50d808ba5d7e5f3393e490856b3235017e6bbca4bd6f3d050b9e0f0b5e01b\": container with ID starting with eab50d808ba5d7e5f3393e490856b3235017e6bbca4bd6f3d050b9e0f0b5e01b not found: ID does not exist" containerID="eab50d808ba5d7e5f3393e490856b3235017e6bbca4bd6f3d050b9e0f0b5e01b" Jan 31 05:18:08 crc kubenswrapper[4827]: I0131 05:18:08.467271 4827 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"eab50d808ba5d7e5f3393e490856b3235017e6bbca4bd6f3d050b9e0f0b5e01b"} err="failed to get container status \"eab50d808ba5d7e5f3393e490856b3235017e6bbca4bd6f3d050b9e0f0b5e01b\": rpc error: code = NotFound desc = could not find container \"eab50d808ba5d7e5f3393e490856b3235017e6bbca4bd6f3d050b9e0f0b5e01b\": container with ID starting with 
eab50d808ba5d7e5f3393e490856b3235017e6bbca4bd6f3d050b9e0f0b5e01b not found: ID does not exist" Jan 31 05:18:08 crc kubenswrapper[4827]: I0131 05:18:08.467293 4827 scope.go:117] "RemoveContainer" containerID="399cdf6624359ab6afcef1415b1b1511ff05854502e1dc73aac70988b31b28bd" Jan 31 05:18:08 crc kubenswrapper[4827]: E0131 05:18:08.467925 4827 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"399cdf6624359ab6afcef1415b1b1511ff05854502e1dc73aac70988b31b28bd\": container with ID starting with 399cdf6624359ab6afcef1415b1b1511ff05854502e1dc73aac70988b31b28bd not found: ID does not exist" containerID="399cdf6624359ab6afcef1415b1b1511ff05854502e1dc73aac70988b31b28bd" Jan 31 05:18:08 crc kubenswrapper[4827]: I0131 05:18:08.468047 4827 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"399cdf6624359ab6afcef1415b1b1511ff05854502e1dc73aac70988b31b28bd"} err="failed to get container status \"399cdf6624359ab6afcef1415b1b1511ff05854502e1dc73aac70988b31b28bd\": rpc error: code = NotFound desc = could not find container \"399cdf6624359ab6afcef1415b1b1511ff05854502e1dc73aac70988b31b28bd\": container with ID starting with 399cdf6624359ab6afcef1415b1b1511ff05854502e1dc73aac70988b31b28bd not found: ID does not exist" Jan 31 05:18:08 crc kubenswrapper[4827]: I0131 05:18:08.468338 4827 scope.go:117] "RemoveContainer" containerID="5b9114f8ab556da614eaef6a6535ee5023eeb6fe7ae605c3032fd71a46a6dc1b" Jan 31 05:18:08 crc kubenswrapper[4827]: E0131 05:18:08.468767 4827 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5b9114f8ab556da614eaef6a6535ee5023eeb6fe7ae605c3032fd71a46a6dc1b\": container with ID starting with 5b9114f8ab556da614eaef6a6535ee5023eeb6fe7ae605c3032fd71a46a6dc1b not found: ID does not exist" containerID="5b9114f8ab556da614eaef6a6535ee5023eeb6fe7ae605c3032fd71a46a6dc1b" Jan 31 05:18:08 crc 
kubenswrapper[4827]: I0131 05:18:08.468793 4827 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5b9114f8ab556da614eaef6a6535ee5023eeb6fe7ae605c3032fd71a46a6dc1b"} err="failed to get container status \"5b9114f8ab556da614eaef6a6535ee5023eeb6fe7ae605c3032fd71a46a6dc1b\": rpc error: code = NotFound desc = could not find container \"5b9114f8ab556da614eaef6a6535ee5023eeb6fe7ae605c3032fd71a46a6dc1b\": container with ID starting with 5b9114f8ab556da614eaef6a6535ee5023eeb6fe7ae605c3032fd71a46a6dc1b not found: ID does not exist" Jan 31 05:18:10 crc kubenswrapper[4827]: I0131 05:18:10.124241 4827 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fce16dbc-70a1-43e4-b47e-a5746ac5dc68" path="/var/lib/kubelet/pods/fce16dbc-70a1-43e4-b47e-a5746ac5dc68/volumes" Jan 31 05:18:17 crc kubenswrapper[4827]: I0131 05:18:17.371601 4827 patch_prober.go:28] interesting pod/machine-config-daemon-jxh94 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 31 05:18:17 crc kubenswrapper[4827]: I0131 05:18:17.372181 4827 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-jxh94" podUID="e63dbb73-e1a2-4796-83c5-2a88e55566b5" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 31 05:18:32 crc kubenswrapper[4827]: E0131 05:18:32.619460 4827 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 38.102.83.80:34260->38.102.83.80:42075: write tcp 38.102.83.80:34260->38.102.83.80:42075: write: broken pipe Jan 31 05:18:47 crc kubenswrapper[4827]: I0131 05:18:47.371871 4827 patch_prober.go:28] interesting pod/machine-config-daemon-jxh94 container/machine-config-daemon 
namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 31 05:18:47 crc kubenswrapper[4827]: I0131 05:18:47.372468 4827 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-jxh94" podUID="e63dbb73-e1a2-4796-83c5-2a88e55566b5" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 31 05:18:47 crc kubenswrapper[4827]: I0131 05:18:47.372520 4827 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-jxh94" Jan 31 05:18:47 crc kubenswrapper[4827]: I0131 05:18:47.373152 4827 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"b941b80c4a232140971b6ffe079326d4736817375d3ac0d921c89321be0c73e2"} pod="openshift-machine-config-operator/machine-config-daemon-jxh94" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 31 05:18:47 crc kubenswrapper[4827]: I0131 05:18:47.373221 4827 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-jxh94" podUID="e63dbb73-e1a2-4796-83c5-2a88e55566b5" containerName="machine-config-daemon" containerID="cri-o://b941b80c4a232140971b6ffe079326d4736817375d3ac0d921c89321be0c73e2" gracePeriod=600 Jan 31 05:18:47 crc kubenswrapper[4827]: I0131 05:18:47.739965 4827 generic.go:334] "Generic (PLEG): container finished" podID="e63dbb73-e1a2-4796-83c5-2a88e55566b5" containerID="b941b80c4a232140971b6ffe079326d4736817375d3ac0d921c89321be0c73e2" exitCode=0 Jan 31 05:18:47 crc kubenswrapper[4827]: I0131 05:18:47.740321 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-machine-config-operator/machine-config-daemon-jxh94" event={"ID":"e63dbb73-e1a2-4796-83c5-2a88e55566b5","Type":"ContainerDied","Data":"b941b80c4a232140971b6ffe079326d4736817375d3ac0d921c89321be0c73e2"} Jan 31 05:18:47 crc kubenswrapper[4827]: I0131 05:18:47.740354 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-jxh94" event={"ID":"e63dbb73-e1a2-4796-83c5-2a88e55566b5","Type":"ContainerStarted","Data":"41556307a89f593d7d7f16b9631c1ae374c680f7cbf9da8244161e1a9f876155"} Jan 31 05:18:47 crc kubenswrapper[4827]: I0131 05:18:47.740372 4827 scope.go:117] "RemoveContainer" containerID="c9ced493f5e84fdd44e10692ae1dd45481bb553a477cb0b738b8b94a7b5f3f82" Jan 31 05:19:59 crc kubenswrapper[4827]: I0131 05:19:59.529011 4827 generic.go:334] "Generic (PLEG): container finished" podID="95a35264-a5f4-4eca-930e-5d5504ce5b2a" containerID="4d1f820821e4999d48e6946495ec2fb63be2456e662e61367a4003c165fdbe02" exitCode=0 Jan 31 05:19:59 crc kubenswrapper[4827]: I0131 05:19:59.529067 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-gr99w/must-gather-vf58c" event={"ID":"95a35264-a5f4-4eca-930e-5d5504ce5b2a","Type":"ContainerDied","Data":"4d1f820821e4999d48e6946495ec2fb63be2456e662e61367a4003c165fdbe02"} Jan 31 05:19:59 crc kubenswrapper[4827]: I0131 05:19:59.530046 4827 scope.go:117] "RemoveContainer" containerID="4d1f820821e4999d48e6946495ec2fb63be2456e662e61367a4003c165fdbe02" Jan 31 05:19:59 crc kubenswrapper[4827]: I0131 05:19:59.880070 4827 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-gr99w_must-gather-vf58c_95a35264-a5f4-4eca-930e-5d5504ce5b2a/gather/0.log" Jan 31 05:20:05 crc kubenswrapper[4827]: E0131 05:20:05.858912 4827 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 38.102.83.80:49920->38.102.83.80:42075: write tcp 38.102.83.80:49920->38.102.83.80:42075: write: broken pipe Jan 31 05:20:11 crc 
kubenswrapper[4827]: I0131 05:20:11.124376 4827 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-gr99w/must-gather-vf58c"]
Jan 31 05:20:11 crc kubenswrapper[4827]: I0131 05:20:11.125255 4827 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-must-gather-gr99w/must-gather-vf58c" podUID="95a35264-a5f4-4eca-930e-5d5504ce5b2a" containerName="copy" containerID="cri-o://5fd8865039c619c8e7c340ea6f1d025c299eff428cb52570427ca1244fe75593" gracePeriod=2
Jan 31 05:20:11 crc kubenswrapper[4827]: I0131 05:20:11.144983 4827 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-gr99w/must-gather-vf58c"]
Jan 31 05:20:11 crc kubenswrapper[4827]: I0131 05:20:11.576109 4827 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-gr99w_must-gather-vf58c_95a35264-a5f4-4eca-930e-5d5504ce5b2a/copy/0.log"
Jan 31 05:20:11 crc kubenswrapper[4827]: I0131 05:20:11.576955 4827 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-gr99w/must-gather-vf58c"
Jan 31 05:20:11 crc kubenswrapper[4827]: I0131 05:20:11.623896 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-z2p9k\" (UniqueName: \"kubernetes.io/projected/95a35264-a5f4-4eca-930e-5d5504ce5b2a-kube-api-access-z2p9k\") pod \"95a35264-a5f4-4eca-930e-5d5504ce5b2a\" (UID: \"95a35264-a5f4-4eca-930e-5d5504ce5b2a\") "
Jan 31 05:20:11 crc kubenswrapper[4827]: I0131 05:20:11.624089 4827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/95a35264-a5f4-4eca-930e-5d5504ce5b2a-must-gather-output\") pod \"95a35264-a5f4-4eca-930e-5d5504ce5b2a\" (UID: \"95a35264-a5f4-4eca-930e-5d5504ce5b2a\") "
Jan 31 05:20:11 crc kubenswrapper[4827]: I0131 05:20:11.630356 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/95a35264-a5f4-4eca-930e-5d5504ce5b2a-kube-api-access-z2p9k" (OuterVolumeSpecName: "kube-api-access-z2p9k") pod "95a35264-a5f4-4eca-930e-5d5504ce5b2a" (UID: "95a35264-a5f4-4eca-930e-5d5504ce5b2a"). InnerVolumeSpecName "kube-api-access-z2p9k". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 31 05:20:11 crc kubenswrapper[4827]: I0131 05:20:11.637853 4827 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-gr99w_must-gather-vf58c_95a35264-a5f4-4eca-930e-5d5504ce5b2a/copy/0.log"
Jan 31 05:20:11 crc kubenswrapper[4827]: I0131 05:20:11.638361 4827 generic.go:334] "Generic (PLEG): container finished" podID="95a35264-a5f4-4eca-930e-5d5504ce5b2a" containerID="5fd8865039c619c8e7c340ea6f1d025c299eff428cb52570427ca1244fe75593" exitCode=143
Jan 31 05:20:11 crc kubenswrapper[4827]: I0131 05:20:11.638415 4827 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-gr99w/must-gather-vf58c"
Jan 31 05:20:11 crc kubenswrapper[4827]: I0131 05:20:11.638424 4827 scope.go:117] "RemoveContainer" containerID="5fd8865039c619c8e7c340ea6f1d025c299eff428cb52570427ca1244fe75593"
Jan 31 05:20:11 crc kubenswrapper[4827]: I0131 05:20:11.681335 4827 scope.go:117] "RemoveContainer" containerID="4d1f820821e4999d48e6946495ec2fb63be2456e662e61367a4003c165fdbe02"
Jan 31 05:20:11 crc kubenswrapper[4827]: I0131 05:20:11.728414 4827 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-z2p9k\" (UniqueName: \"kubernetes.io/projected/95a35264-a5f4-4eca-930e-5d5504ce5b2a-kube-api-access-z2p9k\") on node \"crc\" DevicePath \"\""
Jan 31 05:20:11 crc kubenswrapper[4827]: I0131 05:20:11.777474 4827 scope.go:117] "RemoveContainer" containerID="5fd8865039c619c8e7c340ea6f1d025c299eff428cb52570427ca1244fe75593"
Jan 31 05:20:11 crc kubenswrapper[4827]: E0131 05:20:11.777953 4827 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5fd8865039c619c8e7c340ea6f1d025c299eff428cb52570427ca1244fe75593\": container with ID starting with 5fd8865039c619c8e7c340ea6f1d025c299eff428cb52570427ca1244fe75593 not found: ID does not exist" containerID="5fd8865039c619c8e7c340ea6f1d025c299eff428cb52570427ca1244fe75593"
Jan 31 05:20:11 crc kubenswrapper[4827]: I0131 05:20:11.778028 4827 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5fd8865039c619c8e7c340ea6f1d025c299eff428cb52570427ca1244fe75593"} err="failed to get container status \"5fd8865039c619c8e7c340ea6f1d025c299eff428cb52570427ca1244fe75593\": rpc error: code = NotFound desc = could not find container \"5fd8865039c619c8e7c340ea6f1d025c299eff428cb52570427ca1244fe75593\": container with ID starting with 5fd8865039c619c8e7c340ea6f1d025c299eff428cb52570427ca1244fe75593 not found: ID does not exist"
Jan 31 05:20:11 crc kubenswrapper[4827]: I0131 05:20:11.778062 4827 scope.go:117] "RemoveContainer" containerID="4d1f820821e4999d48e6946495ec2fb63be2456e662e61367a4003c165fdbe02"
Jan 31 05:20:11 crc kubenswrapper[4827]: E0131 05:20:11.778419 4827 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4d1f820821e4999d48e6946495ec2fb63be2456e662e61367a4003c165fdbe02\": container with ID starting with 4d1f820821e4999d48e6946495ec2fb63be2456e662e61367a4003c165fdbe02 not found: ID does not exist" containerID="4d1f820821e4999d48e6946495ec2fb63be2456e662e61367a4003c165fdbe02"
Jan 31 05:20:11 crc kubenswrapper[4827]: I0131 05:20:11.778473 4827 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4d1f820821e4999d48e6946495ec2fb63be2456e662e61367a4003c165fdbe02"} err="failed to get container status \"4d1f820821e4999d48e6946495ec2fb63be2456e662e61367a4003c165fdbe02\": rpc error: code = NotFound desc = could not find container \"4d1f820821e4999d48e6946495ec2fb63be2456e662e61367a4003c165fdbe02\": container with ID starting with 4d1f820821e4999d48e6946495ec2fb63be2456e662e61367a4003c165fdbe02 not found: ID does not exist"
Jan 31 05:20:11 crc kubenswrapper[4827]: I0131 05:20:11.828135 4827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/95a35264-a5f4-4eca-930e-5d5504ce5b2a-must-gather-output" (OuterVolumeSpecName: "must-gather-output") pod "95a35264-a5f4-4eca-930e-5d5504ce5b2a" (UID: "95a35264-a5f4-4eca-930e-5d5504ce5b2a"). InnerVolumeSpecName "must-gather-output". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 31 05:20:11 crc kubenswrapper[4827]: I0131 05:20:11.829555 4827 reconciler_common.go:293] "Volume detached for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/95a35264-a5f4-4eca-930e-5d5504ce5b2a-must-gather-output\") on node \"crc\" DevicePath \"\""
Jan 31 05:20:12 crc kubenswrapper[4827]: I0131 05:20:12.119679 4827 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="95a35264-a5f4-4eca-930e-5d5504ce5b2a" path="/var/lib/kubelet/pods/95a35264-a5f4-4eca-930e-5d5504ce5b2a/volumes"
Jan 31 05:20:47 crc kubenswrapper[4827]: I0131 05:20:47.371764 4827 patch_prober.go:28] interesting pod/machine-config-daemon-jxh94 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 31 05:20:47 crc kubenswrapper[4827]: I0131 05:20:47.373049 4827 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-jxh94" podUID="e63dbb73-e1a2-4796-83c5-2a88e55566b5" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 31 05:21:04 crc kubenswrapper[4827]: I0131 05:21:04.482535 4827 scope.go:117] "RemoveContainer" containerID="fb6de427c7a29e92fe440b6dfa0e57a186b2dbcd19a430b650c51849dcc4ff14"
Jan 31 05:21:17 crc kubenswrapper[4827]: I0131 05:21:17.371561 4827 patch_prober.go:28] interesting pod/machine-config-daemon-jxh94 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 31 05:21:17 crc kubenswrapper[4827]: I0131 05:21:17.372131 4827 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-jxh94" podUID="e63dbb73-e1a2-4796-83c5-2a88e55566b5" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 31 05:21:47 crc kubenswrapper[4827]: I0131 05:21:47.371557 4827 patch_prober.go:28] interesting pod/machine-config-daemon-jxh94 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 31 05:21:47 crc kubenswrapper[4827]: I0131 05:21:47.372265 4827 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-jxh94" podUID="e63dbb73-e1a2-4796-83c5-2a88e55566b5" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 31 05:21:47 crc kubenswrapper[4827]: I0131 05:21:47.372354 4827 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-jxh94"
Jan 31 05:21:47 crc kubenswrapper[4827]: I0131 05:21:47.373229 4827 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"41556307a89f593d7d7f16b9631c1ae374c680f7cbf9da8244161e1a9f876155"} pod="openshift-machine-config-operator/machine-config-daemon-jxh94" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Jan 31 05:21:47 crc kubenswrapper[4827]: I0131 05:21:47.373312 4827 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-jxh94" podUID="e63dbb73-e1a2-4796-83c5-2a88e55566b5" containerName="machine-config-daemon" containerID="cri-o://41556307a89f593d7d7f16b9631c1ae374c680f7cbf9da8244161e1a9f876155" gracePeriod=600
Jan 31 05:21:47 crc kubenswrapper[4827]: E0131 05:21:47.508464 4827 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jxh94_openshift-machine-config-operator(e63dbb73-e1a2-4796-83c5-2a88e55566b5)\"" pod="openshift-machine-config-operator/machine-config-daemon-jxh94" podUID="e63dbb73-e1a2-4796-83c5-2a88e55566b5"
Jan 31 05:21:47 crc kubenswrapper[4827]: I0131 05:21:47.645606 4827 generic.go:334] "Generic (PLEG): container finished" podID="e63dbb73-e1a2-4796-83c5-2a88e55566b5" containerID="41556307a89f593d7d7f16b9631c1ae374c680f7cbf9da8244161e1a9f876155" exitCode=0
Jan 31 05:21:47 crc kubenswrapper[4827]: I0131 05:21:47.645647 4827 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-jxh94" event={"ID":"e63dbb73-e1a2-4796-83c5-2a88e55566b5","Type":"ContainerDied","Data":"41556307a89f593d7d7f16b9631c1ae374c680f7cbf9da8244161e1a9f876155"}
Jan 31 05:21:47 crc kubenswrapper[4827]: I0131 05:21:47.645680 4827 scope.go:117] "RemoveContainer" containerID="b941b80c4a232140971b6ffe079326d4736817375d3ac0d921c89321be0c73e2"
Jan 31 05:21:47 crc kubenswrapper[4827]: I0131 05:21:47.646322 4827 scope.go:117] "RemoveContainer" containerID="41556307a89f593d7d7f16b9631c1ae374c680f7cbf9da8244161e1a9f876155"
Jan 31 05:21:47 crc kubenswrapper[4827]: E0131 05:21:47.646646 4827 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jxh94_openshift-machine-config-operator(e63dbb73-e1a2-4796-83c5-2a88e55566b5)\"" pod="openshift-machine-config-operator/machine-config-daemon-jxh94" podUID="e63dbb73-e1a2-4796-83c5-2a88e55566b5"
Jan 31 05:22:00 crc kubenswrapper[4827]: I0131 05:22:00.110410 4827 scope.go:117] "RemoveContainer" containerID="41556307a89f593d7d7f16b9631c1ae374c680f7cbf9da8244161e1a9f876155"
Jan 31 05:22:00 crc kubenswrapper[4827]: E0131 05:22:00.111857 4827 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jxh94_openshift-machine-config-operator(e63dbb73-e1a2-4796-83c5-2a88e55566b5)\"" pod="openshift-machine-config-operator/machine-config-daemon-jxh94" podUID="e63dbb73-e1a2-4796-83c5-2a88e55566b5"
Jan 31 05:22:13 crc kubenswrapper[4827]: I0131 05:22:13.110484 4827 scope.go:117] "RemoveContainer" containerID="41556307a89f593d7d7f16b9631c1ae374c680f7cbf9da8244161e1a9f876155"
Jan 31 05:22:13 crc kubenswrapper[4827]: E0131 05:22:13.111807 4827 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jxh94_openshift-machine-config-operator(e63dbb73-e1a2-4796-83c5-2a88e55566b5)\"" pod="openshift-machine-config-operator/machine-config-daemon-jxh94" podUID="e63dbb73-e1a2-4796-83c5-2a88e55566b5"
Jan 31 05:22:26 crc kubenswrapper[4827]: I0131 05:22:26.110045 4827 scope.go:117] "RemoveContainer" containerID="41556307a89f593d7d7f16b9631c1ae374c680f7cbf9da8244161e1a9f876155"
Jan 31 05:22:26 crc kubenswrapper[4827]: E0131 05:22:26.111140 4827 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jxh94_openshift-machine-config-operator(e63dbb73-e1a2-4796-83c5-2a88e55566b5)\"" pod="openshift-machine-config-operator/machine-config-daemon-jxh94" podUID="e63dbb73-e1a2-4796-83c5-2a88e55566b5"
Jan 31 05:22:38 crc kubenswrapper[4827]: I0131 05:22:38.122976 4827 scope.go:117] "RemoveContainer" containerID="41556307a89f593d7d7f16b9631c1ae374c680f7cbf9da8244161e1a9f876155"
Jan 31 05:22:38 crc kubenswrapper[4827]: E0131 05:22:38.123873 4827 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jxh94_openshift-machine-config-operator(e63dbb73-e1a2-4796-83c5-2a88e55566b5)\"" pod="openshift-machine-config-operator/machine-config-daemon-jxh94" podUID="e63dbb73-e1a2-4796-83c5-2a88e55566b5"
Jan 31 05:22:49 crc kubenswrapper[4827]: I0131 05:22:49.110320 4827 scope.go:117] "RemoveContainer" containerID="41556307a89f593d7d7f16b9631c1ae374c680f7cbf9da8244161e1a9f876155"
Jan 31 05:22:49 crc kubenswrapper[4827]: E0131 05:22:49.111423 4827 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jxh94_openshift-machine-config-operator(e63dbb73-e1a2-4796-83c5-2a88e55566b5)\"" pod="openshift-machine-config-operator/machine-config-daemon-jxh94" podUID="e63dbb73-e1a2-4796-83c5-2a88e55566b5"
Jan 31 05:23:03 crc kubenswrapper[4827]: I0131 05:23:03.110824 4827 scope.go:117] "RemoveContainer" containerID="41556307a89f593d7d7f16b9631c1ae374c680f7cbf9da8244161e1a9f876155"
Jan 31 05:23:03 crc kubenswrapper[4827]: E0131 05:23:03.111380 4827 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jxh94_openshift-machine-config-operator(e63dbb73-e1a2-4796-83c5-2a88e55566b5)\"" pod="openshift-machine-config-operator/machine-config-daemon-jxh94" podUID="e63dbb73-e1a2-4796-83c5-2a88e55566b5"
Jan 31 05:23:18 crc kubenswrapper[4827]: I0131 05:23:18.125195 4827 scope.go:117] "RemoveContainer" containerID="41556307a89f593d7d7f16b9631c1ae374c680f7cbf9da8244161e1a9f876155"
Jan 31 05:23:18 crc kubenswrapper[4827]: E0131 05:23:18.126145 4827 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jxh94_openshift-machine-config-operator(e63dbb73-e1a2-4796-83c5-2a88e55566b5)\"" pod="openshift-machine-config-operator/machine-config-daemon-jxh94" podUID="e63dbb73-e1a2-4796-83c5-2a88e55566b5"
Jan 31 05:23:30 crc kubenswrapper[4827]: I0131 05:23:30.112743 4827 scope.go:117] "RemoveContainer" containerID="41556307a89f593d7d7f16b9631c1ae374c680f7cbf9da8244161e1a9f876155"
Jan 31 05:23:30 crc kubenswrapper[4827]: E0131 05:23:30.114283 4827 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jxh94_openshift-machine-config-operator(e63dbb73-e1a2-4796-83c5-2a88e55566b5)\"" pod="openshift-machine-config-operator/machine-config-daemon-jxh94" podUID="e63dbb73-e1a2-4796-83c5-2a88e55566b5"
Jan 31 05:23:42 crc kubenswrapper[4827]: I0131 05:23:42.110087 4827 scope.go:117] "RemoveContainer" containerID="41556307a89f593d7d7f16b9631c1ae374c680f7cbf9da8244161e1a9f876155"
Jan 31 05:23:42 crc kubenswrapper[4827]: E0131 05:23:42.110851 4827 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jxh94_openshift-machine-config-operator(e63dbb73-e1a2-4796-83c5-2a88e55566b5)\"" pod="openshift-machine-config-operator/machine-config-daemon-jxh94" podUID="e63dbb73-e1a2-4796-83c5-2a88e55566b5"
Jan 31 05:23:53 crc kubenswrapper[4827]: I0131 05:23:53.110667 4827 scope.go:117] "RemoveContainer" containerID="41556307a89f593d7d7f16b9631c1ae374c680f7cbf9da8244161e1a9f876155"
Jan 31 05:23:53 crc kubenswrapper[4827]: E0131 05:23:53.111954 4827 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jxh94_openshift-machine-config-operator(e63dbb73-e1a2-4796-83c5-2a88e55566b5)\"" pod="openshift-machine-config-operator/machine-config-daemon-jxh94" podUID="e63dbb73-e1a2-4796-83c5-2a88e55566b5"
Jan 31 05:24:04 crc kubenswrapper[4827]: I0131 05:24:04.110034 4827 scope.go:117] "RemoveContainer" containerID="41556307a89f593d7d7f16b9631c1ae374c680f7cbf9da8244161e1a9f876155"
Jan 31 05:24:04 crc kubenswrapper[4827]: E0131 05:24:04.111177 4827 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jxh94_openshift-machine-config-operator(e63dbb73-e1a2-4796-83c5-2a88e55566b5)\"" pod="openshift-machine-config-operator/machine-config-daemon-jxh94" podUID="e63dbb73-e1a2-4796-83c5-2a88e55566b5"
Jan 31 05:24:16 crc kubenswrapper[4827]: I0131 05:24:16.111145 4827 scope.go:117] "RemoveContainer" containerID="41556307a89f593d7d7f16b9631c1ae374c680f7cbf9da8244161e1a9f876155"
Jan 31 05:24:16 crc kubenswrapper[4827]: E0131 05:24:16.112520 4827 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jxh94_openshift-machine-config-operator(e63dbb73-e1a2-4796-83c5-2a88e55566b5)\"" pod="openshift-machine-config-operator/machine-config-daemon-jxh94" podUID="e63dbb73-e1a2-4796-83c5-2a88e55566b5"
Jan 31 05:24:30 crc kubenswrapper[4827]: I0131 05:24:30.110323 4827 scope.go:117] "RemoveContainer" containerID="41556307a89f593d7d7f16b9631c1ae374c680f7cbf9da8244161e1a9f876155"
Jan 31 05:24:30 crc kubenswrapper[4827]: E0131 05:24:30.111451 4827 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jxh94_openshift-machine-config-operator(e63dbb73-e1a2-4796-83c5-2a88e55566b5)\"" pod="openshift-machine-config-operator/machine-config-daemon-jxh94" podUID="e63dbb73-e1a2-4796-83c5-2a88e55566b5"
Jan 31 05:24:41 crc kubenswrapper[4827]: I0131 05:24:41.110139 4827 scope.go:117] "RemoveContainer" containerID="41556307a89f593d7d7f16b9631c1ae374c680f7cbf9da8244161e1a9f876155"
Jan 31 05:24:41 crc kubenswrapper[4827]: E0131 05:24:41.110824 4827 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jxh94_openshift-machine-config-operator(e63dbb73-e1a2-4796-83c5-2a88e55566b5)\"" pod="openshift-machine-config-operator/machine-config-daemon-jxh94" podUID="e63dbb73-e1a2-4796-83c5-2a88e55566b5"